Seismic facies analysis based on self-organizing map and empirical mode decomposition
NASA Astrophysics Data System (ADS)
Du, Hao-kun; Cao, Jun-xing; Xue, Ya-juan; Wang, Xing-jian
2015-01-01
Seismic facies analysis plays an important role in seismic interpretation and reservoir model building by offering an effective way to identify changes in geofacies between wells. The selection of input seismic attributes and their time window strongly affects the validity of the classification and requires iterative experimentation and prior knowledge. In general, clustering is sensitive to noise when the waveform serves as the input data, especially with a narrow window. To overcome this limitation, the Empirical Mode Decomposition (EMD) method is introduced into waveform classification based on the self-organizing map (SOM). We first de-noise the seismic data using EMD and then cluster the data using a 1D grid SOM. The main advantages of this method are resolution enhancement and noise reduction. 3D seismic data from the western Sichuan basin, China, are used for validation. The application results show that seismic facies analysis can be improved to better support interpretation. Its strong noise tolerance makes the proposed method a better seismic facies analysis tool than the classical 1D grid SOM method, especially for waveform clustering with a narrow window.
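The clustering half of this workflow can be sketched with a minimal 1-D grid SOM in NumPy. This is an illustrative sketch, not the authors' code: the EMD de-noising step is omitted (it would be applied to the traces beforehand, e.g. by discarding the noisiest intrinsic mode functions), and the function name `som_1d` and its parameters are hypothetical.

```python
import numpy as np

def som_1d(data, n_nodes=4, n_iter=200, lr0=0.5, sigma0=2.0, seed=0):
    """Cluster waveform vectors with a 1-D grid SOM (EMD step omitted).

    data: (n_traces, n_samples) waveform segments.
    Returns (node_weights, facies_labels)."""
    rng = np.random.default_rng(seed)
    # initialize node weights from randomly chosen input waveforms
    w = data[rng.choice(len(data), n_nodes, replace=False)].astype(float)
    grid = np.arange(n_nodes)
    for t in range(n_iter):
        frac = t / n_iter
        lr = lr0 * (1 - frac)                # decaying learning rate
        sigma = sigma0 * (1 - frac) + 1e-3   # decaying neighbourhood width
        x = data[rng.integers(len(data))]
        bmu = np.argmin(((w - x) ** 2).sum(axis=1))   # best-matching unit
        h = np.exp(-((grid - bmu) ** 2) / (2 * sigma ** 2))
        w += lr * h[:, None] * (x - w)       # pull BMU and its neighbours
    # assign each trace to its nearest node -> facies label
    labels = np.argmin(((data[:, None, :] - w[None]) ** 2).sum(-1), axis=1)
    return w, labels
```

On two well-separated synthetic waveform families, each family should map to its own node, which is the behaviour the abstract relies on.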
Fast principal component analysis for stacking seismic data
NASA Astrophysics Data System (ADS)
Wu, Juan; Bai, Min
2018-04-01
Stacking seismic data plays an indispensable role in many steps of the seismic data processing and imaging workflow. Optimal stacking of seismic data can help mitigate seismic noise and enhance the principal components to a great extent. Traditional average-based seismic stacking methods cannot obtain optimal performance when the ambient noise is extremely strong. We propose a principal component analysis (PCA) algorithm for stacking seismic data that is insensitive to the noise level. Considering the computational bottleneck of the classic PCA algorithm when processing massive seismic data, we propose an efficient PCA algorithm that makes the method readily applicable in industrial settings. Two numerically designed examples and one real seismic data set are used to demonstrate the performance of the presented method.
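The core idea can be sketched as a rank-1 decomposition of the gather: the first principal component carries the coherent signal shared by all traces, while incoherent noise spreads over the remaining components. This is a minimal sketch of the principle, not the paper's fast algorithm, and `pca_stack` is a hypothetical name; it assumes NMO-corrected traces.

```python
import numpy as np

def pca_stack(traces):
    """Stack NMO-corrected traces via the first principal component.

    traces: (n_traces, n_samples). The rank-1 SVD approximation keeps
    the coherent signal shared by all traces and rejects incoherent
    noise; the stack is the average trace of that approximation."""
    u, s, vt = np.linalg.svd(traces, full_matrices=False)
    stack = s[0] * vt[0] * u[:, 0].mean()   # mean row of rank-1 model
    # fix the arbitrary SVD sign so the stack correlates with the mean
    if np.dot(stack, traces.mean(axis=0)) < 0:
        stack = -stack
    return stack
```

On a synthetic gather with a repeated wavelet buried in strong random noise, the PCA stack recovers the wavelet shape with high fidelity.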
Signal-to-noise ratio application to seismic marker analysis and fracture detection
NASA Astrophysics Data System (ADS)
Xu, Hui-Qun; Gui, Zhi-Xian
2014-03-01
Seismic data with high signal-to-noise ratios (SNRs) are useful in reservoir exploration. Obtaining high-SNR seismic data requires significant noise attenuation effort in seismic data processing, which is costly in materials and in human and financial resources. We introduce a method for improving the SNR of seismic data, in which the SNR is calculated using a frequency-domain method. Furthermore, we optimize and discuss the critical parameters and the calculation procedure. We applied the proposed method to real data and found that the SNR is high at the seismic marker and low in the fracture zone. Consequently, the method can be used to extract detailed information about fracture zones that are inferred by structural analysis but not observed in conventional seismic data.
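The abstract does not spell out the estimator, so as a hedged illustration of the general idea, one common frequency-domain proxy compares spectral power inside an assumed signal band with power outside it. The band limits and the name `snr_freq` are assumptions, not the paper's definitions.

```python
import numpy as np

def snr_freq(trace, dt, signal_band=(10.0, 60.0)):
    """Rough frequency-domain SNR estimate in dB (an assumed proxy):
    power inside an assumed signal band vs. power outside it.

    trace: 1-D seismic trace; dt: sample interval in seconds."""
    spec = np.abs(np.fft.rfft(trace)) ** 2
    freqs = np.fft.rfftfreq(len(trace), dt)
    inband = (freqs >= signal_band[0]) & (freqs <= signal_band[1])
    p_sig = spec[inband].sum()
    p_noise = spec[~inband].sum() + 1e-12   # avoid division by zero
    return 10 * np.log10(p_sig / p_noise)
```

A clean in-band sinusoid scores far higher than the same sinusoid contaminated with broadband noise, which is the qualitative behaviour (high SNR at markers, low in fractured zones) the abstract exploits.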
LANL seismic screening method for existing buildings
DOE Office of Scientific and Technical Information (OSTI.GOV)
Dickson, S.L.; Feller, K.C.; Fritz de la Orta, G.O.
1997-01-01
The purpose of the Los Alamos National Laboratory (LANL) Seismic Screening Method is to provide a comprehensive, rational, and inexpensive method for evaluating the relative seismic integrity of a large building inventory using substantial life-safety as the minimum goal. The substantial life-safety goal is deemed to be satisfied if the extent of structural damage or nonstructural component damage does not pose a significant risk to human life. The screening is limited to Performance Category (PC) -0, -1, and -2 buildings and structures. Because of their higher performance objectives, PC-3 and PC-4 buildings automatically fail the LANL Seismic Screening Method and will be subject to a more detailed seismic analysis. The Laboratory has also designated that PC-0, PC-1, and PC-2 unreinforced masonry bearing wall and masonry infill shear wall buildings fail the LANL Seismic Screening Method because of their historically poor seismic performance or complex behavior. These building types are also recommended for a more detailed seismic analysis. The results of the LANL Seismic Screening Method are expressed in terms of separate scores for potential configuration or physical hazards (Phase One) and calculated capacity/demand ratios (Phase Two). This two-phase method allows the user to quickly identify buildings that have adequate seismic characteristics and structural capacity and screen them out from further evaluation. The resulting scores also provide a ranking of those buildings found to be inadequate. Thus, buildings not passing the screening can be rationally prioritized for further evaluation. For the purpose of complying with Executive Order 12941, the buildings failing the LANL Seismic Screening Method are deemed to have seismic deficiencies, and cost estimates for mitigation must be prepared. Mitigation techniques and cost-estimate guidelines are not included in the LANL Seismic Screening Method.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Klein, F.W.
1994-03-28
This bibliography is divided into the following four sections: Seismicity of Hawaii and Kilauea Volcano; Occurrence, locations and accelerations from large historical Hawaiian earthquakes; Seismic hazards of Hawaii; and Methods of seismic hazard analysis. It contains 62 references, most of which are accompanied by short abstracts.
Research on the spatial analysis method of seismic hazard for island
NASA Astrophysics Data System (ADS)
Jia, Jing; Jiang, Jitong; Zheng, Qiuhong; Gao, Huiying
2017-05-01
Seismic hazard analysis (SHA) is a key component of earthquake disaster prevention for island engineering. Its results provide parameters for seismic design at the micro scale, and at the macro scale they are prerequisite work for the earthquake and comprehensive disaster prevention components of island conservation planning, in the exploitation and construction of both inhabited and uninhabited islands. The existing seismic hazard analysis methods are compared in their application, and their applicability and limitations for islands are analysed. A specialized spatial analysis method of seismic hazard for islands (SAMSHI) is then given to support further earthquake disaster prevention planning, based on spatial analysis tools in GIS and a fuzzy comprehensive evaluation model. The basic spatial database of SAMSHI includes fault data, historical earthquake records, geological data and Bouguer gravity anomaly data, which are the data sources for the 11 indices of the fuzzy comprehensive evaluation model; these indices are calculated by the spatial analysis model constructed in ArcGIS's ModelBuilder platform.
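The fuzzy comprehensive evaluation step can be illustrated with the standard weighted-average operator B = W · R, where W weights the hazard indices and R holds each index's membership in each hazard level. This toy uses 3 indices and 3 levels rather than the paper's 11 indices; all numbers and the function name are hypothetical.

```python
import numpy as np

def fuzzy_comprehensive_eval(weights, membership):
    """Weighted-average fuzzy comprehensive evaluation: B = W . R.

    weights: (n_indices,) importance of each hazard index (sums to 1).
    membership: (n_indices, n_levels) membership of each index in each
    hazard level. Returns the aggregate membership in each level."""
    w = np.asarray(weights, float)
    r = np.asarray(membership, float)
    b = w @ r
    return b / b.sum()          # normalize to a membership distribution
```

For example, with three indices weighted 0.5/0.3/0.2 and memberships skewed toward the highest hazard level, the aggregate evaluation also selects the highest level.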
Seismic Hazard Analysis — Quo vadis?
NASA Astrophysics Data System (ADS)
Klügel, Jens-Uwe
2008-05-01
The paper reviews the methods of seismic hazard analysis currently in use, analyzing the strengths and weaknesses of the different approaches. The review is performed from the perspective of a user of the results of seismic hazard analysis for different applications, such as the design of critical and general (non-critical) civil infrastructures and technical and financial risk analysis. A set of criteria is developed for, and applied to, an objective assessment of the capabilities of the different analysis methods. It is demonstrated that traditional probabilistic seismic hazard analysis (PSHA) methods have significant deficiencies that limit their practical application. These deficiencies have their roots in the use of inadequate probabilistic models and an insufficient understanding of modern concepts of risk analysis, as has been revealed in some recent large-scale studies. They result in an inability to treat dependencies between physical parameters correctly and, finally, in an incorrect treatment of uncertainties. As a consequence, the results of PSHA studies have been found to be unrealistic in comparison with empirical information from the real world. The attempt to compensate for these problems by the systematic use of expert elicitation has, so far, not improved the situation. It is also shown that scenario earthquakes developed by disaggregation from the results of a traditional PSHA may not be conservative with respect to energy conservation and should not be used for the design of critical infrastructures without validation. Because the assessment of the technical as well as the financial risks associated with potential earthquake damage requires a risk analysis, current methods are based on a probabilistic approach with its unsolved deficiencies.
Traditional deterministic or scenario-based seismic hazard analysis methods provide a reliable and generally robust design basis for applications such as the design of critical infrastructures, especially when combined with systematic sensitivity analyses based on validated phenomenological models. Deterministic seismic hazard analysis incorporates uncertainties in safety factors, which are derived from experience as well as from expert judgment. Deterministic methods associated with high safety factors may lead to overly conservative results, especially when applied to generally short-lived civil structures. Scenarios used in deterministic seismic hazard analysis have a clear physical basis: they are related to seismic sources discovered by geological, geomorphological, geodetic and seismological investigations or derived from historical references. Scenario-based methods can be expanded for risk analysis applications with an extended data analysis providing the frequency of seismic events. Such an extension provides a better informed risk model that is suitable for risk-informed decision making.
Impacts of potential seismic landslides on lifeline corridors.
DOT National Transportation Integrated Search
2015-02-01
This report presents a fully probabilistic method for regional seismically induced landslide hazard analysis and mapping. The method considers the most current predictions for strong ground motions and seismic sources through use of the U.S.G.S. ...
Automated seismic waveform location using Multichannel Coherency Migration (MCM)-I. Theory
NASA Astrophysics Data System (ADS)
Shi, Peidong; Angus, Doug; Rost, Sebastian; Nowacki, Andy; Yuan, Sanyi
2018-03-01
With the proliferation of dense seismic networks sampling the full seismic wavefield, recorded seismic data volumes are getting bigger, and automated analysis tools to locate seismic events are essential. Here, we propose a novel Multichannel Coherency Migration (MCM) method to locate earthquakes in continuous seismic data and reveal the location and origin time of seismic events directly from recorded waveforms. By continuously calculating the coherency between waveforms from different receiver pairs, MCM greatly expands the information available for event location. MCM does not require phase picking or phase identification, which allows fully automated waveform analysis. By migrating the coherency between waveforms, MCM leads to improved source energy focusing. We have tested and compared MCM to other migration-based methods on noise-free and noisy synthetic data. The tests and analysis show that MCM is noise resistant and can achieve more accurate results than other migration-based methods. MCM is able to suppress strong interference from other seismic sources occurring at a similar time and location. It can be used with arbitrary 3D velocity models and is able to obtain reasonable location results with smooth but inaccurate velocity models. MCM exhibits excellent location performance and can be easily parallelized, giving it large potential to be developed into a real-time location method for very large datasets.
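The migration-of-coherency idea can be sketched as a grid search: for each candidate source, shift each trace by its predicted travel time and average the zero-lag correlation coefficient over all receiver pairs. This is a toy 1-D illustration under assumed names (`mcm_locate`, `receiver_tt`), not the paper's full 3-D implementation.

```python
import numpy as np

def mcm_locate(traces, dt, receiver_tt, candidates):
    """Toy multichannel coherency migration.

    traces: (n_rec, n_samp); receiver_tt(c) -> (n_rec,) predicted
    travel times for candidate source c; candidates: iterable of
    candidate sources. Returns (best_candidate, best_coherency)."""
    n_rec, n_samp = traces.shape
    win = n_samp // 4                       # correlation window length
    best, best_coh = None, -np.inf
    for c in candidates:
        shifts = np.round(receiver_tt(c) / dt).astype(int)
        segs = []
        for i in range(n_rec):
            s = traces[i, shifts[i]:shifts[i] + win]
            if len(s) < win:                # pad windows past the record end
                s = np.pad(s, (0, win - len(s)))
            segs.append(s - s.mean())
        segs = np.array(segs)
        norm = np.linalg.norm(segs, axis=1) + 1e-12
        cc = (segs @ segs.T) / np.outer(norm, norm)   # pairwise correlations
        iu = np.triu_indices(n_rec, k=1)
        coh = cc[iu].mean()                 # mean coherency over all pairs
        if coh > best_coh:
            best, best_coh = c, coh
    return best, best_coh
```

With four receivers on a line and a known velocity, the candidate at the true source position maximizes the stacked pairwise coherency.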
MASW on the standard seismic prospective scale using full spread recording
NASA Astrophysics Data System (ADS)
Białas, Sebastian; Majdański, Mariusz; Trzeciak, Maciej; Gałczyński, Edward; Maksym, Andrzej
2015-04-01
The Multichannel Analysis of Surface Waves (MASW) is a seismic survey method that uses the dispersion curve of surface waves to describe the stiffness of the near surface. It is used mainly at the geotechnical engineering scale, with total spread lengths between 5 and 450 m and spread offsets between 1 and 100 m; a hammer serves as the seismic source in these surveys. The standard MASW procedure is: data acquisition, dispersion analysis, and inversion of the extracted dispersion curve to match the closest theoretical curve. The final result includes shear-wave velocity (Vs) values at different depths along the surveyed lines. The main goal of this work is to expand this engineering method to a much bigger scale, using a standard prospecting spread length of 20 km and 4.5 Hz vertical-component geophones. Standard vibroseis and explosive sources are used. The acquisition was conducted on the full spread during each single shot. The seismic data used for this analysis were acquired during the Braniewo 2014 project in northern Poland. The results obtained with the standard MASW procedure show that the method can be used at a much bigger scale as well; the different methodology of this analysis only requires a much stronger seismic source.
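The dispersion-analysis step of the MASW procedure is commonly done with a phase-shift (slant-stack) transform: for each frequency, the phase-only spectra of all receivers are shifted by a trial phase velocity and summed. The sketch below is a generic simplified version of that standard transform, not the project's processing code, and `dispersion_image` is a hypothetical name.

```python
import numpy as np

def dispersion_image(gather, dt, offsets, freqs, velocities):
    """Simplified phase-shift dispersion imaging for MASW.

    gather: (n_rec, n_samp) shot gather; offsets: receiver offsets (m);
    freqs: frequencies to scan (Hz); velocities: trial phase
    velocities (m/s). Returns an (n_freq, n_vel) image whose maxima
    trace the dispersion curve."""
    spec = np.fft.rfft(gather, axis=1)
    fax = np.fft.rfftfreq(gather.shape[1], dt)
    spec = spec / (np.abs(spec) + 1e-12)    # keep phase only
    img = np.zeros((len(freqs), len(velocities)))
    for i, f in enumerate(freqs):
        k = np.argmin(np.abs(fax - f))      # nearest frequency bin
        for j, c in enumerate(velocities):
            phase = np.exp(2j * np.pi * fax[k] * offsets / c)
            img[i, j] = np.abs((spec[:, k] * phase).sum())
    return img
```

For a synthetic gather containing a single mode travelling at 300 m/s, the image maximum at the test frequency falls on 300 m/s.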
Kernel Smoothing Methods for Non-Poissonian Seismic Hazard Analysis
NASA Astrophysics Data System (ADS)
Woo, Gordon
2017-04-01
For almost fifty years, the mainstay of probabilistic seismic hazard analysis has been the methodology developed by Cornell, which assumes that earthquake occurrence is a Poisson process, and that the spatial distribution of epicentres can be represented by a set of polygonal source zones, within which seismicity is uniform. Based on Vere-Jones' use of kernel smoothing methods for earthquake forecasting, these methods were adapted in 1994 by the author for application to probabilistic seismic hazard analysis. There is no need for ambiguous boundaries of polygonal source zones, nor for the hypothesis of time independence of earthquake sequences. In Europe, there are many regions where seismotectonic zones are not well delineated, and where there is a dynamic stress interaction between events, so that they cannot be described as independent. From the Amatrice earthquake of 24 August, 2016, the subsequent damaging earthquakes in Central Italy over months were not independent events. Removing foreshocks and aftershocks is not only an ill-defined task, it has a material effect on seismic hazard computation. Because of the spatial dispersion of epicentres, and the clustering of magnitudes for the largest events in a sequence, which might all be around magnitude 6, the specific event causing the highest ground motion can vary from one site location to another. Where significant active faults have been clearly identified geologically, they should be modelled as individual seismic sources. The remaining background seismicity should be modelled as non-Poissonian using statistical kernel smoothing methods. This approach was first applied for seismic hazard analysis at a UK nuclear power plant two decades ago, and should be included within logic-trees for future probabilistic seismic hazard at critical installations within Europe. In this paper, various salient European applications are given.
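The kernel idea can be illustrated with a fixed-bandwidth Gaussian kernel placed on each epicentre to build a smoothed activity-rate map without polygonal source zones. Note this is a generic sketch only: the kernels actually used in this line of work are typically magnitude-dependent and anisotropic, and `kernel_rate` and its bandwidth are illustrative assumptions.

```python
import numpy as np

def kernel_rate(epicentres, grid_xy, bandwidth=10.0):
    """Gaussian kernel-smoothed activity-rate map from a catalogue.

    epicentres: (n_events, 2) x,y coordinates in km.
    grid_xy: (n_cells, 2) evaluation points.
    Returns the smoothed event density (events per km^2) per cell."""
    d2 = ((grid_xy[:, None, :] - epicentres[None, :, :]) ** 2).sum(-1)
    k = np.exp(-d2 / (2 * bandwidth ** 2)) / (2 * np.pi * bandwidth ** 2)
    return k.sum(axis=1)        # superpose one kernel per event
```

A grid point inside a cluster of epicentres receives a much higher smoothed rate than a point far from all events, with no zone boundaries drawn anywhere.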
Unsupervised seismic facies analysis with spatial constraints using regularized fuzzy c-means
NASA Astrophysics Data System (ADS)
Song, Chengyun; Liu, Zhining; Cai, Hanpeng; Wang, Yaojun; Li, Xingming; Hu, Guangmin
2017-12-01
Seismic facies analysis techniques combine classification algorithms and seismic attributes to generate a map that describes the main reservoir heterogeneities. However, most current classification algorithms treat the seismic attributes as isolated data points regardless of their spatial locations, and the resulting map is generally sensitive to noise. In this paper, a regularized fuzzy c-means (RegFCM) algorithm is used for unsupervised seismic facies analysis. Due to the regularization term of the RegFCM algorithm, data whose adjacent locations belong to the same class play a more important role in the iterative process than other data. Therefore, this method can reduce the effect of seismic noise present in discontinuous regions. Synthetic data with different signal-to-noise ratios are used to demonstrate the noise tolerance of the RegFCM algorithm. Meanwhile, the fuzzy factor, the neighbourhood window size and the regularization weight are tested over various values, to provide a reference for how to set these parameters. The new approach is also applied to a real seismic data set from the F3 block of the Netherlands. The results show improved spatial continuity, with clear facies boundaries and channel morphology, which shows that the method is an effective seismic facies analysis tool.
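For orientation, the base iteration that RegFCM builds on is plain fuzzy c-means: alternate between membership-weighted centers and distance-based memberships. The spatial regularization term, which is the paper's contribution, is omitted from this sketch; `fcm` and its parameters are generic, not the authors' code.

```python
import numpy as np

def fcm(data, n_clusters=2, m=2.0, n_iter=100, seed=0):
    """Plain fuzzy c-means (the RegFCM spatial penalty is omitted;
    the paper adds a neighbourhood term to this same iteration).

    data: (n_samples, n_features); m > 1 is the fuzziness exponent.
    Returns (cluster_centers, membership_matrix)."""
    rng = np.random.default_rng(seed)
    u = rng.random((len(data), n_clusters))
    u /= u.sum(axis=1, keepdims=True)       # rows are fuzzy memberships
    for _ in range(n_iter):
        um = u ** m
        centers = (um.T @ data) / um.sum(axis=0)[:, None]
        d = np.linalg.norm(data[:, None, :] - centers[None], axis=2) + 1e-12
        inv = d ** (-2 / (m - 1))
        u = inv / inv.sum(axis=1, keepdims=True)
    return centers, u
```

On two well-separated attribute clusters, the hard labels (argmax of the memberships) separate the clusters cleanly; RegFCM additionally smooths these labels across neighbouring trace locations.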
Estimation of the behavior factor of existing RC-MRF buildings
NASA Astrophysics Data System (ADS)
Vona, Marco; Mastroberti, Monica
2018-01-01
In recent years, several research groups have studied a new generation of analysis methods for the seismic response assessment of existing buildings. Nevertheless, many important developments are still needed in order to define more reliable and effective assessment procedures. Moreover, for existing buildings it should be highlighted that, due to the low knowledge level, linear elastic analysis is often the only analysis method allowed. The codes (such as NTC2008 and EC8) consider linear dynamic analysis with a behavior factor as the reference method for the evaluation of seismic demand. This type of analysis is based on a linear-elastic structural model subjected to a design spectrum, obtained by reducing the elastic spectrum through a behavior factor. The behavior factor (reduction factor, or q factor in some codes) is used to reduce the elastic spectral ordinates, or the forces obtained from a linear analysis, in order to take the nonlinear structural capacity into account. Behavior factors should be defined based on the several parameters that influence the nonlinear seismic capacity, such as material mechanical characteristics, structural system, irregularity and design procedures. In practical applications, there is still an evident lack of detailed rules and of accurate behavior factor values adequate for existing buildings. In this work, the seismic capacity of the main existing RC-MRF building types has been investigated. In order to correctly evaluate the seismic force demand, actual behavior factor values coherent with a force-based seismic safety assessment procedure have been proposed and compared with the values reported in the Italian seismic code, NTC08.
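The spectrum-reduction mechanics described above can be sketched numerically: a design ordinate is simply the elastic ordinate divided by q. The elastic shape below is a simplified EC8-type branch function with illustrative constants; the actual code formulas also include a damping correction factor and a lower-bound branch, which are omitted here.

```python
def elastic_sa(T, ag=0.25, S=1.2, Tb=0.15, Tc=0.5, Td=2.0):
    """Simplified EC8-type elastic spectral shape (illustrative
    constants, damping correction omitted). T in s, result in g."""
    if T <= Tb:                      # rising branch up to the plateau
        return ag * S * (1 + (T / Tb) * 1.5)
    if T <= Tc:                      # constant-acceleration plateau
        return 2.5 * ag * S
    if T <= Td:                      # constant-velocity branch, ~1/T
        return 2.5 * ag * S * Tc / T
    return 2.5 * ag * S * Tc * Td / T ** 2   # displacement branch, ~1/T^2

def design_sa(T, q):
    """Design ordinate: elastic spectrum reduced by behavior factor q."""
    return elastic_sa(T) / q
```

For example, a structure with T = 0.3 s sits on the plateau (0.75 g here); with q = 3 the design demand drops to 0.25 g, which is how the behavior factor converts nonlinear capacity into a reduced linear force demand.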
NASA Astrophysics Data System (ADS)
Luo, D.; Cai, F.
2017-12-01
Small-scale, high-resolution marine multi-channel seismic surveys using large-energy sparkers are characterized by a high dominant source frequency, wide bandwidth, and high resolution. The technology, with its high resolution and high detection precision, was designed to improve the imaging quality of shallow sediments. In this study, a 20 kJ sparker and a 24-channel streamer cable with a 6.25 m group interval were used as the seismic source and receiver system, respectively. Key factors for seismic imaging of gas hydrate are enhancement of the S/N ratio, amplitude compensation and detailed velocity analysis. However, the data in this study have the following characteristics: 1. Small maximum offsets, which are adverse to velocity analysis and multiple attenuation. 2. Lack of low-frequency information; content below 100 Hz is invisible. 3. Low S/N ratio due to the low fold (only 12). These characteristics make it difficult to reach the targets of seismic imaging. In this study, targeted processing methods are used to improve the seismic imaging quality of gas hydrate. First, several noise-suppression technologies are combined on the pre-stack seismic data to suppress seismic noise and improve the S/N ratio, including a spectrum-sharing noise elimination method, median filtering and an exogenous interference suppression method. Second, a combination of three technologies, SRME, τ-p deconvolution and high-precision Radon transformation, is used to remove multiples. Third, an accurate velocity field is used in amplitude compensation to highlight the Bottom Simulating Reflector (BSR, the indicator of gas hydrates) and gas migration pathways (such as gas chimneys and hot spots). Fourth, fine velocity analysis is used to improve the accuracy of the velocity field. Fifth, pre-stack deconvolution is used to compensate for low-frequency energy and suppress the ghost, so that formation reflection characteristics are highlighted. The result shows that small-scale, high-resolution marine sparker multi-channel seismic surveys are more effective in improving the resolution and quality of gas hydrate imaging than conventional seismic acquisition technology.
Research on response spectrum of dam based on scenario earthquake
NASA Astrophysics Data System (ADS)
Zhang, Xiaoliang; Zhang, Yushan
2017-10-01
Taking a large hydropower station as an example, the response spectrum based on a scenario earthquake is determined. First, the potential seismic source contributing most to the site hazard is determined on the basis of the results of probabilistic seismic hazard analysis (PSHA). Second, the magnitude and epicentral distance of the scenario earthquake are calculated according to the main faults and the historical earthquakes of the potential seismic source zone. Finally, the response spectrum of the scenario earthquake is calculated using the Next Generation Attenuation (NGA) relations. The scenario-earthquake response spectrum is lower than the probability-consistent response spectrum obtained by the PSHA method. The analysis shows that the scenario-earthquake response spectrum considers both the probability level and the structural factors, combining the advantages of the deterministic and probabilistic seismic hazard analysis methods. It is easy to accept and provides a basis for the seismic design of hydraulic engineering works.
NASA Astrophysics Data System (ADS)
Zolfaghari, M. R.; Ajamy, A.; Asgarian, B.
2015-12-01
The primary goal of seismic reassessment procedures in oil platform codes is to determine the reliability of a platform under extreme earthquake loading. In this paper, a simplified method is therefore proposed to assess the seismic performance of existing jacket-type offshore platforms (JTOP) in regimes ranging from near-elastic response to global collapse. The simplified-method curve exploits the good agreement between the static pushover (SPO) curve and the entire summarized interaction incremental dynamic analysis (CI-IDA) curve of the platform. Although the CI-IDA method offers a better understanding and better modelling of the phenomenon, it is a time-consuming and challenging task. To overcome these challenges, the simplified procedure, a fast and accurate approach, is introduced based on SPO analysis. An existing JTOP in the Persian Gulf is then presented to illustrate the procedure, and finally a comparison is made between the simplified method and the CI-IDA results. The simplified method is informative and practical for current engineering purposes: it is able to predict seismic performance from elasticity to global dynamic instability with reasonable accuracy and little computational effort.
Seismic analysis for translational failure of landfills with retaining walls.
Feng, Shi-Jin; Gao, Li-Ya
2010-11-01
In the seismic impact zone, seismic force can be a major triggering mechanism for translational failures of landfills. The scope of this paper is to develop a three-part wedge method for seismic analysis of translational failures of landfills with retaining walls, from which an approximate solution for the factor of safety can be calculated. Unlike previous conventional limit equilibrium methods, the new method is capable of revealing the effects of both the solid waste shear strength and the retaining wall on the translational failures of landfills during an earthquake. Parameter studies of the developed method show that the factor of safety decreases with increasing seismic coefficient, while it increases quickly with the minimum friction angle beneath the waste mass for various horizontal seismic coefficients. Increasing the minimum friction angle beneath the waste mass appears to be more effective than any other parameter for increasing the factor of safety under the considered conditions. Thus, selecting liner materials with a higher friction angle will considerably reduce the potential for translational failures of landfills during an earthquake. The factor of safety gradually increases with the height of the retaining wall for various horizontal seismic coefficients; a higher retaining wall is beneficial to the seismic stability of the landfill, and simply ignoring the retaining wall leads to serious underestimation of the factor of safety. In addition, an approximate solution for the yield acceleration coefficient of the landfill is presented based on the proposed method.
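The parameter trends reported above can be reproduced with the simplest pseudo-static limit-equilibrium case: a single rigid block on a planar slip surface with a horizontal seismic coefficient. This one-wedge sketch is only an illustration of the mechanics; the paper's three-part wedge method additionally carries inter-wedge forces and the retaining-wall reaction, which are not reproduced here.

```python
import numpy as np

def pseudo_static_fs(alpha_deg, phi_deg, kh):
    """Pseudo-static factor of safety of a single block sliding on a
    plane (weight cancels out of the ratio).

    alpha_deg: inclination of the slip plane (degrees).
    phi_deg:   friction angle of the basal interface (degrees).
    kh:        horizontal seismic coefficient."""
    a = np.radians(alpha_deg)
    p = np.radians(phi_deg)
    # normal force is reduced and driving force increased by kh*W
    resisting = (np.cos(a) - kh * np.sin(a)) * np.tan(p)
    driving = np.sin(a) + kh * np.cos(a)
    return resisting / driving
```

Consistent with the parameter study, the factor of safety falls as the seismic coefficient grows and rises with the basal friction angle, which is why high-friction liner materials reduce the translational failure potential.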
Military applications and examples of near-surface seismic surface wave methods (Invited)
NASA Astrophysics Data System (ADS)
sloan, S.; Stevens, R.
2013-12-01
Although not always widely known or publicized, the military uses a variety of geophysical methods for a wide range of applications--some that are already common practice in the industry while others are truly novel. Some of those applications include unexploded ordnance detection, general site characterization, anomaly detection, countering improvised explosive devices (IEDs), and security monitoring, to name a few. Techniques used may include, but are not limited to, ground penetrating radar, seismic, electrical, gravity, and electromagnetic methods. Seismic methods employed include surface wave analysis, refraction tomography, and high-resolution reflection methods. Although the military employs geophysical methods, that does not necessarily mean that those methods enable or support combat operations--often times they are being used for humanitarian applications within the military's area of operations to support local populations. The work presented here will focus on the applied use of seismic surface wave methods, including multichannel analysis of surface waves (MASW) and backscattered surface waves, often in conjunction with other methods such as refraction tomography or body-wave diffraction analysis. Multiple field examples will be shown, including explosives testing, tunnel detection, pre-construction site characterization, and cavity detection.
NASA Astrophysics Data System (ADS)
Che, Il-Young; Jeon, Jeong-Soo
2010-05-01
Korea Institute of Geoscience and Mineral Resources (KIGAM) operates an infrasound network consisting of seven seismo-acoustic arrays in South Korea. Development of the arrays began in 1999, partially in collaboration with Southern Methodist University, with the goal of detecting distant infrasound signals from natural and anthropogenic phenomena in and around the Korean Peninsula. The main operational purpose of this network is to discriminate man-made seismic events from the regional seismicity, which includes thousands of seismic events per year. Man-made seismic events are a major source of error in estimating natural seismicity, especially where the seismic activity is weak or moderate, as in the Korean Peninsula. In order to discriminate man-made explosions from earthquakes, we have applied seismo-acoustic analysis, associating the seismic and infrasonic signals generated by surface explosions. Observation of infrasound at multiple arrays makes it possible to discriminate surface explosions, because small or moderate earthquakes are generally not sufficient to generate infrasound. Each year this analysis has allowed us to discriminate hundreds of seismic events in the seismological catalog as surface explosions. Besides surface explosions, the network has also detected infrasound signals from other sources, such as bolides, typhoons, rocket launches, and an underground nuclear test in and around the Korean Peninsula. In this study, ten years of seismo-acoustic data are reviewed with a recent infrasonic detection algorithm and association method, now linked to the seismic monitoring system of KIGAM to increase the detection rate of surface explosions. We present the long-term results of the seismo-acoustic analysis, the detection capability of the multiple arrays, and implications for seismic source location. Since seismo-acoustic analysis has proved to be a definitive method for discriminating surface explosions, it will continue to be used for estimating natural seismicity and understanding infrasonic sources.
NASA Astrophysics Data System (ADS)
Koval, Viacheslav
The seismic design provisions of the CSA-S6 Canadian Highway Bridge Design Code and the AASHTO LRFD Seismic Bridge Design Specifications have been developed primarily based on historical earthquake events that have occurred along the west coast of North America. For the design of seismic isolation systems, these codes include simplified analysis and design methods. The appropriateness and range of application of these methods are investigated through extensive parametric nonlinear time history analyses in this thesis. It was found that there is a need to adjust existing design guidelines to better capture the expected nonlinear response of isolated bridges. For isolated bridges located in eastern North America, new damping coefficients are proposed. The applicability limits of the code-based simplified methods have been redefined to ensure that the modified method will lead to conservative results and that a wider range of seismically isolated bridges can be covered by this method. The possibility of further improving current simplified code methods was also examined. By transforming the quantity of allocated energy into a displacement contribution, an idealized analytical solution is proposed as a new simplified design method. This method realistically reflects the effects of ground-motion and system design parameters, including the effects of a drifted oscillation center. The proposed method is therefore more appropriate than current existing simplified methods and can be applicable to isolation systems exhibiting a wider range of properties. A multi-level-hazard performance matrix has been adopted by different seismic provisions worldwide and will be incorporated into the new edition of the Canadian CSA-S6-14 Bridge Design code. However, the combined effect and optimal use of isolation and supplemental damping devices in bridges have not been fully exploited yet to achieve enhanced performance under different levels of seismic hazard. 
A novel Dual-Level Seismic Protection (DLSP) concept is proposed and developed in this thesis, which makes it possible to achieve optimum seismic performance with combined isolation and supplemental damping devices in bridges. This concept is shown to represent an attractive design approach both for the upgrade of existing seismically deficient bridges and for the design of new isolated bridges.
Design and development of digital seismic amplifier recorder
DOE Office of Scientific and Technical Information (OSTI.GOV)
Samsidar, Siti Alaa; Afuar, Waldy; Handayani, Gunawan, E-mail: gunawanhandayani@gmail.com
2015-04-16
Digital seismic recording captures seismic data in digital form and is more convenient and accurate than analogue seismic recorders. To improve the quality of seismic measurements, the signal needs to be amplified to obtain better subsurface images. The purpose of this study is to improve measurement accuracy by amplifying the input signal. We use seismic sensors/geophones with a natural frequency of 4.5 Hz. The signal is amplified by means of 12 non-inverting amplifier units built around the IC 741 op-amp with resistor values of 1 kΩ and 1 MΩ, giving an amplification of approximately 1,000 times. The amplified signal is converted to digital form using an analog-to-digital converter (ADC). Quantitative analysis in this study was performed using LabVIEW 8.6, which was also used to control the ADC. The qualitative analysis showed that this seismic signal conditioning produces a large output, so the data obtained are better than conventional data. The application is suitable for geophysical methods with low input voltages, such as microtremor surveys.
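The quoted 1,000× amplification follows directly from the standard non-inverting op-amp gain formula; a minimal arithmetic check, using the resistor values from the abstract:

```python
# Gain of a non-inverting op-amp stage: G = 1 + Rf / Rg.
# Resistor values from the abstract: Rf = 1 MOhm, Rg = 1 kOhm.
Rf = 1.0e6   # feedback resistor, ohms
Rg = 1.0e3   # ground-leg resistor, ohms

gain = 1.0 + Rf / Rg
print(gain)  # 1001.0, i.e. roughly the 1,000x amplification reported
```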
High lateral resolution exploration using surface waves from noise records
NASA Astrophysics Data System (ADS)
Chávez-García, Francisco José; Yokoi, Toshiaki
2016-04-01
Determination of the shear-wave velocity structure at shallow depths is a constant necessity in engineering or environmental projects. Given the sensitivity of Rayleigh waves to shear-wave velocity, subsoil structure exploration using surface waves is frequently used. Methods such as the spectral analysis of surface waves (SASW) or multi-channel analysis of surface waves (MASW) determine phase velocity dispersion from surface waves generated by an active source recorded on a line of geophones. Using MASW, it is important that the receiver array be as long as possible to increase the precision at low frequencies. However, this implies that possible lateral variations are discarded. Hayashi and Suzuki (2004) proposed a different way of stacking shot gathers to increase lateral resolution. They combined strategies used in MASW with the common mid-point (CMP) summation currently used in reflection seismology. In their common mid-point with cross-correlation method (CMPCC), they cross-correlate traces sharing CMP locations before determining phase velocity dispersion. Another recent approach to subsoil structure exploration is based on seismic interferometry. It has been shown that cross-correlation of a diffuse field, such as seismic noise, allows the estimation of the Green's Function between two receivers. Thus, a virtual-source seismic section may be constructed from the cross-correlation of seismic noise records obtained in a line of receivers. In this paper, we use the seismic interferometry method to process seismic noise records obtained in seismic refraction lines of 24 geophones, and analyse the results using CMPCC to increase the lateral resolution of the results. Cross-correlation of the noise records allows reconstructing seismic sections with virtual sources at each receiver location. The Rayleigh wave component of the Green's Functions is obtained with a high signal-to-noise ratio. 
Using CMPCC analysis of the virtual-source seismic lines, we are able to identify lateral variations of phase velocity inside the seismic line, and increase the lateral resolution compared with results of conventional analysis.
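The core of the interferometric workflow described above, cross-correlating each receiver's noise record against a chosen virtual-source receiver to build a virtual shot gather, can be sketched as follows (a toy illustration with random data, not the authors' processing code):

```python
import numpy as np

def virtual_source_gather(noise, src_idx, nlag):
    """Cross-correlate the record at one receiver (the virtual source)
    with every receiver in the line; keep the causal lags. The stacking
    over many noise windows done in practice is omitted here.
    noise: array of shape (n_receivers, n_samples)."""
    n_rec, n_samp = noise.shape
    gather = np.zeros((n_rec, nlag))
    ref = noise[src_idx]
    for r in range(n_rec):
        xc = np.correlate(noise[r], ref, mode="full")
        zero = n_samp - 1                  # index of zero lag
        gather[r] = xc[zero:zero + nlag]   # causal part only
    return gather

# toy example: 24 receivers of random "noise"
rng = np.random.default_rng(0)
noise = rng.standard_normal((24, 1024))
g = virtual_source_gather(noise, src_idx=0, nlag=256)
print(g.shape)  # (24, 256)
```

The virtual-source trace (receiver 0 correlated with itself) peaks at zero lag, as an autocorrelation must; in field data the coherent part of the other traces approximates the inter-receiver Green's function.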
NASA Astrophysics Data System (ADS)
Huerta, F. V.; Granados, I.; Aguirre, J.; Carrera, R. Á.
2017-12-01
Nowadays, the hydrocarbon industry needs to optimize and reduce exploration costs in different types of reservoirs, motivating the specialized community to search for and develop alternative geophysical exploration methods. This study shows the reflection response obtained from a shale gas/oil deposit through seismic interferometry of ambient vibrations in combination with wavelet analysis and conventional seismic reflection techniques (CMP and NMO). The method generates seismic responses from virtual sources by cross-correlating records of Ambient Seismic Vibrations (ASV) collected at different receivers. The seismic response obtained is interpreted as the response that would be measured at one of the receivers if a virtual source were placed at the other. The ASV records were acquired in northern Mexico using semi-rectangular arrays of multi-component geophones with a 10 Hz instrumental response. The in-line distance between geophones was 40 m and the cross-line distance 280 m; the sampling interval was 2 ms and the total record duration 6 hours. The results show the reflection response of two lines in the in-line direction and two in the cross-line direction, in which the continuity of coherent events has been identified and interpreted as reflectors. There is confidence that the identified events correspond to reflections because time-frequency analysis with the wavelet transform identified the frequency band containing body waves. In addition, the CMP and NMO techniques emphasized and corrected the reflection response obtained from the correlation process in the frequency band of interest. 
The processing and analysis of ASV records by seismic interferometry, combined with wavelet analysis and conventional seismic reflection techniques, yielded encouraging results: the seismic response was recovered for each analyzed source-receiver pair, giving the reflection response of each analyzed seismic line.
Quantitative estimation of time-variable earthquake hazard by using fuzzy set theory
NASA Astrophysics Data System (ADS)
Deyi, Feng; Ichikawa, M.
1989-11-01
In this paper, the various methods of fuzzy set theory, called fuzzy mathematics, have been applied to the quantitative estimation of the time-variable earthquake hazard. The results obtained consist of the following. (1) Quantitative estimation of the earthquake hazard on the basis of seismicity data. By using some methods of fuzzy mathematics, seismicity patterns before large earthquakes can be studied more clearly and more quantitatively, highly active periods in a given region and quiet periods of seismic activity before large earthquakes can be recognized, similarities in temporal variation of seismic activity and seismic gaps can be examined and, on the other hand, the time-variable earthquake hazard can be assessed directly on the basis of a series of statistical indices of seismicity. Two methods of fuzzy clustering analysis, the method of fuzzy similarity, and the direct method of fuzzy pattern recognition have been studied in particular. One method of fuzzy clustering analysis is based on fuzzy netting, and the other on the fuzzy equivalence relation. (2) Quantitative estimation of the earthquake hazard on the basis of observational data for different precursors. The direct method of fuzzy pattern recognition has been applied to research on earthquake precursors of different kinds. On the basis of the temporal and spatial characteristics of recognized precursors, earthquake hazards over different terms can be estimated. This paper mainly deals with medium- and short-term precursors observed in Japan and China.
Seismic performance evaluation of RC frame-shear wall structures using nonlinear analysis methods
NASA Astrophysics Data System (ADS)
Shi, Jialiang; Wang, Qiuwei
To further understand the seismic performance of reinforced concrete (RC) frame-shear wall structures, a 1/8-scale model was derived from a main factory structure with seven stories and seven bays. The four-story, two-bay model was pseudo-dynamically tested under six earthquake actions with peak ground accelerations (PGA) varying from 50 gal to 400 gal. The damage process and failure patterns were investigated. Furthermore, nonlinear dynamic analysis (NDA) and the capacity spectrum method (CSM) were adopted to evaluate the seismic behavior of the model structure. The top displacement curve, story drift curve and distribution of hinges were obtained and discussed. It is shown that the model structure exhibited a beam-hinge failure mechanism. Both methods can evaluate the seismic behavior of RC frame-shear wall structures well; moreover, CSM can to some extent replace NDA for the seismic performance evaluation of RC structures.
NASA Astrophysics Data System (ADS)
Köktan, Utku; Demir, Gökhan; Kerem Ertek, M.
2017-04-01
The earthquake behavior of retaining walls is commonly calculated with pseudo-static approaches based on the Mononobe-Okabe method. The Mononobe-Okabe method does not give a definite picture of the distribution of seismic earth pressure acting on the wall, because the pressure is obtained by balancing the forces acting on the active wedge behind the wall; wave propagation effects and soil-structure interaction are neglected. The purpose of this study is to examine the earthquake behavior of a retaining wall taking soil-structure interaction into account. For this purpose, time-history seismic analyses of the soil-structure interaction system were carried out with the finite element method for three different soil conditions. The seismic analysis of the soil-structure model was performed using the "1971, San Fernando Pacoima Dam, 196 degree" earthquake record from the library of the MIDAS GTS NX software. The results show that soil-structure interaction is very important for the seismic design of a retaining wall. Keywords: Soil-structure interaction, Finite element model, Retaining wall
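For reference, the pseudo-static baseline the study compares against is the Mononobe-Okabe dynamic active earth-pressure coefficient. One common form of that expression can be sketched as follows (sign conventions vary between texts, so treat this as illustrative rather than definitive):

```python
import math

def mononobe_okabe_kae(phi, delta, beta, i, kh, kv):
    """Dynamic active earth-pressure coefficient K_AE, one common form
    of the Mononobe-Okabe expression. Angles in degrees:
    phi   - soil friction angle
    delta - wall-soil friction angle
    beta  - wall back-face inclination from vertical
    i     - backfill slope
    kh,kv - horizontal/vertical seismic coefficients."""
    p, d, b, s = (math.radians(a) for a in (phi, delta, beta, i))
    theta = math.atan2(kh, 1.0 - kv)       # seismic inertia angle
    num = math.cos(p - theta - b) ** 2
    root = math.sqrt(max(0.0, math.sin(p + d) * math.sin(p - theta - s)
                         / (math.cos(d + b + theta) * math.cos(s - b))))
    den = (math.cos(theta) * math.cos(b) ** 2
           * math.cos(d + b + theta) * (1.0 + root) ** 2)
    return num / den

# sanity check: with kh = kv = 0 this reduces to the static Coulomb K_A
print(round(mononobe_okabe_kae(30, 15, 0, 0, 0.0, 0.0), 3))  # 0.301
print(round(mononobe_okabe_kae(30, 15, 0, 0, 0.2, 0.0), 3))  # larger, dynamic case
```

Note how the whole seismic load is condensed into the single inertia angle theta, which is exactly why the method says nothing about the pressure distribution or wave propagation effects, the limitation the study addresses with finite element analysis.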
Application of USNRC NUREG/CR-6661 and draft DG-1108 to evolutionary and advanced reactor designs
DOE Office of Scientific and Technical Information (OSTI.GOV)
Chang 'Apollo', Chen
2006-07-01
For the seismic design of evolutionary and advanced nuclear reactor power plants, there are definite financial advantages in applying USNRC NUREG/CR-6661 and draft Regulatory Guide DG-1108. NUREG/CR-6661, 'Benchmark Program for the Evaluation of Methods to Analyze Non-Classically Damped Coupled Systems', was prepared by Brookhaven National Laboratory (BNL) for the USNRC, and draft Regulatory Guide DG-1108 is the proposed revision to the current Regulatory Guide (RG) 1.92, Revision 1, 'Combining Modal Responses and Spatial Components in Seismic Response Analysis'. The draft Regulatory Guide DG-1108 is available at http://members.cox.net/apolloconsulting, which also provides a link to the USNRC ADAMS site to search for NUREG/CR-6661 in text or image form. Draft Regulatory Guide DG-1108 removes unnecessary conservatism in the modal combinations for closely spaced modes in seismic response spectrum analysis. Its application will be very helpful in coupled seismic analysis of structures and heavy equipment to reduce seismic responses, and in piping system seismic design. In the NUREG/CR-6661 benchmark program, which investigated coupled seismic analysis of structures and equipment or piping systems with different damping values, three of the four participants applied the complex-mode solution method to handle different damping values for structures, equipment, and piping systems. The fourth participant applied the classical normal-mode method with equivalent weighted damping values. Coupled analysis will reduce the equipment responses when the equipment or piping system and the structure are at or close to resonance. However, this reduction occurs only if the realistic DG-1108 modal response combination method is applied, because closely spaced modes are produced when structure and equipment or piping systems are at or close to resonance. 
Otherwise, the conservatism in the current Regulatory Guide 1.92, Revision 1, will overshadow the advantage of coupled analysis. All four participants applied the realistic modal combination method of DG-1108; consequently, more realistic and reduced responses were obtained. (authors)
Reflection imaging of the Moon's interior using deep-moonquake seismic interferometry
NASA Astrophysics Data System (ADS)
Nishitsuji, Yohei; Rowe, C. A.; Wapenaar, Kees; Draganov, Deyan
2016-04-01
The internal structure of the Moon has been investigated over many years using a variety of seismic methods, such as travel time analysis, receiver functions, and tomography. Here we propose to apply body-wave seismic interferometry to deep moonquakes in order to retrieve zero-offset reflection responses (and thus images) beneath the Apollo stations on the nearside of the Moon from virtual sources colocated with the stations. This method is called deep-moonquake seismic interferometry (DMSI). Our results show a laterally coherent acoustic boundary around 50 km depth beneath all four Apollo stations. We interpret this boundary as the lunar seismic Moho. This depth agrees with Japan Aerospace Exploration Agency's (JAXA) SELenological and Engineering Explorer (SELENE) result and previous travel time analysis at the Apollo 12/14 sites. The deeper part of the image we obtain from DMSI shows laterally incoherent structures. Such lateral inhomogeneity we interpret as representing a zone characterized by strong scattering and constant apparent seismic velocity at our resolution scale (0.2-2.0 Hz).
NASA Astrophysics Data System (ADS)
Weatherill, Graeme; Burton, Paul W.
2010-09-01
The Aegean is the most seismically active and tectonically complex region in Europe. Damaging earthquakes have occurred here throughout recorded history, often resulting in considerable loss of life. The Monte Carlo method of probabilistic seismic hazard analysis (PSHA) is used to determine the level of ground motion likely to be exceeded in a given time period. Multiple random simulations of seismicity are generated to calculate, directly, the ground motion for a given site. Within the seismic hazard analysis we explore the impact of different seismic source models, incorporating both uniform zones and distributed seismicity. A new, simplified, seismic source model, derived from seismotectonic interpretation, is presented for the Aegean region. This is combined into the epistemic uncertainty analysis alongside existing source models for the region, and models derived by a K-means cluster analysis approach. Seismic source models derived using the K-means approach offer a degree of objectivity and reproducibility into the otherwise subjective approach of delineating seismic sources using expert judgment. Similar review and analysis is undertaken for the selection of peak ground acceleration (PGA) attenuation models, incorporating into the epistemic analysis Greek-specific models, European models and a Next Generation Attenuation model. Hazard maps for PGA on a "rock" site with a 10% probability of being exceeded in 50 years are produced and different source and attenuation models are compared. These indicate that Greek-specific attenuation models, with their smaller aleatory variability terms, produce lower PGA hazard, whilst recent European models and Next Generation Attenuation (NGA) model produce similar results. The Monte Carlo method is extended further to assimilate epistemic uncertainty into the hazard calculation, thus integrating across several appropriate source and PGA attenuation models. 
Site condition and fault type are also integrated into the hazard mapping calculations. These hazard maps are in general agreement with previous maps for the Aegean, recognising the highest hazard in the Ionian Islands, Gulf of Corinth and Hellenic Arc. Peak ground accelerations for some sites in these regions reach as high as 500-600 cm s⁻² using European/NGA attenuation models, and 400-500 cm s⁻² using Greek attenuation models.
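The Monte Carlo PSHA workflow described above, simulating many random seismicity catalogs and reading off the ground motion with a 10% chance of exceedance in 50 years, can be sketched as follows. The recurrence rate and attenuation relation below are toy placeholders, not the Greek-specific or NGA models used in the study:

```python
import numpy as np

rng = np.random.default_rng(42)

def simulate_catalog(years, rate, b=1.0, mmin=4.5, mmax=7.5):
    """One random catalog: Poisson event count, doubly truncated
    Gutenberg-Richter magnitudes, uniform epicentral distances."""
    n = rng.poisson(rate * years)
    u = rng.random(n)
    beta = b * np.log(10)
    # inverse-CDF sampling of the truncated G-R distribution
    m = mmin - np.log(1 - u * (1 - np.exp(-beta * (mmax - mmin)))) / beta
    r = rng.uniform(5.0, 150.0, n)   # epicentral distance, km
    return m, r

def toy_pga(m, r):
    """Illustrative attenuation relation (NOT a published GMPE):
    ln PGA(g) = -3.5 + 1.0*M - 1.3*ln(r) + aleatory sigma of 0.6."""
    return np.exp(-3.5 + 1.0 * m - 1.3 * np.log(r)
                  + rng.normal(0.0, 0.6, m.size))

# 10% in 50 yr: simulate many 50-yr catalogs, take the 90th percentile
# of the per-catalog maximum PGA
maxima = []
for _ in range(2000):
    m, r = simulate_catalog(50, rate=2.0)
    maxima.append(toy_pga(m, r).max() if m.size else 0.0)
pga_10in50 = float(np.quantile(maxima, 0.90))
print(round(pga_10in50, 3))   # PGA (g) with 10% exceedance in 50 years
```

Epistemic uncertainty is assimilated in the same loop by drawing the source model and attenuation relation for each simulation from the logic-tree alternatives, which is the extension the paper describes.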
Seismic Fragility Analysis of a Condensate Storage Tank with Age-Related Degradations
DOE Office of Scientific and Technical Information (OSTI.GOV)
Nie, J.; Braverman, J.; Hofmayer, C.
2011-04-01
The Korea Atomic Energy Research Institute (KAERI) is conducting a five-year research project to develop a realistic seismic risk evaluation system which includes the consideration of aging of structures and components in nuclear power plants (NPPs). The KAERI research project includes three specific areas that are essential to seismic probabilistic risk assessment (PRA): (1) probabilistic seismic hazard analysis, (2) seismic fragility analysis including the effects of aging, and (3) plant seismic risk analysis. In 2007, Brookhaven National Laboratory (BNL) entered into a collaboration agreement with KAERI to support its development of seismic capability evaluation technology for degraded structures and components. The collaborative research effort is intended to continue over a five-year period. The goal of this collaboration is to assist KAERI in developing seismic fragility analysis methods that consider the potential effects of age-related degradation of structures, systems, and components (SSCs). The research results of this multi-year collaboration will be used as input to seismic PRAs. This report describes the research effort performed by BNL for the Year 4 scope of work; it was developed as an update to the Year 3 report, incorporating a major supplement to the Year 3 fragility analysis. In the Year 4 research scope, an additional study was carried out to consider a further degradation scenario, in which the three basic degradation scenarios, i.e., degraded tank shell, degraded anchor bolts, and cracked anchorage concrete, are combined with non-perfect correlation. A representative operational water level is used for this effort. Building on the same CDFM procedure implemented for the Year 3 tasks, a simulation method was applied using optimum Latin Hypercube samples to characterize the deterioration of the fragility capacity as a function of age-related degradations. 
The results are summarized in Section 5 and Appendices G through I.
The Utility of the Extended Images in Ambient Seismic Wavefield Migration
NASA Astrophysics Data System (ADS)
Girard, A. J.; Shragge, J. C.
2015-12-01
Active-source 3D seismic migration and migration velocity analysis (MVA) are robust and highly used methods for imaging Earth structure. One class of migration methods uses extended images constructed by incorporating spatial and/or temporal wavefield correlation lags to the imaging conditions. These extended images allow users to directly assess whether images focus better with different parameters, which leads to MVA techniques that are based on the tenets of adjoint-state theory. Under certain conditions (e.g., geographical, cultural or financial), however, active-source methods can prove impractical. Utilizing ambient seismic energy that naturally propagates through the Earth is an alternate method currently used in the scientific community. Thus, an open question is whether extended images are similarly useful for ambient seismic migration processing and verifying subsurface velocity models, and whether one can similarly apply adjoint-state methods to perform ambient migration velocity analysis (AMVA). Herein, we conduct a number of numerical experiments that construct extended images from ambient seismic recordings. We demonstrate that, similar to active-source methods, there is a sensitivity to velocity in ambient seismic recordings in the migrated extended image domain. In synthetic ambient imaging tests with varying degrees of error introduced to the velocity model, the extended images are sensitive to velocity model errors. To determine the extent of this sensitivity, we utilize acoustic wave-equation propagation and cross-correlation-based migration methods to image weak body-wave signals present in the recordings. Importantly, we have also observed scenarios where non-zero correlation lags show signal while zero-lags show none. This may be a valuable missing piece for ambient migration techniques that have yielded largely inconclusive results, and might be an important piece of information for performing AMVA from ambient seismic recordings.
Improving resolution of crosswell seismic section based on time-frequency analysis
DOE Office of Scientific and Technical Information (OSTI.GOV)
Luo, H.; Li, Y.
1994-12-31
According to signal theory, improving the resolution of a seismic section means extending the high-frequency band of the seismic signal. In a cross-well section, a sonic log can be regarded as a reliable source of high-frequency information for the trace near the borehole; the task is then to introduce this high-frequency information into the whole section. However, neither traditional deconvolution algorithms nor newer inversion methods such as Broad Constraint Inversion (BCI) are satisfactory, because of high-frequency noise and the non-uniqueness of inversion results, respectively. To overcome these disadvantages, this paper presents a new algorithm based on Time-Frequency Analysis (TFA), a technology that has increasingly received attention as a useful signal analysis tool. Practical applications show that the new method is a stable scheme that greatly improves the resolution of cross-well seismic sections without decreasing the signal-to-noise ratio (SNR).
NASA Astrophysics Data System (ADS)
Anggraeni, Novia Antika
2015-04-01
The test of eruption time prediction is an effort to support volcanic disaster mitigation, especially for volcanoes with inhabited slopes such as Merapi Volcano. The test can be conducted by observing increases in volcanic activity, such as seismicity, deformation and SO2 gas emission. One method that can be used to predict the time of eruption is the Materials Failure Forecast Method (FFM), a predictive method for determining the time of volcanic eruption introduced by Voight (1988). The method requires an increase in the rate of change, or an acceleration, of the observed volcanic activity parameters. The parameter used in this study is the seismic energy of Merapi Volcano from 1990 to 2012. The data were plotted as graphs of the inverse seismic energy rate versus time, following the FFM graphical technique, and fitted with simple linear regression. For quality control, the correlation coefficient of the inverse seismic energy rate versus time was used to increase the precision of the predicted time. The graph analysis shows that the predicted eruption times deviate from the actual eruption times by between -2.86 and 5.49 days.
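The FFM graphical technique reduces to fitting a straight line to the inverse rate and extrapolating it to zero; a minimal sketch with a synthetic accelerating precursor (not the Merapi data):

```python
import numpy as np

def ffm_failure_time(t, energy_rate):
    """Materials Failure Forecast Method, graphical (linear) variant:
    fit 1/rate = a + b*t and return the x-intercept -a/b, the time at
    which the inverse rate extrapolates to zero (predicted eruption),
    plus the correlation coefficient used for quality control."""
    inv = 1.0 / np.asarray(energy_rate, dtype=float)
    b, a = np.polyfit(t, inv, 1)        # slope, intercept
    r = np.corrcoef(t, inv)[0, 1]       # linearity check
    return -a / b, r

# synthetic accelerating precursor: rate grows hyperbolically toward t = 30,
# so the inverse rate decays linearly to zero at t = 30
t = np.linspace(0.0, 25.0, 50)
rate = 1.0 / (30.0 - t)
t_pred, r = ffm_failure_time(t, rate)
print(round(float(t_pred), 2))  # 30.0
```

On real data the inverse-rate curve is noisy, which is why the study screens fits by the correlation coefficient before trusting the extrapolated intercept.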
NASA Astrophysics Data System (ADS)
Zuccarello, Luciano; Paratore, Mario; La Rocca, Mario; Ferrari, Ferruccio; Messina, Alfio; Contrafatto, Danilo; Galluzzo, Danilo; Rapisarda, Salvatore
2016-04-01
In volcanic environments, the propagation of seismic signals through the shallowest layers is strongly affected by lateral heterogeneity, attenuation, scattering, and interaction with the free surface. Tracing a seismic ray from the recording site back to the source is therefore a complex matter, with obvious implications for source location. For this reason, knowledge of the shallow velocity structure may improve the location of shallow volcano-tectonic earthquakes and volcanic tremor, thus contributing to improved monitoring of volcanic activity. This work focuses on the analysis of seismic noise and volcanic tremor recorded in 2014 by a temporary array installed around Pozzo Pitarrone, on the NE flank of Mt. Etna. Several methods permit a reliable estimation of the shear-wave velocity in the shallowest layers through the analysis of a stationary random wavefield such as seismic noise. We applied the single-station HVSR method and the SPAC array method to seismic noise to investigate the local shallow structure. The inversion of dispersion curves produced a shear-wave velocity model of the area reliable down to a depth of about 130 m. We also applied the beam-forming array method in the 0.5-4 Hz frequency range to both seismic noise and volcanic tremor. The apparent velocity of coherent tremor signals fits the dispersion curve estimated from the seismic noise analysis quite well, giving a further constraint on the estimated velocity model. Moreover, taking advantage of a borehole station installed at 130 m depth in the same area as the array, we obtained a direct estimate of the P-wave velocity by comparing borehole recordings of local earthquakes with the same events recorded at the surface. Further insight into the P-wave velocity of the upper 130 m comes from the surface-reflected wave visible in some cases at the borehole station. 
From this analysis we obtained an average P-wave velocity of about 1.2 km/s, consistent with the shear-wave velocity found from the seismic noise analysis. To better constrain the inversion we used the HVSR computed at each array station, which also gives lateral extent to the final 3D velocity model. The results indicate that site effects in the investigated area are quite homogeneous among the array stations.
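The single-station HVSR measurement used above can be sketched in a few lines: the ratio of the mean horizontal to vertical amplitude spectra of three-component noise. The example below runs on a synthetic record with an artificial 2 Hz horizontal resonance; real processing adds windowing, averaging over many segments and more careful smoothing:

```python
import numpy as np

def smooth(x, w=9):
    """Simple moving-average spectral smoothing."""
    return np.convolve(x, np.ones(w) / w, mode="same")

def hvsr(z, n, e, dt):
    """Single-station H/V spectral ratio: mean horizontal amplitude
    spectrum divided by the vertical one."""
    Z = smooth(np.abs(np.fft.rfft(z)))
    H = smooth(np.sqrt(0.5 * (np.abs(np.fft.rfft(n)) ** 2
                              + np.abs(np.fft.rfft(e)) ** 2)))
    freqs = np.fft.rfftfreq(len(z), dt)
    return freqs, H / np.maximum(Z, 1e-12)

# toy record: a horizontal resonance at 2 Hz buried in white noise
rng = np.random.default_rng(1)
dt, npts = 0.01, 4096
t = np.arange(npts) * dt
z = rng.standard_normal(npts)
n = rng.standard_normal(npts) + 5.0 * np.sin(2 * np.pi * 2.0 * t)
e = rng.standard_normal(npts) + 5.0 * np.sin(2 * np.pi * 2.0 * t)
freqs, ratio = hvsr(z, n, e, dt)
print(freqs[np.argmax(ratio)])  # peak frequency, near the 2 Hz resonance
```

The frequency of the H/V peak is what constrains the resonant thickness/velocity of the soft layer; computing it at every array station is what gave the study its lateral control on the 3D model.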
Ren, Zhikun; Zhang, Zhuqi; Dai, Fuchu; Yin, Jinhui; Zhang, Huiping
2013-01-01
Hillslope instability is thought to be one of the most important factors in landslide susceptibility. In this study, we apply geomorphic analysis using multi-temporal DEM data together with shaking-intensity analysis to evaluate the topographic characteristics of the landslide areas. Geomorphic measures such as roughness and slope aspect are as useful as slope analysis. The analyses indicate that most of the co-seismic landslides occurred in regions with roughness >1.2, hillslope >30° and slope aspect between 90° and 270°. The intersection of the regions delineated by these three measures is more accurate than the region derived from any single topographic analysis. The ground motion data indicate that the co-seismic landslides occurred mainly on the hanging-wall side of the Longmen Shan thrust belt, within the vertical and horizontal peak ground acceleration (PGA) contours of 150 gal and 200 gal, respectively. Comparison of pre- and post-earthquake DEM data indicates that areas of medium roughness and slope increased while the roughest and steepest regions decreased after the Wenchuan earthquake; slope aspect barely changed. Our results indicate that co-seismic landslides occurred mainly in regions of high roughness on steep, south-facing slopes under strong ground motion. Co-seismic landslides significantly modified the local topography, especially hillslope and roughness: the roughest relief and steepest slopes were significantly smoothed, whereas medium relief and slopes became rougher and steeper, respectively.
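The DEM-based metrics used above (slope, aspect, roughness) can be computed with finite differences on the elevation grid; a minimal sketch, using one of several possible aspect and roughness conventions (the row axis is assumed to point north):

```python
import numpy as np

def terrain_metrics(dem, cell=30.0):
    """Slope (deg), aspect (deg clockwise from north, direction of
    steepest descent) and a simple roughness measure (true surface area
    over planar area per cell) from a gridded DEM. Conventions vary;
    this is one illustrative choice."""
    dzdy, dzdx = np.gradient(dem, cell)     # axis 0 = north, axis 1 = east
    slope = np.degrees(np.arctan(np.hypot(dzdx, dzdy)))
    aspect = (np.degrees(np.arctan2(-dzdx, dzdy)) + 360.0) % 360.0
    roughness = np.sqrt(1.0 + dzdx ** 2 + dzdy ** 2)
    return slope, aspect, roughness

# toy DEM: a plane rising 1:1 toward the east on a 30 m grid
x = np.arange(8) * 30.0
dem = np.tile(x, (8, 1))
slope, aspect, rough = terrain_metrics(dem)
print(round(float(slope[4, 4]), 1))    # 45.0
print(round(float(aspect[4, 4]), 1))   # 270.0 (west-facing downslope)
print(round(float(rough[4, 4]), 3))    # 1.414 (sqrt 2)
```

Thresholding these grids (roughness >1.2, slope >30°, aspect 90-270°) and intersecting the resulting masks reproduces the kind of combined susceptibility map the study describes.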
NASA Astrophysics Data System (ADS)
Setiawan, Jody; Nakazawa, Shoji
2017-10-01
This paper compares the seismic response behavior, seismic performance and seismic loss functions of a conventional special moment frame steel structure (SMF) and a special moment frame steel structure with base isolation (BI-SMF). The proposed simplified method for estimating the maximum deformation of the base isolation system by the equivalent linearization method, and the design shear force of the superstructure, are validated against results of nonlinear dynamic response analysis. In recent years, steel office buildings with seismic isolation systems have been constructed even in Indonesia, where earthquake risk is high. Although a design code for seismically isolated structures has been proposed, there is no built example of a special moment frame steel structure with base isolation. Therefore, in this research, the SMF and BI-SMF buildings are designed to the Indonesian building code, assuming a site in Padang City, Indonesia. The base isolation devices are high-damping rubber bearings. Dynamic eigenvalue analysis and nonlinear dynamic response analysis are carried out to show the dynamic characteristics and seismic performance. In addition, the seismic loss function is obtained from damage state probabilities and repair costs. For the response analysis, simulated ground accelerations are used that have the phases of recorded seismic waves (El Centro NS, El Centro EW, Kobe NS and Kobe EW) and are adapted to the response spectrum prescribed by the Indonesian design code.
Slope Stability Analysis In Seismic Areas Of The Northern Apennines (Italy)
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lo Presti, D.; Fontana, T.; Marchetti, D.
2008-07-08
Several research works have been published on slope stability in northern Tuscany (central Italy), particularly in the seismic areas of Garfagnana and Lunigiana (Lucca and Massa-Carrara districts), aimed at analysing slope stability under static and dynamic conditions and mapping the landslide hazard. In addition, in situ and laboratory investigations are available for the study area, thanks to the activities undertaken by the Tuscany Seismic Survey. Based on this wealth of information, the co-seismic stability of a few idealized slope profiles has been analysed by means of the limit equilibrium method (LEM, pseudo-static) and Newmark sliding-block analysis (pseudo-dynamic). The analysis results gave indications of the most appropriate seismic coefficient to be used in pseudo-static analysis after establishing an allowable permanent displacement. These indications are discussed in the light of the Italian and European prescriptions for seismic stability analysis with the pseudo-static approach. The stability conditions obtained from these analyses could be used to define microzonation criteria for the study area.
Rippability Assessment of Weathered Sedimentary Rock Mass using Seismic Refraction Methods
NASA Astrophysics Data System (ADS)
Ismail, M. A. M.; Kumar, N. S.; Abidin, M. H. Z.; Madun, A.
2018-04-01
Rippability, or ease of excavation, in sedimentary rocks is a significant aspect of the preliminary work of any civil engineering project. A rippability assessment was performed in this study to select an available ripping machine to rip off earth materials using the seismic velocity chart provided by Caterpillar. The research area is located at the proposed construction site for the development of a water reservoir and related infrastructure in Kampus Pauh Putra, Universiti Malaysia Perlis. The research was aimed at obtaining the seismic P-wave velocity (Vp) using a seismic refraction method to produce a 2D tomography model. The 2D seismic model was used to delineate the layers in the velocity profile. The conventional geotechnical method of using a borehole was integrated with the seismic velocity method to provide an appropriate correlation. The correlated data can be used to categorize machinery for excavation activities based on the available systematic analysis procedure for predicting rock rippability. The seismic velocity profile obtained was used to interpret rock layers within the ranges labelled as rippable, marginal, and non-rippable. Based on the seismic velocity method, the site can be classified into loose sandstone to moderately weathered rock. Laboratory test results show that the site's rock material falls between low strength and high strength. The results suggest that Caterpillar's smallest ripper, the D8R, can successfully excavate the materials, based on the integrated results of the seismic velocity method and the laboratory tests.
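The classification step reduces to comparing Vp against machine-specific thresholds. The cutoff values below are illustrative placeholders only; the actual Caterpillar chart is machine-specific and should be consulted directly.

```python
def rippability(vp, rippable_max=1500.0, marginal_max=2000.0):
    """Classify a layer by P-wave velocity (m/s).

    The two thresholds are hypothetical examples, not values from the
    Caterpillar chart or from this study.
    """
    if vp <= rippable_max:
        return "rippable"
    if vp <= marginal_max:
        return "marginal"
    return "non-rippable"
```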
A Novel Approach to Constrain Near-Surface Seismic Wave Speed Based on Polarization Analysis
NASA Astrophysics Data System (ADS)
Park, S.; Ishii, M.
2016-12-01
Understanding the seismic responses of cities around the world is essential for the risk assessment of earthquake hazards. One of the important parameters is the elastic structure of the sites, in particular the near-surface seismic wave speed, which influences the level of ground shaking. Many methods have been developed to constrain the elastic structure of populated sites or urban basins, and here we introduce a new technique based on analyzing the polarization content, i.e., the three-dimensional particle motion, of seismic phases arriving at the sites. Polarization analysis of three-component seismic data was widely used up to about two decades ago to detect signals and identify different types of seismic arrivals. Today, we have a good understanding of the expected polarization direction and ray parameter for seismic wave arrivals calculated from a reference seismic model. The polarization of a given phase is also strongly sensitive to the elastic wave speed immediately beneath the station. This allows us to compare the observed and predicted polarization directions of incoming body waves and infer the near-surface wave speed. The approach is applied to the High-Sensitivity Seismograph Network in Japan, where we benchmark the results against the well-log data that are available at most stations. There is good agreement between our estimates of seismic wave speeds and those from well logs, confirming the efficacy of the new method. In most urban environments, where well logging is not a practical option for measuring seismic wave speeds, this method can provide a reliable, non-invasive, and computationally inexpensive estimate of near-surface elastic properties.
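The basic polarization measurement underlying such a technique can be sketched as an eigen-decomposition of the three-component covariance matrix: the principal eigenvector gives the dominant particle-motion direction, whose apparent incidence angle constrains the wave speed beneath the station. This is a minimal sketch, not the paper's actual processing chain.

```python
import numpy as np

def polarization_direction(z, n, e):
    """Dominant particle-motion direction from three-component data.

    Returns the principal eigenvector of the covariance matrix (ordered
    Z, N, E) and the apparent incidence angle from vertical in degrees.
    """
    data = np.vstack([z, n, e])
    data = data - data.mean(axis=1, keepdims=True)
    cov = data @ data.T
    _, vecs = np.linalg.eigh(cov)
    u = vecs[:, -1]                # eigenvector of the largest eigenvalue
    if u[0] < 0:                   # fix the sign so Z points upward
        u = -u
    incidence = np.degrees(np.arccos(u[0]))
    return u, incidence
```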
Multi-Phenomenological Analysis of the 12 August 2015 Tianjin, China Chemical Explosion
NASA Astrophysics Data System (ADS)
Pasyanos, M.; Kim, K.; Park, J.; Stump, B. W.; Hayward, C.; Che, I. Y.; Zhao, L.; Myers, S. C.
2016-12-01
We perform a multi-phenomenological analysis of the massive near-surface chemical explosions that occurred in Tianjin, China on 12 August 2015. A recent assessment of these events was performed by Zhao et al. (2016) using local (< 100 km) seismic data. This study considers a regional assessment of the same sequence in the absence of any local data. We provide additional insight by combining regional seismic analysis with the use of infrasound signals and an assessment of the event crater. Event locations using infrasound signals recorded at Korean and IMS arrays are estimated with the Bayesian Infrasonic Source Location (BISL) method (Modrak et al., 2010) and improved with azimuthal corrections using raytracing (Blom and Waxler, 2012) and Ground-to-Space (G2S) atmospheric models (Drob et al., 2003). The location information provided by the infrasound signals is then merged with the regional seismic arrivals to produce a joint event location. The yields of the events are estimated from seismic and infrasonic observations: a seismic waveform envelope method (Pasyanos et al., 2012) including the free-surface effect (Pasyanos and Ford, 2015) is applied to the regional seismic signals, and a waveform inversion method (Kim and Rodgers, 2016) is used for the infrasound signals. A combination of the seismic and acoustic signals can provide insight into the energy partitioning and break the trade-offs between the yield and the depth/height of the explosions, resulting in a more robust estimate of event yield. The yield information from the different phenomenologies is combined through the use of likelihood functions.
Numerical simulation of bubble plumes and an analysis of their seismic attributes
NASA Astrophysics Data System (ADS)
Li, Canping; Gou, Limin; You, Jiachun
2017-04-01
To study the seismic response characteristics of bubble plumes, a model of a plume water body is built in this article using a bubble-bearing-medium acoustic velocity model and stochastic medium theory, based on an analysis of both the acoustic characteristics of a bubble-bearing water body and the actual features of a plume. The finite difference method is used for forward modelling, and the single-shot seismic record exhibits the characteristics of the scattered wave field generated by a plume. A meaningful conclusion is obtained by extracting seismic attributes from the pre-stack shot-gather record of a plume: the values of the amplitude-related seismic attributes increase greatly as the bubble content goes up, whereas changes in bubble radius do not cause the seismic attributes to change. This is primarily because the bubble content has a strong impact on the plume's acoustic velocity, while the bubble radius has a weak impact. The above conclusion provides a theoretical reference for identifying hydrate plumes using seismic methods and contributes to further study of hydrate decomposition and migration, as well as of the distribution of methane bubbles in seawater.
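The key observation, velocity controlled by gas content rather than bubble radius, is consistent with the low-frequency Wood's-equation limit for a bubbly liquid, sketched below. The abstract does not specify the paper's actual velocity model, so this is a standard textbook approximation with nominal property values.

```python
import math

def wood_velocity(gas_frac, rho_w=1025.0, k_w=2.25e9, rho_g=1.2, k_g=1.42e5):
    """Low-frequency sound speed (m/s) of a bubbly water mixture.

    Wood's equation: volume-weighted averages of compressibility and density.
    Note the result depends on the gas volume fraction, not the bubble radius.
    """
    k_mix_inv = gas_frac / k_g + (1.0 - gas_frac) / k_w  # mixture compressibility
    rho_mix = gas_frac * rho_g + (1.0 - gas_frac) * rho_w
    return math.sqrt(1.0 / (k_mix_inv * rho_mix))
```

Even a 1% gas fraction collapses the velocity from roughly 1480 m/s to on the order of 100 m/s, which is why amplitude-related attributes respond so strongly to bubble content.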
NASA Astrophysics Data System (ADS)
Ren, Luchuan
2015-04-01
A Global Sensitivity Analysis Method on Maximum Tsunami Wave Heights to Potential Seismic Source Parameters
Luchuan Ren, Jianwei Tian, Mingli Hong (Institute of Disaster Prevention, Sanhe, Hebei Province, 065201, P.R. China)
It is obvious that the uncertainties of the maximum tsunami wave heights in offshore areas come partly from uncertainties of the potential seismic tsunami source parameters. A global sensitivity analysis method relating the maximum tsunami wave heights to the potential seismic source parameters is put forward in this paper. The tsunami wave heights are calculated with COMCOT (the Cornell Multi-grid Coupled Tsunami Model), on the assumption that an earthquake of magnitude MW 8.0 occurred at the northern fault segment along the Manila Trench and triggered a tsunami in the South China Sea. We select the simulated maximum tsunami wave heights at specific offshore sites to verify the validity of the method proposed in this paper. To rank the importance of the uncertainties of the potential seismic source parameters (the earthquake's magnitude, focal depth, strike angle, dip angle, slip angle, etc.) in generating uncertainties of the maximum tsunami wave heights, we chose the Morris method to analyze the sensitivity of the maximum tsunami wave heights to these parameters and give several qualitative descriptions of their nonlinear or linear effects on the maximum tsunami wave heights. We then quantitatively analyze the sensitivity of the maximum tsunami wave heights to these parameters, and the interaction effects among them, by means of the extended FAST method.
The results show that the maximum tsunami wave heights are very sensitive to the earthquake magnitude, followed successively by the epicenter location, the strike angle, and the dip angle; that the interaction effects between the sensitive parameters are pronounced at specific offshore sites; and that the importance order for the same group of parameters differs between offshore sites. These results are helpful for a deeper understanding of the relationship between tsunami wave heights and seismic tsunami source parameters. Keywords: Global sensitivity analysis; Tsunami wave height; Potential seismic tsunami source parameter; Morris method; Extended FAST method
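The screening idea behind the Morris method can be sketched with one-at-a-time elementary effects. This is a simplified radial design in the spirit of Morris (not the full trajectory scheme), and a toy analytic surrogate stands in for the COMCOT tsunami simulations.

```python
import random

def toy_height(x):
    """Hypothetical wave-height surrogate: magnitude dominates by design."""
    mag, depth, strike = x
    return mag ** 2 + 0.1 * depth + 0.01 * strike

def morris_mean_effects(model, k=3, trajectories=20, delta=0.1, seed=0):
    """Mean absolute elementary effect of each of k unit-scaled parameters."""
    rng = random.Random(seed)
    effects = [[] for _ in range(k)]
    for _ in range(trajectories):
        x = [rng.random() * (1.0 - delta) for _ in range(k)]
        base = model(x)
        for i in range(k):
            xp = list(x)
            xp[i] += delta
            effects[i].append(abs(model(xp) - base) / delta)
    return [sum(e) / len(e) for e in effects]
```

Ranking the mean effects reproduces the kind of importance ordering the abstract reports (magnitude first, weaker parameters after).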
Ivanov, Julian M.; Johnson, Carole D.; Lane, John W.; Miller, Richard D.; Clemens, Drew
2009-01-01
A limited seismic investigation of Ball Mountain Dam, an earthen dam near Jamaica, Vermont, was conducted using multiple seismic methods including multi‐channel analysis of surface waves (MASW), refraction tomography, and vertical seismic profiling (VSP). The refraction and MASW data were efficiently collected in one survey using a towed land streamer containing vertical‐displacement geophones and two seismic sources, a 9‐kg hammer at the beginning of the spread and a 40‐kg accelerated weight drop one spread length from the geophones, to obtain near‐ and far‐offset data sets. The quality of the seismic data for the purposes of both refraction and MASW analyses was good for near offsets, decreasing in quality at farther offsets, thus limiting the depth of investigation to about 12 m. Refraction tomography and MASW analyses provided 2D compressional (Vp) and shear‐wave (Vs) velocity sections along the dam crest and access road, which are consistent with the corresponding VSP seismic velocity estimates from nearby wells. The velocity sections helped identify zonal variations in both Vp and Vs (rigidity) properties, indicative of material heterogeneity or dynamic processes (e.g. differential settlement) at specific areas of the dam. The results indicate that refraction tomography and MASW methods are tools with significant potential for economical, non‐invasive characterization of construction materials at earthen dam sites.
Gas Reservoir Identification Based on Deep Learning of Seismic-print Characteristics
NASA Astrophysics Data System (ADS)
Cao, J.; Wu, S.; He, X.
2016-12-01
Reservoir identification based on seismic data analysis is the core task in oil and gas geophysical exploration. The essence of reservoir identification is to identify the properties of the rock pore fluid. We developed a novel gas reservoir identification method named seismic-print analysis, by imitation of the vocal-print analysis techniques used in speaker identification. The term "seismic-print" refers to the characteristics of the seismic waveform that can definitively identify the property of a geological objective, for instance a natural gas reservoir. A seismic-print can be characterized by one or a few parameters, named seismic-print parameters. It has been shown that gas reservoirs exhibit a negative first-order cepstrum coefficient anomaly and a positive second-order cepstrum coefficient anomaly concurrently. The method is valid for sandstone gas reservoirs, carbonate reservoirs, and shale gas reservoirs, and the accuracy rate may reach up to 90%. There are two main problems to deal with in the application of the seismic-print analysis method: one is to identify the "ripple" of a reservoir on the seismogram, and the other is to construct the mapping relationship between the seismic-print and the gas reservoirs. Deep learning, developed in recent years, has the ability to reveal the complex non-linear relationship between attributes and data, and to extract automatically the features of the objective from the data. Thus, deep learning can be used to deal with these two problems. There are many algorithms for deep learning, which can be roughly divided into two categories: Deep Belief Networks (DBNs) and Convolutional Neural Networks (CNNs). A DBN is a probabilistic generative model that can establish a joint distribution of the observed data and labels. A CNN is a feedforward neural network that can be used to extract 2D structural features of the input data.
Both DBNs and CNNs can be used to deal with seismic data. We used an improved DBN to identify carbonate rocks from log data; the accuracy rate reaches up to 83%. When DBNs are applied to seismic waveform data, more information is obtained. The work was supported by NSFC under grants No. 41430323 and No. 41274128, and by the State Key Lab. of Oil and Gas Reservoir Geology and Exploration.
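The seismic-print parameters are described as low-order cepstrum coefficients. A minimal sketch of computing them from a windowed trace follows; the exact windowing, normalization, and coefficient definition used in the method are not given in the abstract.

```python
import numpy as np

def real_cepstrum(trace, n_coeff=4, eps=1e-12):
    """First few real-cepstrum coefficients of a seismic trace window.

    The real cepstrum is the inverse FFT of the log amplitude spectrum;
    its low-order coefficients summarize the spectral envelope. eps guards
    against log(0) for spectrally sparse inputs.
    """
    spectrum = np.abs(np.fft.fft(trace)) + eps
    ceps = np.fft.ifft(np.log(spectrum)).real
    return ceps[:n_coeff]
```

Anomaly maps of the first- and second-order coefficients along a horizon would then feed the classifier, in the spirit of the method described above.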
Instantaneous Frequency Attribute Comparison
NASA Astrophysics Data System (ADS)
Yedlin, M. J.; Margrave, G. F.; Ben Horin, Y.
2013-12-01
The instantaneous frequency attribute provides a different means of seismic interpretation for all types of seismic data. It first came to the fore in exploration seismology in the classic paper of Taner et al. (1979), entitled "Complex seismic trace analysis". Subsequently, a vast literature has accumulated on the subject, which was given an excellent review by Barnes (1992). In this research we compare two different methods of computing the instantaneous frequency. The first method is based on the original idea of Taner et al. (1979) and utilizes the derivative of the instantaneous phase of the analytic signal. The second method is based on the computation of the power centroid of the time-frequency spectrum, obtained using either the Gabor transform as computed by Margrave et al. (2011) or the Stockwell transform as described by Stockwell et al. (1996). We apply both methods to exploration seismic data and to the DPRK events recorded in 2006 and 2013. In applying the classical analytic-signal technique, which is known to be unstable due to division by the square of the envelope, we incorporate the stabilization and smoothing method proposed in the two papers of Fomel (2007). This method employs linear inverse-theory regularization coupled with the application of an appropriate data smoother. The centroid method's application is straightforward and is based on the very complete theoretical analysis provided in elegant fashion by Cohen (1995). While the results of the two methods are very similar, noticeable differences are seen at the data edges. This is most likely due to the edge effects of the smoothing operator in the Fomel method, which is more computationally intensive when an optimal search for the regularization parameter is done. An advantage of the centroid method is the intrinsic smoothing of the data, which is inherent in the sliding-window application used in all short-time Fourier transform methods.
The Fomel technique has a larger CPU run-time, resulting from the necessary matrix inversion. Barnes, Arthur E. "The calculation of instantaneous frequency and instantaneous bandwidth." Geophysics 57.11 (1992): 1520-1524. Fomel, Sergey. "Local seismic attributes." Geophysics 72.3 (2007): A29-A33. Fomel, Sergey. "Shaping regularization in geophysical-estimation problems." Geophysics 72.2 (2007): R29-R36. Stockwell, Robert Glenn, Lalu Mansinha, and R. P. Lowe. "Localization of the complex spectrum: the S transform." IEEE Transactions on Signal Processing 44.4 (1996): 998-1001. Taner, M. Turhan, Fulton Koehler, and R. E. Sheriff. "Complex seismic trace analysis." Geophysics 44.6 (1979): 1041-1063. Cohen, Leon. Time Frequency Analysis: Theory and Applications. USA: Prentice Hall, 1995. Margrave, Gary F., Michael P. Lamoureux, and David C. Henley. "Gabor deconvolution: Estimating reflectivity by nonstationary deconvolution of seismic data." Geophysics 76.3 (2011): W15-W30.
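The two estimators being compared can be sketched as follows. The stabilization here is a simple epsilon term in the spirit of Fomel's regularization, not his actual shaping-regularization scheme, and a plain short-time Fourier transform stands in for the Gabor and Stockwell transforms.

```python
import numpy as np

def analytic_signal(x):
    """Analytic signal via the frequency-domain (FFT) Hilbert transform."""
    n = len(x)
    X = np.fft.fft(x)
    h = np.zeros(n)
    h[0] = 1.0
    h[1:(n + 1) // 2] = 2.0
    if n % 2 == 0:
        h[n // 2] = 1.0
    return np.fft.ifft(X * h)

def inst_freq_phase(x, dt, eps=1e-8):
    """Method 1: derivative of the instantaneous phase, eps-stabilized."""
    z = analytic_signal(x)
    re, im = z.real, z.imag
    dre = np.gradient(re, dt)
    dim = np.gradient(im, dt)
    env2 = re ** 2 + im ** 2        # squared envelope (the unstable divisor)
    return (re * dim - im * dre) / (2 * np.pi * (env2 + eps * env2.max()))

def inst_freq_centroid(x, dt, win=64):
    """Method 2: power centroid of a short-time Fourier spectrum."""
    freqs = np.fft.rfftfreq(win, dt)
    out = []
    for i in range(0, len(x) - win + 1, win // 2):
        p = np.abs(np.fft.rfft(x[i:i + win])) ** 2
        out.append((freqs * p).sum() / p.sum())
    return np.array(out)
```

For a monochromatic input both estimators agree; the differences the abstract notes appear at data edges and for broadband signals.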
NASA Astrophysics Data System (ADS)
Zhao, J. K.; Xu, X. S.
2017-11-01
The column-cutting and jacking technology is a method for increasing story height that has been widely used and has attracted much attention in engineering. The stiffness changes after the process of cutting off columns and jacking, which directly affects the overall seismic performance, so it is usually necessary to take seismic strengthening measures to enhance the stiffness. A five-story frame structure jacking project in Jinan High-tech Zone was taken as an example, and three finite element models were established: the frame before lifting, after lifting, and after strengthening. Based on their stiffness, dynamic time-history analysis was carried out to study the seismic performance under the El Centro seismic wave, the Taft seismic wave, and a Tianjin artificial seismic wave. The research can provide guidance for the design and construction of entire jack-lifting structures.
NASA Astrophysics Data System (ADS)
Cheng, Fei; Liu, Jiangping; Wang, Jing; Zong, Yuquan; Yu, Mingyu
2016-11-01
A boulder stone, a common geological feature in south China, refers to the remnant of a granite body that has been unevenly weathered. Undetected boulders can adversely impact the schedule and safety of subway construction when the tunnel boring machine (TBM) method is used. Therefore, boulder detection has always been a key issue that must be resolved before construction. Cross-hole seismic tomography is a high-resolution technique capable of boulder detection; however, the method can only solve for velocity in a 2-D slice between two wells, and the size and central position of the boulder are generally difficult to obtain accurately. In this paper, the authors conduct a multi-hole wave field simulation and characteristic analysis of a boulder model based on 3-D elastic-wave staggered-grid finite difference theory, together with a 2-D imaging analysis based on first-arrival travel times. The results indicate that (1) full wave field records can be obtained from multi-hole seismic wave simulations; the simulation results describe the seismic wave propagation pattern around cross-hole high-velocity spherical geological bodies in detail and can serve as a basis for wave field analysis; and (2) when a cross-hole seismic section cuts through the boulder, the proposed method provides satisfactory tomography results, but when the section is positioned close to the boulder, such a high-velocity object in 3-D space affects the surrounding wave field: the received diffracted wave interferes with the primary wave, so the picked first-arrival travel time is not derived from the profile, which results in a false appearance of high-velocity features. Finally, the results of the 2-D analysis in the 3-D modeling space are compared with physical model tests with respect to the effect of a high-velocity body on seismic tomographic measurements.
Structural vibration passive control and economic analysis of a high-rise building in Beijing
NASA Astrophysics Data System (ADS)
Chen, Yongqi; Cao, Tiezhu; Ma, Liangzhe; Luo, Chaoying
2009-12-01
Performance analysis of the Pangu Plaza under earthquake and wind loads is described in this paper. The plaza is a 39-story steel high-rise building, 191 m high, located in Beijing close to the 2008 Olympic main stadium. It has both fluid viscous dampers (FVDs) and buckling restrained braces or unbonded brace (BRB or UBB) installed. A repeated iteration procedure in its design and analysis was adopted for optimization. Results from the seismic response analysis in the horizontal and vertical directions show that the FVDs are highly effective in reducing the response of both the main structure and the secondary system. A comparative analysis of structural seismic performance and economic impact was conducted using traditional methods, i.e., increased size of steel columns and beams and/or use of an increased number of seismic braces versus using FVD. Both the structural response and economic analysis show that using FVD to absorb seismic energy not only satisfies the Chinese seismic design code for a “rare” earthquake, but is also the most economical way to improve seismic performance both for one-time direct investment and long term maintenance.
Discrimination of porosity and fluid saturation using seismic velocity analysis
Berryman, James G.
2001-01-01
The method of the invention is employed for determining the state of saturation in a subterranean formation using only seismic velocity measurements (e.g., shear and compressional wave velocity data). Seismic velocity data collected from a region of the formation of like solid material properties can provide relatively accurate partial saturation data derived from a well-defined triangle plotted in a (ρ/μ, λ/μ)-plane. When the seismic velocity data are collected over a large region of a formation having both like and unlike materials, the method first distinguishes the like materials by initially plotting the seismic velocity data in a (ρ/λ, μ/λ)-plane to determine regions of the formation having like solid material properties and porosity.
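The plotted coordinates follow from the standard relations μ = ρVs² and λ = ρ(Vp² − 2Vs²); conveniently, every ratio used by the method can be formed from the measured velocities alone, with the density cancelling out. A minimal sketch:

```python
def lame_ratios(vp, vs):
    """Ratio coordinates (rho/mu, lam/mu, rho/lam, mu/lam) from velocities.

    Using mu = rho*Vs^2 and lam = rho*(Vp^2 - 2*Vs^2), density cancels in
    every ratio, so no density estimate is required.  rho/mu and rho/lam
    carry units of 1/velocity^2.
    """
    vs2 = vs * vs
    lam_unit = vp * vp - 2.0 * vs2   # lam / rho
    lam_over_mu = lam_unit / vs2
    return 1.0 / vs2, lam_over_mu, 1.0 / lam_unit, 1.0 / lam_over_mu
```

Plotting measured (Vp, Vs) pairs in these coordinate planes, as the patent describes, then reveals the saturation triangle and groups formations of like solid properties.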
DOE Office of Scientific and Technical Information (OSTI.GOV)
Tibuleac, Ileana
2016-06-30
A new, cost-effective and non-invasive exploration method using ambient seismic noise has been tested at Soda Lake, NV, with promising results. The material included in this report demonstrates that, with the advantage of initial S-velocity models estimated from ambient-noise surface waves, the seismic reflection survey, although of lower resolution, reproduces the results of the active survey when the ambient seismic noise is not contaminated by strong cultural noise. The ambient-noise resolution is lower at depth (below 1000 m) compared to the active survey. In general, the results are promising, and useful information can be recovered from ambient seismic noise, including dipping features and fault locations.
NASA Astrophysics Data System (ADS)
Capuano, P.; De Lauro, E.; De Martino, S.; Falanga, M.
2016-04-01
This work is devoted to the analysis of seismic signals continuously recorded at Campi Flegrei Caldera (Italy) during the entire year 2006. The radiation pattern associated with the Long-Period energy release is investigated. We adopt an innovative Independent Component Analysis algorithm for convolutive seismic series, adapted and improved to give automatic procedures for detecting seismic events often buried in the high-level ambient noise. The extracted waveforms, characterized by an improved signal-to-noise ratio, allow the recognition of Long-Period precursors, evidencing that the seismic activity accompanying the mini-uplift crisis (in 2006), which climaxed in the three days from 26-28 October, had already started at the beginning of October and lasted until mid-November. Hence, a more complete seismic catalog is provided, which can be used to properly quantify the seismic energy release. To better ground our results, we first check the robustness of the method by comparing it with other blind source separation methods based on higher-order statistics; secondly, we reconstruct the radiation patterns of the extracted Long-Period events in order to link the individuated signals directly to the sources. We take advantage of Convolutive Independent Component Analysis, which provides basic signals along the three directions of motion so that a direct polarization analysis can be performed with no other filtering procedures. We show that the extracted signals are mainly composed of P waves with radial polarization pointing to the seismic source of the main LP swarm, i.e. a small area in the Solfatara, also in the case of the small events that both precede and follow the main activity. From a dynamical point of view, they can be described by two degrees of freedom, indicating a low level of complexity associated with the vibrations from a superficial hydrothermal system.
Our results allow us to move towards a full description of the complexity of the source, which can be used, by means of the small-intensity precursors, for hazard-model development and forecast-model testing, showing an illustrative example of the applicability of the CICA method to regions with low seismicity in high ambient noise.
NASA Astrophysics Data System (ADS)
Uno, Kunihiko; Otsuka, Hisanori; Mitou, Masaaki
Pile foundations are heavily damaged during earthquakes at the boundary between ground types, liquefied and non-liquefied ground, and there is a possibility of collapse of the piles. In this study, we conduct a shaking table test and effective stress analysis of the influence of soil liquefaction and the seismic inertial force exerted on the pile foundation. When the intermediate part of the pile lies at this boundary, the section forces there can in certain instances exceed those at the pile head. Further, we develop a seismic resistance method for pile foundations in liquefiable ground using seismic isolation rubber, and show that the middle-part seismic isolation system is very effective.
NASA Astrophysics Data System (ADS)
Liang, Fayun; Chen, Haibing; Huang, Maosong
2017-07-01
To provide appropriate uses of nonlinear ground response analysis in engineering practice, a three-dimensional soil column with a distributed mass system and a time-domain numerical analysis were implemented on the OpenSees simulation platform. The standard mesh of the three-dimensional soil column was chosen to satisfy the specified maximum frequency. The layered soil column was divided into multiple sub-soils with different viscous damping matrices according to their shear velocities, as the soil properties differed significantly. It was necessary to use a combination of other one-dimensional or three-dimensional nonlinear seismic ground analysis programs to confirm the applicability of the nonlinear seismic ground motion response analysis procedure in soft soil or for strong earthquakes. The accuracy of the three-dimensional soil column finite element method was verified by dynamic centrifuge model testing under different peak accelerations of the earthquake. As a result, nonlinear seismic ground motion response analysis procedures were improved in this study. The accuracy and efficiency of the three-dimensional seismic ground response analysis can be adapted to the requirements of engineering practice.
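The mesh criterion tied to a maximum frequency is conventionally a nodes-per-wavelength rule: the element size must resolve the shortest shear wavelength, lambda_min = Vs_min / f_max. The abstract does not state the exact criterion used, so the factor below is the common rule-of-thumb value.

```python
def max_element_size(vs_min, f_max, nodes_per_wavelength=8):
    """Upper bound on element size h (m) so the mesh resolves f_max.

    Rule of thumb: h <= lambda_min / n with lambda_min = vs_min / f_max and
    n ~ 8-10 nodes per wavelength (n is an assumed value, not the paper's).
    """
    return vs_min / (f_max * nodes_per_wavelength)
```

For example, a soft layer with Vs = 200 m/s analyzed up to 25 Hz requires elements no larger than about 1 m under this rule.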
A Comprehensive Seismic Characterization of the Cove Fort-Sulphurdale Geothermal Site, Utah
NASA Astrophysics Data System (ADS)
Zhang, H.; Li, J.; Zhang, X.; Liu, Y.; Kuleli, H. S.; Toksoz, M. N.
2012-12-01
The Cove Fort-Sulphurdale geothermal area is located in the transition zone between the extensional Basin and Range Province to the west and the uplifted Colorado Plateau to the east. The region around the geothermal site has the highest heat flow values of over 260 mWm-2 in Utah. To better understand the structure around the geothermal site, the MIT group deployed 10 seismic stations for a period of one year from August 2010. The local seismic network detected over 500 local earthquakes, from which ~200 events located within the network were selected for further analysis. Our seismic analysis is focused on three aspects: seismic velocity and attenuation tomography, seismic event focal mechanism analysis, and seismic shear wave splitting analysis. First P- and S-wave arrivals are picked manually and then the waveform cross-correlation technique is applied to obtain more accurate differential times between event pairs observed on common stations. The double-difference tomography method of Zhang and Thurber (2003) is used to simultaneously determine Vp and Vs models and seismic event locations. For the attenuation tomography, we first calculate t* values from spectrum fitting and then invert them to get Q models based on known velocity models and seismic event locations. Due to the limited station coverage and relatively low signal to noise ratio, many seismic waveforms do not have clear first P arrival polarities and as a result the conventional focal mechanism determination method relying on the polarity information is not applicable. Therefore, we used the full waveform matching method of Li et al. (2010) to determine event focal mechanisms. For the shear wave splitting analysis, we used the cross-correlation method to determine the delay times between fast and slow shear waves and the polarization angles of fast shear waves. 
The delay times are further used to image the anisotropy percentage distribution in three dimensions using the shear wave splitting tomography method of Zhang et al. (2007). For the study region, the velocity is overall lower and the attenuation higher in the western part. Correspondingly, the anisotropy is also stronger there, indicating that fractures may be more developed in the western part. The average polarization directions of the fast shear waves at each station mostly point NNE. Focal mechanism analysis of selected events shows that the normal-faulting events have strikes in the NNE direction, and the events with strike-slip mechanisms have strikes either parallel to the NNE-trending faults or to their conjugates. Assuming the maximum horizontal stress (SHmax) is parallel to the strike of the normal-faulting events and bisects the two fault planes of the strike-slip events, the inverted source mechanisms suggest an NNE-oriented maximum horizontal stress regime. This area is under W-E tensional stress, which means the maximum compressional stress should be in the NE or NNE direction in general. The combination of shear wave splitting and focal mechanism analysis suggests that in this region the faults and fractures are aligned in the NNE direction.
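The cross-correlation measurement of the splitting delay can be sketched as below; the grid search over the fast-axis rotation angle, which is part of the full analysis, is omitted here for brevity.

```python
import numpy as np

def splitting_delay(fast, slow, dt):
    """Delay (s) of the slow shear wave relative to the fast one.

    Taken as the lag that maximizes the cross-correlation of the two
    rotated horizontal components.
    """
    n = len(fast)
    cc = np.correlate(slow, fast, mode="full")  # lags -(n-1) .. (n-1)
    lag = int(np.argmax(cc)) - (n - 1)
    return lag * dt
```

The resulting delay times feed the anisotropy tomography, and the fast-component polarization angles give the NNE directions quoted above.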
2011-09-01
No. BAA09-69. ABSTRACT: Using multiple deployments of an 80-element, three-component borehole seismic array stretching from the surface to 2.3 km depth, … generated using the direct Green's function (DGF) method of Friederich and Dalkolmo (1995). This method synthesizes the seismic wavefield for a spherically symmetric Earth model.
A seismic analysis for masonry constructions: The different schematization methods of masonry walls
NASA Astrophysics Data System (ADS)
Olivito, Renato. S.; Codispoti, Rosamaria; Scuro, Carmelo
2017-11-01
Seismic analysis of masonry structures is usually performed using structural calculation software based on the equivalent frame method or the macro-element method. In these approaches, the masonry walls are divided into vertical masonry elements and horizontal elements, the so-called spandrel elements, interconnected by rigid nodes. The aim of this work is to make a critical comparison between different schematization methods of masonry walls, underlining the structural importance of the spandrel elements. To implement the methods, two different structural calculation programs were used and an existing masonry building was examined.
Post-blasting seismicity in Rudna copper mine, Poland - source parameters analysis.
NASA Astrophysics Data System (ADS)
Caputa, Alicja; Rudziński, Łukasz; Talaga, Adam
2017-04-01
A major hazard in Polish copper mines is high seismicity and the corresponding rockbursts. Many methods are used to reduce the seismic hazard; among the most effective is preventive blasting in potentially hazardous mining panels. The method is expected to provoke small to moderate tremors (up to M2.0) and in this way reduce stress accumulation in the rock mass. This work presents an analysis of post-blasting events in the Rudna copper mine, Poland. Using full moment tensor (MT) inversion and seismic spectral analysis, we try to find characteristic features of post-blasting seismic sources. Source parameters estimated for post-blasting events are compared with the parameters of non-provoked mining events that occurred in the vicinity of the provoked sources. Our studies show that the focal mechanisms of events occurring after blasts have similar MT decompositions; namely, they are characterized by a rather strong isotropic component compared with that of non-provoked events. Source parameters obtained from spectral analysis also show that provoked seismicity has a specific source physics; among other indicators, this is visible in the S-to-P wave energy ratio, which is higher for non-provoked events. The comparison of all our results reveals three possible groups of sources: (a) events occurring just after blasts, (b) events occurring from 5 min to 24 h after blasts, and (c) non-provoked seismicity (more than 24 h after blasting). Acknowledgements: This work was supported within statutory activities No. 3841/E-41/S/2016 of the Ministry of Science and Higher Education of Poland.
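Source parameters from spectral analysis are commonly obtained by fitting a Brune-type model, Omega(f) = Omega0 / (1 + (f/fc)^2), to the displacement amplitude spectrum, with Omega0 scaling with seismic moment and fc giving the source dimension. Whether this study used exactly this parameterization is not stated; the grid-search sketch below is illustrative.

```python
import numpy as np

def fit_brune(freqs, amps, fc_grid):
    """Least-squares grid-search fit of a Brune spectrum to observed amps.

    For each candidate corner frequency fc, the optimal plateau Omega0 has
    a closed form; the (Omega0, fc) pair with smallest residual is returned.
    """
    best = None
    for fc in fc_grid:
        shape = 1.0 / (1.0 + (freqs / fc) ** 2)
        omega0 = (amps * shape).sum() / (shape * shape).sum()
        resid = ((amps - omega0 * shape) ** 2).sum()
        if best is None or resid < best[2]:
            best = (omega0, fc, resid)
    return best[0], best[1]
```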
CORSSA: The Community Online Resource for Statistical Seismicity Analysis
Michael, Andrew J.; Wiemer, Stefan
2010-01-01
Statistical seismology is the application of rigorous statistical methods to earthquake science with the goal of improving our knowledge of how the earth works. Within statistical seismology there is a strong emphasis on the analysis of seismicity data in order to improve our scientific understanding of earthquakes and to improve the evaluation and testing of earthquake forecasts, earthquake early warning, and seismic hazards assessments. Given the societal importance of these applications, statistical seismology must be done well. Unfortunately, a lack of educational resources and available software tools makes it difficult for students and new practitioners to learn about this discipline. The goal of the Community Online Resource for Statistical Seismicity Analysis (CORSSA) is to promote excellence in statistical seismology by providing the knowledge and resources necessary to understand and implement the best practices, so that the reader can apply these methods to their own research. This introduction describes the motivation for and vision of CORSSA. It also describes its structure and contents.
NASA Astrophysics Data System (ADS)
Klügel, J.
2006-12-01
Deterministic scenario-based seismic hazard analysis has a long tradition in earthquake engineering for developing the design basis of critical infrastructures such as dams, transport infrastructures, chemical plants and nuclear power plants. For many applications beyond the design of infrastructures, it is of interest to assess the efficiency of the design measures taken. These applications require a method that allows a meaningful quantitative risk analysis. A new method for probabilistic scenario-based seismic risk analysis has been developed, based on a probabilistic extension of proven deterministic methods such as the MCE methodology. The input data required for the method are entirely based on the information necessary to perform any meaningful seismic hazard analysis. The method follows the probabilistic risk analysis approach common in nuclear technology, developed originally by Kaplan & Garrick (1981). It is based on (1) a classification of earthquake events into different size classes (by magnitude), (2) the evaluation of the frequency of occurrence of events assigned to the different classes (frequency of initiating events), (3) the development of bounding critical scenarios assigned to each class based on the solution of an optimization problem, and (4) the evaluation of the conditional probability of exceedance of critical design parameters (vulnerability analysis). The advantages of the method in comparison with traditional PSHA are (1) its flexibility, allowing different probabilistic models for earthquake occurrence to be used and advanced physical models to be incorporated into the analysis, (2) its mathematically consistent treatment of uncertainties, and (3) its explicit consideration of the lifetime of the critical structure as a criterion for formulating different risk goals.
The method has been applied to evaluate the risk of production-interruption losses of a nuclear power plant during its residual lifetime.
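Steps (1)-(4) above can be sketched numerically. The following is a minimal illustration with invented numbers (the magnitude classes, frequencies and conditional exceedance probabilities are not from the study): the annual exceedance frequency is the sum over initiating-event classes, and the lifetime risk follows from a Poisson occurrence assumption.

```python
import math

# Hypothetical size classes: (label, annual frequency, conditional probability
# of exceeding a critical design parameter from the vulnerability analysis).
classes = [
    ("M5-6", 1e-2, 0.01),
    ("M6-7", 1e-3, 0.10),
    ("M7+",  1e-4, 0.50),
]

# Annual frequency of exceedance: sum over the initiating-event classes.
lam = sum(freq * p_exc for _, freq, p_exc in classes)

# Risk over the structure's residual lifetime T, assuming Poisson occurrence.
T = 40.0  # years
p_lifetime = 1.0 - math.exp(-lam * T)
print(f"annual exceedance frequency: {lam:.2e} per yr")
print(f"P(exceedance within {T:.0f} yr): {p_lifetime:.4f}")
```

The explicit lifetime term is what lets different risk goals be formulated for structures with different residual lifetimes.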
Towards Improved Considerations of Risk in Seismic Design (Plinius Medal Lecture)
NASA Astrophysics Data System (ADS)
Sullivan, T. J.
2012-04-01
The aftermath of recent earthquakes is a reminder that seismic risk is a very relevant issue for our communities. Implicit within the seismic design standards currently in place around the world is the assumption that minimum acceptable levels of seismic risk will be ensured through design in accordance with the codes. All the same, none of the design standards specifies what the minimum acceptable level of seismic risk actually is. Instead, a series of deterministic limit states is set, which engineers then demonstrate are satisfied for their structure, typically through elastic dynamic analyses adjusted to account for non-linear response using a set of empirical correction factors. Since the early nineties the seismic engineering community has recognised numerous fundamental shortcomings of such seismic design procedures in modern codes. Deficiencies include the use of elastic dynamic analysis for the prediction of inelastic force distributions, the assignment of uniform behaviour factors for structural typologies irrespective of the structural proportions and expected deformation demands, and the assumption that hysteretic properties of a structure do not affect the seismic displacement demands, amongst other things. In light of this, a number of possibilities have emerged for improved control of risk through seismic design, with several innovative displacement-based seismic design methods now well developed. For a specific seismic design intensity, such methods provide a more rational means of controlling the response of a structure to satisfy performance limit states. While the development of such methodologies does mark a significant step forward for the control of seismic risk, they do not, on their own, identify the seismic risk of a newly designed structure. In the U.S. a rather elaborate performance-based earthquake engineering (PBEE) framework is under development, with the aim of providing seismic loss estimates for new buildings.
The PBEE framework consists of the following four main analysis stages: (i) probabilistic seismic hazard analysis to give the mean occurrence rate of earthquake events having an intensity greater than a threshold value, (ii) structural analysis to estimate the global structural response, given a certain value of seismic intensity, (iii) damage analysis, in which fragility functions are used to express the probability that a building component exceeds a damage state, as a function of the global structural response, (iv) loss analysis, in which the overall performance is assessed based on the damage state of all components. This final step gives estimates of the mean annual frequency with which various repair cost levels (or other decision variables) are exceeded. The realisation of this framework does suggest that risk-based seismic design is now possible. However, comparing current code approaches with the proposed PBEE framework, it becomes apparent that mainstream consulting engineers would have to go through a massive learning curve in order to apply the new procedures in practice. With this in mind, it is proposed that simplified loss-based seismic design procedures are a logical means of helping the engineering profession transition from what are largely deterministic seismic design procedures in current codes, to more rational risk-based seismic design methodologies. Examples are provided to illustrate the likely benefits of adopting loss-based seismic design approaches in practice.
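The chaining of stages (i)-(iii) can be sketched as a discretized integral of a fragility function against the hazard curve. Everything numeric below is illustrative (an assumed power-law hazard shape and an assumed lognormal fragility), not values from any real building study.

```python
import numpy as np
from scipy.stats import lognorm

# Hypothetical hazard curve: mean annual rate of exceeding intensity im (in g).
im = np.linspace(0.1, 1.5, 15)
haz = 1e-2 * (im / 0.1) ** -2.0              # assumed power-law hazard shape

# Hypothetical lognormal fragility: P(damage state exceeded | im).
p_ds = lognorm.cdf(im, s=0.5, scale=0.6)     # median 0.6 g, dispersion 0.5

# Discretized integrand: pair fragility midpoints with hazard-curve rate
# increments and sum, giving the mean annual frequency of exceeding the
# damage state. Stage (iv) would repeat this with repair-cost fragilities.
dlam = -np.diff(haz)
p_mid = 0.5 * (p_ds[:-1] + p_ds[1:])
lam_ds = float(np.sum(p_mid * dlam))
print(f"mean annual frequency of damage-state exceedance: {lam_ds:.2e}")
```

The point of the sketch is the structure of the computation, not the numbers: each stage contributes one factor inside the sum.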
Accurately determining direction of arrival by seismic array based on compressive sensing
NASA Astrophysics Data System (ADS)
Hu, J.; Zhang, H.; Yu, H.
2016-12-01
Seismic array analysis plays an important role in detecting weak signals and determining their locations and rupture processes. In these applications, reliably estimating the direction of arrival (DOA) of the seismic wave is very important. DOA is generally determined by the conventional beamforming method (CBM) [Rost et al., 2000]. However, for a fixed seismic array the resolution of CBM is generally poor for low-frequency seismic signals, and for high-frequency signals the CBM may produce many local peaks, making it difficult to pick the one corresponding to the true DOA. In this study, we develop a new seismic array method based on compressive sensing (CS) to determine the DOA with high resolution for both low- and high-frequency seismic signals. The new method takes advantage of the spatial sparsity of the incoming wavefronts. The CS method has been successfully used to determine spatial and temporal earthquake rupture distributions with seismic arrays [Yao et al., 2011; Yao et al., 2013; Yin, 2016]. In this method, we first pose the DOA estimation as an L1-norm minimization problem. The measurement matrix for CS is constructed by dividing the slowness-angle domain into many grid nodes and needs to satisfy the restricted isometry property (RIP) for optimized reconstruction of the image. The L1-norm minimization is solved by the interior point method. We first test the CS-based DOA determination method on synthetic data constructed for the Shanghai seismic array. Compared to the CBM, synthetic tests for data without noise show that the new method can determine the true DOA with very high resolution. In the case of multiple sources, the new method can easily separate multiple DOAs. When data are contaminated by noise at various levels, the CS method is stable as long as the noise amplitude is lower than the signal amplitude. We also test the CS method on the Wenchuan earthquake.
For different arrays with different apertures, we are able to obtain reliable DOAs with uncertainties lower than 10 degrees.
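The sparse-recovery idea can be illustrated in a few lines. This is a sketch, not the authors' implementation: the array geometry, frequency and noise level are invented, and the L1 problem is solved with simple iterative soft thresholding (ISTA) rather than the interior-point solver used in the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical linear array: 8 sensors, one narrowband plane wave.
n_sens, freq, c = 8, 2.0, 3000.0                 # sensors, Hz, m/s (assumed)
x = np.arange(n_sens) * 200.0                    # sensor positions (m)

# Slowness grid over which the incoming wavefield is assumed sparse.
s_grid = np.linspace(-1 / c, 1 / c, 201)
A = np.exp(-2j * np.pi * freq * np.outer(x, s_grid))   # steering matrix

# Synthetic data: a single plane wave at s_true plus weak noise.
s_true = s_grid[140]
d = np.exp(-2j * np.pi * freq * x * s_true)
d = d + 0.05 * (rng.standard_normal(n_sens) + 1j * rng.standard_normal(n_sens))

# L1-regularized recovery by ISTA: gradient step on the misfit, then a
# complex soft-threshold that promotes a sparse slowness image.
L = np.linalg.norm(A, 2) ** 2                    # Lipschitz constant of A^H A
m = np.zeros(len(s_grid), dtype=complex)
lam = 0.1
for _ in range(500):
    g = m + A.conj().T @ (d - A @ m) / L         # gradient step
    m = np.exp(1j * np.angle(g)) * np.maximum(np.abs(g) - lam / L, 0.0)

s_peak = s_grid[int(np.argmax(np.abs(m)))]
print("true vs recovered slowness:", s_true, s_peak)
```

Unlike the broad CBM beam response, the recovered image is concentrated on a few grid nodes, which is what allows multiple closely spaced DOAs to be separated.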
Seismic instantaneous frequency extraction based on the SST-MAW
NASA Astrophysics Data System (ADS)
Liu, Naihao; Gao, Jinghuai; Jiang, Xiudi; Zhang, Zhuosheng; Wang, Ping
2018-06-01
The instantaneous frequency (IF) extraction of seismic data has been widely applied in seismic exploration for decades, for example in detecting seismic absorption and characterizing depositional thicknesses. Based on complex-trace analysis, the Hilbert transform (HT) can extract the IF directly; it is the traditional method but is susceptible to noise. In this paper, a robust approach based on the synchrosqueezing transform (SST) is proposed to extract the IF from seismic data. In this process, a novel analytical wavelet, called the modified analytical wavelet (MAW) and derived from the three-parameter wavelet, is developed and chosen as the basic wavelet. After transforming the seismic signal into a sparse time-frequency domain via the SST taking the MAW (SST-MAW), an adaptive threshold is introduced to improve the noise immunity and accuracy of the IF extraction in a noisy environment. Note that the SST-MAW reconstructs a complex trace to extract the seismic IF. To demonstrate the effectiveness of the proposed method, we apply the SST-MAW to synthetic data and field seismic data. Numerical experiments suggest that the proposed procedure yields higher resolution and better anti-noise performance than conventional IF extraction methods based on the HT and the continuous wavelet transform. Moreover, geological features (such as channels) are well characterized, which is insightful for further oil/gas reservoir identification.
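The conventional HT baseline that the paper improves upon can be sketched as follows (a toy single-frequency trace, not the SST-MAW itself): the IF is the scaled derivative of the unwrapped instantaneous phase of the analytic signal.

```python
import numpy as np
from scipy.signal import hilbert

# Synthetic 30 Hz trace sampled at 500 Hz (illustrative values).
fs = 500.0
t = np.arange(0, 1.0, 1 / fs)
trace = np.sin(2 * np.pi * 30.0 * t)

# Analytic signal via the Hilbert transform; IF = d(phase)/dt / (2*pi).
analytic = hilbert(trace)
phase = np.unwrap(np.angle(analytic))
inst_freq = np.diff(phase) * fs / (2 * np.pi)

# Away from the record edges the IF should sit near the true 30 Hz.
print(round(float(np.median(inst_freq[50:-50])), 1))
```

On a clean trace this works well; the paper's point is that the phase derivative amplifies noise, which motivates the sparse SST-MAW representation and adaptive thresholding.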
NASA Astrophysics Data System (ADS)
Maurya, S. P.; Singh, K. H.; Singh, N. P.
2018-05-01
In the present study, three recently developed geostatistical methods, namely single-attribute analysis, multi-attribute analysis and the probabilistic neural network algorithm, have been used to predict porosity in the inter-well region of the Blackfoot field, Alberta, Canada. These techniques make use of seismic attributes generated by model-based inversion and colored inversion techniques. The principal objective of the study is to find a suitable combination of seismic inversion and geostatistical techniques to predict porosity and to identify prospective zones in the 3D seismic volume. The porosity estimated from these geostatistical approaches is corroborated with the well-log porosity. The results suggest that all three implemented geostatistical methods are efficient and reliable for predicting porosity, but the multi-attribute and probabilistic neural network analyses provide more accurate, higher-resolution porosity sections. A low-impedance (6000-8000 m/s g/cc) and high-porosity (> 15%) zone is interpreted from the inverted impedance and porosity sections in the 1060-1075 ms time interval and is characterized as a reservoir. The qualitative and quantitative results demonstrate that, of all the employed geostatistical methods, the probabilistic neural network along with model-based inversion is the most efficient for predicting porosity in the inter-well region.
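The simplest member of this family, multi-attribute analysis by linear regression, can be sketched as follows. The attribute ranges and coefficients are invented for illustration; real workflows would use validated attributes, cross-validation and, for the PNN, a kernel-based nonlinear estimator.

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic "well" data: porosity linearly related to two seismic attributes.
n = 200
imp = rng.uniform(6000, 9000, n)          # inverted impedance (m/s g/cc)
amp = rng.normal(0.0, 1.0, n)             # a second attribute (e.g. envelope)
phi = 30.0 - 0.002 * imp + 0.5 * amp + rng.normal(0.0, 0.5, n)  # porosity (%)

# Multi-attribute analysis in its simplest form: linear least squares mapping
# attributes to porosity, trained at the wells, then applied between wells.
X = np.column_stack([np.ones(n), imp, amp])
w, *_ = np.linalg.lstsq(X, phi, rcond=None)
rmse = float(np.sqrt(np.mean((X @ w - phi) ** 2)))
print("weights:", np.round(w, 4), "RMSE (%):", round(rmse, 2))
```

The fitted weights are then applied to the attribute volumes away from the wells, which is exactly the inter-well prediction step the abstract describes.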
USDA-ARS's Scientific Manuscript database
The objective of the paper is to study the temporal variations of the subsurface soil properties due to seasonal and weather effects using a combination of a new seismic surface method and an existing acoustic probe system. A laser Doppler vibrometer (LDV) based multi-channel analysis of surface wav...
NASA Astrophysics Data System (ADS)
Schmelzbach, C.; Sollberger, D.; Greenhalgh, S.; Van Renterghem, C.; Robertsson, J. O. A.
2017-12-01
Polarization analysis of standard three-component (3C) seismic data is an established tool for determining the propagation directions of seismic waves recorded by a single station. A major limitation of seismic direction-finding methods using 3C recordings, however, is that a correct propagation-direction determination is only possible if the wave mode is known. Furthermore, 3C polarization analysis techniques break down in the presence of coherent noise (i.e., when more than one event is present in the analysis time window). Recent advances in sensor technology (e.g., fibre-optical, magnetohydrodynamic angular rate sensors, and ring laser gyroscopes) have made it possible to accurately measure all three components of rotational ground motion exhibited by seismic waves, in addition to the conventionally recorded three components of translational motion. Here, we present an extension of the theory of single-station 3C polarization analysis to six-component (6C) recordings of collocated translational and rotational ground motions. We demonstrate that the information contained in rotation measurements can help to overcome some of the main limitations of standard 3C seismic direction finding, such as handling multiple arrivals simultaneously. We show that the 6C polarization of elastic waves measured at the Earth's free surface depends not only on the seismic wave type and propagation direction, but also on the local P- and S-wave velocities just beneath the recording station. Using an adaptation of the multiple signal classification algorithm (MUSIC), we demonstrate how seismic events can be unambiguously identified and characterized in terms of their wave type. Furthermore, we show how the local velocities can be inferred from single-station 6C data, in addition to the direction angles (inclination and azimuth) of seismic arrivals.
A major benefit of our proposed 6C method is that it also allows the accurate recovery of the wave type, propagation directions, and phase velocities of multiple, interfering arrivals in one time window. We demonstrate how this property can be exploited to separate the wavefield into its elastic wave-modes and to isolate or suppress waves arriving from specific directions (directional filtering), both in a fully automated fashion.
Seismpol_ a visual-basic computer program for interactive and automatic earthquake waveform analysis
NASA Astrophysics Data System (ADS)
Patanè, Domenico; Ferrari, Ferruccio
1997-11-01
A Microsoft Visual Basic computer program for waveform analysis of seismic signals is presented. The program combines interactive and automatic processing of digital signals using data recorded by three-component seismic stations. The analysis procedure can be used either for interactive earthquake analysis or for automatic on-line processing of seismic recordings. The algorithm works in the time domain using the Covariance Matrix Decomposition (CMD) method, so that polarization characteristics may be computed continuously in real time and seismic phases can be identified and discriminated. Visual inspection of the particle motion in orthogonal planes of projection (hodograms) reduces the danger of misinterpretation arising from the application of the polarization filter. The choice of time window and frequency intervals improves the quality of the extracted polarization information. In fact, the program uses a band-pass Butterworth filter to process the signals in the frequency domain by splitting a selected signal window into a series of narrow frequency bands. Significant results, supported by well-defined polarizations and source-azimuth estimates for P and S phases, are also obtained for short-period seismic events (local microearthquakes).
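The core of covariance-based polarization analysis can be sketched in a few lines (a generic illustration, not the Seismpol code): eigen-decompose the 3C covariance of a short window, read rectilinearity from the eigenvalue spread and azimuth from the principal eigenvector.

```python
import numpy as np

def polarization(window):
    """Covariance-matrix polarization of a 3C window (columns Z, N, E)."""
    c = np.cov(window.T)
    vals, vecs = np.linalg.eigh(c)              # eigenvalues ascending
    l1, l2, l3 = vals[::-1]
    rectilinearity = 1.0 - (l2 + l3) / (2.0 * l1)
    z, north, east = vecs[:, -1]                # principal eigenvector
    azimuth = np.degrees(np.arctan2(east, north)) % 360.0
    return rectilinearity, azimuth

# Synthetic rectilinear arrival polarized at 30 deg azimuth (illustrative).
rng = np.random.default_rng(2)
t = np.linspace(0.0, 1.0, 200)
s = np.sin(2 * np.pi * 5.0 * t)
az = np.radians(30.0)
w3c = np.column_stack([0.5 * s, np.cos(az) * s, np.sin(az) * s])
w3c = w3c + 0.01 * rng.standard_normal(w3c.shape)
rect, azim = polarization(w3c)
print(round(rect, 2), round(azim, 1))
```

Note the inherent 180° ambiguity of the eigenvector direction; in practice the P first-motion sign is used to resolve it, which is one reason hodogram inspection remains valuable.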
Seismic refraction analysis: the path forward
Haines, Seth S.; Zelt, Colin; Doll, William
2012-01-01
Seismic Refraction Methods: Unleashing the Potential and Understanding the Limitations; Tucson, Arizona, 29 March 2012. A workshop focused on seismic refraction methods took place on 29 March 2012, in association with the 2012 Symposium on the Application of Geophysics to Engineering and Environmental Problems. The workshop was convened to assess the current state of the science and discuss paths forward, with a primary focus on near-surface problems but with an eye on all applications. The agenda included talks on these topics from a number of experts, interspersed with discussion, and a dedicated discussion period to finish the day. Discussion proved lively at times, and workshop participants delved into many topics central to seismic refraction work.
NASA Astrophysics Data System (ADS)
Wang, Zhenming; Shi, Baoping; Kiefer, John D.; Woolery, Edward W.
2004-06-01
Musson's comments on our article, ``Communicating with uncertainty: A critical issue with probabilistic seismic hazard analysis,'' are an example of myths and misunderstandings. We did not say that probabilistic seismic hazard analysis (PSHA) is a bad method, but we did say that it has some limitations with significant implications. Our response to these comments follows. There is no consensus on exactly how to select seismological parameters and assign weights in PSHA. This was one of the conclusions reached by the Senior Seismic Hazard Analysis Committee [SSHAC, 1997], which included C. A. Cornell, founder of the PSHA methodology. The SSHAC report was reviewed by a panel of the National Research Council and was well accepted by seismologists and engineers. As an example of the lack of consensus, Toro and Silva [2001] produced seismic hazard maps for the central United States region that are quite different from those produced by Frankel et al. [2002] because they used different input seismological parameters and weights (see Table 1). We disagree with Musson's conclusion that ``because a method may be applied badly on one occasion does not mean the method itself is bad.'' We do not say that the method is poor, but rather that those who use PSHA need to document their inputs and communicate them fully to the users. It seems that Musson is trying to create a myth by suggesting that his own methods should be used.
Object Classification Based on Analysis of Spectral Characteristics of Seismic Signal Envelopes
NASA Astrophysics Data System (ADS)
Morozov, Yu. V.; Spektor, A. A.
2017-11-01
A method is proposed for classifying moving objects that have a seismic effect on the ground surface, based on statistical analysis of the envelopes of the received signals. The values of the components of the amplitude spectrum of the envelopes, obtained by applying the Hilbert and Fourier transforms, are used as classification criteria. Examples illustrating the statistical properties of the spectra and the operation of the seismic classifier are given for an ensemble of objects of four classes (person, group of people, large animal, vehicle). It is shown that the computational procedures for processing the seismic signals are quite simple and can therefore be used in real-time systems with modest requirements for computational resources.
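The envelope-spectrum feature extraction can be sketched as follows. The two synthetic "objects" below are invented stand-ins (a slow footfall-like modulation versus a faster vehicle-like one), not data from the paper: the Hilbert envelope strips the carrier and the Fourier spectrum of the envelope exposes the modulation rate as a separable feature.

```python
import numpy as np
from scipy.signal import hilbert

def envelope_spectrum(sig, n_feat):
    """Amplitude spectrum of the Hilbert envelope as classification features."""
    env = np.abs(hilbert(sig))
    env = env - env.mean()
    return (np.abs(np.fft.rfft(env)) / len(env))[:n_feat]

# Two synthetic sources: a 40 Hz carrier modulated at 2 Hz vs at 8 Hz.
fs = 200.0
t = np.arange(0, 4, 1 / fs)
carrier = np.sin(2 * np.pi * 40.0 * t)
person = (1.0 + np.sin(2 * np.pi * 2.0 * t)) * carrier
vehicle = (1.0 + np.sin(2 * np.pi * 8.0 * t)) * carrier

df = fs / len(t)                                 # 0.25 Hz per spectral bin
pk_person = np.argmax(envelope_spectrum(person, 40)) * df
pk_vehicle = np.argmax(envelope_spectrum(vehicle, 40)) * df
print("envelope peaks (Hz):", pk_person, pk_vehicle)
```

A downstream classifier (Bayesian, nearest-neighbour, etc.) would operate on such low-dimensional envelope-spectrum features, which keeps the runtime cost small, consistent with the real-time claim.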
NASA Astrophysics Data System (ADS)
Gnyp, Andriy
2009-06-01
Based on the results of applying correlation analysis to records of the 2005 Mukacheve group of recurrent events and their subsequent relocation relative to the reference event of 7 July 2005, it is concluded that all the events most likely occurred on the same rupture plane. Station terms have been estimated for seismic stations of the Transcarpathians, accounting for the variation of seismic velocities beneath their locations relative to the travel-time tables used in the study. From a methodological standpoint, the potential and usefulness of correlation analysis of seismic records for a more detailed study of seismic processes, tectonics and geodynamics of the Carpathian region have been demonstrated.
Seismic passive earth resistance using modified pseudo-dynamic method
NASA Astrophysics Data System (ADS)
Pain, Anindya; Choudhury, Deepankar; Bhattacharyya, S. K.
2017-04-01
In earthquake-prone areas, an understanding of seismic passive earth resistance is very important for the design of geotechnical earth-retaining structures. In this study, the limit equilibrium method is used to estimate the critical seismic passive earth resistance for an inclined wall supporting a horizontal cohesionless backfill. A composite failure surface is considered in the present analysis. Seismic forces are computed by treating the backfill soil as a viscoelastic material overlying a rigid stratum, with the rigid stratum subjected to harmonic shaking. The present method satisfies the boundary conditions. The amplification of acceleration depends on the properties of the backfill soil and on the characteristics of the input motion. The acceleration distribution along the depth of the backfill is found to be nonlinear. The present study shows that the horizontal and vertical acceleration distributions in the backfill soil are not always in phase for the critical value of the seismic passive earth pressure coefficient. The effect of different parameters on the seismic passive earth pressure is studied in detail. A comparison of the present method with other theories is also presented, which shows the merits of the present study.
NASA Astrophysics Data System (ADS)
Tibuleac, I. M.; Iovenitti, J. L.; Pullammanappallil, S. K.; von Seggern, D. H.; Ibser, H.; Shaw, D.; McLachlan, H.
2015-12-01
A new, cost-effective and non-invasive exploration method using ambient seismic noise has been tested at Soda Lake, NV, with promising results. Seismic interferometry was used to extract Green's functions (P and surface waves) from 21 days of continuous ambient seismic noise. Aided by S-velocity models estimated from surface waves, an ambient-noise seismic reflection survey along a line (named Line 2) reproduced the results of the active survey, although with lower resolution, wherever the ambient noise was not contaminated by strong cultural noise. Ambient-noise resolution was lower at depth (below 1000 m) compared to the active survey. Useful information could be recovered from the ambient seismic noise, including dipping features and fault locations. Processing-method tests were developed, with the potential to improve the virtual reflection survey results. Through innovative signal processing techniques, periods not typically analyzed with high-frequency sensors were used in this study to obtain seismic velocity model information to a depth of 1.4 km. New seismic parameters such as lateral variations of the Green's function reflection component, waveform entropy, stochastic parameters (correlation length and Hurst number) and spectral frequency content extracted from the active and passive surveys showed potential to indicate geothermal favorability through their correlation with high-temperature anomalies, and showed potential as fault indicators, thus reducing the uncertainty in fault identification. Geothermal favorability maps along ambient seismic Line 2 were generated considering temperature, lithology and the seismic parameters investigated in this study, and were compared to the active Line 2 results. Pseudo-favorability maps were also generated using only the seismic parameters analyzed in this study.
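The interferometry step can be illustrated with a toy two-station experiment. Everything here is idealized (a single shared noise source with a fixed 0.5 s delay, invented sampling rate): the cross-correlation of long noise records converges toward the inter-station Green's function, and the correlation peak lag recovers the travel time.

```python
import numpy as np

rng = np.random.default_rng(3)
fs, n = 100.0, 100_000

# Idealized diffuse noise field: both stations record the same noise,
# station B with a 0.5 s propagation delay (illustrative values).
src = rng.standard_normal(n)
delay = int(0.5 * fs)
sta_a = src + 0.1 * rng.standard_normal(n)
sta_b = np.roll(src, delay) + 0.1 * rng.standard_normal(n)

# Cross-correlation via the FFT; the peak lag is the inter-station
# travel time of the virtual source-receiver pair.
xc = np.fft.irfft(np.fft.rfft(sta_b) * np.conj(np.fft.rfft(sta_a)))
peak = int(np.argmax(xc))
lag = peak if peak < n // 2 else peak - n
travel_time = lag / fs
print("recovered travel time (s):", travel_time)
```

Real ambient-noise processing adds spectral whitening, temporal normalization and stacking of daily correlations, which is what makes 21 days of noise usable despite intermittent cultural contamination.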
Analysis of the Noise in Data from the Mt. Meron Array
DOE Office of Scientific and Technical Information (OSTI.GOV)
Chambers, D. H.; Breitfeller, E.
2010-07-15
This memo describes an analysis of the noise in data obtained from the Mt. Meron seismic array in northern Israel. The overall objective is to develop a method for removing noise from extraneous sources in the environment, increasing the sensitivity to seismic signals from distant events. For this initial work, we concentrated on understanding the propagation characteristics of the noise in the frequency band from 0.1-8 Hz, and on testing a model-based method for removing narrowband (single-frequency) noise.
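A generic sketch of single-frequency noise removal (not the memo's model-based scheme) is a notch filter: synthesize a broadband record contaminated by a 2.5 Hz tone, notch it out, and compare the power at that frequency before and after. The sampling rate and noise frequency are assumptions for illustration.

```python
import numpy as np
from scipy.signal import iirnotch, filtfilt

rng = np.random.default_rng(4)
fs = 40.0                                 # Hz; assumed array sampling rate
t = np.arange(0, 30, 1 / fs)
broadband = rng.standard_normal(len(t))   # stand-in for the seismic record
hum = 2.0 * np.sin(2 * np.pi * 2.5 * t)   # narrowband cultural noise (2.5 Hz)
rec = broadband + hum

# Zero-phase IIR notch at the contaminating frequency.
b, a = iirnotch(w0=2.5, Q=30.0, fs=fs)
clean = filtfilt(b, a, rec)

def power_at(x, f):
    spec = np.abs(np.fft.rfft(x)) ** 2
    freqs = np.fft.rfftfreq(len(x), 1 / fs)
    return float(spec[np.argmin(np.abs(freqs - f))])

print("2.5 Hz power before/after:", power_at(rec, 2.5), power_at(clean, 2.5))
```

A model-based approach goes further by estimating the noise source's propagation across the array and subtracting its predicted contribution, which preserves signal energy that a notch would also remove.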
NASA Astrophysics Data System (ADS)
Heckels, R. EG; Savage, M. K.; Townend, J.
2018-05-01
Quantifying seismic velocity changes following large earthquakes can provide insights into fault healing and reloading processes. This study presents temporal velocity changes detected following the 2010 September Mw 7.1 Darfield event in Canterbury, New Zealand. We use continuous waveform data from several temporary seismic networks lying on and surrounding the Greendale Fault, with a maximum interstation distance of 156 km. Nine-component, day-long Green's functions were computed for frequencies between 0.1 and 1.0 Hz for continuous seismic records from immediately after the 2010 September 04 earthquake until 2011 January 10. Using the moving-window cross-spectral method, seismic velocity changes were calculated. Over the study period, an increase in seismic velocity of 0.14 ± 0.04 per cent was determined near the Greendale Fault, providing a new constraint on post-seismic relaxation rates in the region. A depth analysis further showed that velocity changes were confined to the uppermost 5 km of the subsurface. We attribute the observed changes to post-seismic relaxation via crack healing of the Greendale Fault and throughout the surrounding region.
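The moving-window cross-spectral measurement can be sketched on synthetic data. The "coda" below is smoothed random noise and the stretch factor is invented; the essential mechanics match the method: in each lapse-time window the delay dt is the slope of the cross-spectrum phase over frequency, and because a homogeneous dv/v makes dt grow linearly with lapse time, a line fit of dt versus window centre gives -dv/v.

```python
import numpy as np

rng = np.random.default_rng(5)
fs = 100.0
t = np.arange(0, 20, 1 / fs)

# Reference coda: smoothed noise, a stand-in for a repeating Green's function.
ref = np.convolve(rng.standard_normal(len(t)), np.hanning(15), mode="same")

# A homogeneous velocity change dv/v stretches the waveform in time.
dvv_true = 0.002
cur = np.interp(t * (1 + dvv_true), t, ref)

win = int(2 * fs)
centers, delays = [], []
for start in range(win, len(t) - win, win):
    a = ref[start:start + win] * np.hanning(win)
    b = cur[start:start + win] * np.hanning(win)
    x = np.fft.rfft(a) * np.conj(np.fft.rfft(b))
    freqs = np.fft.rfftfreq(win, 1 / fs)
    band = (freqs > 1.0) & (freqs < 5.0)
    phase = np.unwrap(np.angle(x[band]))
    slope = np.polyfit(freqs[band], phase, 1)[0]  # phase slope = 2*pi*dt
    centers.append((start + win / 2) / fs)
    delays.append(slope / (2 * np.pi))

dvv_est = -np.polyfit(centers, delays, 1)[0]
print(f"true dv/v = {dvv_true}, estimated = {dvv_est:.4f}")
```

Real applications additionally weight each window by its cross-spectral coherence and propagate measurement errors into the dv/v uncertainty, which is how changes as small as 0.14 per cent become resolvable.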
Tunnel Detection Using Seismic Methods
NASA Astrophysics Data System (ADS)
Miller, R.; Park, C. B.; Xia, J.; Ivanov, J.; Steeples, D. W.; Ryden, N.; Ballard, R. F.; Llopis, J. L.; Anderson, T. S.; Moran, M. L.; Ketcham, S. A.
2006-05-01
Surface seismic methods have shown great promise for detecting clandestine tunnels in areas where unauthorized movement beneath secure boundaries has been or is a matter of concern for authorities. Unauthorized infiltration beneath national borders and into or out of secure facilities is possible at many sites by tunneling. Developments in acquisition, processing, and analysis techniques using multi-channel seismic imaging have opened the door to a vast number of near-surface applications, including anomaly detection and delineation, specifically of tunnels. Body waves have great potential based on modeling and very preliminary empirical studies seeking to capitalize on diffracted energy. A primary limitation of all seismic energy is the natural attenuation of high-frequency energy by earth materials and the difficulty of transmitting a high-amplitude source pulse with a broad spectrum above 500 Hz into the earth. Surface waves have shown great potential since the development of multi-channel analysis methods (e.g., MASW). Both shear-wave velocity and backscattered surface-wave energy have been shown through modeling and empirical studies to hold great promise for detecting the presence of anomalies such as tunnels. Success in developing and evaluating various seismic approaches for detecting tunnels relies on investigations at known tunnel locations, in a variety of geologic settings, employing a wide range of seismic methods, and targeting a range of uniquely different tunnel geometries, characteristics, and host lithologies. Body-wave research at the Moffat tunnels in Winter Park, Colorado, provided well-defined diffraction-like events that correlated with the subsurface location of the tunnel complex. Natural voids related to karst have been studied in Kansas, Oklahoma, Alabama, and Florida using shear-wave velocity imaging techniques based on the MASW approach.
Manmade tunnels, culverts, and crawl spaces have been the target of multi-modal analysis in Kansas and California. Clandestine tunnels used for illegal entry into the U.S. from Mexico were studied at two different sites along the southern border of California. All these studies provide the empirical basis for suggesting that surface seismic methods have a significant role to play in tunnel detection, and that methods are under development, and very nearly at hand, that will provide an effective tool for appraising and maintaining perimeter security. As broadband sources, gravity-coupled towed spreads, and automated analysis software continue to advance, so does the prospect of routine deployment of seismic imaging systems that can be operated by technicians, with interpretation aids for nearly real-time target selection. Key to making these systems commercial is the development of enhanced imaging techniques for geologically noisy areas and highly variable surface terrain.
Seismic risk assessment and application in the central United States
Wang, Z.
2011-01-01
Seismic risk is a somewhat subjective, but important, concept in earthquake engineering and related decision-making. Another important concept closely related to seismic risk is seismic hazard. Although seismic hazard and seismic risk have often been used interchangeably, they are fundamentally different: seismic hazard describes the natural phenomenon or physical property of an earthquake, whereas seismic risk describes the probability of loss or damage that could be caused by a seismic hazard. The distinction between seismic hazard and seismic risk is of practical significance because measures for seismic hazard mitigation may differ from those for seismic risk reduction. Seismic risk assessment is a complicated process and starts with seismic hazard assessment. Although probabilistic seismic hazard analysis (PSHA) is the most widely used method for seismic hazard assessment, recent studies have found that PSHA is not scientifically valid. Use of PSHA will lead to (1) artifact estimates of seismic risk, (2) misleading use of the annual probability of exceedance (i.e., the probability of exceedance in one year) as a frequency (per year), and (3) numerical creation of extremely high ground motion. An alternative approach, similar to those used for flood and wind hazard assessments, has been proposed. © 2011 ASCE.
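The rate-versus-probability distinction in point (2) is easy to make concrete. For Poissonian occurrence with rate lam (per year), the probability of at least one event in t years is 1 - exp(-lam*t); the rate and the annual probability are numerically close only when lam is small, and they diverge over longer exposure times (the numbers below are illustrative).

```python
import math

lam = 0.01                            # illustrative rate: 1 event per 100 yr
p_1yr = 1.0 - math.exp(-lam * 1.0)    # probability of >= 1 event in 1 year
p_50yr = 1.0 - math.exp(-lam * 50.0)  # probability of >= 1 event in 50 years
print(p_1yr, p_50yr)
```

Treating the annual probability as if it were the rate, or vice versa, is exactly the conflation the paragraph criticizes.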
Geophysical remote sensing of water reservoirs suitable for desalinization.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Aldridge, David Franklin; Bartel, Lewis Clark; Bonal, Nedra
2009-12-01
In many parts of the United States, as well as other regions of the world, competing demands for fresh water or water suitable for desalination are outstripping sustainable supplies. In these areas, new water supplies are necessary to sustain economic development and agricultural uses, as well as support expanding populations, particularly in the Southwestern United States. Increasing the supply of water will more than likely come through desalinization of water reservoirs that are not suitable for present use. Surface-deployed seismic and electromagnetic (EM) methods have the potential for addressing these critical issues within large volumes of an aquifer at amore » lower cost than drilling and sampling. However, for detailed analysis of the water quality, some sampling utilizing boreholes would be required with geophysical methods being employed to extrapolate these sampled results to non-sampled regions of the aquifer. The research in this report addresses using seismic and EM methods in two complimentary ways to aid in the identification of water reservoirs that are suitable for desalinization. The first method uses the seismic data to constrain the earth structure so that detailed EM modeling can estimate the pore water conductivity, and hence the salinity. The second method utilizes the coupling of seismic and EM waves through the seismo-electric (conversion of seismic energy to electrical energy) and the electro-seismic (conversion of electrical energy to seismic energy) to estimate the salinity of the target aquifer. Analytic 1D solutions to coupled pressure and electric wave propagation demonstrate the types of waves one expects when using a seismic or electric source. A 2D seismo-electric/electro-seismic is developed to demonstrate the coupled seismic and EM system. For finite-difference modeling, the seismic and EM wave propagation algorithms are on different spatial and temporal scales. 
We present a method to solve multiple, finite-difference physics problems that has application beyond the present use. A limited field experiment was conducted to assess the seismo-electric effect. Due to a variety of problems, the observation of the electric field due to a seismic source is not definitive.
Performance-Based Seismic Design of Steel Frames Utilizing Colliding Bodies Algorithm
Veladi, H.
2014-01-01
A pushover analysis method based on the semirigid connection concept is developed, and the colliding bodies optimization algorithm is employed to find optimum seismic designs of frame structures. Two numerical examples from the literature are studied. The results of the new algorithm are compared with conventional design methods to show the strengths and weaknesses of the algorithm. PMID:25202717
Microseismicity of Blawan hydrothermal complex, Bondowoso, East Java, Indonesia
NASA Astrophysics Data System (ADS)
Maryanto, S.
2018-03-01
Peak Ground Acceleration (PGA), hypocentres, and epicentres of the Blawan hydrothermal complex have been analysed in order to investigate its seismicity. PGA was determined with the Fukushima-Tanaka method, and microseismic source locations were estimated using the particle motion method. PGA ranged between 0.095 and 0.323 g and tends to be higher in formations containing uncompacted rocks. The seismic vulnerability index of the region indicated that zones with high PGA also have a high seismic vulnerability index, because the rocks making up these zones tend to be soft and of low density. Epicentres and hypocentres of seismic sources around the area were estimated with the single-station seismic particle motion method. The stations used in this study were mobile stations identified as BL01, BL02, BL03, BL05, BL06, BL07 and BL08. The particle motion analysis yielded 44 epicentres, with source depths of about 15-110 m below the ground surface.
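The Fukushima-Tanaka relation mentioned above is an empirical attenuation law linking PGA to magnitude and distance. A minimal Python sketch, using one commonly quoted form of the 1990 coefficients (the exact coefficients used in the study may differ and should be checked against the paper):

```python
import math

def pga_fukushima_tanaka(magnitude, distance_km):
    """PGA in cm/s^2 from a published form of the Fukushima-Tanaka (1990)
    attenuation relation; the coefficients here are illustrative."""
    term = 0.41 * magnitude
    return 10 ** (term
                  - math.log10(distance_km + 0.032 * 10 ** term)
                  - 0.0034 * distance_km
                  + 1.30)

def pga_g(magnitude, distance_km):
    """Same value expressed in g (1 g = 981 cm/s^2)."""
    return pga_fukushima_tanaka(magnitude, distance_km) / 981.0

# PGA decays with distance and grows with magnitude
print(pga_fukushima_tanaka(5.0, 10.0) > pga_fukushima_tanaka(5.0, 50.0))  # True
```

Mapping such estimates over a grid of sites is what produces the 0.095-0.323 g range reported above.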
Yong, Alan; Hough, Susan E.; Cox, Brady R.; Rathje, Ellen M.; Bachhuber, Jeff; Dulberg, Ranon; Hulslander, David; Christiansen, Lisa; and Abrams, Michael J.
2011-01-01
We report on a preliminary study to evaluate the use of semi-automated imaging analysis of remotely sensed DEMs and field geophysical measurements to develop a seismic-zonation map of Port-au-Prince, Haiti. For in situ data, VS30 values are derived from the MASW technique deployed in and around the city. For satellite imagery, we use an ASTER GDEM of Hispaniola. We apply both pixel- and object-based imaging methods to the ASTER GDEM to explore local topography (absolute elevation values) and classify terrain types such as mountains, alluvial fans and basins/near-shore regions. We assign NEHRP seismic site class ranges based on available VS30 values. A comparison of results from imagery-based methods to results from traditional geology-based approaches reveals good overall correspondence. We conclude that image analysis of RS data provides reliable first-order site characterization results in the absence of local data and can be useful for refining detailed site maps with sparse local data.
Rockfall induced seismic signals: case study in Montserrat, Catalonia
NASA Astrophysics Data System (ADS)
Vilajosana, I.; Suriñach, E.; Abellán, A.; Khazaradze, G.; Garcia, D.; Llosa, J.
2008-08-01
After a rockfall event, a usual post-event survey includes qualitative volume estimation, trajectory mapping and determination of departure zones. However, quantitative measurements are not usually made. Additional relevant quantitative information could be useful in determining the spatial occurrence of rockfall events and help us in quantifying their size. Seismic measurements could be suitable for detection purposes since they are non-invasive and relatively inexpensive. Moreover, seismic techniques could provide important information on rockfall size and the location of impacts. On 14 February 2007 the Avalanche Group of the University of Barcelona obtained the seismic data generated by an artificially triggered rockfall event at the Montserrat massif (near Barcelona, Spain) carried out in order to purge a slope. Two 3-component seismic stations were deployed in the area about 200 m from the explosion point that triggered the rockfall. Seismic signals and video images were obtained simultaneously. The initial volume of the rockfall was estimated at 75 m3 by laser scanner data analysis. After the explosion, dozens of boulders ranging in volume from 10^-4 to 5 m3 impacted on the ground at different locations. The blocks fell onto a terrace 120 m below the release zone. The impact generated a small continuous mass movement composed of a mixture of rocks, sand and dust that ran down the slope and impacted on the road 60 m below. Time, time-frequency evolution and particle motion analysis of the seismic records and seismic energy estimation were performed. The results are as follows: (1) a rockfall event generates seismic signals with specific characteristics in the time domain; (2) the seismic signals generated by the mass movement show a time-frequency evolution different from that of other seismogenic sources (e.g. earthquakes, explosions or a single rock impact), a feature that could be used for detection purposes; (3) particle motion plot analysis shows that the procedure to locate the rock impact using two stations is feasible; (4) the feasibility and validity of seismic methods for the detection of rockfall events, their localization and size determination are confirmed.
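The two-station location procedure rests on estimating a back-azimuth from the polarization of the horizontal components at each station; intersecting the two azimuths gives the impact point. A hedged sketch of the single-station step, using the principal eigenvector of the horizontal covariance matrix (a standard polarization approach; the authors' exact processing is not reproduced here):

```python
import numpy as np

def back_azimuth(north, east):
    """Dominant polarization azimuth (degrees, with 180-degree ambiguity)
    from the principal eigenvector of the horizontal covariance matrix."""
    cov = np.cov(np.vstack([north, east]))
    w, v = np.linalg.eigh(cov)
    n, e = v[:, np.argmax(w)]      # eigenvector of the largest eigenvalue
    return np.degrees(np.arctan2(e, n)) % 180.0

# synthetic rectilinear motion arriving from an azimuth of 30 degrees
t = np.linspace(0.0, 1.0, 200)
s = np.sin(2 * np.pi * 5 * t)
az = back_azimuth(s * np.cos(np.radians(30)), s * np.sin(np.radians(30)))
print(round(az))  # 30
```

With two stations, each azimuth defines a ray; the impact location is their intersection, which is why the two-station geometry described above suffices.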
Time-Lapse Acoustic Impedance Inversion in CO2 Sequestration Study (Weyburn Field, Canada)
NASA Astrophysics Data System (ADS)
Wang, Y.; Morozov, I. B.
2016-12-01
Acoustic-impedance (AI) pseudo-logs are useful for characterising subtle variations of fluid content during seismic monitoring of reservoirs undergoing enhanced oil recovery and/or geologic CO2 sequestration. However, highly accurate AI images are required for time-lapse analysis, which may be difficult to achieve with conventional inversion approaches. In this study, two enhancements of time-lapse AI analysis are proposed. First, a well-known uncertainty of AI inversion is caused by the lack of low-frequency signal in reflection seismic data. To resolve this difficulty, we utilize an integrated AI inversion approach combining seismic data, acoustic well logs and seismic-processing velocities. The use of well logs helps stabilize the recursive AI inverse, and seismic-processing velocities are used to complement the low-frequency information in seismic records. To derive the low-frequency AI from seismic-processing velocity data, an empirical relation is determined using the available acoustic logs. This method is simple and does not require the subjective choices of parameters and regularization schemes found in more sophisticated joint inversion methods. The second improvement to accurate time-lapse AI imaging consists of time-variant calibration of reflectivity. Calibration corrections consist of time shifts, amplitude corrections, spectral shaping and phase rotations. Following the calibration, average and differential reflection amplitudes are calculated, from which the average and differential AI are obtained. The approaches are applied to a time-lapse 3-D 3-C dataset from the Weyburn CO2 sequestration project in southern Saskatchewan, Canada. High-quality time-lapse AI volumes are obtained. Comparisons with traditional recursive and colored AI inversions (obtained without using seismic-processing velocities) show that the new method gives a better representation of spatial AI variations.
Although only data from the early stages of seismic monitoring are available, time-lapse AI variations mapped within and near the reservoir zone suggest correlations with CO2 injection. By extending this procedure to elastic impedances, additional constraints on the variations of physical properties within the reservoir can be obtained.
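The recursive AI inverse referred to above maps each reflection coefficient to an impedance step. A minimal sketch of the classic recursion (the paper's integrated method adds well-log and processing-velocity constraints on top of this basic operator):

```python
import numpy as np

def recursive_ai(reflectivity, ai0):
    """Classic recursive acoustic-impedance inversion: each reflection
    coefficient r maps impedance Z_i to Z_{i+1} = Z_i * (1 + r) / (1 - r).
    ai0 is the impedance at the top of the trace, e.g. from a well log."""
    ai = [ai0]
    for r in reflectivity:
        ai.append(ai[-1] * (1.0 + r) / (1.0 - r))
    return np.array(ai)

# forward-model reflectivity from a known impedance log, then invert it back
z = np.array([4500.0, 5000.0, 4800.0, 6000.0])
r = (z[1:] - z[:-1]) / (z[1:] + z[:-1])
print(np.allclose(recursive_ai(r, z[0]), z))  # True
```

The recursion's sensitivity to the starting value ai0 and to low-frequency errors in r is exactly why the abstract stresses well-log stabilization and low-frequency supplementation.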
NASA Astrophysics Data System (ADS)
Huamán Bustamante, Samuel G.; Cavalcanti Pacheco, Marco A.; Lazo Lazo, Juan G.
2018-07-01
The method we propose in this paper seeks to estimate interface displacements among strata related to seismic reflection events, relative to the interfaces at other reference points. To do so, we search for reflection events at the reference point of a second seismic trace taken from the same 3D survey, close to a well. However, the nature of the seismic data introduces uncertainty into the results. Therefore, we perform an uncertainty analysis using the standard deviation of results from several experiments with cross-correlation of signals. To estimate the displacements of events in depth between two seismic traces, we create a synthetic seismic trace with an empirical wavelet and the sonic log of the well close to the second seismic trace. Then, we relate the events of the seismic traces to the depth of the sonic log. Finally, we test the method with data from the Namorado Field in Brazil. The results show that the accuracy of the estimated event depth depends on the results of parallel cross-correlation, primarily those from the procedures used in the integration of seismic data with data from the well. The proposed approach can correctly identify several similar events in two seismic traces without requiring all seismic traces between two distant points of interest to correlate strata in the subsurface.
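The core operation in this workflow is cross-correlating two traces to find the lag of a common event; repeating it over noise realizations yields the standard deviation used in the uncertainty analysis. A minimal sketch:

```python
import numpy as np

def xcorr_lag(a, b):
    """Lag (in samples) at which trace b best aligns with trace a,
    taken from the peak of the full cross-correlation."""
    c = np.correlate(a, b, mode="full")
    return np.argmax(c) - (len(b) - 1)

# a trace shifted by 7 samples; in the uncertainty analysis this would be
# repeated over many noise realizations to estimate a standard deviation
rng = np.random.default_rng(0)
a = rng.standard_normal(256)
b = np.roll(a, 7)
print(xcorr_lag(b, a))  # 7
```

Converting the sample lag to depth requires the time-depth relation from the sonic log, which is the role the synthetic trace plays in the method above.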
NASA Astrophysics Data System (ADS)
Wan, Sheng; Li, Hui
2018-03-01
Through blasting vibration tests, the propagation laws of blasting seismic waves in a southern granite pumped-storage power project are studied. The attenuation coefficient of the seismic waves and the site factor are obtained by least-squares regression analysis according to the Sadovsky empirical formula, and an empirical formula for the seismic waves is derived. This paper mainly discusses the blasting vibration tests and the calculation procedure. Our practice may serve as a reference for similar projects to come.
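Sadovsky's empirical formula is commonly written v = K (Q^(1/3)/R)^alpha, with v the peak particle velocity, Q the charge mass and R the distance; the least-squares regression mentioned above fits K and alpha in log-log space. A sketch with synthetic data (the coefficient values are illustrative, not from the tests):

```python
import numpy as np

def fit_sadovsky(charge_kg, distance_m, ppv):
    """Fit K and alpha in v = K * (Q^(1/3) / R)^alpha by linear
    least squares on log v = log K + alpha * log(Q^(1/3) / R)."""
    x = np.log(np.cbrt(charge_kg) / distance_m)
    A = np.vstack([np.ones_like(x), x]).T
    (log_k, alpha), *_ = np.linalg.lstsq(A, np.log(ppv), rcond=None)
    return np.exp(log_k), alpha

# noiseless synthetic data generated with K = 150, alpha = 1.6 is recovered
q = np.array([10.0, 20.0, 40.0, 80.0])
r = np.array([30.0, 50.0, 90.0, 150.0])
v = 150.0 * (np.cbrt(q) / r) ** 1.6
k, a = fit_sadovsky(q, r, v)
print(abs(k - 150.0) < 1e-6 and abs(a - 1.6) < 1e-9)  # True
```

With field data the fit is not exact, and the residual scatter indicates how reliable the derived empirical formula is for vibration prediction.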
Lasocki, Stanislaw; Antoniuk, Janusz; Moscicki, Jerzy
2003-08-01
The Zelazny Most depository of wastes from copper-ore processing, located in southwest Poland, is the largest mineral waste repository in Europe. Moreover, it is located in a seismically active area. The seismicity is induced and is connected with mining works in the nearby underground copper mines. Any release of the contents of the repository to the environment could have devastating and even catastrophic consequences. For this reason, geophysical methods are used for continuously monitoring the state of the repository containment dams. The article presents examples of the application of geoelectric methods for detecting sites of leakage of contaminated water and a sketch of the seismic hazard analysis, which was used to predict future seismic vibrations of the repository dams.
NASA Astrophysics Data System (ADS)
Caudron, Corentin; Taisne, Benoit; Kugaenko, Yulia; Saltykov, Vadim
2015-12-01
In contrast to the 1975-76 Tolbachik eruption, the 2012-13 Tolbachik eruption was not preceded by any striking change in seismic activity. By processing the Klyuchevskoy volcano group seismic data with the Seismic Amplitude Ratio Analysis (SARA) method, we gain insights into the dynamics of magma movement prior to this important eruption. A clear seismic migration within the seismic swarm started 20 hours before the reported eruption onset (05:15 UTC, 26 November 2012). This migration proceeded in different phases and ended when eruptive tremor, corresponding to lava flows, was recorded (at 11:00 UTC, 27 November 2012). In order to get a first-order approximation of the magma location, we compare the calculated seismic intensity ratios with theoretical ones. As expected, the observations suggest that the seismicity migrated toward the eruption location. However, we explain the observed pre-eruptive ratios by a vertical migration under the northern slope of Plosky Tolbachik volcano followed by a lateral migration toward the eruptive vents. Another migration is also captured by this technique and coincides with a seismic swarm that started 16-20 km to the south of Plosky Tolbachik at 20:31 UTC on November 28 and lasted for more than 2 days. This seismic swarm is very similar to the seismicity preceding the 1975-76 Tolbachik eruption and can be considered a possible aborted eruption.
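SARA tracks the ratio of seismic amplitudes between station pairs and compares it with the ratio predicted for a trial source position. A hedged sketch of both sides of that comparison; the frequency, Q and wave-speed values are illustrative, not those of the study:

```python
import numpy as np

def sara_ratio(signal_a, signal_b):
    """Observed SARA ratio: RMS amplitude at station A over station B."""
    rms = lambda x: np.sqrt(np.mean(np.square(x)))
    return rms(signal_a) / rms(signal_b)

def theoretical_ratio(r_a, r_b, freq=5.0, q=50.0, beta=1500.0):
    """Predicted body-wave amplitude ratio for a source at distances
    r_a, r_b (m), assuming 1/r geometrical spreading and attenuation
    exp(-pi*f*r/(Q*beta)); parameter values here are assumptions."""
    b = np.pi * freq / (q * beta)
    return (r_b / r_a) * np.exp(-b * (r_a - r_b))

# a source migrating toward station A drives the predicted ratio up,
# which is how amplitude ratios reveal the direction of migration
print(theoretical_ratio(500.0, 2000.0) > theoretical_ratio(1500.0, 2000.0))  # True
```

Matching observed ratios against such predictions over a grid of candidate source positions is what allows the vertical-then-lateral migration interpretation described above.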
Attenuation and velocity dispersion in the exploration seismic frequency band
NASA Astrophysics Data System (ADS)
Sun, Langqiu
In an anelastic medium, seismic waves are distorted by attenuation and velocity dispersion, which depend on the petrophysical properties of reservoir rocks. The effective attenuation and velocity dispersion are a combination of intrinsic attenuation and apparent attenuation due to scattering, transmission response, and the data acquisition system. Velocity dispersion is usually neglected in seismic data processing, partly because of insufficient observations in the exploration seismic frequency band. This thesis investigates methods of measuring velocity dispersion in the exploration seismic frequency band and interprets the velocity dispersion data in terms of petrophysical properties. Broadband, uncorrelated vibrator data are suitable for measuring velocity dispersion in the exploration seismic frequency band, and a broad bandwidth optimizes the observability of velocity dispersion. Four methods of measuring velocity dispersion in uncorrelated vibrator VSP data are investigated: the sliding window crosscorrelation (SWCC) method, the instantaneous phase method, the spectral decomposition method, and the cross spectrum method. Among them, the SWCC method is new and has satisfactory robustness, accuracy, and efficiency. Using the SWCC method, velocity dispersion is measured in uncorrelated vibrator VSP data from three areas with different geological settings, i.e., the Mallik gas hydrate zone, the McArthur River uranium mines, and the Outokumpu crystalline rocks. The observed velocity dispersion is fitted to a straight line with respect to log frequency for a constant (frequency-independent) Q value. This provides an alternative method for calculating Q. A constant Q value does not directly link to petrophysical properties. A modeling study is implemented for the Mallik and McArthur River data to interpret the velocity dispersion observations in terms of petrophysical properties.
The detailed multi-parameter petrophysical reservoir models are built according to the well logs; the models' parameters are adjusted by fitting the synthetic data to the observed data. In this way, seismic attenuation and velocity dispersion provide new insight into petrophysical properties at the Mallik and McArthur River sites. Potentially, observations of attenuation and velocity dispersion in the exploration seismic frequency band can improve the deconvolution process for vibrator data, Q-compensation, near-surface analysis, and first-break picking for seismic data.
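The straight-line fit against log frequency follows from the Kolsky constant-Q dispersion model, V(f) = V(f0)[1 + ln(f/f0)/(pi*Q)], so Q can be read off the slope. A minimal sketch of that alternative Q estimate:

```python
import numpy as np

def q_from_dispersion(freqs, velocities, f_ref):
    """Estimate a constant Q from velocity dispersion by fitting
    V(f) = V(f_ref) * (1 + ln(f/f_ref) / (pi * Q)) as a straight line
    in log frequency; Q = intercept / (pi * slope)."""
    slope, intercept = np.polyfit(np.log(freqs / f_ref), velocities, 1)
    return intercept / (np.pi * slope)

# synthetic dispersion curve for Q = 40 and V(f_ref) = 3000 m/s
f = np.linspace(10.0, 100.0, 20)
v = 3000.0 * (1.0 + np.log(f / 50.0) / (np.pi * 40.0))
print(f"{q_from_dispersion(f, v, 50.0):.1f}")  # 40.0
```

With measured dispersion the scatter about the fitted line indicates how well the frequency-independent-Q assumption holds over the band.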
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kamışlıoğlu, Miraç, E-mail: m.kamislioglu@gmail.com; Külahcı, Fatih, E-mail: fatihkulahci@firat.edu.tr
Nonlinear time series analysis techniques have broad application in the geoscience and geophysics fields. Modern nonlinear methods provide considerable evidence to explain seismicity phenomena. In this study, nonlinear time series analysis, fractal analysis and spectral analysis have been carried out to investigate the chaotic behavior of radon gas (222Rn) concentrations released during seismic events. Nonlinear time series analysis methods (Lyapunov exponent, Hurst phenomenon, correlation dimension and false nearest neighbor) were applied to the East Anatolian Fault Zone (EAFZ), Turkey, and its surroundings, with about 35,136 radon measurements for each region. In this paper, the behavior of 222Rn, which is used in earthquake prediction studies, is investigated.
NASA Astrophysics Data System (ADS)
Yu, H.; Gu, H.
2017-12-01
A novel multivariate seismic formation pressure prediction methodology is presented, which incorporates high-resolution seismic velocity data from prestack AVO inversion and petrophysical data (porosity and shale volume) derived from poststack seismic motion inversion. In contrast to traditional seismic formation pressure prediction methods, the proposed methodology is based on a multivariate pressure prediction model and utilizes a trace-by-trace multivariate regression analysis on seismic-derived petrophysical properties to calibrate model parameters, in order to make accurate predictions with higher resolution in both the vertical and lateral directions. With prestack time migration velocity as the initial velocity model, an AVO inversion was first applied to the prestack dataset to obtain high-resolution, higher-frequency seismic velocity to be used as the velocity input for seismic pressure prediction, and the density dataset to calculate accurate overburden pressure (OBP). Seismic Motion Inversion (SMI) is an inversion technique based on Markov Chain Monte Carlo simulation. Both the structural variability and the similarity of seismic waveforms are used to incorporate well log data to characterize the variability of the property to be obtained. In this research, porosity and shale volume are first interpreted on well logs and then combined with poststack seismic data using SMI to build porosity and shale volume datasets for seismic pressure prediction. A multivariate effective stress model is used to convert the velocity, porosity and shale volume datasets to effective stress. After a thorough study of the regional stratigraphic and sedimentary characteristics, a regional normally compacted interval model is built, and the coefficients in the multivariate prediction model are then determined in a trace-by-trace multivariate regression analysis on the petrophysical data.
The coefficients are used to convert the velocity, porosity and shale volume datasets to effective stress and then to calculate formation pressure with OBP. Application of the proposed methodology to a research area in the East China Sea has proved that the method can bridge the gap between seismic and well-log pressure prediction and give predicted pressure values close to pressure measurements from well testing.
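The calibration-then-prediction loop described above can be illustrated with a simple linear stand-in for the multivariate effective stress model (the paper's actual model form is not reproduced here): fit coefficients where effective stress is known from a normally compacted interval, invert for stress elsewhere, and subtract from overburden pressure:

```python
import numpy as np

def fit_stress_model(velocity, porosity, vshale, eff_stress):
    """Fit v = c0 + c1*sigma + c2*phi + c3*vsh by least squares in a
    calibration interval (a linear stand-in for the multivariate model)."""
    A = np.column_stack([np.ones_like(velocity), eff_stress, porosity, vshale])
    coef, *_ = np.linalg.lstsq(A, velocity, rcond=None)
    return coef

def pore_pressure(coef, velocity, porosity, vshale, obp):
    """Invert the fitted model for effective stress, then P = OBP - sigma."""
    c0, c1, c2, c3 = coef
    sigma = (velocity - c0 - c2 * porosity - c3 * vshale) / c1
    return obp - sigma

# synthetic calibration data built from known (illustrative) coefficients
rng = np.random.default_rng(1)
sigma = rng.uniform(5.0, 40.0, 50)        # effective stress, MPa
phi = rng.uniform(0.1, 0.3, 50)           # porosity
vsh = rng.uniform(0.0, 0.6, 50)           # shale volume
vel = 1500.0 + 60.0 * sigma - 2000.0 * phi - 500.0 * vsh
coef = fit_stress_model(vel, phi, vsh, sigma)
p = pore_pressure(coef, vel[0], phi[0], vsh[0], obp=60.0)
print(abs(p - (60.0 - sigma[0])) < 1e-6)  # True
```

Running the fit trace by trace, as the abstract describes, lets the coefficients vary laterally instead of forcing one regional calibration.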
Wang, Z.
2007-01-01
Although the causes of large intraplate earthquakes are still not fully understood, they pose a definite hazard and risk to societies. Estimating hazard and risk in these regions is difficult because of a lack of earthquake records. The New Madrid seismic zone is one such region, where large and rare intraplate earthquakes (M = 7.0 or greater) pose significant hazard and risk. Many different definitions of hazard and risk have been used, and the resulting estimates differ dramatically. In this paper, seismic hazard is defined as the natural phenomenon generated by earthquakes, such as ground motion, and is quantified by two parameters: a level of hazard and its occurrence frequency or mean recurrence interval; seismic risk is defined as the probability of occurrence of a specific level of seismic hazard over a certain time and is quantified by three parameters: probability, a level of hazard, and exposure time. Probabilistic seismic hazard analysis (PSHA), a commonly used method for estimating seismic hazard and risk, derives a relationship between a ground motion parameter and its return period (the hazard curve). The return period is not an independent temporal parameter but a mathematical extrapolation of the recurrence interval of earthquakes and the uncertainty of ground motion. Therefore, it is difficult to understand and use PSHA. A new method is proposed and applied here for estimating seismic hazard in the New Madrid seismic zone. This method provides hazard estimates that are consistent with the state of our knowledge and can be easily applied to other intraplate regions. © 2007 The Geological Society of America.
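The link between the three risk parameters (probability, hazard level, exposure time) and the recurrence interval is usually made concrete through a Poisson occurrence model; this is the standard textbook assumption, not the specific method proposed in the paper:

```python
import math

def exceedance_probability(return_period_yr, exposure_yr):
    """Probability that a hazard level with mean recurrence interval T
    is exceeded at least once in an exposure time t, under a Poisson
    occurrence model: P = 1 - exp(-t / T)."""
    return 1.0 - math.exp(-exposure_yr / return_period_yr)

# the familiar design pairing: ~10% probability in 50 years corresponds
# to a 475-year return period
print(f"{exceedance_probability(475, 50):.3f}")  # 0.100
```

The paper's critique is precisely that the return period in this formula is an extrapolated quantity, not an observed recurrence interval, which is easy to lose sight of when quoting such probabilities.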
Waveform Retrieval and Phase Identification for Seismic Data from the CASS Experiment
NASA Astrophysics Data System (ADS)
Li, Zhiwei; You, Qingyu; Ni, Sidao; Hao, Tianyao; Wang, Hongti; Zhuang, Cantao
2013-05-01
The minimal destruction to the deployment site and the high repeatability of the Controlled Accurate Seismic Source (CASS) show its potential for investigating seismic wave velocities in the Earth's crust. However, the difficulty of retrieving impulsive seismic waveforms from CASS data and identifying the seismic phases substantially prevents its wide application. For example, identification of the seismic phases and accurate measurement of travel times are essential for resolving the spatial distribution of seismic velocities in the crust. It still remains a challenging task to estimate accurate travel times of different seismic phases from CASS data, which feature extended wave trains, unlike the waveforms from impulsive events such as earthquakes or explosive sources. In this study, we introduce a time-frequency analysis method to process the CASS data and try to retrieve the seismic waveforms and identify the major seismic phases traveling through the crust. We adopt the Wigner-Ville Distribution (WVD) approach, which has been used in signal detection and parameter estimation for linear frequency modulation (LFM) signals and offers the best time-frequency concentration. The Wigner-Hough transform (WHT) is applied to retrieve the impulsive waveforms from multi-component LFM signals, which comprise seismic phases with different arrival times. We processed the seismic data of the 40-ton CASS in the field experiment around the Xinfengjiang reservoir with the WVD and WHT methods. The results demonstrate that these methods are effective for waveform retrieval and phase identification, especially for high-frequency seismic phases such as PmP and SmS with strong amplitudes at large epicentral distances of 80-120 km. Further studies are still needed to improve the accuracy of travel time estimation, so as to further promote the applicability of the CASS for imaging the seismic velocity structure.
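The WVD evaluates, at each time sample, a Fourier transform over lag of the instantaneous autocorrelation. A minimal discrete sketch for an analytic signal; note that with the 2m lag convention used here a tone at normalized frequency f0 appears at bin 2*f0*N, and production implementations add windowing and cross-term suppression, which this sketch omits:

```python
import numpy as np

def wvd(x):
    """Discrete Wigner-Ville distribution of an analytic signal:
    W[t, k] = FFT over lag m of x[t+m] * conj(x[t-m]).
    Minimal sketch without smoothing or cross-term handling."""
    n = len(x)
    W = np.zeros((n, n))
    for t in range(n):
        m_max = min(t, n - 1 - t)
        lags = np.arange(-m_max, m_max + 1)
        kern = np.zeros(n, dtype=complex)
        kern[lags % n] = x[t + lags] * np.conj(x[t - lags])
        W[t] = np.fft.fft(kern).real
    return W

# analytic tone at f0 = 0.125 cycles/sample: the ridge sits at
# bin 2 * 0.125 * 64 = 16 because of the 2m lag convention
x = np.exp(2j * np.pi * 0.125 * np.arange(64))
W = wvd(x)
print(np.argmax(W[32]))  # 16
```

For an LFM sweep the ridge becomes a straight line in the time-frequency plane, which is what the Wigner-Hough transform integrates along to collapse the extended wave train into an impulsive arrival.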
Reservoir Identification: Parameter Characterization or Feature Classification
NASA Astrophysics Data System (ADS)
Cao, J.
2017-12-01
The ultimate goal of oil and gas exploration is to find oil or gas reservoirs with industrial mining value. Therefore, the core task of modern oil and gas exploration is to identify oil or gas reservoirs on seismic profiles. Traditionally, the reservoir is identified by seismic inversion of a series of physical parameters such as porosity, saturation, permeability, formation pressure, and so on. Due to the heterogeneity of the geological medium, the approximations of the inversion model and the incompleteness and noisiness of the data, the inversion results are highly uncertain and must be calibrated or corrected with well data. In areas where there are few or no wells, reservoir identification based on seismic inversion is high-risk. Reservoir identification is essentially a classification issue. In the identification process, the underground rocks are divided into reservoirs with industrial mining value and host rocks without industrial mining value. In addition to the traditional classification by physical parameters, the classification may be achieved using one or a few comprehensive features. By introducing the concept of the seismic-print, we have developed a new reservoir identification method based on seismic-print analysis. Furthermore, we explore the possibility of using deep learning to discover the seismic-print characteristics of oil and gas reservoirs. Preliminary experiments have shown that deep learning of seismic data can distinguish gas reservoirs from host rocks. The combination of seismic-print analysis and seismic deep learning is expected to be a more robust reservoir identification method. The work was supported by NSFC under grant No. 41430323 and No. U1562219, and the National Key Research and Development Program under Grant No. 2016YFC0601
Multichannel analysis of surface waves (MASW) - Active and passive methods
Park, C.B.; Miller, R.D.; Xia, J.; Ivanov, J.
2007-01-01
The conventional seismic approaches for near-surface investigation have usually been either high-resolution reflection or refraction surveys that deal with a depth range of a few tens to hundreds of meters. Seismic signals from these surveys consist of wavelets with frequencies higher than 50 Hz. The multichannel analysis of surface waves (MASW) method deals with surface waves at lower frequencies (e.g., 1-30 Hz) and a much shallower depth range of investigation (e.g., a few to a few tens of meters). © 2007 Society of Exploration Geophysicists.
Permafrost Active Layer Seismic Interferometry Experiment (PALSIE).
DOE Office of Scientific and Technical Information (OSTI.GOV)
Abbott, Robert; Knox, Hunter Anne; James, Stephanie
2016-01-01
We present findings from a novel field experiment conducted at Poker Flat Research Range in Fairbanks, Alaska that was designed to monitor changes in active layer thickness in real time. Results are derived primarily from seismic data streaming from seven Nanometric Trillium Posthole seismometers directly buried in the upper section of the permafrost. The data were evaluated using two analysis methods: Horizontal to Vertical Spectral Ratio (HVSR) and ambient noise seismic interferometry. Results from the HVSR conclusively illustrated the method's effectiveness at determining the active layer's thickness with a single station. Investigations with the multi-station method (ambient noise seismic interferometry) are continuing at the University of Florida and have not yet conclusively determined active layer thickness changes. Further work continues with the Bureau of Land Management (BLM) to determine if the ground-based measurements can constrain satellite imagery, which provides measurements on a much larger spatial scale.
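The single-station thickness estimate from HVSR typically uses the quarter-wavelength relation h = Vs/(4*f0), where f0 is the frequency of the H/V resonance peak. A minimal sketch; the shear-wave velocity below is an assumed illustrative value, not one from the report:

```python
def hvsr_peak_frequency(freqs, h_spec, v_spec):
    """Frequency of the maximum of the horizontal-to-vertical
    spectral ratio, given matching amplitude spectra."""
    ratios = [h / v for h, v in zip(h_spec, v_spec)]
    return freqs[ratios.index(max(ratios))]

def layer_thickness(f0_hz, vs_m_s):
    """Quarter-wavelength thickness estimate: h = Vs / (4 * f0).
    vs_m_s is an assumed shear-wave velocity of the thawed layer."""
    return vs_m_s / (4.0 * f0_hz)

# a 100 Hz HVSR peak with an assumed Vs of 200 m/s implies ~0.5 m
print(layer_thickness(100.0, 200.0))  # 0.5
```

Tracking f0 through the season then translates directly into a time series of active-layer thickness, which is the real-time monitoring goal stated above.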
Very-long-period seismic signals - filling the gap between deformation and seismicity
NASA Astrophysics Data System (ADS)
Neuberg, Jurgen; Smith, Paddy
2013-04-01
Good broadband seismic sensors are capable of recording seismic transients with dominant wavelengths of several tens or even hundreds of seconds. This allows us to generate multi-component records of volcanic seismic events that lie between the conventional high- to low-frequency seismic spectrum and deformation signals. With a much higher temporal resolution and accuracy than, e.g., GPS records, these signals fill the gap between seismicity and deformation studies. In this contribution we review the non-trivial processing steps necessary to retrieve ground deformation from the original velocity seismogram and explore the role the resulting displacement signals play in the analysis of volcanic events. We use examples from Soufriere Hills volcano in Montserrat, West Indies, to discuss the benefits and shortcomings of such methods regarding new insights into volcanic processes.
NASA Astrophysics Data System (ADS)
Formisano, Antonio; Chiumiento, Giovanni; Fabbrocino, Francesco; Landolfo, Raffaele
2017-07-01
The general objective of this work is to draw attention to the issue of seismic vulnerability analysis of masonry building compounds, which characterise most Italian historic towns. The study is based on the analysis of an aggregated construction located in the town of Arsita (Teramo, Italy) damaged by the 2009 L'Aquila earthquake. A comparison between the seismic verifications carried out using the 3Muri commercial software and those deriving from the application of the Italian Guidelines on Cultural Heritage has been performed. The comparison has shown that the Guidelines provide results on the safe side in predicting the seismic behaviour of the building compound under study. Further analyses should be performed with the aim of suggesting modifications to the simplified calculation method used, to better interpret the behaviour of building compounds under earthquakes.
NASA Astrophysics Data System (ADS)
Rekapalli, Rajesh; Tiwari, R. K.; Sen, Mrinal K.; Vedanti, Nimisha
2017-05-01
Noise and data gaps complicate seismic data processing and subsequently cause difficulties in geological interpretation. We discuss a recent development and application of Multi-channel Time Slice Singular Spectrum Analysis (MTSSSA) for 3D seismic data de-noising in the time domain. In addition, L1-norm based simultaneous data-gap filling of 3D seismic data using MTSSSA is discussed. We discriminate noise from individual time slices of 3D volumes by analyzing the eigen-triplets of the trajectory matrix. We first tested the efficacy of the method on 3D synthetic seismic data contaminated with noise and then applied it to post-stack seismic reflection data acquired from the Sleipner CO2 storage site (pre and post CO2 injection) in Norway. Our analysis suggests that the MTSSSA algorithm is efficient in enhancing the S/N for better identification of amplitude anomalies, along with simultaneous data-gap filling. The bright spots identified in the de-noised data indicate upward migration of CO2 towards the top of the Utsira formation. The reflections identified by applying MTSSSA to pre- and post-injection data correlate well with the geology of the Southern Viking Graben (SVG).
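The eigen-triplet analysis of the trajectory matrix described above can be illustrated with a single-channel SSA sketch; MTSSSA applies the same embed/decompose/reconstruct idea to multichannel time slices of the 3D volume:

```python
import numpy as np

def ssa_denoise(x, window, rank):
    """Single-channel singular spectrum analysis: embed the series in a
    Hankel trajectory matrix, keep the leading eigen-triplets (the
    signal subspace), and reconstruct by diagonal averaging."""
    n = len(x)
    k = n - window + 1
    X = np.column_stack([x[i:i + window] for i in range(k)])
    u, s, vt = np.linalg.svd(X, full_matrices=False)
    Xr = (u[:, :rank] * s[:rank]) @ vt[:rank]   # low-rank reconstruction
    out = np.zeros(n)
    cnt = np.zeros(n)
    for j in range(k):                           # diagonal averaging back
        out[j:j + window] += Xr[:, j]            # to a single series
        cnt[j:j + window] += 1.0
    return out / cnt

# a sinusoid (rank-2 in the trajectory matrix) buried in noise
rng = np.random.default_rng(2)
t = np.linspace(0.0, 1.0, 400)
clean = np.sin(2 * np.pi * 8 * t)
noisy = clean + 0.5 * rng.standard_normal(t.size)
denoised = ssa_denoise(noisy, window=60, rank=2)
print(np.std(denoised - clean) < np.std(noisy - clean))  # True
```

Choosing the rank corresponds to deciding which eigen-triplets carry signal and which carry noise, which is the discrimination step the abstract describes.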
NASA Astrophysics Data System (ADS)
Jiang, T.; Yue, Y.
2017-12-01
It is well known that mono-frequency directional seismic wave technology can concentrate seismic waves into a beam. However, little work has been done on the method and effect of variable-frequency directional seismic waves under complex geological conditions. We studied variable-frequency directional wave theory in several respects. Firstly, we studied the relation between the directional parameters and the direction of the main beam. Secondly, we analyzed the parameters that significantly affect the width of the main beam, such as vibrator spacing, wavelet dominant frequency, and number of vibrators. In addition, we studied the different characteristics of variable-frequency directional seismic waves in typical velocity models. In order to examine the propagation characteristics of directional seismic waves, we designed appropriate parameters according to the character of the directional parameters, so as to enhance the energy in the main beam direction. Directional seismic waves were studied further from the viewpoint of power spectra. The results indicate that the energy intensity in the main beam direction increased 2 to 6 times for a multi-ore-body velocity model. This shows that variable-frequency directional seismic technology provides an effective way to strengthen the target signals under complex geological conditions. For a concave interface model, we introduced complex directional seismic technology, which supports multiple main beams, to obtain high-quality data. Finally, we applied the 9-element variable-frequency directional seismic wave technology to process raw data acquired in an oil-shale exploration area. The results show that the depth of exploration increased 4 times with the directional seismic wave method.
Based on the above analysis, we conclude that variable-frequency directional seismic wave technology can improve the target signals under different geological conditions and increase exploration depth at little cost. Owing to the inconvenience of hydraulic vibrators in areas with complicated surface conditions, we suggest that the combination of a high-frequency portable vibrator and the variable-frequency directional seismic wave method is an alternative technology to increase the depth of exploration or prospecting.
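Steering the main beam of a vibrator array reduces, under a plane-wave assumption, to choosing inter-element firing delays dt = d*sin(theta)/v. The sketch below is a hedged illustration of that relation between directional parameters and beam direction, not the paper's design procedure:

```python
import math

def steering_delay(spacing_m, velocity_m_s, angle_deg):
    """Inter-vibrator firing delay (s) that steers the main beam to a
    given angle from vertical, assuming a plane wave in a uniform
    near-surface of the given velocity."""
    return spacing_m * math.sin(math.radians(angle_deg)) / velocity_m_s

def element_delays(n_vibrators, spacing_m, velocity_m_s, angle_deg):
    """Firing delay for each element of a linear array; more elements
    narrow the main beam, as the parameter study above notes."""
    dt = steering_delay(spacing_m, velocity_m_s, angle_deg)
    return [i * dt for i in range(n_vibrators)]

# 10 m spacing, 2000 m/s, 30-degree steering: 2.5 ms between elements
print(f"{steering_delay(10.0, 2000.0, 30.0) * 1000:.3f}")  # 2.500
```

In the variable-frequency case the same geometry applies per frequency component, which is what lets the sweep keep its beam pointed at the target as the wavelet's dominant frequency changes.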
NASA Astrophysics Data System (ADS)
Abdel Raheem, Shehata E.; Ahmed, Mohamed M.; Alazrak, Tarek M. A.
2015-03-01
Soil conditions have a great deal to do with damage to structures during earthquakes. Hence the investigation of the energy transfer mechanism from soils to buildings during earthquakes is critical for the seismic design of multi-story buildings and for upgrading existing structures. Thus, the need for research into soil-structure interaction (SSI) problems is greater than ever. Moreover, recent studies show that the effects of SSI may be detrimental to the seismic response of structures, and neglecting SSI in analysis may lead to unconservative design. Despite this, the conventional design procedure usually assumes fixity at the base of the foundation, neglecting the flexibility of the foundation, the compressibility of the underlying soil and, consequently, the effect of foundation settlement on the further redistribution of bending moment and shear force demands. Hence the SSI analysis of multi-story buildings is the main focus of this research; the effects of SSI are analyzed for a typical multi-story building resting on a raft foundation. Three methods of analysis are used for the seismic demand evaluation of the target moment-resistant frame buildings: the equivalent static load method, the response spectrum method, and nonlinear time history analysis with a suite of nine time history records. A three-dimensional FE model is constructed to investigate the effects of different soil conditions and number of stories on the vibration characteristics and seismic response demands of building structures. Numerical results obtained using the SSI model with different soil conditions are compared to those corresponding to the fixed-base support modeling assumption. The peak responses of story shear, story moment, story displacement, story drift, moments at beam ends, as well as forces in inner columns are analyzed. The results of the different analysis approaches are used to evaluate the advantages, limitations, and ease of application of each approach for seismic analysis.
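Of the three demand-evaluation approaches mentioned, the response spectrum is the most easily sketched: each spectral ordinate is the peak response of a damped single-degree-of-freedom oscillator driven by the ground motion. A minimal illustration follows, using simple explicit time stepping rather than a production Newmark-beta scheme; all parameters are illustrative.

```python
import numpy as np

def response_spectrum(acc, dt, periods, zeta=0.05):
    """Elastic pseudo-acceleration response spectrum of a ground-motion
    record, integrated with a semi-implicit Euler scheme (a sketch only;
    production codes use Newmark-beta or the exact piecewise method)."""
    Sa = []
    for T in periods:
        wn = 2.0 * np.pi / T
        u = v = 0.0
        umax = 0.0
        for ag in acc:
            # SDOF relative motion: u'' + 2*zeta*wn*u' + wn^2*u = -ag
            a = -ag - 2.0 * zeta * wn * v - wn**2 * u
            v += a * dt
            u += v * dt
            umax = max(umax, abs(u))
        Sa.append(wn**2 * umax)    # pseudo-spectral acceleration
    return np.array(Sa)
```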
Application of Visual Attention in Seismic Attribute Analysis
NASA Astrophysics Data System (ADS)
He, M.; Gu, H.; Wang, F.
2016-12-01
It has been proved that seismic attributes can be used to predict reservoirs. The combination of multiple attributes with geological statistics, data mining, and artificial intelligence has further promoted the development of seismic attribute analysis. However, existing methods tend to have multiple solutions and insufficient generalization ability, which is mainly due to the complex relationship between seismic data and geological information, and undoubtedly owes partly to the methods applied. Visual attention is a mechanism model of the human visual system that can concentrate on a few significant visual objects rapidly, even in a cluttered scene; the model has a good ability for target detection and recognition. In our study, the targets to be predicted are treated as visual objects, and an object representation based on well data is built in the attribute dimensions. Then, in the same attribute space, the representation serves as a criterion to search for potential targets away from the wells. This method does not need to predict properties by building a complicated relation between attributes and reservoir properties, but works with reference to the standard determined beforehand. It therefore has good generalization ability, and the problem of multiple solutions can be mitigated by defining a similarity threshold.
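The search step described, matching samples in attribute space against a well-derived representation under a similarity threshold, can be sketched generically. The cosine-similarity criterion below is an assumption for illustration, not necessarily the measure the authors used.

```python
import numpy as np

def similarity_map(attributes, template, threshold=0.9):
    """Score every sample of a multi-attribute volume against a well-derived
    target representation (hypothetical cosine-similarity criterion).
    attributes: (n_samples, n_attr) array; template: (n_attr,) vector."""
    num = attributes @ template
    den = np.linalg.norm(attributes, axis=1) * np.linalg.norm(template) + 1e-12
    sim = num / den
    return sim, sim >= threshold   # similarity scores and potential-target mask
```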
New methods for engineering site characterization using reflection and surface wave seismic survey
NASA Astrophysics Data System (ADS)
Chaiprakaikeow, Susit
This study presents two new seismic testing methods for engineering applications: a new shallow seismic reflection method and Time Filtered Analysis of Surface Waves (TFASW). Both methods are described in this dissertation. The new shallow seismic reflection method was developed to measure reflection at a single point using two to four receivers, assuming homogeneous, horizontal layering. It uses one or more shakers driven by a swept sine function as a source, and the cross-correlation technique to identify wave arrivals. The phase difference between the source forcing function and the ground motion due to the dynamic response of the shaker-ground interface was corrected by using a reference geophone. Attenuated high frequency energy was also recovered using whitening in the frequency domain. The new shallow seismic reflection testing was performed at the crest of Porcupine Dam in Paradise, Utah. The testing used two horizontal Vibroseis sources and four receivers at spacings between 6 and 300 ft. Unfortunately, the results showed no clear evidence of the reflectors despite correction of the magnitude and phase of the signals. However, an improvement in the shape of the cross-correlations was noticed after the corrections. The results showed distinct primary lobes in the corrected cross-correlated signals up to 150 ft offset. More consistent maximum peaks were observed in the corrected waveforms. TFASW is a new surface (Rayleigh) wave method for determining the shear wave velocity profile at a site. It is a time domain method, as opposed to the Spectral Analysis of Surface Waves (SASW) method, which is a frequency domain method. This method uses digital filtering to optimize the bandwidth used to determine the dispersion curve. Tests at three different sites in Utah showed good agreement between the dispersion curves measured using the TFASW and SASW methods.
The advantage of the TFASW method is that the dispersion curves had less scatter at long wavelengths as a result of the wider bandwidth used in those tests.
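The cross-correlation step used above to identify wave arrivals from a swept-sine source can be sketched as follows; the chirp parameters are illustrative, not those of the Porcupine Dam survey.

```python
import numpy as np

def sweep_correlate(trace, sweep, dt):
    """Cross-correlate a recorded trace with the source sweep (Vibroseis-style
    sweep compression) and return the lag (s) of the correlation peak --
    a minimal sketch of arrival-time picking."""
    xcorr = np.correlate(trace, sweep, mode='full')
    lags = (np.arange(xcorr.size) - (sweep.size - 1)) * dt
    return lags[np.argmax(xcorr)], xcorr
```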
Pick- and waveform-based techniques for real-time detection of induced seismicity
NASA Astrophysics Data System (ADS)
Grigoli, Francesco; Scarabello, Luca; Böse, Maren; Weber, Bernd; Wiemer, Stefan; Clinton, John F.
2018-05-01
The monitoring of induced seismicity is a common operation in many industrial activities, such as conventional and non-conventional hydrocarbon production, mining, and geothermal energy exploitation, to cite a few. During such operations, we generally collect very large and strongly noise-contaminated data sets that require robust and automated analysis procedures. Induced seismicity data sets are often characterized by sequences of multiple events with short interevent times or overlapping events; in these cases, pick-based location methods may struggle to correctly assign picks to phases and events, and errors can lead to missed detections and/or reduced location resolution and incorrect magnitudes, which can have significant consequences if real-time seismicity information is used in risk assessment frameworks. To overcome these issues, different waveform-based methods for the detection and location of microseismicity have been proposed. The main advantage of waveform-based methods is that they appear to perform better and can simultaneously detect and locate seismic events, providing high-quality locations in a single step, while the main disadvantage is that they are computationally expensive. Although these methods have been applied to different induced seismicity data sets, an extensive comparison with sophisticated pick-based detection methods is still missing. In this work, we introduce our improved waveform-based detector and we compare its performance with two pick-based detectors implemented within the SeiscomP3 software suite. We test the performance of these three approaches with both synthetic and real data sets related to the induced seismicity sequence at the deep geothermal project in the vicinity of the city of St. Gallen, Switzerland.
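For contrast with the waveform-based detector discussed above, the classic pick-based trigger is the STA/LTA ratio; a minimal sketch (generic, not the SeiscomP3 implementation) is:

```python
import numpy as np

def sta_lta(trace, dt, sta_win=0.5, lta_win=5.0):
    """Classic STA/LTA characteristic function: ratio of short-term to
    long-term average signal energy (simplified pick-based detector sketch)."""
    ns, nl = int(sta_win / dt), int(lta_win / dt)
    e = trace**2
    csum = np.concatenate(([0.0], np.cumsum(e)))
    sta = (csum[ns:] - csum[:-ns]) / ns
    lta = (csum[nl:] - csum[:-nl]) / nl
    n = min(sta.size, lta.size)
    # align so STA and LTA windows end at the same sample
    return sta[-n:] / (lta[-n:] + 1e-12)
```

A detection would be declared wherever the ratio exceeds a tuned threshold; overlapping events with short interevent times are exactly where this simple trigger degrades, as the abstract notes.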
DOE Office of Scientific and Technical Information (OSTI.GOV)
Harben, P E; Harris, D; Myers, S
Seismic imaging and tracking methods have intelligence and monitoring applications. Current systems, however, do not adequately calibrate or model the unknown geological heterogeneity. Current systems are also not designed for rapid data acquisition and analysis in the field. This project seeks to build the core technological capabilities coupled with innovative deployment, processing, and analysis methodologies to allow seismic methods to be effectively utilized in the applications of seismic imaging and vehicle tracking where rapid (minutes to hours) and real-time analysis is required. The goal of this project is to build capabilities in acquisition system design, utilization and in full 3D finite difference modeling as well as statistical characterization of geological heterogeneity. Such capabilities coupled with a rapid field analysis methodology based on matched field processing are applied to problems associated with surveillance, battlefield management, finding hard and deeply buried targets, and portal monitoring. This project benefits the U.S. military and intelligence community in support of LLNL's national-security mission. FY03 was the final year of this project. In the 2.5 years this project has been active, numerous and varied developments and milestones have been accomplished. A wireless communication module for seismic data was developed to facilitate rapid seismic data acquisition and analysis. The E3D code was enhanced to include topographic effects. Codes were developed to implement the Karhunen-Loeve (K-L) statistical methodology for generating geological heterogeneity that can be utilized in E3D modeling. The matched field processing methodology applied to vehicle tracking and based on a field calibration to characterize geological heterogeneity was tested and successfully demonstrated in a tank tracking experiment at the Nevada Test Site.
A 3-seismic-array vehicle tracking testbed was installed on-site at LLNL for testing real-time seismic tracking methods. A field experiment was conducted over a tunnel at the Nevada Site that quantified the tunnel reflection signal and, coupled with modeling, identified key needs and requirements in experimental layout of sensors. A large field experiment was conducted at the Lake Lynn Laboratory, a mine safety research facility in Pennsylvania, over a tunnel complex in realistic, difficult conditions. This experiment gathered the necessary data for a full 3D attempt to apply the methodology. The experiment also collected data to analyze the capabilities to detect and locate in-tunnel explosions for mine safety and other applications.
Application of the Radon-FCL approach to seismic random noise suppression and signal preservation
NASA Astrophysics Data System (ADS)
Meng, Fanlei; Li, Yue; Liu, Yanping; Tian, Yanan; Wu, Ning
2016-08-01
The fractal conservation law (FCL) is a linear partial differential equation that is modified by an anti-diffusive term of lower order. Analysis indicates that this algorithm can eliminate high frequencies while preserving or amplifying low and medium frequencies. Thus, the method is well suited to simultaneous noise suppression and the enhancement or preservation of seismic signals. However, the conventional FCL filters seismic data only along the time direction, thereby ignoring the spatial coherence between neighbouring traces, which leads to the loss of directional information. Therefore, we extend the conventional FCL into the time-space domain and propose a Radon-FCL approach. In this article, we apply a Radon transform to implement the FCL method; performing FCL filtering in the Radon domain achieves a higher level of noise attenuation. Using this method, seismic reflection events can be recovered while sacrificing fewer frequency components and attenuating more random noise than conventional FCL filtering. Experiments using both synthetic and common shot point data demonstrate the advantages of the Radon-FCL approach over the conventional FCL method with regard to both random noise attenuation and seismic signal preservation.
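The Radon (tau-p) domain in which the FCL filter is applied can be built by slant stacking; a nearest-sample sketch (a generic linear Radon transform, not the authors' implementation) is:

```python
import numpy as np

def slant_stack(data, dt, offsets, slownesses):
    """Linear Radon (tau-p) transform by nearest-sample slant stacking.
    data: (nt, nx) gather; returns a (nt, np) tau-p panel in which linear
    events focus at their slowness -- the domain for subsequent filtering."""
    nt, nx = data.shape
    taup = np.zeros((nt, len(slownesses)))
    for ip, p in enumerate(slownesses):
        for ix, x in enumerate(offsets):
            shift = int(round(p * x / dt))          # moveout in samples
            if 0 <= shift < nt:
                taup[:nt - shift, ip] += data[shift:, ix]
    return taup
```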
Neo-Deterministic and Probabilistic Seismic Hazard Assessments: a Comparative Analysis
NASA Astrophysics Data System (ADS)
Peresan, Antonella; Magrin, Andrea; Nekrasova, Anastasia; Kossobokov, Vladimir; Panza, Giuliano F.
2016-04-01
Objective testing is the key issue for any reliable seismic hazard assessment (SHA). Different earthquake hazard maps must demonstrate their capability in anticipating ground shaking from future strong earthquakes before they are appropriately used for different purposes, such as engineering design, insurance, and emergency management. Quantitative assessment of map performance is also an essential step in the scientific process of their revision and possible improvement. Cross-checking of probabilistic models against available observations and independent physics-based models is recognized as a major validation procedure. The existing maps from classical probabilistic seismic hazard analysis (PSHA), as well as those from neo-deterministic analysis (NDSHA), which have already been developed for several regions worldwide (including Italy, India and North Africa), are considered to exemplify the possibilities of cross-comparative analysis in identifying the limits and advantages of the different methods. Where the data permit, a comparative analysis against the documented seismic activity observed in reality is carried out, showing how available observations about past earthquakes can contribute to assessing the performance of the different methods. Neo-deterministic refers to a scenario-based approach, which allows for consideration of a wide range of possible earthquake sources as the starting point for scenarios constructed via full waveform modeling. The method does not make use of empirical attenuation models (i.e. Ground Motion Prediction Equations, GMPE) and naturally supplies realistic time series of ground shaking (i.e. complete synthetic seismograms), readily applicable to complete engineering analysis and other mitigation actions. The standard NDSHA maps provide reliable envelope estimates of maximum seismic ground motion from a wide set of possible scenario earthquakes, including the largest deterministically or historically defined credible earthquake.
In addition, the flexibility of NDSHA allows for generation of ground shaking maps at specified long-term return times, which may permit a straightforward comparison between NDSHA and PSHA maps in terms of average rates of exceedance for specified time windows. The comparison of NDSHA and PSHA maps, particularly for very long recurrence times, may indicate to what extent probabilistic ground shaking estimates are consistent with those from physical models of seismic wave propagation. A systematic comparison over the territory of Italy is carried out exploiting the uniqueness of the Italian earthquake catalogue, a data set covering more than a millennium (a time interval about ten times longer than that available in most regions worldwide) with a satisfactory completeness level for M>5, which supports the reliability of the analysis. By analysing in some detail seismicity in the Vrancea region, we show that well constrained macroseismic field information for individual earthquakes may provide useful information about the reliability of ground shaking estimates. Finally, in order to generalise the observations, the comparative analysis is extended to further regions where both standard NDSHA and PSHA maps are available (e.g. the State of Gujarat, India). The final Global Seismic Hazard Assessment Program (GSHAP) results and the most recent version of the Seismic Hazard Harmonization in Europe (SHARE) project maps, along with other national scale probabilistic maps, all obtained by PSHA, are considered for this comparative analysis.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Nie, J.; Braverman, J.; Hofmayer, C.
2010-06-30
The Korea Atomic Energy Research Institute (KAERI) is conducting a five-year research project to develop a realistic seismic risk evaluation system which includes the consideration of aging of structures and components in nuclear power plants (NPPs). The KAERI research project includes three specific areas that are essential to seismic probabilistic risk assessment (PRA): (1) probabilistic seismic hazard analysis, (2) seismic fragility analysis including the effects of aging, and (3) a plant seismic risk analysis. Since 2007, Brookhaven National Laboratory (BNL) has entered into a collaboration agreement with KAERI to support its development of seismic capability evaluation technology for degraded structures and components. The collaborative research effort is intended to continue over a five year period. The goal of this collaboration endeavor is to assist KAERI to develop seismic fragility analysis methods that consider the potential effects of age-related degradation of structures, systems, and components (SSCs). The research results of this multi-year collaboration will be utilized as input to seismic PRAs. In the Year 1 scope of work, BNL collected and reviewed degradation occurrences in US NPPs and identified important aging characteristics needed for the seismic capability evaluations. This information is presented in the Annual Report for the Year 1 Task, identified as BNL Report-81741-2008 and also designated as KAERI/RR-2931/2008. The report presents results of the statistical and trending analysis of this data and compares the results to prior aging studies. In addition, the report provides a description of U.S. current regulatory requirements, regulatory guidance documents, generic communications, industry standards and guidance, and past research related to aging degradation of SSCs.
In the Year 2 scope of work, BNL carried out a research effort to identify and assess degradation models for the long-term behavior of dominant materials that are determined to be risk significant to NPPs. Multiple models have been identified for concrete, carbon and low-alloy steel, and stainless steel. These models are documented in the Annual Report for the Year 2 Task, identified as BNL Report-82249-2009 and also designated as KAERI/TR-3757/2009. This report describes the research effort performed by BNL for the Year 3 scope of work. The objective is for BNL to develop the seismic fragility capacity for a condensate storage tank with various degradation scenarios. The conservative deterministic failure margin method has been utilized for the undegraded case and has been modified to accommodate the degraded cases. A total of five seismic fragility analysis cases have been described: (1) undegraded case, (2) degraded stainless tank shell, (3) degraded anchor bolts, (4) anchorage concrete cracking, and (5) a perfect combination of the three degradation scenarios. Insights from these fragility analyses are also presented.
Monitoring Instrument Performance in Regional Broadband Seismic Network Using Ambient Seismic Noise
NASA Astrophysics Data System (ADS)
Ye, F.; Lyu, S.; Lin, J.
2017-12-01
In the past ten years, the number of seismic stations has increased significantly, and regional seismic networks with advanced technology have gradually been developed all over the world. The resulting broadband data help to improve seismological research. It is important to monitor the performance of broadband instruments in a new network over a long period of time to ensure the accuracy of seismic records. Here, we propose a method that uses ambient noise data in the period range 5-25 s to monitor instrument performance and check data quality in situ. The method is based on an analysis of amplitude and phase index parameters calculated from pairwise cross-correlations among three stations, which provide multiple references for reliable error estimates. Index parameters calculated daily during a two-year observation period are evaluated to identify stations with instrument response errors in near real time. During data processing, initial instrument responses are used in place of the available instrument responses to simulate instrument response errors, which are then used to verify our results. We also examine the feasibility of using the tailing noise with data from stations selected from USArray at different locations, and analyze possible instrumental errors resulting in time shifts, which are used to verify the method. Additionally, we show an application in which instrument response errors involving pole-zero variations produce apparently statistically significant velocity perturbations, larger than the standard deviation, when monitoring temporal variations in crustal properties. The results indicate that monitoring seismic instrument performance helps eliminate data pollution before analysis begins.
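Two of the index parameters described, a time shift and an amplitude index between a reference cross-correlation and a daily one, can be sketched as follows. These are hypothetical definitions for illustration; the paper's exact index formulations may differ.

```python
import numpy as np

def ccf_time_shift(ccf_ref, ccf_day, dt):
    """Time shift between a reference cross-correlation and a daily one,
    estimated from the peak of their mutual correlation (phase-type index)."""
    xc = np.correlate(ccf_day, ccf_ref, mode='full')
    lag = np.argmax(xc) - (ccf_ref.size - 1)
    return lag * dt

def amplitude_index(ccf_ref, ccf_day):
    """RMS amplitude ratio index; it drifts when an instrument gain changes."""
    return np.sqrt(np.mean(ccf_day**2) / (np.mean(ccf_ref**2) + 1e-20))
```

A station whose daily indices wander outside the scatter of the other two pairs in the three-station comparison would be flagged for a possible response error.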
Picking vs Waveform based detection and location methods for induced seismicity monitoring
NASA Astrophysics Data System (ADS)
Grigoli, Francesco; Boese, Maren; Scarabello, Luca; Diehl, Tobias; Weber, Bernd; Wiemer, Stefan; Clinton, John F.
2017-04-01
Microseismic monitoring is a common operation in various industrial activities related to geo-resources, such as oil and gas production, mining, and geothermal energy exploitation. In microseismic monitoring we generally deal with large datasets from dense monitoring networks that require robust automated analysis procedures. The seismic sequences being monitored are often characterized by many events with short inter-event times, which can even produce overlapping seismic signatures. In these situations, traditional approaches that identify seismic events based on detections, phase identification, and event association across a dense seismic network can fail, leading to missed detections and/or reduced location resolution. In recent years, to improve the quality of automated catalogues, various waveform-based methods for the detection and location of microseismicity have been proposed. These methods exploit the coherence of the waveforms recorded at different stations and do not require any automated picking procedure. Although this family of methods has been applied to different induced seismicity datasets, an extensive comparison with sophisticated pick-based detection and location methods is still lacking. We aim here to perform a systematic comparison, in terms of performance, between the waveform-based method LOKI and the pick-based detection and location methods SCAUTOLOC and SCANLOC implemented within the SeisComP3 software package. SCANLOC is a new detection and location method specifically designed for seismic monitoring at the local scale. Although recent applications have proved promising, an extensive test with induced seismicity datasets has not yet been performed. The method is based on a cluster search algorithm to associate detections with one or more potential earthquake sources. SCAUTOLOC, on the other hand, is a more "conventional" method and is the basic tool for seismic event detection and location in SeisComP3.
This approach was specifically designed for regional and teleseismic applications, so its performance with microseismic data might be limited. We analyze the performance of the three methodologies on a synthetic dataset with realistic noise conditions, as well as on one hour of continuous waveform data, including the Ml 3.5 St. Gallen earthquake, recorded by a microseismic network deployed in the area. We finally compare the results obtained with all three methods against a manually revised catalogue.
Yong, A.; Hough, S.E.; Cox, B.R.; Rathje, E.M.; Bachhuber, J.; Dulberg, R.; Hulslander, D.; Christiansen, L.; Abrams, M.J.
2011-01-01
We report on a preliminary study to evaluate the use of semi-automated imaging analysis of a remotely-sensed DEM and field geophysical measurements to develop a seismic-zonation map of Port-au-Prince, Haiti. For in situ data, Vs30 values are derived from the MASW technique deployed in and around the city. For satellite imagery, we use an ASTER GDEM of Hispaniola. We apply both pixel- and object-based imaging methods on the ASTER GDEM to explore local topography (absolute elevation values) and classify terrain types such as mountains, alluvial fans and basins/near-shore regions. We assign NEHRP seismic site class ranges based on available Vs30 values. A comparison of results from imagery-based methods to results from traditional geologic-based approaches reveals good overall correspondence. We conclude that image analysis of RS data provides reliable first-order site characterization results in the absence of local data and can be useful to refine detailed site maps with sparse local data. © 2011 American Society for Photogrammetry and Remote Sensing.
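The assignment of NEHRP site classes from Vs30, as used in the study above, follows standard class boundaries and can be expressed directly:

```python
def nehrp_site_class(vs30):
    """Map a Vs30 value (m/s) to its NEHRP site class using the standard
    boundaries (A: hard rock ... E: soft soil)."""
    if vs30 > 1500:
        return 'A'   # hard rock
    if vs30 > 760:
        return 'B'   # rock
    if vs30 > 360:
        return 'C'   # very dense soil / soft rock
    if vs30 > 180:
        return 'D'   # stiff soil
    return 'E'       # soft soil
```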
NASA Astrophysics Data System (ADS)
Sun, Baitao; Zhao, Hexian; Yan, Peilei
2017-08-01
Damage to masonry structures in earthquakes is generally more severe than to other structures. Through the analysis of two typical earthquake-damaged buildings of Xuankou middle school in the Wenchuan earthquake, we found that the number of storeys and the construction measures had a great influence on the seismic performance of masonry structures. This paper takes a teachers' dormitory in Xuankou middle school as an example, selecting the structural arrangement and the number of storeys as two independent variables to design the analysis cases. Finally, we investigated the differences in seismic performance of the masonry structure under the two variables using the finite element analysis method.
NASA Astrophysics Data System (ADS)
Hazreek, Z. A. M.; Kamarudin, A. F.; Rosli, S.; Fauziah, A.; Akmal, M. A. K.; Aziman, M.; Azhar, A. T. S.; Ashraf, M. I. M.; Shaylinda, M. Z. N.; Rais, Y.; Ishak, M. F.; Alel, M. N. A.
2018-04-01
Geotechnical site investigation, also known as subsurface profile evaluation, is the process of determining the characteristics of subsurface layers, which are finally used in the design and construction phases. Traditionally, site investigation has been performed using drilling techniques and thus suffers from several limitations in terms of cost, time, data coverage and sustainability. To overcome these problems, this study adopted surface techniques using the seismic refraction and ambient vibration methods for subsurface profile depth evaluation. Seismic refraction data acquisition and processing were performed using ABEM Terraloc equipment and OPTIM software, respectively, while ambient vibration data acquisition and processing were performed using CityShark II and Lennartz equipment and GEOPSY software. It was found that the studied area consists of two layers, representing overburden and bedrock geomaterials, based on the analyzed p-wave velocity values (vp = 300 - 2500 m/s and vp > 2500 m/s) and natural frequency values (Fo = 3.37 - 3.90 Hz). Further analysis found that both methods show good similarity in terms of depth and thickness, with percentage accuracy of 60 - 97%. Consequently, this study has demonstrated that the seismic refraction and ambient vibration methods are applicable to estimating subsurface profile depth and thickness. Moreover, the surface techniques adopted in this study, which are considered non-destructive methods, were able to complement the conventional drilling method in terms of cost, time, data coverage and environmental sustainability.
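For the two-layer case identified above, the depth to the refractor can be estimated from the crossover distance of the direct and refracted first arrivals with the standard two-layer refraction formula h = (x_cross / 2) * sqrt((v2 - v1) / (v2 + v1)); the velocities in the test are illustrative.

```python
import math

def refractor_depth(x_cross, v1, v2):
    """Depth to a refractor in a two-layer model from the crossover distance
    x_cross (m) and the layer velocities v1 < v2 (m/s)."""
    return 0.5 * x_cross * math.sqrt((v2 - v1) / (v2 + v1))
```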
Seismic hazards in Thailand: a compilation and updated probabilistic analysis
NASA Astrophysics Data System (ADS)
Pailoplee, Santi; Charusiri, Punya
2016-06-01
A probabilistic seismic hazard analysis (PSHA) for Thailand was performed and compared to those of previous works. This PSHA was based upon (1) the most up-to-date paleoseismological data (slip rates), (2) the seismic source zones, (3) the seismicity parameters (a and b values), and (4) the strong ground-motion attenuation models suggested as suitable for Thailand. For the PSHA mapping, both the ground shaking and probability of exceedance (POE) were analyzed and mapped using various methods of presentation. In addition, site-specific PSHAs were demonstrated for ten major provinces within Thailand. For instance, a 2 and 10 % POE in the next 50 years of a 0.1-0.4 g and 0.1-0.2 g ground shaking, respectively, was found for western Thailand, defining this area as the most earthquake-prone region evaluated in Thailand. Among the ten selected provinces, the Kanchanaburi and Tak provinces had comparatively high seismic hazards, and therefore effective mitigation plans for these areas should be made. Although Bangkok was found to have a low seismic hazard in this PSHA, a further study of seismic wave amplification due to the soft soil beneath Bangkok is required.
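The seismicity parameters (a and b values) and the quoted probabilities of exceedance connect through the Gutenberg-Richter relation and the Poisson occurrence model assumed in standard PSHA; a minimal sketch:

```python
import math

def gr_annual_rate(m, a, b):
    """Cumulative annual rate of earthquakes with magnitude >= m from the
    Gutenberg-Richter relation log10 N(m) = a - b*m."""
    return 10.0 ** (a - b * m)

def poe(rate, years):
    """Poisson probability of at least one exceedance in a time window."""
    return 1.0 - math.exp(-rate * years)
```

The familiar design levels follow directly: a 10% POE in 50 years corresponds to a mean return period of about 475 years, and 2% in 50 years to about 2475 years.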
Research on Influencing Factors and Generalized Power of Synthetic Artificial Seismic Wave
NASA Astrophysics Data System (ADS)
Jiang, Yanpei
2018-05-01
In this paper, following the trigonometric series method, the author adopts different envelope functions and the acceleration design spectrum in the Seismic Code for Urban Bridge Design to simulate seismic acceleration time histories that meet engineering accuracy requirements by modifying and iterating the initial wave. Spectral analysis is carried out to find the distribution law of the changing frequencies of the energy of the seismic time history and to determine the main factors that affect the acceleration amplitude spectrum and the energy spectral density. The generalized power formula of the seismic time history is derived from the discrete energy integral formula, and the author studied the changing characteristics of the generalized power of the seismic time history under different envelope functions. Examples are analyzed to illustrate that the generalized power can measure the seismic performance of bridges.
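The trigonometric series method referred to above superposes cosines under an intensity envelope; a generic sketch with a Jennings-type envelope follows. The envelope parameters and component amplitudes are illustrative, not those of the paper.

```python
import numpy as np

def synth_accel(t, amps, freqs, phases, envelope):
    """Artificial accelerogram by the trigonometric series method: a
    stationary sum of cosines shaped by an intensity envelope (sketch)."""
    a = sum(A * np.cos(2 * np.pi * f * t + p)
            for A, f, p in zip(amps, freqs, phases))
    return envelope(t) * a

def jennings_envelope(t, t1=2.0, t2=10.0, c=0.3):
    """Piecewise Jennings-type envelope: parabolic rise to t1, unit plateau
    to t2, exponential decay afterwards."""
    env = np.ones_like(t)
    rise = t < t1
    env[rise] = (t[rise] / t1) ** 2
    decay = t > t2
    env[decay] = np.exp(-c * (t[decay] - t2))
    return env
```

In practice the amplitudes are iterated so that the response spectrum of the synthesized motion converges to the code design spectrum, as the abstract describes.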
Numerical modeling of time-lapse monitoring of CO2 sequestration in a layered basalt reservoir
Khatiwada, M.; Van Wijk, K.; Clement, W.P.; Haney, M.
2008-01-01
As part of the Big Sky Carbon Sequestration Partnership's (BSCSP) plans to inject CO2 into layered basalt, we numerically investigate seismic methods as a noninvasive monitoring technique. Basalt seems to have geochemical advantages as a reservoir for CO2 storage (CO2 mineralizes quite rapidly while exposed to basalt), but poses a considerable challenge in terms of seismic monitoring: strong scattering from the layering of the basalt complicates surface seismic imaging. We perform numerical tests using the Spectral Element Method (SEM) to identify possibilities and limitations of seismic monitoring of CO2 sequestration in a basalt reservoir. While surface seismic is unlikely to detect small physical changes in the reservoir due to the injection of CO2, the results from Vertical Seismic Profiling (VSP) simulations are encouraging. As a perturbation, we make a 5% change in wave velocity, which produces significant changes in VSP images of pre-injection and post-injection conditions. Finally, we perform an analysis using Coda Wave Interferometry (CWI) to quantify these changes in the reservoir properties due to CO2 injection.
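Coda Wave Interferometry quantifies small velocity changes such as the 5% perturbation used here. The stretching variant of CWI can be sketched as follows; this is a generic implementation under illustrative parameters, not necessarily the variant used in the study.

```python
import numpy as np

def stretching_dvv(ref, cur, t, eps_grid):
    """Coda-wave-interferometry stretching method: find the relative velocity
    change dv/v whose time-axis stretch best aligns the current coda with
    the reference coda (grid search over stretch factors)."""
    best_cc, best_eps = -2.0, 0.0
    for eps in eps_grid:
        # resample the current trace on a stretched time axis
        stretched = np.interp(t, t * (1 + eps), cur)
        cc = np.corrcoef(ref, stretched)[0, 1]
        if cc > best_cc:
            best_cc, best_eps = cc, eps
    return best_eps, best_cc   # dv/v = eps (sign convention varies)
```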
Multivariate Analysis of Seismic Field Data
DOE Office of Scientific and Technical Information (OSTI.GOV)
Alam, M. Kathleen
1999-06-01
This report includes the details of the model building procedure and prediction of seismic field data. Principal Components Regression, a multivariate analysis technique, was used to model seismic data collected as two pieces of equipment were cycled on and off. Models built that included only the two pieces of equipment of interest had trouble predicting data containing signals not included in the model. Evidence for poor predictions came from the prediction curves as well as spectral F-ratio plots. Once the extraneous signals were included in the model, predictions improved dramatically. While Principal Components Regression performed well for the present data sets, the present data analysis suggests further work will be needed to develop more robust modeling methods as the data become more complex.
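Principal Components Regression, as used in this report, projects the predictors onto the leading principal components and then regresses the response on the scores; a minimal sketch:

```python
import numpy as np

def pcr_fit(X, y, n_comp):
    """Principal Components Regression: SVD of centered predictors, keep the
    leading n_comp loadings, regress centered y on the component scores."""
    Xc = X - X.mean(axis=0)
    _, _, Vt = np.linalg.svd(Xc, full_matrices=False)
    P = Vt[:n_comp].T                    # loadings (n_attr, n_comp)
    T = Xc @ P                           # scores
    b, *_ = np.linalg.lstsq(T, y - y.mean(), rcond=None)
    return P, b, X.mean(axis=0), y.mean()

def pcr_predict(Xnew, P, b, xmean, ymean):
    """Predict the response for new data with a fitted PCR model."""
    return (Xnew - xmean) @ P @ b + ymean
```

Retaining fewer components than the full rank is what gives PCR its noise rejection; choosing too few, or omitting real signal sources, produces exactly the poor predictions the report describes.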
NASA Astrophysics Data System (ADS)
mouloud, Hamidatou
2016-04-01
The objective of this paper is to analyze the seismic activity of the Constantine region and the statistical treatment of its seismicity catalog, which covers 1357 to 2014 and contains 7007 seismic events. Our research contributes to improving seismic risk management by evaluating the seismic hazard in North-East Algeria. In the present study, earthquake hazard maps for the Constantine region are calculated. Probabilistic seismic hazard analysis (PSHA) is classically performed through the Cornell approach, using a uniform earthquake distribution over the source area and a given magnitude range. This study aims at extending the PSHA approach to the case of a characteristic earthquake scenario associated with an active fault. The approach integrates PSHA with a high-frequency deterministic technique for the prediction of peak and spectral ground motion parameters in a characteristic earthquake. The method is based on the site-dependent evaluation of the probability of exceedance for the chosen strong-motion parameter. We propose five seismotectonic zones. Five steps are necessary: (i) identification of potential sources of future earthquakes, (ii) assessment of their geological, geophysical and geometric characteristics, (iii) identification of the attenuation pattern of seismic motion, (iv) calculation of the hazard at a site, and finally (v) hazard mapping for a region. In this study, the procedure for earthquake hazard evaluation developed by Kijko and Sellevoll (1992) is used to estimate seismic hazard parameters in the northern part of Algeria.
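One elementary ingredient of such hazard parameter estimation is the Gutenberg-Richter b-value. As a sketch of that single step (not the Kijko-Sellevoll procedure itself, which additionally handles catalog incompleteness and magnitude uncertainty), Aki's maximum-likelihood estimator applied to a synthetic catalog:

```python
import math
import random

def b_value_mle(mags, m_min, dm=0.0):
    """Aki/Utsu maximum-likelihood Gutenberg-Richter b-value.
    dm is the magnitude bin width; dm/2 corrects for binning."""
    mean_m = sum(mags) / len(mags)
    return math.log10(math.e) / (mean_m - (m_min - dm / 2.0))

# Under the G-R law with b = 1, magnitudes above m_min are exponentially
# distributed with rate b * ln(10); draw a synthetic catalog accordingly.
random.seed(42)
m_min, b_true = 3.0, 1.0
mags = [m_min + random.expovariate(b_true * math.log(10))
        for _ in range(20000)]
b_hat = b_value_mle(mags, m_min)
```

With 20,000 events the estimator recovers b to within a few percent; real catalogs require the completeness magnitude to be chosen carefully before this formula applies.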
The Shock and Vibration Digest. Volume 14, Number 11
1982-11-01
A computer program for analyzing the behavior of a high-temperature gas-cooled reactor (HTGR) core under seismic excitation has been developed (N82-18644, in French). The program can be used to predict the behavior of the HTGR core under seismic excitation. Key words: computer programs, modal analysis, beams, undamped structures. Dale and Cohen [22] extended the method of McMunn and Plunkett [20] to continuous systems.
Analysis of seismic stability of large-sized tank VST-20000 with software package ANSYS
NASA Astrophysics Data System (ADS)
Tarasenko, A. A.; Chepur, P. V.; Gruchenkova, A. A.
2018-05-01
The work is devoted to the study of seismic stability of vertical steel tank VST-20000 with due consideration of the system response “foundation-tank-liquid”, conducted on the basis of the finite element method, modal analysis and linear spectral theory. The calculations are performed for the tank model with a high degree of detailing of metallic structures: shells, a fixed roof, a bottom, a reinforcing ring.
NASA Astrophysics Data System (ADS)
Bai, Wen; Dai, Junwu; Zhou, Huimeng; Yang, Yongqiang; Ning, Xiaoqing
2017-10-01
Porcelain electrical equipment (PEE), such as current transformers, is critical to power supply systems, but its seismic performance during past earthquakes has not been satisfactory. This paper studies the seismic performance of two typical types of PEE and proposes a damping method for PEE based on multiple tuned mass dampers (MTMD). An MTMD damping device involving three mass units, named a triple tuned mass damper (TTMD), is designed and manufactured. Through shake table tests and finite element analysis, the dynamic characteristics of the PEE are studied and the effectiveness of the MTMD damping method is verified. The adverse influence of MTMD redundant mass on damping efficiency is studied and relevant equations are derived. MTMD robustness is verified through adjusting TTMD control frequencies. The damping effectiveness of TTMD when the peak ground acceleration (PGA) far exceeds the design value is studied. Both shake table tests and finite element analysis indicate that MTMD is effective and robust in attenuating PEE seismic responses. TTMD remains effective when the PGA far exceeds the design value and when control deviations are considered.
NASA Astrophysics Data System (ADS)
Park, Chanho; Nguyen, Phung K. T.; Nam, Myung Jin; Kim, Jongwook
2013-04-01
Monitoring CO2 migration and storage in geological formations is important not only for the stability of geological sequestration of CO2 but also for efficient management of CO2 injection. In particular, geophysical methods allow in situ observation of CO2 to assess potential leakage, to improve reservoir description, and to monitor the development of geologic discontinuities (i.e., faults, cracks, joints, etc.). Geophysical monitoring can be based on wireline logging for well-scale monitoring (high resolution and narrow area of investigation) or on surface surveys for basin-scale monitoring (low resolution and wide area of investigation). In the meantime, crosswell tomography enables reservoir-scale monitoring that bridges the resolution gap between well logs and surface measurements. This study focuses on reservoir-scale monitoring based on crosswell seismic tomography, aiming to describe details of reservoir structure and to monitor the migration of reservoir fluid (water and CO2). For the monitoring, we first make a sensitivity analysis of crosswell seismic tomography data with respect to CO2 saturation. For the sensitivity analysis, Rock Physics Models (RPMs) are constructed by calculating the density and the P- and S-wave velocities of a virtual CO2 injection reservoir. Because the seismic velocity of the reservoir changes with CO2 saturation when the saturation is less than about 20%, but is insensitive to further changes above that level, the sensitivity analysis is mainly made for CO2 saturations below 20%. For precise simulation of seismic tomography responses for the constructed RPMs, we developed a time-domain 2D elastic modeling code based on the finite difference method with a staggered grid, employing a convolutional perfectly matched layer boundary condition.
We further compare the sensitivities of seismic tomography and surface measurements for the RPMs to analyze the resolution difference between them. Moreover, assuming a reservoir situation similar to the CO2 storage site in Nagaoka, Japan, we generate time-lapse tomographic data sets for the corresponding CO2 injection process and make a preliminary interpretation of the data sets.
Seismic Fracture Characterization Methodologies for Enhanced Geothermal Systems
DOE Office of Scientific and Technical Information (OSTI.GOV)
Queen, John H.
2016-05-09
Executive Summary: The overall objective of this work was the development of surface and borehole seismic methodologies, using both compressional and shear waves, for characterizing faults and fractures in Enhanced Geothermal Systems. We used both surface seismic and vertical seismic profile (VSP) methods, adapting them to the unique conditions encountered in Enhanced Geothermal System (EGS) creation. These conditions include geological environments with volcanic cover, highly altered rocks, severe structure, extreme near-surface velocity contrasts and a lack of distinct velocity contrasts at depth. One of the objectives was the development of methods for identifying more appropriate seismic acquisition parameters for overcoming problems associated with these geological factors. Because temperatures up to 300 °C are often encountered in these systems, another objective was the testing of VSP borehole tools capable of operating at depths in excess of 1,000 m and at temperatures in excess of 200 °C. A final objective was the development of new processing and interpretation techniques based on scattering and time-frequency analysis, as well as the application of modern seismic migration imaging algorithms to seismic data acquired over geothermal areas. The use of surface seismic reflection data at Brady's Hot Springs was found useful in building a geological model, but only when combined with other extensive geological and geophysical data. The use of fine source and geophone spacing was critical in producing useful images. The surface seismic reflection data gave no information about the internal structure (extent, thickness and filling) of faults and fractures, and modeling suggests that they are unlikely to do so. Time-frequency analysis was applied to these data, but was not found to be significantly useful in their interpretation.
Modeling does indicate that VSP and other seismic methods with sensors located at depth in wells will be the most effective seismic tools for getting information on the internal structure of faults and fractures in support of fluid flow pathway management and EGS treatment. Scattered events similar to those expected from faults and fractures are seen in the VSP reported here. Unfortunately, the source offset and well depth coverage do not allow for detailed analysis of these events. This limited coverage also precluded the use of advanced migration and imaging algorithms. More extensive acquisition is needed to support fault and fracture characterization in the geothermal reservoir at Brady's Hot Springs. The VSP was effective in generating interval velocity estimates over the depths covered by the array. Upgoing reflection events are also visible in the VSP results at locations corresponding to reflection events in the surface seismic. Overall, the high-temperature-rated fiber optic sensors used in the VSP produced useful results. Modeling has been found useful in the interpretation of both surface reflection seismic and VSP data. It has helped identify possible near-surface scattering in the surface seismic data. It has highlighted potential scattering events from deeper faults in the VSP data. Inclusion of more detailed fault and fracture specific stiffness parameters is needed to fully interpret fault and fracture scattered events for flow properties (Pyrak-Nolte and Morris, 2000; Zhu and Snieder, 2002). Shear wave methods were applied in both the surface seismic reflection and VSP work. They were not found to be effective in the Brady's Hot Springs area. This was due to the extreme attenuation of shear waves in the near surface at Brady's. This does not imply that they will be ineffective in general. In geothermal areas where good shear waves can be recorded, modeling suggests they should be very useful for characterizing faults and fractures.
Quantifying the similarity of seismic polarizations
NASA Astrophysics Data System (ADS)
Jones, Joshua P.; Eaton, David W.; Caffagni, Enrico
2016-02-01
Assessing the similarities of seismic attributes can help identify tremor, low signal-to-noise (S/N) signals and converted or reflected phases, in addition to diagnosing site noise and sensor misalignment in arrays. Polarization analysis is a widely accepted method for studying the orientation and directional characteristics of seismic phases via computed attributes, but similarity is ordinarily discussed using qualitative comparisons with reference values or known seismic sources. Here we introduce a technique for quantitative polarization similarity that uses weighted histograms computed in short, overlapping time windows, drawing on methods adapted from the image processing and computer vision literature. Our method accounts for ambiguity in azimuth and incidence angle and variations in S/N ratio. Measuring polarization similarity allows easy identification of site noise and sensor misalignment and can help identify coherent noise and emergent or low S/N phase arrivals. Dissimilar azimuths during phase arrivals indicate misaligned horizontal components, dissimilar incidence angles during phase arrivals indicate misaligned vertical components and dissimilar linear polarization may indicate a secondary noise source. Using records of the Mw = 8.3 Sea of Okhotsk earthquake, from Canadian National Seismic Network broad-band sensors in British Columbia and Yukon Territory, Canada, and a vertical borehole array at Hoadley gas field, central Alberta, Canada, we demonstrate that our method is robust to station spacing. Discrete wavelet analysis extends polarization similarity to the time-frequency domain in a straightforward way. Time-frequency polarization similarities of borehole data suggest that a coherent noise source may have persisted above 8 Hz several months after peak resource extraction from a `flowback' type hydraulic fracture.
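The weighted-histogram similarity idea can be illustrated with a minimal sketch. Histogram intersection is one standard similarity measure from the image-processing literature the authors draw on; the bin width, uniform weights, and synthetic azimuth distributions below are assumptions for illustration, not the authors' exact parameterization:

```python
import numpy as np

def polarization_histogram(azimuths_deg, weights, bins=36):
    """Weighted, normalized histogram of polarization azimuths in one
    time window; weights could encode S/N. The modulo folds the
    180-degree ambiguity of polarization directions."""
    h, _ = np.histogram(azimuths_deg % 180.0, bins=bins,
                        range=(0.0, 180.0), weights=weights)
    total = h.sum()
    return h / total if total > 0 else h

def histogram_intersection(h1, h2):
    """Similarity in [0, 1]: 1 for identical normalized histograms,
    near 0 for fully disjoint ones."""
    return float(np.minimum(h1, h2).sum())

rng = np.random.default_rng(1)
w = np.ones(500)
az_a = rng.normal(45.0, 5.0, 500)    # station A, phase arrival
az_b = rng.normal(45.0, 5.0, 500)    # station B, same source direction
az_c = rng.normal(135.0, 5.0, 500)   # station C, misaligned horizontals

s_same = histogram_intersection(polarization_histogram(az_a, w),
                                polarization_histogram(az_b, w))
s_diff = histogram_intersection(polarization_histogram(az_a, w),
                                polarization_histogram(az_c, w))
```

Dissimilar azimuth histograms during a common phase arrival, as in the third station here, are exactly the signature the paper uses to flag misaligned horizontal components.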
78 FR 59732 - Revisions to Design of Structures, Components, Equipment, and Systems
Federal Register 2010, 2011, 2012, 2013, 2014
2013-09-27
Revisions address Section 3.7.2, ``Seismic System Analysis'' (Accession No. ML13198A223); Section 3.7.3, ``Seismic Subsystem Analysis''; and Section 3.8.1, ``Concrete...''
An automated multi-scale network-based scheme for detection and location of seismic sources
NASA Astrophysics Data System (ADS)
Poiata, N.; Aden-Antoniow, F.; Satriano, C.; Bernard, P.; Vilotte, J. P.; Obara, K.
2017-12-01
We present a recently developed method, BackTrackBB (Poiata et al. 2016), for imaging energy radiation from different seismic sources (e.g., earthquakes, LFEs, tremors) in different tectonic environments using continuous seismic records. The method exploits multi-scale frequency-selective coherence in the wave field recorded by regional seismic networks or local arrays. The detection and location scheme is based on space-time reconstruction of the seismic sources through an imaging function built from the sum of station-pair time-delay likelihood functions projected onto theoretical 3D time-delay grids. This imaging function is interpreted as the location likelihood of the seismic source. A signal pre-processing step constructs a multi-band statistical representation of the non-stationary time series by means of higher-order statistics or energy envelope characteristic functions. This signal processing is designed to detect signal transients in time, of different scales and a priori unknown predominant frequency, potentially associated with a variety of sources (e.g., earthquakes, LFEs, tremors), and to improve the performance and robustness of the detection-and-location step. The initial detection-location, based on a single-phase analysis with the P- or S-phase only, can then be improved recursively in a station selection scheme. This scheme, exploiting the 3-component records, makes use of P- and S-phase characteristic functions, extracted after a polarization analysis of the event waveforms, and combines the single-phase imaging functions with the S-P differential imaging functions. The performance of the method is demonstrated in different tectonic environments: (1) analysis of the year-long precursory phase of the 2014 Iquique earthquake in Chile; (2) detection and location of tectonic tremor sources and low-frequency earthquakes during multiple episodes of tectonic tremor activity in southwestern Japan.
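The core imaging step, projecting station-pair time delays onto a grid of theoretical delays, can be sketched on a toy 2D homogeneous model. The geometry, velocity, and Gaussian likelihood width below are illustrative choices, not BackTrackBB's actual implementation:

```python
import numpy as np
from itertools import combinations

# Four stations around a 10 km x 10 km area, homogeneous velocity model.
stations = np.array([[0.0, 0.0], [10.0, 0.0], [0.0, 10.0], [10.0, 10.0]])
v = 3.0        # km/s
sigma = 0.05   # s, assumed time-delay uncertainty

xs = np.linspace(0.0, 10.0, 41)
ys = np.linspace(0.0, 10.0, 41)
gx, gy = np.meshgrid(xs, ys, indexing="ij")

# Theoretical travel times from every grid node to every station.
tt = np.stack([np.hypot(gx - sx, gy - sy) / v for sx, sy in stations])

true_src = np.array([7.5, 2.5])
t_obs = np.hypot(*(true_src - stations).T) / v   # noise-free arrivals

# Imaging function: sum of station-pair delay likelihoods on the grid.
image = np.zeros_like(gx)
for i, j in combinations(range(len(stations)), 2):
    d_obs = t_obs[i] - t_obs[j]
    image += np.exp(-((tt[i] - tt[j]) - d_obs) ** 2 / (2 * sigma ** 2))

ix, iy = np.unravel_index(np.argmax(image), image.shape)
loc = np.array([xs[ix], ys[iy]])
```

The maximum of the stacked likelihood recovers the source cell; in the real method the "observed" delays come from cross-correlating characteristic functions rather than from picked arrivals, which is what makes the scheme applicable to emergent tremor signals.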
NASA Astrophysics Data System (ADS)
Rodgers, Mel; Smith, Patrick; Pyle, David; Mather, Tamsin
2016-04-01
Understanding the transition between quiescence and eruption at dome-forming volcanoes, such as Soufrière Hills Volcano (SHV), Montserrat, is important for monitoring volcanic activity during long-lived eruptions. Statistical analysis of seismic events (e.g. spectral analysis and identification of multiplets via cross-correlation) can be useful for characterising seismicity patterns and can be a powerful tool for analysing temporal changes in behaviour. Waveform classification is crucial for volcano monitoring, but consistent classification, both during real-time analysis and for retrospective analysis of previous volcanic activity, remains a challenge. Automated classification allows consistent re-classification of events. We present a machine learning (random forest) approach to rapidly classify waveforms that requires minimal training data. We analyse the seismic precursors to the July 2008 Vulcanian explosion at SHV and show systematic changes in frequency content and multiplet behaviour that had not previously been recognised. These precursory patterns of seismicity may be interpreted as changes in pressure conditions within the conduit during magma ascent and could be linked to magma flow rates. Frequency analysis of the different waveform classes supports the growing consensus that LP and Hybrid events should be considered end members of a continuum of low-frequency source processes. By using both supervised and unsupervised machine-learning methods we investigate the nature of waveform classification and assess current classification schemes.
Seismic damage analysis of the outlet piers of arch dams using the finite element sub-model method
NASA Astrophysics Data System (ADS)
Song, Liangfeng; Wu, Mingxin; Wang, Jinting; Xu, Yanjie
2016-09-01
This study aims to analyze seismic damage of reinforced outlet piers of arch dams by the nonlinear finite element (FE) sub-model method. First, the dam-foundation system is modeled and analyzed, in which the effects of infinite foundation, contraction joints, and nonlinear concrete are taken into account. The detailed structures of the outlet pier are then simulated with a refined FE model in the sub-model analysis. In this way the damage mechanism of the plain (unreinforced) outlet pier is analyzed, and the effects of two reinforcement measures (i.e., post-tensioned anchor cables and reinforcing bar) on the dynamic damage to the outlet pier are investigated comprehensively. Results show that the plain pier is damaged severely by strong earthquakes while implementation of post-tensioned anchor cables strengthens the pier effectively. In addition, radiation damping strongly alleviates seismic damage to the piers.
Earthquake hypocenter relocation using double difference method in East Java and surrounding areas
DOE Office of Scientific and Technical Information (OSTI.GOV)
C, Aprilia Puspita; Meteorological, Climatological, and Geophysical Agency; Nugraha, Andri Dian, E-mail: nugraha@gf.itb.ac.id
Determination of precise hypocenter locations is very important in order to provide information about subsurface fault planes and for seismic hazard analysis. In this study, we have relocated earthquake hypocenters in the eastern part of Java and surrounding areas from the local earthquake data catalog compiled by the Meteorological, Climatological, and Geophysical Agency of Indonesia (MCGA) for the period 2009-2012, using the double-difference method. The results show that after the relocation process there are significant changes in the position and orientation of the earthquake hypocenters, which correlate with the geological setting of the region. We observed an indication of a double seismic zone at depths of 70-120 km within the subducting slab south of the eastern Java region. Our results will provide useful information for further seismological studies and seismic hazard analysis in this area.
Visualization of volumetric seismic data
NASA Astrophysics Data System (ADS)
Spickermann, Dela; Böttinger, Michael; Ashfaq Ahmed, Khawar; Gajewski, Dirk
2015-04-01
Mostly driven by demands of high-quality subsurface imaging, highly specialized tools and methods have been developed to support the processing, visualization and interpretation of seismic data. 3D seismic data acquisition and 4D time-lapse seismic monitoring are well-established techniques in academia and industry, producing large amounts of data to be processed, visualized and interpreted. In this context, interactive 3D visualization methods have proved valuable for the analysis of 3D seismic data cubes, especially for sedimentary environments with continuous horizons. In crystalline and hard rock environments, where hydraulic stimulation techniques may be applied to produce geothermal energy, interpretation of the seismic data is a more challenging problem. Instead of continuous reflection horizons, the imaging targets are often steeply dipping faults, causing many diffractions. Without further preprocessing, these geological structures are often hidden behind the noise in the data. In this PICO presentation we will present a workflow consisting of data processing steps, which enhance the signal-to-noise ratio, followed by a visualization step based on the use of the commercially available general-purpose 3D visualization system Avizo. Specifically, we have used Avizo Earth, an extension to Avizo, which supports the import of seismic data in SEG-Y format and offers easy access to state-of-the-art 3D visualization methods at interactive frame rates, even for large seismic data cubes. In seismic interpretation using visualization, interactivity is a key requirement for understanding complex 3D structures. In order to enable an easy communication of the insights gained during the interactive visualization process, animations of the visualized data were created that support the spatial understanding of the data.
NASA Astrophysics Data System (ADS)
Chan, J. H.; Catchings, R.; Strayer, L. M.; Goldman, M.; Criley, C.; Sickler, R. R.; Boatwright, J.
2017-12-01
We conducted an active-source seismic investigation across the Napa Valley (Napa Valley Seismic Investigation-16) in September of 2016 consisting of two basin-wide seismic profiles; one profile was 20 km long and N-S-trending (338°), and the other 15 km long and E-W-trending (80°) (see Catchings et al., 2017). Data from the NVSI-16 seismic investigation were recorded using a total of 666 vertical- and horizontal-component seismographs, spaced 100 m apart on both seismic profiles. Seismic sources were generated by a total of 36 buried explosions spaced 1 km apart. The two seismic profiles intersected in downtown Napa, where a large number of buildings were red-tagged by the City following the 24 August 2014 Mw 6.0 South Napa earthquake. From the recorded Rayleigh and Love waves, we developed 2-dimensional S-wave velocity models to depths of about 0.5 km using the multichannel analysis of surface waves (MASW) method. Our MASW (Rayleigh) and MALW (Love) models show two prominent low-velocity (Vs = 350 to 1300 m/s) sub-basins that were also previously identified from gravity studies (Langenheim et al., 2010). These basins trend N-W and also coincide with the locations of more than 1500 red- and yellow-tagged buildings within the City of Napa that were tagged after the 2014 South Napa earthquake. The observed correlation between low Vs, deep basins, and the red- and yellow-tagged buildings in Napa suggests that similar large-scale seismic investigations can be performed elsewhere to provide insights into the likely locations of significant structural damage from future earthquakes that occur adjacent to or within sedimentary basins.
NASA Astrophysics Data System (ADS)
Peresan, Antonella; Gentili, Stefania
2017-04-01
Identification and statistical characterization of seismic clusters may provide useful insights about the features of seismic energy release and their relation to physical properties of the crust within a given region. Moreover, a number of studies based on spatio-temporal analysis of main-shocks occurrence require preliminary declustering of the earthquake catalogs. Since various methods, relying on different physical/statistical assumptions, may lead to diverse classifications of earthquakes into main events and related events, we aim to investigate the classification differences among different declustering techniques. Accordingly, a formal selection and comparative analysis of earthquake clusters is carried out for the most relevant earthquakes in North-Eastern Italy, as reported in the local OGS-CRS bulletins, compiled at the National Institute of Oceanography and Experimental Geophysics since 1977. The comparison is then extended to selected earthquake sequences associated with a different seismotectonic setting, namely to events that occurred in the region struck by the recent Central Italy destructive earthquakes, making use of INGV data. Various techniques, ranging from classical space-time windows methods to ad hoc manual identification of aftershocks, are applied for detection of earthquake clusters. In particular, a statistical method based on nearest-neighbor distances of events in space-time-energy domain, is considered. Results from clusters identification by the nearest-neighbor method turn out quite robust with respect to the time span of the input catalogue, as well as to minimum magnitude cutoff. The identified clusters for the largest events reported in North-Eastern Italy since 1977 are well consistent with those reported in earlier studies, which were aimed at detailed manual aftershocks identification. 
The study shows that the data-driven approach, based on nearest-neighbor distances, can be satisfactorily applied to decompose the seismic catalog into background seismicity and individual sequences of earthquake clusters, including in areas characterized by moderate seismic activity, where standard declustering techniques may turn out to be rather gross approximations. Building on these results, the main statistical features of seismic clusters are explored, including the complex interdependence of related events, with the aim of characterizing the space-time patterns of earthquake occurrence in North-Eastern Italy and capturing their basic differences from the Central Italy sequences.
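The nearest-neighbor proximity underlying such cluster identification (in the style of Zaliapin-type space-time-energy analyses) combines inter-event time, epicentral distance, and parent magnitude into a single scalar: small values flag a likely parent-offspring pair. A minimal sketch with invented events and parameter values:

```python
import math

def nn_distance(parent, child, b=1.0, df=1.6):
    """Zaliapin-style proximity between two events:
    eta = dt * dr**df * 10**(-b * m_parent), with dt in days, dr in km
    and df a fractal dimension of epicenters. Small eta suggests the
    child belongs to the parent's cluster."""
    dt = child["t"] - parent["t"]
    if dt <= 0:
        return float("inf")  # only later events can be offspring
    dr = math.hypot(child["x"] - parent["x"], child["y"] - parent["y"])
    dr = max(dr, 0.1)        # guard against zero epicentral distance
    return dt * dr ** df * 10.0 ** (-b * parent["m"])

mainshock = {"t": 0.0, "x": 0.0, "y": 0.0, "m": 6.0}
aftershock = {"t": 0.5, "x": 2.0, "y": 1.0, "m": 3.5}       # close in space-time
background = {"t": 200.0, "x": 150.0, "y": 80.0, "m": 3.5}  # unrelated event

eta_after = nn_distance(mainshock, aftershock)
eta_back = nn_distance(mainshock, background)
```

In a full catalog each event keeps only its smallest eta over all earlier events; the bimodal distribution of these minima is what separates clustered from background seismicity.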
NASA Astrophysics Data System (ADS)
Haris, A.; Novriyani, M.; Suparno, S.; Hidayat, R.; Riyanto, A.
2017-07-01
This study presents the integration of seismic stochastic inversion and multi-attribute analysis for delineating the reservoir distribution, in terms of lithology and porosity, in the formation within the depth interval between the Top Sihapas and Top Pematang. The method used is a stochastic inversion integrated with multi-attribute seismic analysis, applying a Probabilistic Neural Network (PNN). The stochastic method is used to predict the probability of sandstone from the inverted impedance across 50 realizations, which yields a robust probability estimate. Stochastic seismic inversion is more interpretive because it directly gives the value of the property. Our experiment shows that the acoustic impedance (AI) from stochastic inversion captures more diverse uncertainty, so that the probability values are close to the actual values. The produced AI is then used as an input to the multi-attribute analysis, which is used to predict the gamma-ray, density and porosity logs. To select the attributes that are used, a stepwise regression algorithm is applied; the resulting attributes are used in the PNN process. The PNN method is chosen because it has the best correlation among the neural network methods tested. Finally, we interpret the products of the multi-attribute analysis, in the form of pseudo-gamma-ray, density and pseudo-porosity volumes, to delineate the reservoir distribution. Our interpretation shows that the structural trap is identified in the southeastern part of the study area, along the anticline.
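The stepwise regression used to pick attributes is typically a greedy forward selection: attributes are added one at a time, each chosen to most reduce the least-squares residual against the target log. A hedged sketch with synthetic attributes (the data and the two "informative" attributes are invented for illustration, not the study's attribute set):

```python
import numpy as np

def forward_stepwise(X, y, n_keep):
    """Greedy forward selection: repeatedly add the attribute whose
    inclusion most reduces the residual sum of squares of a linear fit
    (with intercept) to the target log."""
    chosen = []
    for _ in range(n_keep):
        best, best_rss = None, np.inf
        for j in range(X.shape[1]):
            if j in chosen:
                continue
            cols = X[:, chosen + [j]]
            A = np.column_stack([cols, np.ones(len(y))])
            coef, *_ = np.linalg.lstsq(A, y, rcond=None)
            rss = np.sum((y - A @ coef) ** 2)
            if rss < best_rss:
                best, best_rss = j, rss
        chosen.append(best)
    return chosen

# Synthetic case: the "porosity log" depends only on attributes 0 and 3.
rng = np.random.default_rng(7)
X = rng.standard_normal((200, 6))
y = 2.0 * X[:, 0] - 1.0 * X[:, 3] + 0.05 * rng.standard_normal(200)

selected = forward_stepwise(X, y, n_keep=2)
```

In the multi-attribute workflow the selected attributes would then feed the PNN rather than the linear model used here for scoring.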
NASA Astrophysics Data System (ADS)
Kaláb, Zdeněk; Šílený, Jan; Lednická, Markéta
2017-07-01
This paper deals with the seismic stability of the survey areas of potential sites for the deep geological repository of spent nuclear fuel in the Czech Republic. The basic source of data for historical earthquakes up to 1990 was the seismic website [1-]. The most intense earthquake described occurred on September 15, 1590 in the Niederroesterreich region (Austria); its reported intensity is Io = 8-9. The source of the contemporary seismic data for the period from 1991 to the end of 2014 was the website [11]. Based on the databases and literature review, it may be stated that since 1900 no earthquake exceeding magnitude 5.1 has originated in the territory of the Czech Republic. In order to evaluate seismicity and to assess the impact of seismic effects at the depths of a hypothetical deep geological repository over the coming period, the neo-deterministic method was selected as an extension of the probabilistic method. Each of the seven survey areas was assessed by neo-deterministic evaluation of the seismic wave field excited by selected individual events, determining the maximum loading. Results of the seismological database studies and the neo-deterministic analysis of the Čihadlo locality are presented.
NASA Astrophysics Data System (ADS)
Juretzek, Carina; Hadziioannou, Céline
2014-05-01
Our knowledge about the common and different origins of Love and Rayleigh waves observed in the microseism band of the ambient seismic noise field is still limited, including the understanding of source locations and source mechanisms. Multi-component array methods are suitable to address this issue. In this work we use a 3-component beamforming algorithm to obtain the source directions and polarization states of the ambient seismic noise field within the primary and secondary microseism bands recorded at the Gräfenberg array in southern Germany. The method distinguishes between differently polarized waves present in the seismic noise field and estimates Love and Rayleigh wave source directions and their seasonal variations using one year of array data. We find mainly coinciding directions for the strongest acting sources of both wave types in the primary microseism band, and different source directions in the secondary microseism band.
Anisotropic analysis for seismic sensitivity of groundwater monitoring wells
NASA Astrophysics Data System (ADS)
Pan, Y.; Hsu, K.
2011-12-01
Taiwan is located at the boundary of the Eurasian Plate and the Philippine Sea Plate. Plate movement causes crustal uplift and lateral deformation, leading to frequent earthquakes in the vicinity of Taiwan. Changes of groundwater level triggered by earthquakes have been observed and studied in Taiwan for many years. The change in groundwater may appear as oscillations or as step changes. The former is caused by seismic waves; the latter is caused by volumetric strain and reflects the strain status. Since installing a groundwater monitoring well is easier and cheaper than installing a strain gauge, groundwater measurements may be used as an indication of stress. This research proposes the concept of the seismic sensitivity of a groundwater monitoring well and applies it to the DonHer station in Taiwan. A geostatistical method is used to analyze the anisotropy of seismic sensitivity, and GIS is used to map the sensitive area of the existing groundwater monitoring well.
Patton, John M.; Guy, Michelle R.; Benz, Harley M.; Buland, Raymond P.; Erickson, Brian K.; Kragness, David S.
2016-08-18
This report provides an overview of the capabilities and design of Hydra, the global seismic monitoring and analysis system used for earthquake response and catalog production at the U.S. Geological Survey National Earthquake Information Center (NEIC). Hydra supports the NEIC's worldwide earthquake monitoring mission in areas such as seismic event detection, seismic data insertion and storage, seismic data processing and analysis, and seismic data output. The Hydra system automatically identifies seismic phase arrival times and detects the occurrence of earthquakes in near-real time. The system integrates and inserts parametric and waveform seismic data into discrete events in a database for analysis. Hydra computes seismic event parameters, including locations, multiple magnitudes, moment tensors, and depth estimates. Hydra supports the NEIC's 24/7 analyst staff with a suite of seismic analysis graphical user interfaces. In addition to the NEIC's monitoring needs, the system supports the processing of aftershock and temporary deployment data, and supports the NEIC's quality assurance procedures. The Hydra system continues to be developed to expand its seismic analysis and monitoring capabilities.
NASA Astrophysics Data System (ADS)
Hibert, Clement; Stumpf, André; Provost, Floriane; Malet, Jean-Philippe
2017-04-01
In the past decades, the increasing quality of seismic sensors and the capability to transfer large quantities of data remotely have led to a fast densification of local, regional and global seismic networks for near-real-time monitoring of crustal and surface processes. This technological advance permits the use of seismology to document geological and natural/anthropogenic processes (volcanoes, ice-calving, landslides, snow and rock avalanches, geothermal fields), but has also led to an ever-growing quantity of seismic data. This wealth of seismic data makes the construction of complete seismicity catalogs, which include earthquakes but also other sources of seismic waves, more challenging and very time-consuming, as this critical pre-processing stage is classically done by human operators and hundreds of thousands of seismic signals have to be processed. To overcome this issue, the development of automatic methods for the processing of continuous seismic data appears to be a necessity. The classification algorithm should be robust, precise and versatile enough to be deployed to monitor the seismicity in very different contexts. In this study, we evaluate the ability of two machine learning algorithms, Random Forest and Deep Neural Network classifiers, to analyze the seismic sources at the Piton de la Fournaise volcano. We gather a catalog of more than 20,000 events, belonging to 8 classes of seismic sources. We define 60 attributes, based on the waveform, the frequency content and the polarization of the seismic waves, to parameterize the recorded seismic signals. We show that both algorithms provide similar positive classification rates, with values exceeding 90% of the events. When trained with a sufficient number of events, the rate of positive identification can reach 99%.
These very high rates of positive identification open the perspective of an operational implementation of these algorithms for near-real time monitoring of mass movements and other environmental sources at the local, regional and even global scale.
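The attribute-based parameterization described above can be sketched in a few lines. The specific attributes below (envelope duration, dominant frequency, kurtosis, energy) are illustrative stand-ins for the 60 attributes defined in the study, not the authors' actual definitions; the 10% envelope threshold is likewise an assumption.

```python
import numpy as np

def waveform_attributes(trace, dt):
    """Compute a few example waveform/frequency attributes of a seismic trace."""
    env = np.abs(trace)
    energy = np.sum(trace ** 2)
    # Duration where the envelope exceeds 10% of its peak (samples -> seconds)
    above = np.nonzero(env > 0.1 * env.max())[0]
    duration = (above[-1] - above[0]) * dt if above.size else 0.0
    # Dominant frequency from the amplitude spectrum
    spec = np.abs(np.fft.rfft(trace))
    freqs = np.fft.rfftfreq(trace.size, dt)
    f_dom = freqs[np.argmax(spec)]
    # Kurtosis of the amplitude distribution (impulsive vs. emergent signals)
    x = trace - trace.mean()
    kurt = np.mean(x ** 4) / (np.mean(x ** 2) ** 2 + 1e-30)
    return {"energy": energy, "duration": duration,
            "dominant_freq": f_dom, "kurtosis": kurt}

# Synthetic impulsive event: a decaying 5 Hz wavelet starting at t = 3 s, in weak noise
dt = 0.01
t = np.arange(0, 10, dt)
sig = np.exp(-2 * np.maximum(t - 3, 0)) * np.sin(2 * np.pi * 5 * t) * (t > 3)
feats = waveform_attributes(sig + 0.01 * np.random.default_rng(0).normal(size=t.size), dt)
```

Feature vectors of this kind, one per event, are what a Random Forest or neural network classifier is then trained on.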
NASA Astrophysics Data System (ADS)
Maity, Debotyam
This study is aimed at an improved understanding of unconventional reservoirs, which include tight reservoirs (such as shale oil and gas plays), geothermal developments, etc. We provide a framework for improved fracture zone identification and mapping of the subsurface for a geothermal system by integrating data from different sources. The proposed ideas and methods were tested primarily on data obtained from the North Brawley geothermal field and the Geysers geothermal field, in addition to synthetic datasets which were used to test new algorithms before application to the real datasets. The study has resulted in novel or improved algorithms for use at specific stages of data acquisition and analysis, including an improved phase detection technique for passive seismic (and teleseismic) data as well as optimization of passive seismic surveys for the best possible processing results. The proposed workflow makes use of novel integration methods as a means of making the best use of the available geophysical data for fracture characterization. The methodology incorporates soft computing tools such as hybrid neural networks (neuro-evolutionary algorithms) as well as geostatistical simulation techniques to improve the property estimates and the overall characterization efficacy. The basic elements of the proposed characterization workflow involve using seismic and microseismic data to characterize structural and geomechanical features within the subsurface. We use passive seismic data to model geomechanical properties, which are combined with other properties evaluated from seismic and well logs to derive both qualitative and quantitative fracture zone identifiers. The study has resulted in a broad framework highlighting a new technique for utilizing geophysical data (seismic and microseismic) for unconventional reservoir characterization. 
It provides an opportunity to optimally develop the resources in question by incorporating data from different sources and using their temporal and spatial variability as a means to better understand the reservoir behavior. As part of this study, we have developed the following elements which are discussed in the subsequent chapters: 1. An integrated characterization framework for unconventional settings with adaptable workflows for all stages of data processing, interpretation and analysis. 2. A novel autopicking workflow for noisy passive seismic data used for improved accuracy in event picking as well as for improved velocity model building. 3. Improved passive seismic survey design optimization framework for better data collection and improved property estimation. 4. Extensive post-stack seismic attribute studies incorporating robust schemes applicable in complex reservoir settings. 5. Uncertainty quantification and analysis to better quantify property estimates over and above the qualitative interpretations made and to validate observations independently with quantified uncertainties to prevent erroneous interpretations. 6. Property mapping from microseismic data including stress and anisotropic weakness estimates for integrated reservoir characterization and analysis. 7. Integration of results (seismic, microseismic and well logs) from analysis of individual data sets for integrated interpretation using predefined integration framework and soft computing tools.
Intelligent seismic risk mitigation system on structure building
NASA Astrophysics Data System (ADS)
Suryanita, R.; Maizir, H.; Yuniorto, E.; Jingga, H.
2018-01-01
Indonesia, located on the Pacific Ring of Fire, is one of the highest-risk seismic zones in the world. Strong ground motion can cause catastrophic collapse of buildings, leading to casualties and property damage. Therefore, it is imperative to properly design the structural response of buildings against seismic hazard. The seismic-resistant building design process requires structural analysis to be performed to obtain the necessary building responses. However, the structural analysis can be very difficult and time-consuming. This study aims to predict the structural responses, including displacement, velocity, and acceleration, of a multi-storey building with a fixed floor plan using the Artificial Neural Network (ANN) method based on the 2010 Indonesian seismic hazard map. By varying the building height, soil condition, and seismic location across 47 cities in Indonesia, 6345 data sets were obtained and fed into the ANN model for the learning process. The trained ANN can predict the displacement, velocity, and acceleration responses with a prediction rate of up to 96%. The trained ANN architecture and weight factors were later used to build a simple tool in the Visual Basic program with features for predicting the structural responses mentioned previously.
Shook, G. Michael; LeRoy, Samuel D.; Benzing, William M.
2006-07-18
Methods for determining the existence and characteristics of a gradational pressurized zone within a subterranean formation are disclosed. One embodiment involves employing an attenuation relationship between a seismic response signal and increasing wavelet wavelength, which relationship may be used to detect a gradational pressurized zone and/or determine characteristics thereof. In another embodiment, a method for analyzing data contained within a response signal for signal characteristics that may change in relation to the distance between an input signal source and the gradational pressurized zone is disclosed. In a further embodiment, the relationship between response signal wavelet frequency and comparative amplitude may be used to estimate an optimal wavelet wavelength or range of wavelengths used for data processing or input signal selection. Systems for seismic exploration and data analysis for practicing the above-mentioned method embodiments are also disclosed.
NASA Astrophysics Data System (ADS)
Waldhauser, F.; Schaff, D. P.
2012-12-01
Archives of digital seismic data recorded by seismometer networks around the world have grown tremendously over the last several decades, aided by the deployment of seismic stations and their continued operation within the framework of monitoring earthquake activity and verification of the Nuclear Test-Ban Treaty. We show results from our continuing effort in developing efficient waveform cross-correlation and double-difference analysis methods for the large-scale processing of regional and global seismic archives to improve existing earthquake parameter estimates, detect seismic events with magnitudes below current detection thresholds, and improve real-time monitoring procedures. We demonstrate the performance of these algorithms as applied to the 28-year-long seismic archive of the Northern California Seismic Network. The tools enable the computation of periodic updates of a high-resolution earthquake catalog of currently over 500,000 earthquakes using simultaneous double-difference inversions, achieving up to three orders of magnitude resolution improvement over existing hypocenter locations. This catalog, together with associated metadata, form the underlying relational database for a real-time double-difference scheme, DDRT, which rapidly computes high-precision correlation times and hypocenter locations of new events with respect to the background archive (http://ddrt.ldeo.columbia.edu). The DDRT system facilitates near-real-time seismicity analysis, including the ability to search at an unprecedented resolution for spatio-temporal changes in seismogenic properties. In areas with continuously recording stations, we show that a detector built around a scaled cross-correlation function can lower the detection threshold by one magnitude unit compared to the STA/LTA based detector employed at the network. This leads to increased event density, which in turn pushes the resolution capability of our location algorithms. 
On a global scale, we are currently building the computational framework for double-difference processing the combined parametric and waveform archives of the ISC, NEIC, and IRIS with over three million recorded earthquakes worldwide. Since our methods are scalable and run on inexpensive Beowulf clusters, periodic re-analysis of such archives may thus become a routine procedure to continuously improve resolution in existing global earthquake catalogs. Results from subduction zones and aftershock sequences of recent great earthquakes demonstrate the considerable social and economic impact that high-resolution images of active faults, when available in real-time, will have in the prompt evaluation and mitigation of seismic hazards. These results also highlight the need for consistent long-term seismic monitoring and archiving of records.
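The STA/LTA detector that the correlation detector is benchmarked against can be sketched as follows. The window lengths and trigger threshold are typical textbook values, not those used by the Northern California Seismic Network.

```python
import numpy as np

def sta_lta(trace, dt, sta_win=1.0, lta_win=30.0):
    """Classic STA/LTA detector: ratio of short-term to long-term average power."""
    power = trace ** 2
    ns, nl = int(sta_win / dt), int(lta_win / dt)
    csum = np.cumsum(np.insert(power, 0, 0.0))
    sta = (csum[ns:] - csum[:-ns]) / ns          # short-term average
    lta = (csum[nl:] - csum[:-nl]) / nl          # long-term average
    n = min(len(sta), len(lta))
    # Align both averages so each ratio compares windows ending at the same sample
    return sta[len(sta) - n:] / (lta[len(lta) - n:] + 1e-30)

# Synthetic trace: Gaussian noise with a 10 Hz event at t = 50 s
rng = np.random.default_rng(0)
dt = 0.01
noise = rng.normal(0.0, 1.0, 8000)
event = np.zeros(8000)
event[5000:5400] = 8.0 * np.sin(2 * np.pi * 10 * np.arange(400) * dt)
ratio = sta_lta(noise + event, dt)
detected = ratio.max() > 5.0   # a typical trigger threshold
```

A correlation detector replaces the power ratio with a cross-correlation against template waveforms, which is what allows the threshold to drop by roughly a magnitude unit for repeating sources.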
2010-09-01
Surface array and deep borehole recordings of chemical explosions in the near-source (0-20 km) region are studied. With the method's limitations in mind, we apply it to ~4 Hz wave propagation using SAFOD borehole seismometers and the Parkfield Array Seismic Observatory (PASO) array (Thurber et al., 2004).
Assessing criticality in seismicity by entropy
NASA Astrophysics Data System (ADS)
Goltz, C.
2003-04-01
There is an ongoing discussion whether the Earth's crust is in a critical state and whether this state is permanent or intermittent. Intermittent criticality would, in principle, allow specification of time-dependent hazard. Analysis of a spatio-temporally evolving synthetic critical point phenomenon and of real seismicity using configurational entropy shows that the method is a suitable approach for the characterisation of critical point dynamics. The results obtained rather support the notion of intermittent criticality in earthquakes. Statistical significance of the findings is assessed by the method of surrogate data.
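One simple way to compute a configurational entropy of seismicity, sketched below, is the Shannon entropy of the epicentre distribution over a spatial grid; clustered (correlated) seismicity yields lower entropy than spatially uniform activity. The grid size and the normalization are illustrative choices, not necessarily those of the study.

```python
import numpy as np

def configurational_entropy(x, y, nbins=10):
    """Shannon entropy of the epicentre distribution over an nbins x nbins grid,
    normalized by the maximum possible entropy log(nbins**2)."""
    counts, _, _ = np.histogram2d(x, y, bins=nbins, range=[[0, 1], [0, 1]])
    p = counts.ravel() / counts.sum()
    p = p[p > 0]
    return -np.sum(p * np.log(p)) / np.log(nbins ** 2)

rng = np.random.default_rng(2)
# Uniform (uncorrelated) seismicity vs. seismicity clustered around a point
ux, uy = rng.uniform(size=(2, 2000))
cx = np.clip(rng.normal(0.5, 0.05, 2000), 0, 1)
cy = np.clip(rng.normal(0.5, 0.05, 2000), 0, 1)
h_uniform = configurational_entropy(ux, uy)
h_cluster = configurational_entropy(cx, cy)
```

Tracking such an entropy in sliding time windows is one way to look for the approach to, and retreat from, a critical state.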
NASA Astrophysics Data System (ADS)
Czaja, Klaudia; Matula, Rafal
2014-05-01
The paper presents an analysis of the potential of geophysical methods for investigating groundwater conditions. In this paper, groundwater is defined as liquid water flowing through shallow aquifers. Groundwater conditions are described through the distribution in the subsurface of permeable layers (such as sand, gravel and fractured rock) and impermeable or low-permeability layers (such as clay, till and solid rock). GPR (Ground Penetrating Radar), ERT (Electrical Resistivity Tomography), VES (Vertical Electric Soundings), and seismic reflection, refraction and MASW (Multichannel Analysis of Surface Waves) are non-invasive, surface geophysical methods. Due to differences in physical parameters such as dielectric constant, resistivity, density and elastic properties between saturated and unsaturated zones, it is possible to use geophysical techniques for groundwater investigations. Several programs for GPR, ERT, VES and seismic modelling were applied in order to verify and compare results. The models differ in the values of physical parameters such as dielectric constant, electrical conductivity, P- and S-wave velocity and density, in layer thickness, and in the depth of the groundwater level. Results of computer modelling for the GPR and seismic methods and the interpretation of test-field measurements are presented. In all of these methods, vertical resolution is the most important issue in groundwater investigations. This requires a proper measurement methodology, e.g., antennas with sufficiently high frequencies, a Wenner array in electrical surveys, and proper geometry for seismic studies. The seismic velocities of unconsolidated rocks such as sand and gravel are strongly influenced by porosity and water saturation. No influence of the degree of water saturation on seismic velocities is observed below about 90% water saturation. A further increase in saturation leads to a strong increase of P-wave velocity and a slight decrease of S-wave velocity. 
However, for a few of the models, only the relationship between differences in density and P-wave and S-wave velocity was observed. This is probably due to the way the modelling program calculates the wave field. During GPR interpretation, traces should be analyzed one by one, with particular attention to changes in signal amplitude. The high permittivity of water results in a higher permittivity of the material and a high reflection coefficient of the electromagnetic wave. In the case of electrical studies, groundwater mineralization has the greatest influence. When the layer thickness is small, VES gives much better results than ERT.
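The link between water content, permittivity and GPR reflection strength mentioned above follows from the normal-incidence reflection coefficient between two dielectric layers, R = (sqrt(eps1) - sqrt(eps2)) / (sqrt(eps1) + sqrt(eps2)). The permittivity values below are common textbook figures (dry sand ~4, water-saturated sand ~25), used here only to illustrate the magnitude of the effect.

```python
import numpy as np

def gpr_reflection_coefficient(eps1, eps2):
    """Normal-incidence EM reflection coefficient between two dielectric layers
    with relative permittivities eps1 (upper) and eps2 (lower)."""
    n1, n2 = np.sqrt(eps1), np.sqrt(eps2)
    return (n1 - n2) / (n1 + n2)

# Dry sand over water-saturated sand: a strong (negative) water-table reflection
r = gpr_reflection_coefficient(4.0, 25.0)
```

The result, about -0.43, is why the water table is typically one of the strongest reflectors in a GPR section.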
Systems for low frequency seismic and infrasound detection of geo-pressure transition zones
Shook, G. Michael; LeRoy, Samuel D.; Benzing, William M.
2007-10-16
Methods for determining the existence and characteristics of a gradational pressurized zone within a subterranean formation are disclosed. One embodiment involves employing an attenuation relationship between a seismic response signal and increasing wavelet wavelength, which relationship may be used to detect a gradational pressurized zone and/or determine characteristics thereof. In another embodiment, a method for analyzing data contained within a response signal for signal characteristics that may change in relation to the distance between an input signal source and the gradational pressurized zone is disclosed. In a further embodiment, the relationship between response signal wavelet frequency and comparative amplitude may be used to estimate an optimal wavelet wavelength or range of wavelengths used for data processing or input signal selection. Systems for seismic exploration and data analysis for practicing the above-mentioned method embodiments are also disclosed.
NASA Astrophysics Data System (ADS)
Tsoflias, G. P.; Graham, B.; Haga, L.; Watney, L.
2017-12-01
The Mississippian in Kansas and Oklahoma is a highly heterogeneous, fractured, oil-producing reservoir with thickness typically below seismic resolution. At Wellington field in south-central Kansas, CO2 was injected into the Mississippian reservoir for enhanced oil recovery. This study examines the utility of active-source surface seismic for characterization of Mississippian reservoir properties and monitoring of CO2. Analysis of post-stack 3D seismic data showed the expected response of a gradational transition (ramp velocity), where thicker reservoir units corresponded with lower reflection amplitudes, lower frequency and a 90° phase change. Reflection amplitude could be correlated to reservoir thickness. Pre-stack gather analysis showed that porosity zones of the Mississippian reservoir exhibit a characteristic AVO response. Simultaneous AVO inversion estimated P- and S-impedances, which, along with formation porosity logs and post-stack seismic data attributes, were incorporated in multi-attribute linear-regression analysis and predicted reservoir porosity with an overall correlation of 0.90 to well data. The 3D survey gather azimuthal anisotropy analysis (AVAZ) provided information on the fault and fracture network and showed good agreement with the regional stress field and well data. Mississippian reservoir porosity and fracture predictions agreed well with the observed mobility of the CO2 in monitoring wells. Fluid substitution modeling predicted an acoustic impedance reduction in the Mississippian carbonate reservoir introduced by the presence of CO2. Future work includes the assessment of time-lapse seismic data acquired after the injection of CO2. This work demonstrates that advanced seismic interpretation methods can be used successfully for characterization of the Mississippian reservoir and monitoring of CO2.
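The multi-attribute linear-regression step described above amounts to a least-squares fit of well-log porosity against seismic attributes sampled at the well locations. The sketch below uses synthetic attributes and a synthetic porosity relation; the attribute names and coefficients are invented for illustration only.

```python
import numpy as np

rng = np.random.default_rng(3)
n = 300
# Hypothetical standardized seismic attributes at well locations:
# columns = [P-impedance, reflection amplitude, instantaneous frequency]
attrs = rng.normal(size=(n, 3))
# Synthetic "true" porosity, driven mainly by P-impedance, plus measurement noise
porosity = 0.15 - 0.04 * attrs[:, 0] + 0.02 * attrs[:, 1] + 0.005 * rng.normal(size=n)

# Multi-attribute linear regression: least-squares fit with an intercept term
A = np.column_stack([np.ones(n), attrs])
coef, *_ = np.linalg.lstsq(A, porosity, rcond=None)
pred = A @ coef
corr = np.corrcoef(pred, porosity)[0, 1]
```

The fitted coefficients can then be applied to the attribute volumes away from the wells to map porosity across the survey, which is how a correlation such as the reported 0.90 is validated against well data.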
An alternative approach for computing seismic response with accidental eccentricity
NASA Astrophysics Data System (ADS)
Fan, Xuanhua; Yin, Jiacong; Sun, Shuli; Chen, Pu
2014-09-01
Accidental eccentricity is a non-standard assumption for the seismic design of tall buildings. Taking it into consideration requires reanalysis of seismic resistance, which requires either time-consuming computation of the natural vibration of eccentric structures or finding a static displacement solution by applying an approximated equivalent torsional moment for each eccentric case. This study proposes an alternative modal response spectrum analysis (MRSA) approach to calculate seismic responses with accidental eccentricity. The proposed approach, called the Rayleigh Ritz Projection-MRSA (RRP-MRSA), is developed based on MRSA and two strategies: (a) an RRP method to obtain a fast calculation of approximate modes of eccentric structures; and (b) an approach to assemble the mass matrices of eccentric structures. The efficiency of RRP-MRSA is tested via engineering examples and compared with the standard MRSA (ST-MRSA) and one approximate method, i.e., the equivalent torsional moment hybrid MRSA (ETM-MRSA). Numerical results show that RRP-MRSA not only achieves almost the same precision as ST-MRSA and much better accuracy than ETM-MRSA, but is also more economical. Thus, RRP-MRSA can replace current accidental eccentricity computations in seismic design.
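The core of any Rayleigh-Ritz projection is to replace the large eigenproblem K x = w^2 M x by a small one projected onto a few trial vectors. The sketch below is a generic illustration of that idea on a toy shear-building model, not the authors' RRP-MRSA formulation: the model, basis shapes and sizes are all assumptions.

```python
import numpy as np

def rayleigh_ritz_modes(K, M, V):
    """Approximate the lowest modes of K x = w^2 M x by projecting onto basis V."""
    Kr, Mr = V.T @ K @ V, V.T @ M @ V
    # Reduce the generalized problem via a Cholesky factor of the reduced mass
    L = np.linalg.cholesky(Mr)
    Linv = np.linalg.inv(L)
    w2, Q = np.linalg.eigh(Linv @ Kr @ Linv.T)
    return np.sqrt(w2), V @ (Linv.T @ Q)   # Ritz frequencies and Ritz vectors

# Toy fixed-base shear building: tridiagonal stiffness, lumped (unit) masses
n = 20
K = 2.0 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)
K[-1, -1] = 1.0   # free top storey
M = np.eye(n)
# Basis: a few smooth trial shapes (sine quarter-waves over the height)
x = np.arange(1, n + 1) / n
V = np.column_stack([np.sin((2 * j - 1) * np.pi * x / 2) for j in range(1, 5)])
freq_rr, _ = rayleigh_ritz_modes(K, M, V)
freq_exact = np.sqrt(np.linalg.eigvalsh(K))[:4]
```

By the min-max principle each Ritz frequency bounds the corresponding exact frequency from above, and with well-chosen trial shapes the lowest frequencies are recovered almost exactly, which is what makes the projection cheap yet accurate for repeated eccentric cases.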
Innovative Approaches for Seismic Studies of Mars (Invited)
NASA Astrophysics Data System (ADS)
Banerdt, B.
2010-12-01
In addition to its intrinsic interest, Mars is particularly well-suited for studying the full range of processes and phenomena related to early terrestrial planet evolution, from initial differentiation to the start of plate tectonics. It is large and complex enough to have undergone most of the processes that affected early Earth but, unlike the Earth, has apparently not undergone extensive plate tectonics or other major reworking that erased the imprint of early events (as evidenced by the presence of cratered surfaces older than 4 Ga). The martian mantle should have Earth-like polymorphic phase transitions and may even support a perovskite layer near the core (depending on the actual core radius), a characteristic that would have major implications for core cooling and mantle convection. Thus even the most basic measurements of planetary structure, such as crustal thickness, core radius and state (solid/liquid), and gross mantle velocity structure would provide invaluable constraints on models of early planetary evolution. Despite this strong scientific motivation (and several failed attempts), Mars remains terra incognita from a seismic standpoint. This is due to an unfortunate convergence of circumstances, prominent among which are our uncertainty in the level of seismic activity and the relatively high cost of landing multiple long-lived spacecraft on Mars to comprise a seismic network for body-wave travel-time analysis; typically four to ten stations are considered necessary for this type of experiment. In this presentation I will address both of these issues. In order to overcome the concern about a possible lack of marsquakes with which to work, it is useful to identify alternative methods for using seismic techniques to probe the interior. Seismology without quakes can be accomplished in a number of ways. 
“Unconventional” sources of seismic energy include meteorites (which strike the surface of Mars at a relatively high rate), artificial projectiles (which can supply up to 10^10 J of kinetic energy), seismic “hum” from meteorological forcing, and tidal deformation from Phobos (with a period around 6 hours). Another means for encouraging a seismic mission to Mars is to promote methods that can derive interior information from a single seismometer. Fortunately many such methods exist, including source location through P-S times and back-azimuth, receiver functions, identification of later phases (PcP, PKP, etc.), surface wave dispersion, and normal mode analysis (from single large events, stacked events, or background noise). Such methods could enable the first successful seismic investigation of another planet since the Apollo seismometers were turned off almost 35 years ago.
Methods for assessing the stability of slopes during earthquakes-A retrospective
Jibson, R.W.
2011-01-01
During the twentieth century, several methods to assess the stability of slopes during earthquakes were developed. Pseudostatic analysis was the earliest method; it involved simply adding a permanent body force representing the earthquake shaking to a static limit-equilibrium analysis. Stress-deformation analysis, a later development, involved much more complex modeling of slopes using a mesh in which the internal stresses and strains within elements are computed based on the applied external loads, including gravity and seismic loads. Stress-deformation analysis provided the most realistic model of slope behavior, but it is very complex and requires a high density of high-quality soil-property data as well as an accurate model of soil behavior. In 1965, Newmark developed a method that effectively bridges the gap between these two types of analysis. His sliding-block model is easy to apply and provides a useful index of co-seismic slope performance. Subsequent modifications to sliding-block analysis have made it applicable to a wider range of landslide types. Sliding-block analysis provides perhaps the greatest utility of all the types of analysis. It is far easier to apply than stress-deformation analysis, and it yields much more useful information than does pseudostatic analysis. © 2010.
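Newmark's sliding-block model can be sketched in a few lines: the block accumulates relative velocity whenever ground acceleration exceeds the critical (yield) acceleration, and decelerates at the yield level until sliding stops. This is a minimal rigid-block, one-directional (downslope-only) version with an invented input motion; real applications integrate recorded accelerograms.

```python
import numpy as np

def newmark_displacement(acc, dt, ac):
    """Rigid-block Newmark integration: cumulative downslope displacement for
    ground acceleration acc (m/s^2) and critical acceleration ac (m/s^2)."""
    v = 0.0   # relative sliding velocity
    d = 0.0   # cumulative downslope displacement
    for a in acc:
        if v > 0.0 or a > ac:
            v += (a - ac) * dt     # excess acceleration drives sliding
            v = max(v, 0.0)        # the block cannot slide upslope in this model
            d += v * dt
    return d

# Synthetic shaking: 2 Hz sinusoid with 0.3 g peak, two trial yield accelerations
g = 9.81
dt = 0.005
t = np.arange(0, 10, dt)
acc = 0.3 * g * np.sin(2 * np.pi * 2 * t)
d_weak = newmark_displacement(acc, dt, ac=0.25 * g)    # strong slope: little sliding
d_strong = newmark_displacement(acc, dt, ac=0.10 * g)  # weak slope: more sliding
```

The resulting displacement is the "useful index of co-seismic slope performance" the abstract refers to: it is compared against empirical thresholds rather than interpreted as an exact prediction.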
A method of directly extracting multiwave angle-domain common-image gathers
NASA Astrophysics Data System (ADS)
Han, Jianguang; Wang, Yun
2017-10-01
Angle-domain common-image gathers (ADCIGs) can provide an effective way for migration velocity analysis and amplitude-versus-angle analysis in oil and gas seismic exploration. On the basis of multi-component Gaussian beam prestack depth migration (GB-PSDM), an alternative method of directly extracting multiwave ADCIGs is presented in this paper. We first introduce multi-component GB-PSDM, where a wavefield separation is performed to obtain the separated PP- and PS-wave seismic records before migration imaging of the multiwave seismic data. Then, the principle of extracting PP- and PS-ADCIGs using GB-PSDM is presented. The propagation angle can be obtained from the real-valued travel time of the Gaussian beam in the course of GB-PSDM, which can be used to calculate the incidence and reflection angles. Two kinds of ADCIGs can be extracted for the PS-wave: one based on the P-wave incidence angle and the other on the S-wave reflection angle. In this paper, we use the incident angle to plot the ADCIGs for both PP- and PS-waves. Finally, tests on synthetic examples show that the method introduced here is accurate and effective.
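The reason a PS-wave has two distinct angles, and hence two kinds of ADCIGs, is Snell's law at the conversion point: sin(theta_p)/Vp = sin(theta_s)/Vs, so the reflected S-wave leaves at a smaller angle than the incident P-wave. The velocities below are arbitrary illustrative values.

```python
import numpy as np

def ps_reflection_angle(theta_p_deg, vp, vs):
    """S-wave reflection angle at a PS conversion point, from Snell's law:
    sin(theta_s) / vs = sin(theta_p) / vp."""
    sin_s = np.sin(np.radians(theta_p_deg)) * vs / vp
    return np.degrees(np.arcsin(sin_s))

# Example: 30 degree P incidence with Vp/Vs = 2 gives ~14.5 degree S reflection
theta_s = ps_reflection_angle(30.0, vp=3000.0, vs=1500.0)
```

Binning migrated amplitudes by either theta_p or theta_s is what distinguishes the two PS-ADCIG variants the abstract describes.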
Multimodal approach to seismic pavement testing
Ryden, N.; Park, C.B.; Ulriksen, P.; Miller, R.D.
2004-01-01
A multimodal approach to nondestructive seismic pavement testing is described. The presented approach is based on multichannel analysis of all types of seismic waves propagating along the surface of the pavement. The multichannel data acquisition method is replaced by multichannel simulation with one receiver. This method uses only one accelerometer receiver and a light hammer source to generate a synthetic receiver array. This data acquisition technique is made possible through careful triggering of the source, and it simplifies the technique enough to make it generally accessible. Multiple dispersion curves are automatically and objectively extracted using the multichannel analysis of surface waves processing scheme, which is described. The resulting dispersion curves in the high-frequency range match theoretical Lamb waves in a free plate. At lower frequencies there are several branches of dispersion curves corresponding to the lower layers of different stiffness in the pavement system. The observed behavior of the multimodal dispersion curves is in agreement with theory, which has been validated through both numerical modeling and the transfer matrix method, by solving for complex wave numbers. © ASCE / June 2004.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Sasmal, S.; Chakrabarti, S. K.; S. N. Bose National Centre for Basic Sciences, JD Block, Salt-Lake Kolkata-70098
2010-10-20
VLF (Very Low Frequency) signals have long been thought to carry important information about lithosphere-ionosphere coupling. It has recently been established that the ionosphere may be perturbed by seismic activity. The effects of this perturbation can be detected through the VLF wave amplitude. There are several methods to find these correlations, and these methods can be used for the prediction of seismic events. In this paper, we first present a brief history of the use of the VLF propagation method for the study of seismo-ionospheric correlations. Then we present different methods proposed by us to find the seismo-ionospheric correlations. At the Indian Centre for Space Physics, Kolkata, we have been monitoring the VTX station at Vijayanarayanam since 2002. In the initial stage we received the 17 kHz signal, and later we received the 18.2 kHz signal. In this paper, we first present the results for the 17 kHz signal during the 2004 Sumatra earthquake obtained from the terminator time analysis method. Then we present a much more detailed statistical analysis using some new methods and present the results for the 18.2 kHz signal. In order to establish the correlation between ionospheric activities and earthquakes, we need to understand what the reference signals are throughout the year. We present the sunrise and sunset terminators for the 18.2 kHz signal as a function of the day of the year for a period of four years, viz. 2005 to 2008, when the solar activity was very low. In this case, the signal would primarily be affected by the Sun through normal sunrise and sunset effects. Any deviation from this standardized calibration curve would point to influences that are terrestrial (such as earthquakes) or extra-terrestrial (such as solar activity and other high-energy phenomena). 
We present examples of deviations which occur in a period of sixteen months and show that the correlations with seismic events are significant; typically the highest deviation in terminator shift takes place up to a couple of days prior to the seismic event. We introduce a new method in which we find the effects of seismic activity on the D-layer preparation time (DLPT) and the D-layer disappearance time (DLDT). We identify those days on which DLPT and DLDT exhibit deviations from the average value and correlate those days with seismic events. Separately, we compute the energy released by the earthquakes and, using this, we compute the total energy released locally by distant earthquakes and find correlations of the deviations with them. In this case also we find precursors a few days before the seismic events. In a third approach, we consider the nighttime fluctuation method (quantified differently from the conventional way). We analyzed the nighttime data for the year 2007 to check the correlation between the nighttime fluctuation of the signal amplitude and seismic events. Using the statistical method for all the events of the year and for individual earthquakes (magnitude > 5), we found that the nighttime signal amplitude becomes very high in the three days prior to the seismic events.
High-Resolution Analysis of Seismicity Induced at Berlín Geothermal Field, El Salvador
NASA Astrophysics Data System (ADS)
Kwiatek, G.; Bulut, F.; Dresen, G. H.; Bohnhoff, M.
2012-12-01
We investigate induced microseismic activity monitored at Berlín Geothermal Field, El Salvador, during a hydraulic stimulation. The site was monitored for a period of 17 months using thirteen 3-component seismic stations located in shallow boreholes. Three stimulations were performed in well TR8A with a maximum injection rate and wellhead pressure of 160 l/s and 130 bar, respectively. For the entire period of our analysis, the acquisition system recorded 581 events with moment magnitudes ranging between -0.5 and 3.7. The initial seismic catalog provided by the operator was substantially improved: 1) We re-picked P- and S-wave onsets and relocated the seismic events using the double-difference relocation algorithm based on cross-correlation-derived differential arrival time data. Forward modeling was performed using a local 1D velocity model instead of a homogeneous full space. 2) We recalculated source parameters using the spectral fitting method and refined the results by applying the spectral ratio method. We investigated the source parameters and the spatial and temporal changes of the seismic activity based on the refined dataset and studied the correlation between seismic activity and production. The achieved hypocentral precision allowed resolving the spatiotemporal changes in seismic activity down to a scale of a few meters. The application of the spectral ratio method significantly improved the quality of source parameters in a highly attenuating and geologically complex environment. Of special interest is the largest event (Mw 3.7) and its nucleation process. We investigate whether the refined seismic data display any signatures that the largest event was triggered by the shut-in of the well. We found seismic activity displaying clear spatial and temporal patterns that could be readily related to the amount of water injected into well TR8A and other reinjection wells in the investigated area. 
Migration of seismicity away from the injection point is observed while the injection rate is increasing. The locations of the migrating seismic events are related to the existing fault system, which is independently supported by calculated focal mechanisms. We found that the event migration occurs until the shut-in of the well. We observe that the large-magnitude events occur right after the shut-in, located in undamaged parts of the fault system. Results show that subsequent stimulation episodes require an increased injection rate (or increased wellhead pressure) to re-activate the seismic activity (Kaiser effect, "crustal memory" effect). The static stress drop values increase with distance from the injection point, which is interpreted to be related to pore pressure perturbations introduced by stimulation of the injection well.
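The spectral fitting step mentioned above is commonly done with the Brune source model, in which the displacement amplitude spectrum is flat at a low-frequency plateau O0 (proportional to seismic moment) and falls off as f^-2 above a corner frequency fc. The grid-search fit below on a noise-free synthetic spectrum is a generic sketch of this idea, not the authors' specific procedure or parameter ranges.

```python
import numpy as np

def fit_brune(freqs, spec):
    """Grid-search fit of the Brune model |U(f)| = O0 / (1 + (f/fc)^2)
    to a displacement amplitude spectrum, with misfit in log amplitude."""
    best = (np.inf, None, None)
    for fc in np.logspace(-1, 2, 300):
        shape = 1.0 / (1.0 + (freqs / fc) ** 2)
        # Log-domain least-squares level for this trial corner frequency
        o0 = np.exp(np.mean(np.log(spec) - np.log(shape)))
        misfit = np.sum((np.log(spec) - np.log(o0 * shape)) ** 2)
        if misfit < best[0]:
            best = (misfit, o0, fc)
    return best[1], best[2]

# Synthetic spectrum with plateau 1e-6 and corner frequency 8 Hz
freqs = np.logspace(-1, 1.7, 200)
true_o0, true_fc = 1e-6, 8.0
spec = true_o0 / (1.0 + (freqs / true_fc) ** 2)
o0, fc = fit_brune(freqs, spec)
```

From O0 one estimates the seismic moment (and hence Mw), and from fc the source radius and static stress drop; taking spectral ratios between co-located events cancels path attenuation, which is why the ratio method works better in highly attenuating settings.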
CORSSA: Community Online Resource for Statistical Seismicity Analysis
NASA Astrophysics Data System (ADS)
Zechar, J. D.; Hardebeck, J. L.; Michael, A. J.; Naylor, M.; Steacy, S.; Wiemer, S.; Zhuang, J.
2011-12-01
Statistical seismology is critical to the understanding of seismicity, the evaluation of proposed earthquake prediction and forecasting methods, and the assessment of seismic hazard. Unfortunately, despite its importance to seismology, especially to those aspects with great impact on public policy, statistical seismology is mostly ignored in the education of seismologists, and there is no central repository for the existing open-source software tools. To remedy these deficiencies, and with the broader goal of enhancing the quality of statistical seismology research, we have begun building the Community Online Resource for Statistical Seismicity Analysis (CORSSA, www.corssa.org). We anticipate that the users of CORSSA will range from beginning graduate students to experienced researchers. More than 20 scientists from around the world met for a week in Zurich in May 2010 to kick-start the creation of CORSSA: the format and initial table of contents were defined, a governing structure was organized, and workshop participants began drafting articles. CORSSA materials are organized into six themes, each of which will contain between four and eight articles. CORSSA now includes seven articles, with an additional six in draft form, along with forums for discussion, a glossary, and news about upcoming meetings, special issues, and recent papers. Each article is peer-reviewed and presents a balanced discussion, including illustrative examples and code snippets. Topics in the initial set of articles include introductions to both CORSSA and statistical seismology, basic statistical tests and their role in seismology, understanding seismicity catalogs and their problems, basic techniques for modeling seismicity, and methods for testing earthquake predictability hypotheses. We have also begun curating a collection of statistical seismology software packages.
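One of the most basic techniques in statistical seismology is estimation of the Gutenberg-Richter b-value. As an illustrative sketch (not code from CORSSA itself; the function name and binning-width default are assumptions), Aki's maximum-likelihood estimator with Utsu's correction for magnitude binning:

```python
import math

def b_value_aki(magnitudes, mc, dm=0.1):
    """Maximum-likelihood Gutenberg-Richter b-value (Aki, 1965),
    with Utsu's correction for magnitude binning width dm.
    mc is the magnitude of completeness of the catalog."""
    mags = [m for m in magnitudes if m >= mc]
    if not mags:
        raise ValueError("no events at or above the completeness magnitude")
    mean_m = sum(mags) / len(mags)
    # b = log10(e) / (mean magnitude - (mc - dm/2))
    return math.log10(math.e) / (mean_m - (mc - dm / 2.0))
```

For a catalog whose mean magnitude above completeness is 1.5 with mc = 1.0, this gives b close to 0.79, consistent with the tectonically typical range near 1.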
Spatial organization of seismicity and fracture pattern at the boundary between Alps and Dinarides
NASA Astrophysics Data System (ADS)
Bressan, Gianni; Ponton, Maurizio; Rossi, Giuliana; Urban, Sandro
2016-04-01
This paper studies the spatial organization of seismicity in the easternmost region of the Alps (Friuli, in NE Italy, and W Slovenia), which is dominated by the interference between the Alpine and Dinaric tectonic systems. Two non-conventional methods of spatial analysis are used: fractal analysis and principal component analysis (PCA). Fractal analysis helps to discriminate cases in which hypocentres clearly define a plane from those in which the hypocentre distribution tends toward planarity without reaching it. PCA is used to infer the orientation of planes fitted through earthquake foci, or the direction of propagation of the hypocentres. Furthermore, we study the spatial seismicity pattern at shallow depths in the context of a general damage model, through the crack density distribution. The results of the three methods concur in depicting a complex and composite model of fracturing in the region. The hypocentre pattern generally fills a plane only partially, i.e., it has a fractal dimension close to 2. The three exceptions involve planes with a Dinaric trend, without interference from Alpine lineaments. The shallowest depth range (0-10 km) is characterized by the activation of planes with variable orientations, reflecting the interference between the Dinaric and Alpine tectonic structures and closely tied to variations in the mechanical properties of the crust. Seismicity occurs mostly in areas characterized by a transition from low to moderate crack density, indicating a sharp passage from zones of low damage to zones of moderate damage. Low crack density indicates the presence of more competent rocks capable of sustaining high strain energy, while high crack density areas correspond to highly fractured rocks that cannot store high strain energy. Brittle failure, i.e., seismic activity, is thus favoured within the sharp transitions from low to moderate crack density zones.
The orientation of the planes depicting the seismic activity, indeed, coincides with that of the faults generated along the flanks of past carbonate platforms in both Friuli and western Slovenia. In the deepest depth range (10-20 km), on the contrary, the study shows the dominance at depth of the Dinaric tectonic system to the NW of the External Dinarides. This depth interval is characterized by a more organized pattern of seismicity: events locate mainly on the Dinaric lineaments in the northern and eastern parts of the region, and on Alpine thrusts in the western and southern parts.
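The PCA step, fitting a plane through earthquake foci and judging how planar the cloud is, can be sketched as follows. This is a generic illustration under our own conventions, not the authors' implementation; the eigenvalue-ratio "planarity" diagnostic is an assumption:

```python
import numpy as np

def fit_plane_pca(xyz):
    """Fit a plane through hypocentres (N x 3 array of x, y, z) by PCA:
    the eigenvector of the covariance matrix with the smallest eigenvalue
    is the plane normal, and the ratio of smallest to largest eigenvalue
    indicates how planar the cloud is (near 0 for a perfect plane)."""
    pts = np.asarray(xyz, dtype=float)
    centred = pts - pts.mean(axis=0)
    cov = centred.T @ centred / len(pts)
    evals, evecs = np.linalg.eigh(cov)   # eigenvalues in ascending order
    normal = evecs[:, 0]                 # direction of least variance
    planarity = evals[0] / evals[2]
    return normal, planarity
```

For hypocentres scattered exactly on a horizontal plane, the recovered normal is vertical and the planarity ratio is zero; values well above zero flag clouds that only tend toward planarity.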
Anderson, R.N.; Boulanger, A.; Bagdonas, E.P.; Xu, L.; He, W.
1996-12-17
The invention utilizes 3-D and 4-D seismic surveys as a means of deriving information useful in petroleum exploration and reservoir management. The methods use both single seismic surveys (3-D) and multiple seismic surveys separated in time (4-D) of a region of interest to determine large scale migration pathways within sedimentary basins, and fine scale drainage structure and oil-water-gas regions within individual petroleum producing reservoirs. Such structure is identified using pattern recognition tools which define the regions of interest. The 4-D seismic data sets may be used for data completion for large scale structure where time intervals between surveys do not allow for dynamic evolution. The 4-D seismic data sets also may be used to find variations over time of small scale structure within individual reservoirs which may be used to identify petroleum drainage pathways, oil-water-gas regions and, hence, attractive drilling targets. After spatial orientation, and amplitude and frequency matching of the multiple seismic data sets, High Amplitude Event (HAE) regions consistent with the presence of petroleum are identified using seismic attribute analysis. High Amplitude Regions are grown and interconnected to establish plumbing networks on the large scale and reservoir structure on the small scale. Small scale variations over time between seismic surveys within individual reservoirs are identified and used to identify drainage patterns and bypassed petroleum to be recovered. The location of such drainage patterns and bypassed petroleum may be used to site wells. 22 figs.
Seismic hazard assessment of Syria using seismicity, DEM, slope, active tectonic and GIS
NASA Astrophysics Data System (ADS)
Ahmad, Raed; Adris, Ahmad; Singh, Ramesh
2016-07-01
In the present work, we discuss the use of integrated remote sensing and Geographical Information System (GIS) techniques for the evaluation of seismic hazard areas in Syria. The present study is the first effort to create a seismic hazard map of Syria with the help of GIS. In the proposed approach, we use ASTER satellite data, digital elevation data (30 m resolution), earthquake data, and active tectonic maps. Important factors for the evaluation of seismic hazard were identified and the corresponding thematic data layers (past earthquake epicenters, active faults, digital elevation model, and slope) were generated. A numerical rating scheme was developed for spatial data analysis in GIS to rank the parameters included in the evaluation of seismic hazard. The resulting earthquake potential map delineates the area into relative susceptibility classes: high, moderate, low, and very low. The potential earthquake map was validated by correlating the obtained classes with local probabilities produced using conventional analysis of observed earthquakes. Earthquake data of Syria and peak ground acceleration (PGA) data are then introduced into the model to develop the final seismic hazard map, based on Gutenberg-Richter parameters (a and b values) and the concepts of local probability and recurrence time. Application of the proposed technique in the Syrian region indicates that it provides a good estimate of seismic hazard compared with maps developed from traditional techniques (deterministic (DSHA) and probabilistic (PSHA) seismic hazard analysis). For the first time, numerous remote sensing and GIS parameters have been used in the preparation of a seismic hazard map, which is found to be very realistic.
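The numerical rating step can be illustrated with a toy weighted-overlay combination of rated thematic layers. The weights and class scores below are invented for illustration and are not those of the study:

```python
import numpy as np

def weighted_overlay(layers, weights):
    """Combine rated thematic layers (arrays of class scores on a common
    grid, e.g. distance-to-fault rating, slope rating, seismicity rating)
    into a relative susceptibility index via a normalized weighted sum."""
    layers = [np.asarray(layer, dtype=float) for layer in layers]
    w = np.asarray(weights, dtype=float)
    w = w / w.sum()                      # normalize weights to sum to 1
    return sum(wi * li for wi, li in zip(w, layers))
```

The resulting index grid would then be sliced into classes (high, moderate, low, very low) by thresholds chosen for the study area.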
"Geo-statistics methods and neural networks in geophysical applications: A case study"
NASA Astrophysics Data System (ADS)
Rodriguez Sandoval, R.; Urrutia Fucugauchi, J.; Ramirez Cruz, L. C.
2008-12-01
The study focuses on the Ebano-Panuco basin of northeastern Mexico, which is being explored for hydrocarbon reservoirs. These reservoirs lie in limestones, and there is interest in determining porosity and permeability in the carbonate sequences. The porosity maps presented in this study are estimated by applying multiattribute and neural network techniques, which combine geophysical logs and 3-D seismic data by means of statistical relationships. Multiattribute analysis is a process to predict a volume of any underground petrophysical measurement from well-log and seismic data. The data consist of a series of target logs from wells that tie a 3-D seismic volume; the target logs are neutron porosity logs. From the 3-D seismic volume a series of sample-based attributes is calculated. The objective of this study is to derive a relationship between a set of attributes and the target log values, the set being selected by forward stepwise regression. The analysis can be linear or nonlinear: in the linear mode, the method derives a series of weights by least-squares minimization; in the nonlinear mode, a neural network is trained using the selected attributes as inputs, in this case a probabilistic neural network (PNN). The method is applied to a real data set from PEMEX. For better reservoir characterization, the porosity distribution was estimated using both techniques. The case shows a continuous improvement in porosity prediction from the multiattribute to the neural network analysis, both in training and in validation, which are important indicators of the reliability of the results. The neural network showed improved resolution over the multiattribute analysis, and the final maps provide a more realistic picture of the porosity distribution.
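The linear mode of multiattribute analysis, least-squares weights relating seismic attributes to a target log, can be sketched as follows. This is a generic illustration, not the workflow applied to the PEMEX data; the function names and the inclusion of a constant term are our assumptions:

```python
import numpy as np

def multiattribute_weights(attributes, target_log):
    """Least-squares weights relating seismic attributes (N x k array)
    to a target well log (length N), i.e. the linear mode of
    multiattribute analysis; a constant (intercept) term is included."""
    A = np.column_stack([np.ones(len(target_log)), np.asarray(attributes)])
    w, *_ = np.linalg.lstsq(A, np.asarray(target_log), rcond=None)
    return w

def predict_log(attributes, w):
    """Predict the target log everywhere the attributes are available."""
    A = np.column_stack([np.ones(len(attributes)), np.asarray(attributes)])
    return A @ w
```

In a stepwise-regression setting, attributes would be added one at a time and kept only if they reduce the validation error; the nonlinear PNN mode replaces the weighted sum with a kernel-based network trained on the same selected attributes.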
Remote geologic structural analysis of Yucca Flat
NASA Astrophysics Data System (ADS)
Foley, M. G.; Heasler, P. G.; Hoover, K. A.; Rynes, N. J.; Thiessen, R. L.; Alfaro, J. L.
1991-12-01
The Remote Geologic Analysis (RGA) system was developed by Pacific Northwest Laboratory (PNL) to identify crustal structures that may affect seismic wave propagation from nuclear tests. Using automated methods, the RGA system identifies all valleys in a digital elevation model (DEM), fits three-dimensional vectors to valley bottoms, and catalogs all potential fracture or fault planes defined by coplanar pairs of valley vectors. The system generates a cluster hierarchy of planar features having greater-than-random density that may represent areas of anomalous topography manifesting structural control of erosional drainage development. Because RGA uses computer methods to identify zones of hypothesized control of topography, ground truth using a well-characterized test site was critical in our evaluation of RGA's characterization of inaccessible test sites for seismic verification studies. Therefore, we applied RGA to a study area centered on Yucca Flat at the Nevada Test Site (NTS) and compared our results with both mapped geology and geologic structures and with seismic yield-magnitude models. This is the final report of PNL's RGA development project for peer review within the U.S. Department of Energy Office of Arms Control (OAC) seismic-verification community. In this report, we discuss the Yucca Flat study area, the analytical basis of the RGA system and its application to Yucca Flat, the results of the analysis, and the relation of the analytical results to known topography, geology, and geologic structures.
Seismo-acoustic analysis of the near quarry blasts using Plostina small aperture array
NASA Astrophysics Data System (ADS)
Ghica, Daniela; Stancu, Iulian; Ionescu, Constantin
2013-04-01
Seismic and acoustic signals are important for recognizing different types of industrial blasting sources, in order to discriminate between them and natural earthquakes. We analyzed the seismic events listed in the Romanian catalogue (Romplus) that occurred in the Dobrogea region during 2011-2012, in order to assess the detection of seismo-acoustic signals from quarry blasts by the Plostina array stations. Dobrogea is known as a seismic region characterized by crustal earthquakes of low magnitude; at the same time, over 40 quarry mines located in the area are sources of blasts recorded by both the seismic and infrasound sensors of the Romanian Seismic Network. The Plostina seismo-acoustic array, deployed in the central part of Romania, consists of 7 seismic sites (3C broadband instruments and accelerometers) collocated with 7 infrasound instruments. The array is used particularly for seismic monitoring of local and regional events, as well as for the detection of infrasonic signals produced by various sources. Given the characteristics of the infrasound sensors (frequency range, dynamic range, sensitivity), the array has proved its efficiency in observing the signals produced by explosions, mine explosions, and quarry blasts. The quarries included in this study lie at distances of up to two hundred kilometers from the array and routinely generate explosions that are detected as seismic and infrasonic signals at Plostina. The combined seismo-acoustic analysis uses two types of detectors for signal identification: one, applied to seismic signals, is based on array-processing techniques (beamforming and frequency-wavenumber analysis), while the other, used for infrasound detection and characterization, is the automatic detector DFX-PMCC (Progressive Multi-Channel Correlation method).
Infrasonic waves generated by quarry blasts have frequencies ranging from 0.05 Hz up to at least 6 Hz and amplitudes below 5 Pa. Seismic data analysis shows that the frequency content of the signals lies above 2 Hz. Surface explosions such as quarry blasts are useful sources for checking detection and location efficiency when seismic measurements are added. This process is crucial for discrimination purposes and for establishing a set of ground-truth infrasound events. Ground-truth information plays a key role in the interpretation of infrasound signals by providing near-field observations from industrial blasts.
A Parametric Study of Nonlinear Seismic Response Analysis of Transmission Line Structures
Wang, Yanming; Yi, Zhenhua
2014-01-01
A parametric study of the nonlinear seismic response of transmission line structures subjected to earthquake loading is presented in this paper. The transmission lines are modeled, based on a real project, by cable elements that account for the nonlinearity of the cable. Nonuniform ground motions are generated using a stochastic approach based on random vibration analysis. The effects of multicomponent ground motions, correlations among multicomponent ground motions, wave travel, coherency loss, and local site conditions on the responses of the cables are investigated using the nonlinear time-history analysis method. The results show that multicomponent seismic excitation should be considered, but that the correlations among the components can be neglected. The wave passage effect has a significant influence on the responses of the cables. The degree of coherency loss has little influence on the cable responses, but the responses are affected significantly by the presence of coherency loss. The responses of the cables change little as the degree of difference in site conditions changes. The effects of multicomponent ground motions, wave passage, coherency loss, and local site conditions should therefore be considered in the seismic design of transmission line structures. PMID:25133215
Post-processing of seismic parameter data based on valid seismic event determination
McEvilly, Thomas V.
1985-01-01
An automated seismic processing system and method are disclosed, including an array of CMOS microprocessors for unattended, battery-powered processing of a multi-station network. As a characterizing feature of the invention, each channel of the network is independently operable to automatically detect, measure times and amplitudes, and compute and fit Fast Fourier transforms (FFTs) for both P- and S-waves in analog seismic data after it has been sampled at a given rate. The measured parameter data from each channel are then reviewed for event validity by a central controlling microprocessor and, if determined by preset criteria to constitute a valid event, are passed to an analysis computer for calculation of hypocenter location, running b-values, source parameters, event count, P-wave polarities, moment-tensor inversion, and Vp/Vs ratios. The in-field real-time analysis of data maximizes the efficiency of microearthquake surveys, allowing flexibility in experimental procedures with a minimum of traditional labor-intensive postprocessing. A unique consequence of the system is that none of the original data (i.e., the sensor analog output signals) are necessarily saved after computation; rather, the numerical parameters generated by the automatic analysis are the sole output of the automated seismic processor.
NASA Astrophysics Data System (ADS)
Toprak, A. Emre; Gülay, F. Gülten; Ruge, Peter
2008-07-01
Determination of the seismic performance of existing buildings has become a key concept in structural analysis after recent earthquakes (e.g., the Izmit and Duzce earthquakes in 1999, the Kobe earthquake in 1995, and the Northridge earthquake in 1994). Given the need for precise assessment tools to determine seismic performance levels, most earthquake-prone countries try to include performance-based assessment in their seismic codes. The Turkish Earthquake Code 2007 (TEC'07), which came into effect in March 2007, introduced linear and non-linear assessment procedures to be applied prior to building retrofitting. In this paper, a comparative study of the code-based seismic assessment of RC buildings with linear static methods of analysis is performed on a selected existing RC building. The basic principles of the seismic performance evaluation procedures for existing RC buildings according to Eurocode 8 and TEC'07 are outlined and compared. The procedure is then applied to a real case-study building that was exposed to the 1998 Adana-Ceyhan earthquake in Turkey, a seismic action of Ms = 6.3 with a maximum ground acceleration of 0.28 g. It is a six-storey RC residential building with a total height of 14.65 m, composed of orthogonal frames, symmetrical in the y direction, and without significant structural irregularities. The rectangular plan measures 16.40 m × 7.80 m ≈ 127.9 m², with five spans in the x and two spans in the y direction. The building was reported to have been moderately damaged during the 1998 earthquake, and the authorities recommended retrofitting by adding shear walls to the system. The computations show that the linear methods of analysis, using either Eurocode 8 or TEC'07, independently yield similar performance levels of collapse for the critical storey of the structure.
The base shear computed according to Eurocode 8 is much higher than that required by the Turkish Earthquake Code, even though the selected ground conditions have the same characteristics; the main reason is that the ordinate of the horizontal elastic response spectrum in Eurocode 8 is increased by the soil factor. In the force-based linear assessment of TEC'07, the seismic demands at cross-sections are checked against residual moment capacities, whereas for the Eurocode safety verifications the chord rotations of primary ductile elements must be checked. On the other hand, the demand curvatures obtained from the linear methods of analysis of Eurocode 8 and TEC'07 are very similar.
Signal Quality and the Reliability of Seismic Observations
NASA Astrophysics Data System (ADS)
Zeiler, C. P.; Velasco, A. A.; Pingitore, N. E.
2009-12-01
The ability to detect, time, and measure seismic phases depends on the location, size, and quality of the recorded signals. Additional constraints are an analyst's familiarity with a seismogenic zone and with the seismic stations that record the energy. An analyst's ability to detect, time, and measure seismic signals has not been fully quantified or assessed. The fundamental measurement for computing the accuracy of a seismic measurement is signal quality. Several methods have been proposed to measure signal quality; the signal-to-noise ratio (SNR), computed as a short-term average over a long-term average, has been widely adopted. While the standard SNR is an easy and computationally inexpensive measure, its overall statistical significance has not been established for seismic measurement analysis. The prospect of standardizing the process of cataloging seismic arrivals hinges on the ability to repeat measurements made by different methods and analysts. A first step in standardizing phase measurements has been taken by the IASPEI, which established a reference for accepted practices in naming seismic phases. The New Manual of Seismological Observatory Practice (NMSOP, 2002) outlines key observations for seismic phases recorded at different distances and proposes to quantify timing uncertainty with a user-specified windowing technique. However, this added measurement would not completely remove the bias introduced by the different techniques analysts use to time seismic arrivals. The general guideline for timing a seismic arrival is to record the time at which a noted change in frequency and/or amplitude begins. This is generally achieved by enhancing the arrivals through filtering or beamforming. However, these enhancements can alter the characteristics of the arrival and how it will be measured.
Furthermore, each enhancement has user-specified parameters that can vary between analysts, reducing the repeatability of measurements. The SPEAR project (Zeiler and Velasco, 2009) has begun to explore the effects of comparing measurements made from the same seismograms. Initial results showed that experience and signal quality are the leading contributors to pick differences. However, the traditional SNR measure of signal quality was replaced by a Wide-band Spectral Ratio (WSR) owing to its lower scatter. This observation raises an important question: what is the best way to measure signal quality? We compare various methods that have been proposed to measure signal quality (traditional SNR, WSR, power spectral density plots, Allan variance) and discuss which provides the best tool for comparing arrival-time uncertainty.
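The short-term-average over long-term-average measure discussed above can be sketched as follows. This is the textbook STA/LTA characteristic function on the squared trace, not the SPEAR implementation, and the window lengths are illustrative:

```python
import numpy as np

def sta_lta(trace, nsta, nlta):
    """Classic STA/LTA characteristic function: for each sample, the
    mean energy in a short window starting at the sample divided by the
    mean energy in the long window preceding it. Values well above 1
    flag a candidate arrival; cumulative sums keep the cost linear."""
    x = np.asarray(trace, dtype=float) ** 2
    csum = np.concatenate(([0.0], np.cumsum(x)))
    n = len(x)
    ratio = np.zeros(n)
    for i in range(nlta, n - nsta + 1):
        sta = (csum[i + nsta] - csum[i]) / nsta   # short-term average
        lta = (csum[i] - csum[i - nlta]) / nlta   # long-term average
        ratio[i] = sta / lta if lta > 0 else 0.0
    return ratio
```

On a trace of unit-amplitude noise with an embedded burst ten times larger, the ratio stays near 1 in the noise and jumps by roughly the energy contrast (here ~100) at the burst onset, which is why a fixed threshold on this function is so widely used despite its statistical limitations.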
NASA Astrophysics Data System (ADS)
Trevisani, Sebastiano; Rocca, Michele; Boaga, Jacopo
2014-05-01
This presentation aims to outline the preliminary findings related to an extensive seismic survey conducted in the historical center of Venice, Italy. The survey was conducted via noninvasive and low-cost seismic techniques based on surface waves analysis and microtremor methods, mainly using single station horizontal to vertical spectral ratio techninques (HVSR) and multichannel analysis of surface waves in passive (ReMI) and active (MASW) configurations. The importance and the fragility of the cultural heritage of Venice, coupled with its peculiar geological and geotechnical characteristics, stress the importance of a good knowledge of its geological architecture and seismic characteristics as an opportunity to improve restoration and conservation planning. Even if Venice is located in a relatively low seismic hazard zone, a local characterization of soil resonance frequencies and surficial shear waves velocities could improve the planning of engineering interventions, furnishing important information on possible local effects related to seismic amplification and possible coupling within buildings and soil resonance frequencies. In the specific we collected more than 50 HVSR single station noise measurements and several passive and active multichannel analysis of surface waves located in the historical center. In this work we report the characteristics of the conducted seismic surveys (instrumentation, sampling geometry, etc.) and the preliminary findings of our analysis. Moreover, we discuss briefly the practical issues, mainly of logistic nature, of conducting this kind of surveys in a peculiar and crowed historical center as represented by Venice urban contest. Acknowledgments Instrumentation acquired in relation to the project co-financed by Regione Veneto, POR-CRO, FESR, 2007-2013, action 1.1.1. "Supporto ad attività di ricerca, processi e reti di innovazione e alla creazione di imprese in settori a elevato contenuto tecnologico"
Open Source Tools for Seismicity Analysis
NASA Astrophysics Data System (ADS)
Powers, P.
2010-12-01
The spatio-temporal analysis of seismicity plays an important role in earthquake forecasting and is integral to research on earthquake interactions and triggering. For instance, the third version of the Uniform California Earthquake Rupture Forecast (UCERF), currently under development, will use Epidemic Type Aftershock Sequences (ETAS) as a model for earthquake triggering. UCERF will be a "living" model and therefore requires robust, tested, and well-documented ETAS algorithms to ensure transparency and reproducibility. Likewise, as earthquake aftershock sequences unfold, real-time access to high quality hypocenter data makes it possible to monitor the temporal variability of statistical properties such as the parameters of the Omori law and the Gutenberg-Richter b-value. Such statistical properties are valuable as they provide a measure of how much a particular sequence deviates from expected behavior and can be used when assigning probabilities of aftershock occurrence. To address these demands and provide public access to standard methods employed in statistical seismology, we present well-documented, open-source JavaScript and Java software libraries for the on- and off-line analysis of seismicity. The JavaScript classes facilitate web-based asynchronous access to earthquake catalog data and provide a framework for in-browser display, analysis, and manipulation of catalog statistics; implementations of this framework will be made available on the USGS Earthquake Hazards website. The Java classes, in addition to providing tools for seismicity analysis, provide tools for modeling seismicity and generating synthetic catalogs. These tools are extensible and will be released as part of the open-source OpenSHA Commons library.
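One standard model such libraries implement is the modified Omori (Omori-Utsu) aftershock-rate law. A minimal Python sketch (not code from OpenSHA; the function names are ours) of the rate and its closed-form time integral:

```python
import math

def omori_rate(t, K, c, p):
    """Modified Omori (Omori-Utsu) aftershock rate n(t) = K / (c + t)**p,
    with t the time since the mainshock and K, c, p empirical constants."""
    return K / (c + t) ** p

def omori_count(t, K, c, p):
    """Expected number of aftershocks between time 0 and t, i.e. the
    integral of omori_rate; the p = 1 case needs the logarithmic form."""
    if abs(p - 1.0) < 1e-12:
        return K * math.log((c + t) / c)
    return K * ((c + t) ** (1 - p) - c ** (1 - p)) / (1 - p)
```

Fitting K, c, and p to an unfolding sequence (typically by maximum likelihood) and watching them drift over time is one way to quantify how much a sequence deviates from expected behavior.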
NASA Astrophysics Data System (ADS)
Formisano, Antonio; Ciccone, Giuseppe; Mele, Annalisa
2017-11-01
This paper investigates the seismic vulnerability and risk of fifteen masonry churches located in the historical centre of Naples. The analysis method used is derived from a procedure already implemented by the University of Basilicata for the churches of Matera. To evaluate the seismic vulnerability and hazard indexes of the selected churches in the study area, appropriate technical survey forms are used. Data obtained from the procedure allow both the plotting of vulnerability maps and the derivation of seismic risk indicators for all the churches. Comparison among the achieved indexes allows evaluation of the health state of the inspected churches, so as to set a priority scale for future retrofitting interventions.
Using T-Z plots as a graphical method to infer lithological variations from growth strata
NASA Astrophysics Data System (ADS)
Castelltort, Sébastien; Pochat, Stéphane; Van Den Driessche, Jean
2004-08-01
The 'T-Z plot' method consists of plotting the throw of sedimentary horizons across a growth fault versus their depth in the hanging wall. The method was initially developed for the analysis of growth fault kinematics from seismic data. A brief analytical examination of such plots shows that they can also provide valuable information about the evolution of fault topography. When growth is a continuous process, stages of topography creation (fault scarp) and filling (of the space available in the hanging wall) correspond to non-dynamic (draping, mud-prone pelagic settling) and dynamic (sand-prone, dynamically deposited) sedimentation, respectively. In this case, T-Z plot analysis becomes a powerful tool for predicting major lithological variations on seismic profiles in faulted settings.
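Building the T-Z pairs themselves is straightforward once correlated horizons have been picked on both fault blocks. A minimal sketch, with our own sign convention (throw positive when the hanging-wall horizon is the deeper one):

```python
def t_z_pairs(hanging_wall_depths, footwall_depths):
    """Pair the throw T of each correlated horizon (hanging-wall depth
    minus footwall depth across the growth fault) with its depth Z in
    the hanging wall; the (T, Z) pairs are the points of a T-Z plot."""
    return [(zh - zf, zh)
            for zh, zf in zip(hanging_wall_depths, footwall_depths)]
```

Plotting T against Z and inspecting slope changes is then the graphical step the paper analyzes: intervals where throw grows with depth record syn-sedimentary fault activity, while constant-throw intervals record post-depositional offset.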
Seismic wavefield modeling based on time-domain symplectic and Fourier finite-difference method
NASA Astrophysics Data System (ADS)
Fang, Gang; Ba, Jing; Liu, Xin-xin; Zhu, Kun; Liu, Guo-Chang
2017-06-01
Seismic wavefield modeling is important for improving seismic data processing and interpretation. Calculations of wavefield propagation can become unstable when forward modeling of seismic waves uses large time steps over long times. Based on the Hamiltonian expression of the acoustic wave equation, we propose a structure-preserving method for seismic wavefield modeling that applies the symplectic finite-difference method on the time grid and the Fourier finite-difference method on the space grid to solve the acoustic wave equation. The proposed method, called the symplectic Fourier finite-difference (symplectic FFD) method, offers high computational accuracy and improved stability. Using the acoustic approximation, we extend the method to anisotropic media. We discuss the calculations of the symplectic FFD method for seismic wavefield modeling in isotropic and anisotropic media, and use the BP salt model and the BP TTI model to test the proposed method. The numerical examples suggest that the method can be used in seismic modeling with strongly variable velocities, offering high computational accuracy and low numerical dispersion. The symplectic FFD method suppresses the residual qSV wave in anisotropic-media modeling and maintains the stability of wavefield propagation at large time steps.
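A simplified 1D analogue of the idea can be sketched with symplectic (Stormer-Verlet) time stepping combined with a Fourier pseudospectral spatial derivative, rather than the authors' full Fourier finite-difference operator; this is an illustration of the structure-preserving principle, not their scheme:

```python
import numpy as np

def step_wave_symplectic(p, v, c, dx, dt, nsteps):
    """Advance the 1D acoustic wave equation u_tt = c^2 u_xx with the
    symplectic kick-drift-kick (Stormer-Verlet) integrator; the second
    spatial derivative is evaluated spectrally via the FFT, so it is
    exact for resolved wavenumbers on a periodic grid."""
    n = len(p)
    k = 2.0 * np.pi * np.fft.fftfreq(n, d=dx)

    def lap(u):
        # spectral second derivative: multiply by -k^2 in wavenumber domain
        return np.real(np.fft.ifft(-(k ** 2) * np.fft.fft(u)))

    for _ in range(nsteps):
        v = v + 0.5 * dt * c ** 2 * lap(p)   # half kick
        p = p + dt * v                       # drift
        v = v + 0.5 * dt * c ** 2 * lap(p)   # half kick
    return p, v
```

Because the integrator is symplectic, the discrete energy is bounded for arbitrarily long runs (within the stability limit dt < 2 / (c k_max)), which is the property the abstract refers to as maintaining stability at large time steps.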
Detecting Seismic Activity with a Covariance Matrix Analysis of Data Recorded on Seismic Arrays
NASA Astrophysics Data System (ADS)
Seydoux, L.; Shapiro, N.; de Rosny, J.; Brenguier, F.
2014-12-01
Modern seismic networks record ground motion continuously all around the world with very broadband, high-sensitivity sensors. The aim of our study is to apply statistical array-based approaches to the processing of these records. We use methods mainly drawn from random matrix theory to give a statistical description of seismic wavefields recorded at the Earth's surface. We estimate the array covariance matrix and explore the distribution of its eigenvalues, which contains information about the coherency of the sources that generated the studied wavefields. With this approach, we can distinguish between signals generated by isolated deterministic sources and the "random" ambient noise. We design an algorithm that uses the distribution of the array covariance matrix eigenvalues to detect signals corresponding to coherent seismic events. We investigate the detection capacity of our method at different scales and in different frequency ranges by applying it to the records of two networks: (1) the seismic monitoring network operating on the Piton de la Fournaise volcano on La Réunion island, composed of 21 receivers with an aperture of ~15 km, and (2) the transportable component of the USArray, composed of ~400 receivers with ~70 km inter-station spacing.
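The eigenvalue-based detection idea can be sketched in a few lines: a coherent source adds a near-rank-one component to the array covariance matrix, concentrating variance in its leading eigenvalues, whereas pure noise spreads variance across all of them. The synthetic data and the simple "spectral width" statistic below are illustrative stand-ins for the paper's detector:

```python
import numpy as np

rng = np.random.default_rng(0)
n_sta, n_samp = 20, 2048

# Synthetic array records: incoherent noise, and the same noise plus a
# common (coherent) waveform reaching every station.
noise = rng.standard_normal((n_sta, n_samp))
coherent = noise + 3.0 * np.sin(2 * np.pi * 5 * np.arange(n_samp) / 512)

def eigen_spectrum(records):
    """Eigenvalues of the array covariance matrix, largest first."""
    x = records - records.mean(axis=1, keepdims=True)
    cov = x @ x.T / x.shape[1]
    return np.sort(np.linalg.eigvalsh(cov))[::-1]

def width(eig):
    """Spectral width: small when variance concentrates in few modes."""
    p = eig / eig.sum()
    return (p * np.arange(len(p))).sum()

print(width(eigen_spectrum(noise)), width(eigen_spectrum(coherent)))
```

A detector would flag time windows whose spectral width drops well below the value expected for noise alone.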
NASA Astrophysics Data System (ADS)
Sollberger, David; Schmelzbach, Cedric; Robertsson, Johan O. A.; Greenhalgh, Stewart A.; Nakamura, Yosio; Khan, Amir
2016-04-01
We present a new seismic velocity model of the shallow lunar crust, including, for the first time, shear-wave velocity information. Until now, the shear-wave velocity structure of the lunar near-surface was effectively unconstrained because of the complexity of lunar seismograms. Intense scattering and low attenuation in the lunar crust lead to characteristic long-duration reverberations on the seismograms. The reverberations obscure later-arriving shear waves and mode conversions, rendering them impossible to identify and analyze. Additionally, only vertical-component data were recorded during the Apollo active seismic experiments, which further compromises the identification of shear waves. We applied a novel processing and analysis technique to the data of the Apollo 17 lunar seismic profiling experiment (LSPE), which involved recording seismic energy generated by several explosive packages on a small areal array of four vertical-component geophones. Our approach is based on the analysis of the spatial gradients of the seismic wavefield and yields key parameters such as apparent phase velocity and rotational ground motion as a function of time (depth), which cannot be obtained through conventional seismic data analysis. These new observables significantly enhance the interpretability of the recorded seismic wavefield and allow, for example, the identification of S-wave arrivals based on their lower apparent phase velocities and their distinctly higher rotational motion relative to compressional (P) waves. Using our methodology, we successfully identified pure-mode and mode-converted refracted shear-wave arrivals in the complex LSPE data and derived a P- and S-wave velocity model of the shallow lunar crust at the Apollo 17 landing site.
The extracted elastic-parameter model supports the current understanding of the lunar near-surface structure, suggesting a thin layer of low-velocity lunar regolith overlying a heavily fractured crust of basaltic material with high Poisson's ratios (>0.4) down to 60 m depth. Our new model can be used in future studies to better constrain the deep interior of the Moon. Given the rich information derived from this minimalistic recording configuration, our results demonstrate that wavefield gradient analysis should be seriously considered for future space missions that aim to explore the interior structure of extraterrestrial objects by seismic methods. Additionally, we anticipate that the proposed shear-wave identification methodology can also be applied to the routinely recorded vertical-component data from land seismic exploration on Earth.
State of art of seismic design and seismic hazard analysis for oil and gas pipeline system
NASA Astrophysics Data System (ADS)
Liu, Aiwen; Chen, Kun; Wu, Jian
2010-06-01
The purpose of this paper is to adopt the uniform-confidence method in both water pipeline design and oil-gas pipeline design. Based on the importance of a pipeline and the consequences of its failure, oil and gas pipelines can be classified into three pipe classes, with 50-year exceedance probabilities of 2%, 5%, and 10%, respectively. Performance-based design requires more information about ground motion, which should be obtained by evaluating the seismic safety of the pipeline engineering site. Unlike a city's water pipeline network, a long-distance oil and gas pipeline system is a spatially linearly distributed system. For uniform confidence in seismic safety, a long-distance oil and gas pipeline composed of pump stations and different-class pipe segments should be considered as a whole system when analyzing seismic risk. Considering the uncertainty of earthquake magnitude, design-basis fault displacements corresponding to the different pipeline classes are proposed to improve deterministic seismic hazard analysis (DSHA). A new empirical relationship between the maximum fault displacement and the surface-wave magnitude is obtained from supplemented earthquake data in East Asia. The estimation of fault displacement for a refined-oil pipeline in the Wenchuan MS 8.0 earthquake is introduced as an example.
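Empirical relationships of this kind conventionally take the form log10(D) = a + b·MS, fit by least squares. The sketch below uses invented (MS, displacement) pairs; the paper's actual East-Asia dataset and regression coefficients differ:

```python
import numpy as np

# Hypothetical (Ms, max fault displacement in m) pairs standing in for
# the East-Asia dataset; coefficients below are NOT the paper's values.
ms = np.array([6.5, 7.0, 7.3, 7.6, 7.9, 8.0])
disp = np.array([0.9, 1.8, 2.9, 4.6, 7.5, 9.0])

# Standard empirical form: log10(D) = a + b * Ms, fit by least squares.
b, a = np.polyfit(ms, np.log10(disp), 1)

def design_basis_displacement(magnitude):
    """Design-basis fault displacement (m) for a scenario magnitude."""
    return 10 ** (a + b * magnitude)

print(a, b, design_basis_displacement(8.0))
```

In a DSHA workflow, the scenario magnitude for each pipeline class would be fed through such a relation to obtain its design-basis fault displacement.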
Multi-azimuth 3D Seismic Exploration and Processing in the Jeju Basin, the Northern East China Sea
NASA Astrophysics Data System (ADS)
Yoon, Youngho; Kang, Moohee; Kim, Jin-Ho; Kim, Kyong-O.
2015-04-01
Multi-azimuth (MAZ) 3D seismic exploration is one of the most advanced seismic survey methods for improving illumination and multiple attenuation to better image subsurface structures. 3D multi-channel seismic data were collected in two phases during 2012, 2013, and 2014 in the Jeju Basin, the northern part of the East China Sea Basin, where several oil and gas fields have been discovered. Phase 1 data, acquired at 135° and 315° azimuths in 2012 and 2013, comprise full 3D marine seismic coverage of 160 km². In 2014, phase 2 data were acquired at azimuths of 45° and 225°, perpendicular to those of phase 1. The two datasets were processed through the same workflow prior to velocity analysis and merged into one MAZ dataset. We performed velocity analysis on the MAZ dataset as well as on the two phase datasets individually, and then stacked the three datasets separately. We were able to pick more accurate velocities in the MAZ dataset than in the phase 1 and 2 data. Consequently, the MAZ seismic volume provides better resolution and improved images, since different shooting directions illuminate different parts of the structures and stratigraphic features.
Improvement of Epicentral Direction Estimation by P-wave Polarization Analysis
NASA Astrophysics Data System (ADS)
Oshima, Mitsutaka
2016-04-01
Polarization analysis has been used to analyze the polarization characteristics of waves and has been developed in various fields, for example, electromagnetics, optics, and seismology. In seismology, polarization analysis is used to discriminate seismic phases or to enhance specific phases (e.g., Flinn, 1965) [1], by taking advantage of the differences in polarization characteristics among seismic phases. In earthquake early warning, polarization analysis is used to estimate the epicentral direction from a single station, based on the polarization direction of the P-wave portion of seismic records (e.g., Smart and Sproules (1981) [2]; Noda et al. (2012) [3]). Improving the Estimation of Epicentral Direction by Polarization Analysis (EEDPA) therefore directly enhances the accuracy and promptness of earthquake early warning. In this study, the author tried to improve EEDPA using seismic records of events that occurred around Japan from 2003 to 2013. The author selected events satisfying the following conditions: MJMA larger than 6.5 (JMA: Japan Meteorological Agency), and seismic records available at no fewer than 3 stations within 300 km epicentral distance. Records obtained at stations with no information on seismometer orientation were excluded, so that a precise and quantitative evaluation of the accuracy of EEDPA becomes possible. In the analysis, polarization was calculated by the method of Vidale (1986) [4], which extended the method proposed by Montalbetti and Kanasewich (1970) [5] to use the analytic signal. As a result, the author found that the accuracy of EEDPA improves by about 15% if velocity records, rather than displacement records, are used, contrary to the author's expectation. The use of velocity records also reduces the CPU time spent integrating seismic records and improves the promptness of EEDPA, although this analysis is still rough and further scrutiny is essential.
At this stage, the author has used seismic records obtained by simply integrating acceleration records, with no filtering applied. Further study on the optimal type of filter and its application frequency band is necessary. The results of the aforementioned study will be shown in the poster presentation. [1] Flinn, E. A. (1965). Signal analysis using rectilinearity and direction of particle motion. Proceedings of the IEEE, 53(12), 1874-1876. [2] Smart, E., & Sproules, H. (1981). Regional phase processors (No. SDAC-TR-81-1). Teledyne Geotech, Alexandria, VA, Seismic Data Analysis Center. [3] Noda, S., Yamamoto, S., Sato, S., Iwata, N., Korenaga, M., & Ashiya, K. (2012). Improvement of back-azimuth estimation in real-time by using a single station record. Earth, Planets and Space, 64(3), 305-308. [4] Vidale, J. E. (1986). Complex polarization analysis of particle motion. Bulletin of the Seismological Society of America, 76(5), 1393-1405. [5] Montalbetti, J. F., & Kanasewich, E. R. (1970). Enhancement of teleseismic body phases with a polarization filter. Geophysical Journal International, 21(2), 119-129.
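The single-station back-azimuth estimate at the heart of EEDPA can be sketched with a covariance-based polarization analysis in the Montalbetti-Kanasewich style (Vidale's analytic-signal extension is omitted here). All signal parameters below are synthetic:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 500

# Synthetic P-wave window: rectilinear particle motion for a wave arriving
# from a back-azimuth of 60 deg at 30 deg incidence, plus weak noise.
baz_true, inc = np.deg2rad(60.0), np.deg2rad(30.0)
s = np.sin(2 * np.pi * 4 * np.arange(n) / 250) * np.hanning(n)
z = np.cos(inc) * s + 0.02 * rng.standard_normal(n)       # vertical (up)
north = -np.sin(inc) * np.cos(baz_true) * s + 0.02 * rng.standard_normal(n)
east = -np.sin(inc) * np.sin(baz_true) * s + 0.02 * rng.standard_normal(n)

# Principal eigenvector of the 3x3 covariance matrix gives the dominant
# particle-motion direction.
cov = np.cov(np.vstack([z, north, east]))
_, v = np.linalg.eigh(cov)
p = v[:, -1]                     # eigenvector of the largest eigenvalue
if p[0] < 0:                     # resolve the 180-deg ambiguity using the
    p = -p                       # polarity of the up-going P motion
baz_est = np.rad2deg(np.arctan2(-p[2], -p[1])) % 360
print(baz_est)
```

With real records, the same computation would be applied to a short window starting at the P pick, on velocity or displacement seismograms as discussed in the abstract.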
Online monitoring of seismic damage in water distribution systems
NASA Astrophysics Data System (ADS)
Liang, Jianwen; Xiao, Di; Zhao, Xinhua; Zhang, Hongwei
2004-07-01
Water distribution systems can be damaged by earthquakes, and such seismic damage cannot easily be located, especially immediately after the event. Earthquake experience shows that accurate and quick location of seismic damage is critical to the emergency response of water distribution systems. This paper develops a methodology to locate seismic damage, namely multiple breaks in a water distribution system, by monitoring water pressure online at a limited number of positions in the system. Supervisory control and data acquisition (SCADA) technology is well suited to such online monitoring. A neural network-based inverse analysis method is constructed for locating the seismic damage from the variation of water pressure. The neural network is trained using analytically simulated data from the water distribution system and validated using a set of data never used in the training. The methodology is found to provide an effective and practical way to locate seismic damage in a water distribution system accurately and quickly.
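The train-on-simulations, invert-from-pressures idea can be sketched with a toy forward model and a minimal one-hidden-layer network. Everything here (the pressure model, network size, training settings) is an invented stand-in for the paper's hydraulic simulations and network design:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy forward model: a break at position p (0..1 along a line) lowers the
# pressure at 5 monitored nodes in proportion to proximity to the break.
nodes = np.linspace(0, 1, 5)
def pressure_drop(p):
    return np.exp(-np.abs(nodes - p) / 0.2)

# Analytically simulated training set, as in the paper's methodology.
p_train = rng.uniform(0, 1, 200)
X = np.array([pressure_drop(p) for p in p_train])
y = p_train.reshape(-1, 1)

# Minimal one-hidden-layer network trained by full-batch gradient descent.
W1 = 0.5 * rng.standard_normal((5, 16)); b1 = np.zeros(16)
W2 = 0.5 * rng.standard_normal((16, 1)); b2 = np.zeros(1)
lr = 0.1
for _ in range(2000):
    h = np.tanh(X @ W1 + b1)
    out = h @ W2 + b2
    err = out - y                              # MSE gradient terms
    gW2 = h.T @ err / len(X); gb2 = err.mean(0)
    dh = (err @ W2.T) * (1 - h ** 2)
    gW1 = X.T @ dh / len(X); gb1 = dh.mean(0)
    W1 -= lr * gW1; b1 -= lr * gb1; W2 -= lr * gW2; b2 -= lr * gb2

# Inverse analysis: locate an unseen break from its pressure signature.
est = (np.tanh(pressure_drop(0.37) @ W1 + b1) @ W2 + b2)[0]
print(est)  # estimated break position
```

The real method maps SCADA pressure variations across a network graph to break locations, but the inverse-mapping structure is the same.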
NASA Astrophysics Data System (ADS)
Dey, Joyjit; Perumal, R. Jayangonda; Sarkar, Subham; Bhowmik, Anamitra
2017-08-01
In the NW Sub-Himalayan frontal thrust belt of India, seismic interpretations of the subsurface geometry of the Kangra and Dehradun re-entrants conflict with previously proposed models, which lack the direct quantitative measurements on the seismic profile required to constrain the subsurface structural architecture. Here we use a predictive angular function to establish quantitative geometric relationships between fault and fold shapes with the 'distance-displacement method' (D-d method), a straightforward predictive means of probing the possible structural network in a seismic profile. Two seismic profiles of the Kangra re-entrant, Himachal Pradesh (India), Kangra-2 and Kangra-4, are investigated for the fault-related folds associated with the Balh and Paror anticlines. For the Paror anticline, a final cut-off angle β = 35° was obtained by transforming the seismic time profile into a depth profile to corroborate the interpreted structures. The estimated shortening along the Jawalamukhi Thrust and the Jhor Fault, lying between the Himalayan Frontal Thrust (HFT) and the Main Boundary Thrust (MBT) in the frontal fold-thrust belt, was found to be 6.06 and 0.25 km, respectively. Lastly, the geometric method of fold-fault relationships has been applied to document the existence of a fault-bend fold above the Himalayan Frontal Thrust (HFT). Measurement of shortening along the fault plane is employed as an ancillary tool to demonstrate the multi-bending geometry of the blind thrust of the Dehradun re-entrant.
NASA Astrophysics Data System (ADS)
Nanjo, K.; Izutsu, J.; Orihara, Y.; Furuse, N.; Togo, S.; Nitta, H.; Okada, T.; Tanaka, R.; Kamogawa, M.; Nagao, T.
2016-12-01
We show the first results of recognizing seismic patterns as possible precursory episodes to the 2016 Kumamoto earthquakes, using four existing methods: the b-value method (e.g., Schorlemmer and Wiemer, 2005; Nanjo et al., 2012), two seismic quiescence evaluation methods (the RTM algorithm, Nagao et al., 2011, and the Z-value method, Wiemer and Wyss, 1994), and foreshock seismic density analysis based on Lippiello et al. (2012). We used the earthquake catalog maintained by the Japan Meteorological Agency (JMA). To ensure data quality, we performed a catalog completeness check as a pre-processing step for the individual analyses. Our findings indicate that the methods we adopted would not have allowed the Kumamoto earthquakes to be predicted exactly. However, we found that the spatial extent of the possible precursory patterns differs from one method to the other and ranges from local scales (typically asperity size) to regional scales (e.g., 2° × 3° around the source zone). The earthquakes are preceded by periods of pronounced anomalies lasting from decadal scales (e.g., 20 years or longer) down to yearly scales (e.g., 1-2 years). Our results demonstrate that a combination of multiple methods detects different signals prior to the Kumamoto earthquakes with greater reliability than any single method. This strongly suggests great potential for narrowing down the possible sites of future earthquakes relative to long-term seismic hazard assessment. This study was partly supported by MEXT under its Earthquake and Volcano Hazards Observation and Research Program and Grant-in-Aid for Scientific Research (C), No. 26350483, 2014-2017, by Chubu University under the Collaboration Research Program of IDEAS, IDEAS201614, and by Tokai University under Project Research of IORD. A part of this presentation is given in Nanjo et al. (2016, submitted).
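The b-value analysis named first rests on the Aki (1965) maximum-likelihood estimator with Utsu's correction for binned magnitudes, b = log10(e) / (mean(M) - (Mc - ΔM/2)). A self-contained sketch on a synthetic Gutenberg-Richter catalog (the completeness magnitude and catalog size are illustrative):

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic JMA-style catalog: Gutenberg-Richter magnitudes with b = 1.0,
# complete above Mc and reported in 0.1-unit bins.
b_true, mc, dm = 1.0, 1.5, 0.1
beta = b_true * np.log(10.0)
mags = (mc - dm / 2) + rng.exponential(1.0 / beta, 5000)
mags = np.round(mags / dm) * dm               # magnitude binning

# Aki (1965) maximum-likelihood estimate with Utsu's binning correction.
b_est = np.log10(np.e) / (mags.mean() - (mc - dm / 2))
print(round(b_est, 2))
```

Precursory b-value studies track this estimate in space and time, so the completeness check mentioned in the abstract (choosing Mc correctly) directly controls the estimator's bias.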
NASA Astrophysics Data System (ADS)
Panzera, Francesco; Lombardo, Giuseppe; Rigano, Rosaria
2010-05-01
The seismic hazard assessment (SHA) can be performed using either deterministic or probabilistic approaches. In the present study, a probabilistic analysis was carried out for the towns of Catania and Siracusa using two different procedures: the 'site' (Albarello and Mucciarelli, 2002) and the 'seismotectonic' (Cornell, 1968; Esteva, 1967) methodologies. The SASHA code (D'Amico and Albarello, 2007) was used to calculate seismic hazard with the 'site' approach, whereas the CRISIS2007 code (Ordaz et al., 2007) was adopted in the Esteva-Cornell procedure. According to current international conventions for PSHA (SSHAC, 1997), a logic-tree approach was followed to account for and reduce the epistemic uncertainties in both the seismotectonic and site methods. The SASHA code handles the intensity data taking into account the macroseismic information of past earthquakes. The CRISIS2007 code needs, as input elements, a seismic catalogue tested for completeness, a seismogenic zonation, and ground-motion prediction equations. Data concerning the characterization of regional seismic sources and ground-motion attenuation properties were taken from the literature. Special care was devoted to defining the source-zone models, taking into account the most recent studies on regional seismotectonic features and, in particular, the possibility of considering the Malta escarpment as a potential source. The combined use of the above-mentioned approaches allowed us to obtain useful elements to define the site seismic hazard in Catania and Siracusa. The results point out that the choice of the probabilistic model plays a fundamental role. Indeed, when the site intensity data are used, the town of Catania shows hazard values higher than those found for Siracusa for each considered return period. On the contrary, when the Esteva-Cornell method is used, the Siracusa urban area shows higher hazard than Catania for return periods greater than one hundred years.
The higher hazard observed through the site approach for the Catania area can be interpreted in terms of the greater damage historically observed in this town and its smaller distance from the seismogenic structures. On the other hand, the higher level of hazard found for Siracusa through the Esteva-Cornell approach could be a consequence of the features of this method, which spreads the intensities over a wide area. However, in SHA the use of a combined approach is recommended for mutual validation of the obtained results, and any choice between the two approaches is strictly linked to the knowledge of the local seismotectonic features. References: Albarello D. and Mucciarelli M.; 2002: Seismic hazard estimates using ill-defined macroseismic data at site. Pure Appl. Geophys., 159, 1289-1304. Cornell C.A.; 1968: Engineering seismic risk analysis. Bull. Seism. Soc. Am., 58(5), 1583-1606. D'Amico V. and Albarello D.; 2007: Codice per il calcolo della pericolosità sismica da dati di sito (freeware). Progetto DPC-INGV S1, http://esse1.mi.ingv.it/d12.html. Esteva L.; 1967: Criterios para la construcción de espectros para diseño sísmico. Proceedings of XII Jornadas Sudamericanas de Ingeniería Estructural y III Simposio Panamericano de Estructuras, Caracas, 1967; published later in Boletín del Instituto de Materiales y Modelos Estructurales, Universidad Central de Venezuela, No. 19. Ordaz M., Aguilar A. and Arboleda J.; 2007: CRISIS2007, program for computing seismic hazard. Version 5.4, Mexico City: UNAM. SSHAC (Senior Seismic Hazard Analysis Committee); 1997: Recommendations for probabilistic seismic hazard analysis: guidance on uncertainty and use of experts. NUREG/CR-6372.
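The Esteva-Cornell computation combines an earthquake recurrence model with a ground-motion model to obtain annual exceedance rates. A minimal single-source sketch (the recurrence and ground-motion coefficients are illustrative placeholders, not values from this study or from CRISIS2007):

```python
import numpy as np
from math import erf

# Minimal Cornell-type hazard integral for one source zone at a fixed
# distance from the site.
r_km = 30.0
mags = np.arange(5.0, 7.6, 0.1)

# Gutenberg-Richter recurrence: annual rate in each magnitude bin.
a_gr, b_gr = 3.0, 1.0
cum = 10 ** (a_gr - b_gr * mags)          # annual rate of M >= m
rates = cum - np.append(cum[1:], 0.0)     # per-bin rates

# Toy ground-motion model: median ln PGA (g) with log-normal scatter.
def ln_pga_median(m, r):
    return -3.5 + 1.0 * m - 1.2 * np.log(r + 10.0)
sigma = 0.6

def annual_exceedance(pga):
    """Annual rate at which the given PGA is exceeded at the site."""
    z = (np.log(pga) - ln_pga_median(mags, r_km)) / sigma
    p_exc = np.array([0.5 * (1.0 - erf(zi / np.sqrt(2.0))) for zi in z])
    return float((rates * p_exc).sum())

# One point on the hazard curve: exceedance rate for 0.1 g.
print(annual_exceedance(0.1))
```

A full PSHA sums such integrals over all source zones and distances, and the 'site' approach replaces the ground-motion leg with observed macroseismic intensities.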
The use of multiwavelets for uncertainty estimation in seismic surface wave dispersion.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Poppeliers, Christian
This report describes a new single-station analysis method to estimate the dispersion and uncertainty of seismic surface waves using the multiwavelet transform. Typically, when estimating the dispersion of a surface wave using only a single seismic station, the seismogram is decomposed into a series of narrow-band realizations using a bank of narrow-band filters. By then enveloping and normalizing the filtered seismograms and identifying the maximum power as a function of frequency, the group velocity can be estimated if the source-receiver distance is known. However, using the filter-bank method, there is no robust way to estimate uncertainty. In this report, I introduce a new method of estimating the group velocity that includes an estimate of uncertainty. The method is similar to the conventional filter-bank method, but uses a class of functions, called Slepian wavelets, to compute a series of wavelet transforms of the data. Each wavelet transform is mathematically similar to a filter bank; however, the time-frequency tradeoff is optimized. By taking multiple wavelet transforms, I form a population of dispersion estimates from which standard statistical methods can be used to estimate uncertainty. I demonstrate the utility of this new method by applying it to synthetic data as well as ambient-noise surface-wave cross-correlograms recorded by the University of Nevada Seismic Network.
Applying Binary Forecasting Approaches to Induced Seismicity in the Western Canada Sedimentary Basin
NASA Astrophysics Data System (ADS)
Kahue, R.; Shcherbakov, R.
2016-12-01
The Western Canada Sedimentary Basin has been chosen as a focus due to an increase in recently observed seismicity there, most likely linked to anthropogenic activities related to unconventional oil and gas exploration. Seismicity caused by such activities is called induced seismicity. The occurrence of moderate to large induced earthquakes in areas with critical infrastructure is potentially problematic. Here we use a binary forecast method to analyze past seismicity and well production data in order to delineate areas of increased future seismicity. This method splits the region into spatial cells. Binary forecasting has been suggested in the past to retroactively forecast large earthquakes occurring globally in areas called alarm cells. An alarm cell, or alert zone, is a bin in which there is a higher likelihood of earthquakes based on previous data. The first method utilizes the cumulative Benioff strain of earthquakes that occurred in each bin above a given magnitude over a time interval called the training period. The second method utilizes the cumulative well production data within each bin. Earthquakes that occurred within an alert zone in the retrospective forecast period contribute to the hit rate, while alert zones without an earthquake in the forecast period contribute to the false alarm rate. In the analysis, the hit rate and false alarm rate are determined after optimizing the initial parameters using the receiver operating characteristic diagram. When modifying the cell size and threshold magnitude parameters within various training periods, hit and false alarm rates are obtained for specific regions of Western Canada using both recent seismicity and cumulative well production data. Certain areas are thus shown to be more prone to potential larger earthquakes based on both datasets.
This has implications for the potential link between oil and gas production and induced seismicity observed in the Western Canada Sedimentary Basin.
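The hit-rate/false-alarm-rate bookkeeping can be sketched on a toy grid. The synthetic "Benioff strain" field and its correlation with forecast-period events below are purely illustrative, not Western Canada data:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy gridded region: training-period cumulative Benioff strain per cell,
# and whether a target event occurred there in the forecast period (the
# two are correlated here by construction, for illustration).
n_cells = 400
strain = rng.lognormal(0.0, 1.0, n_cells)
p_event = strain / (strain + np.median(strain))
events = rng.random(n_cells) < 0.1 * p_event

def hit_false_alarm(threshold):
    """Hit rate and false alarm rate for a given alarm threshold."""
    alarm = strain >= threshold                  # alert zones
    hits = (alarm & events).sum() / max(events.sum(), 1)
    false = (alarm & ~events).sum() / max((~events).sum(), 1)
    return hits, false

# Sweep the threshold to trace a receiver-operating-characteristic curve.
for q in (0.5, 0.75, 0.9):
    thr = np.quantile(strain, q)
    h, f = hit_false_alarm(thr)
    print(f"quantile {q:.2f}  hit rate {h:.2f}  false alarm rate {f:.2f}")
```

Optimizing cell size and magnitude threshold amounts to choosing the parameterization whose ROC curve bends farthest from the diagonal; the same bookkeeping applies with cumulative well production in place of Benioff strain.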
DOE Office of Scientific and Technical Information (OSTI.GOV)
Spears, Robert Edward; Coleman, Justin Leigh
2015-08-01
Seismic analysis of nuclear structures is routinely performed using guidance provided in "Seismic Analysis of Safety-Related Nuclear Structures and Commentary" (ASCE 4, 1998). This document, currently under revision, provides detailed guidance on linear seismic soil-structure interaction (SSI) analysis of nuclear structures. To accommodate the linear analysis, soil material properties are typically developed as shear modulus and damping ratio versus cyclic shear strain amplitude. A new appendix in ASCE 4-2014 (draft) is being added to provide guidance for nonlinear time-domain SSI analysis. To accommodate the nonlinear analysis, a more appropriate form of the soil material properties includes shear stress and energy absorbed per cycle versus shear strain. Ideally, nonlinear soil model material properties would be established with soil testing appropriate for the nonlinear constitutive model being used. However, much of the soil testing done for SSI analysis is performed for use with linear analysis techniques. Consequently, a method is described in this paper that uses soil test data intended for linear analysis to develop nonlinear soil material properties. To produce nonlinear material properties that are equivalent to the linear material properties, the linear and nonlinear model hysteresis loops are compared. For equivalent material properties, the shear stress at peak shear strain and the energy absorbed per cycle should match between the linear and nonlinear model hysteresis loops. Nonlinear material properties are therefore selected based on these criteria.
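The two matching targets, peak shear stress and energy absorbed per cycle, both follow from the linear (equivalent viscous) properties. The sketch below builds the linear model's hysteresis loop for one strain cycle and recovers the damping ratio from its enclosed area via ξ = W_d / (4π W_s); all numeric values are illustrative:

```python
import numpy as np

gamma_peak = 1e-3   # cyclic shear strain amplitude from the test data
g_sec = 40e6        # secant shear modulus (Pa)
damping = 0.12      # damping ratio reported with the linear properties

# Stress-strain loop of the equivalent viscous (linear) model over one
# cycle of harmonic strain.
theta = np.linspace(0, 2 * np.pi, 4000)
gamma = gamma_peak * np.sin(theta)
tau = g_sec * gamma + 2 * damping * g_sec * gamma_peak * np.cos(theta)

# Energy absorbed per cycle is the loop area; a candidate nonlinear model
# must reproduce both this area and the peak shear stress.
w_d = abs(np.trapz(tau, gamma))                 # enclosed loop area
w_s = 0.5 * g_sec * gamma_peak ** 2             # peak elastic strain energy
xi = w_d / (4 * np.pi * w_s)                    # recovered damping ratio
print(xi)
```

Calibrating a nonlinear soil model then means adjusting its parameters until its loop area and peak stress at gamma_peak match w_d and g_sec * gamma_peak.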
A modified symplectic PRK scheme for seismic wave modeling
NASA Astrophysics Data System (ADS)
Liu, Shaolin; Yang, Dinghui; Ma, Jian
2017-02-01
A new scheme for the temporal discretization of the seismic wave equation is constructed based on symplectic geometric theory and a modified strategy. The ordinary differential equation in time, obtained after spatial discretization via the spectral-element method, is transformed into a Hamiltonian system. A symplectic partitioned Runge-Kutta (PRK) scheme is used to solve the Hamiltonian system. A term related to the multiplication of the spatial discretization operator with the seismic wave velocity vector is added to the symplectic PRK scheme to create a modified symplectic PRK scheme. The symplectic coefficients of the new scheme are determined via Taylor series expansion. The positive coefficients of the scheme indicate that its long-term computational capability surpasses that of conventional symplectic schemes. Theoretical analysis reveals that the new scheme is highly stable and has low numerical dispersion. The results of three numerical experiments demonstrate the high efficiency of this method for seismic wave modeling.
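The long-term advantage of symplectic time integration is easy to demonstrate on a single mode of the discretized wave equation, i.e. a harmonic oscillator. Below, Störmer-Verlet (the simplest symplectic partitioned Runge-Kutta scheme, not the paper's modified PRK) is compared against explicit Euler over many periods:

```python
import numpy as np

# Harmonic oscillator q'' = -w^2 q as a one-mode stand-in for the
# spatially discretized wave equation.
w, dt, n = 2 * np.pi, 0.01, 20000

def verlet(q, p):
    p = p - 0.5 * dt * w**2 * q      # half kick
    q = q + dt * p                   # drift
    p = p - 0.5 * dt * w**2 * q      # half kick
    return q, p

def euler(q, p):
    return q + dt * p, p - dt * w**2 * q

energy = lambda q, p: 0.5 * p**2 + 0.5 * w**2 * q**2
qs, ps = 1.0, 0.0                    # symplectic trajectory
qe, pe = 1.0, 0.0                    # explicit Euler trajectory
e0 = energy(qs, ps)
for _ in range(n):
    qs, ps = verlet(qs, ps)
    qe, pe = euler(qe, pe)

# After 200 periods: the symplectic scheme keeps the energy error bounded,
# while explicit Euler's energy grows without limit.
print(abs(energy(qs, ps) - e0) / e0, abs(energy(qe, pe) - e0) / e0)
```

This bounded-energy behavior is what the abstract calls "long-term computational capability".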
Fluid-structure interaction in fast breeder reactors
NASA Astrophysics Data System (ADS)
Mitra, A. A.; Manik, D. N.; Chellapandi, P. A.
2004-05-01
A finite element model is proposed for the seismic analysis of a scaled-down model of a fast breeder reactor (FBR) main vessel. The reactor vessel, a large shell structure with a relatively thin wall, contains a large volume of sodium coolant; therefore, fluid-structure interaction effects must be taken into account in the seismic design. As part of the fluid-structure interaction study, the fundamental frequency of vibration of a circular cylindrical shell partially filled with liquid has been estimated using Rayleigh's method. The bulging and sloshing frequencies of the first four modes of this system have been estimated using the Rayleigh-Ritz method. The finite element formulation of the axisymmetric fluid element with the Fourier option (required due to seismic loading) is also presented.
Petersen, Mark D.; Mueller, Charles S.; Moschetti, Morgan P.; Hoover, Susan M.; Rubinstein, Justin L.; Llenos, Andrea L.; Michael, Andrew J.; Ellsworth, William L.; McGarr, Arthur F.; Holland, Austin A.; Anderson, John G.
2015-01-01
The U.S. Geological Survey National Seismic Hazard Model for the conterminous United States was updated in 2014 to account for new methods, input models, and data necessary for assessing the seismic ground shaking hazard from natural (tectonic) earthquakes. The U.S. Geological Survey National Seismic Hazard Model project uses probabilistic seismic hazard analysis to quantify the rate of exceedance for earthquake ground shaking (ground motion). For the 2014 National Seismic Hazard Model assessment, the seismic hazard from potentially induced earthquakes was intentionally not considered because we had not determined how to properly treat these earthquakes in the seismic hazard analysis. The phrases “potentially induced” and “induced” are used interchangeably in this report; however, it is acknowledged that this classification is based on circumstantial evidence and scientific judgment. For the 2014 National Seismic Hazard Model update, the potentially induced earthquakes were removed from the NSHM’s earthquake catalog, and the documentation states that we would consider alternative models for including induced seismicity in a future version of the National Seismic Hazard Model. As part of the process of incorporating induced seismicity into the seismic hazard model, we evaluate the sensitivity of the seismic hazard from induced seismicity to five parts of the hazard model: (1) the earthquake catalog, (2) earthquake rates, (3) earthquake locations, (4) earthquake Mmax (maximum magnitude), and (5) earthquake ground motions. We describe alternative input models for each of the five parts that represent differences in scientific opinion on induced seismicity characteristics. In this report, however, we do not weight these input models to arrive at a preferred final model. Instead, we present a sensitivity study showing uniform seismic hazard maps obtained by applying the alternative input models for induced seismicity.
The final model will be released after further consideration of the reliability and scientific acceptability of each alternative input model. Forecasting the seismic hazard from induced earthquakes is fundamentally different from forecasting the seismic hazard for natural, tectonic earthquakes. This is because the spatio-temporal patterns of induced earthquakes are reliant on economic forces and public policy decisions regarding extraction and injection of fluids. As such, the rates of induced earthquakes are inherently variable and nonstationary. Therefore, we only make maps based on an annual rate of exceedance rather than the 50-year rates calculated for previous U.S. Geological Survey hazard maps.
Seismic and Aseismic Slip on the Cascadia Megathrust
NASA Astrophysics Data System (ADS)
Michel, S. G. R. M.; Gualandi, A.; Avouac, J. P.
2017-12-01
Our understanding of the dynamics governing aseismic and seismic slip hinges on our ability to image the time evolution of fault slip during and in between earthquakes and transients. Such kinematic descriptions are also pivotal to assessing seismic hazard as, over the long term, elastic strain accumulating around a fault should be balanced by elastic strain released by seismic slip and aseismic transients. In this presentation, we will discuss how such kinematic descriptions can be obtained from the analysis and modelling of geodetic time series. We will use inversion methods based on Independent Component Analysis (ICA) decomposition of the time series to extract and model the aseismic slip (afterslip and slow slip events). We will show that this approach is very effective for identifying and filtering out non-tectonic sources of geodetic strain, such as the strain due to surface loads, which can be estimated using gravimetric measurements from GRACE, and thermal strain. We will discuss in particular the application to the Cascadia subduction zone.
James, S. R.; Knox, H. A.; Abbott, R. E.; ...
2017-04-13
Cross correlations of seismic noise can potentially record large changes in subsurface velocity due to permafrost dynamics and be valuable for long-term Arctic monitoring. We applied seismic interferometry, using moving-window cross-spectral analysis (MWCS), to 2 years of ambient noise data recorded in central Alaska to investigate whether seismic noise could be used to quantify relative velocity changes due to seasonal active-layer dynamics. The large velocity changes (>75%) between frozen and thawed soil caused prevalent cycle skipping, which made the method unusable in this setting. We developed an improved MWCS procedure which uses a moving reference to measure daily velocity variations that are then accumulated to recover the full seasonal change. This approach reduced cycle skipping and recovered a seasonal trend that corresponded well with the timing of active-layer freeze and thaw. Lastly, this improvement opens the possibility of measuring large velocity changes by using MWCS and of permafrost monitoring by using ambient noise.
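The moving-reference trick can be illustrated with a toy velocity history: a fixed reference fails once the total change exceeds the cycle-skipping limit, but the small day-to-day changes each stay measurable and can be accumulated. The seasonal curve, limit, and noise level below are invented for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical seasonal velocity history: summer thaw drops dv/v by 40%.
days = np.arange(365)
true_dvv = -0.40 * np.exp(-((days - 210) / 45.0) ** 2)

# A fixed-reference MWCS measurement cycle-skips once the change from the
# reference grows large; emulate that by failing beyond a modest limit.
limit = 0.10
fixed_ref = np.where(np.abs(true_dvv) < limit, true_dvv, np.nan)

# Moving reference: measure only the small day-to-day change (which stays
# below the cycle-skipping limit), then accumulate the daily steps.
daily = np.diff(true_dvv, prepend=0.0)
daily += 0.001 * rng.standard_normal(365)     # measurement noise
moving_ref = np.cumsum(daily)

# The fixed reference never sees the full seasonal drop; the accumulated
# moving-reference series does (at the cost of accumulating noise).
print(np.nanmin(fixed_ref), moving_ref.min())
```

The accumulated-noise cost visible here is the usual trade-off of chaining relative measurements.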
Kinematic Seismic Rupture Parameters from a Doppler Analysis
NASA Astrophysics Data System (ADS)
Caldeira, Bento; Bezzeghoud, Mourad; Borges, José F.
2010-05-01
The radiation emitted from extended seismic sources, particularly when the rupture spreads in a preferred direction, presents spectral deviations as a function of the observation location. This effect, absent for point sources and known as directivity, is manifested by an increase in the frequency and amplitude of seismic waves when the rupture propagates toward the seismic station and a decrease in frequency and amplitude when it propagates in the opposite direction. The directivity model that supports the method is a Doppler analysis based on a kinematic source model of rupture and wave propagation through a structural medium with spherical symmetry [1]. A unilateral rupture can be viewed as a sequence of shocks produced along certain paths on the fault. According to this model, the seismic record at any point on the Earth's surface contains a signature of the rupture process that originated the recorded waveform. Calculating the rupture direction and velocity by a general Doppler equation (the goal of this work), using a dataset of common time-delays read from waveforms recorded at different distances around the epicenter, requires the normalization of the measurements to a standard value of slowness. This normalization involves a non-linear inversion that we solve numerically using an iterative least-squares approach. The performance of this technique was evaluated through a set of synthetic and real applications. We present the application of the method to four real case studies, the following earthquakes: Arequipa, Peru (Mw = 8.4, June 23, 2001); Denali, AK, USA (Mw = 7.8, November 3, 2002); Zemmouri-Boumerdes, Algeria (Mw = 6.8, May 21, 2003); and Sumatra, Indonesia (Mw = 9.3, December 26, 2004). The results obtained from the dataset of the four earthquakes agreed, in general, with the values presented by other authors using different methods and data.
[1] Caldeira B., Bezzeghoud M, Borges JF, 2009; DIRDOP: a directivity approach to determining the seismic rupture velocity vector. J Seismology, DOI 10.1007/s10950-009-9183-x
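The azimuthal signature of unilateral directivity can be illustrated with a much simpler fit than the full DIRDOP inversion: the apparent source duration varies with station azimuth roughly as tau(theta) = t0 - A*cos(theta - theta_r), which is linear in cos(theta) and sin(theta), so plain least squares recovers the rupture azimuth. This is a simplified sketch, not the authors' method; the function name and parameterization are assumptions.

```python
import numpy as np

def fit_directivity(azimuths_deg, durations):
    """Fit tau(theta) = t0 - a*cos(theta) - b*sin(theta) by least squares.

    Returns (t0, rupture_azimuth_deg, directivity_amplitude), since
    a = A*cos(theta_r) and b = A*sin(theta_r)."""
    th = np.radians(azimuths_deg)
    G = np.column_stack([np.ones_like(th), -np.cos(th), -np.sin(th)])
    (t0, a, b), *_ = np.linalg.lstsq(G, durations, rcond=None)
    az_r = np.degrees(np.arctan2(b, a)) % 360.0   # rupture direction
    return t0, az_r, np.hypot(a, b)
```

With well-distributed azimuthal coverage this linearized fit is exact; the nonlinearity in the real problem enters through the slowness normalization described above.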
Evaluation of seismic performance of reinforced concrete (RC) buildings under near-field earthquakes
NASA Astrophysics Data System (ADS)
Moniri, Hassan
2017-03-01
Near-field ground motions affect the seismic response of structures far more severely than far-field ground motions, because near-source forward-directivity ground motions contain long-period pulses; the cumulative effects of far-fault records are therefore minor. The damage and collapse of engineering structures observed in the earthquakes of recent decades show the potential for damage in existing structures under near-field ground motions. One important subject studied by earthquake engineers as part of a performance-based approach is the determination of demand and collapse capacity under near-field earthquakes. Different methods for evaluating seismic structural performance have been suggested along with, and as part of, the development of performance-based earthquake engineering. This study investigated the effects of the distinctive characteristics of near-fault ground motions on the seismic response of reinforced concrete (RC) structures using the Incremental Nonlinear Dynamic Analysis (IDA) method. Because different ground motions result in different intensity-versus-response plots, the analysis is repeated under various ground motions in order to achieve significant statistical averages. The OpenSees software was used to conduct the nonlinear structural evaluations. Numerical modelling showed that near-source effects cause most of the seismic energy from the rupture to arrive in a single coherent long-period pulse of motion, together with permanent ground displacements. Finally, the vulnerability of RC buildings can be evaluated against the effects of pulse-like near-fault ground motions.
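The IDA bookkeeping can be sketched as follows: scale one record to increasing intensity levels, run a nonlinear time-history analysis at each level, and record the peak response. The toy elastoplastic single-degree-of-freedom oscillator below stands in for the RC frame models of the study (which were built in OpenSees); all parameter values are illustrative assumptions.

```python
import numpy as np

def sdof_peak_disp(accel, dt, omega=2*np.pi, fy=np.inf, zeta=0.05):
    """Peak displacement of an elastoplastic SDOF oscillator (unit mass)
    under base acceleration `accel`, integrated by central differences."""
    k = omega**2                  # stiffness for unit mass
    c = 2.0 * zeta * omega        # viscous damping coefficient
    n = len(accel)
    u = np.zeros(n)
    up = 0.0                      # plastic displacement
    for i in range(1, n - 1):
        fs = k * (u[i] - up)      # elastic trial restoring force
        if abs(fs) > fy:          # yielding: cap force, update plastic slip
            fs = np.sign(fs) * fy
            up = u[i] - fs / k
        v = (u[i] - u[i - 1]) / dt
        u[i + 1] = 2*u[i] - u[i - 1] + dt**2 * (-accel[i] - c*v - fs)
    return np.abs(u).max()

def ida_curve(accel, dt, scales, **kw):
    """One IDA curve: (intensity scale factor, peak response) pairs."""
    return [(s, sdof_peak_disp(s * accel, dt, **kw)) for s in scales]
```

For an elastic system the curve is a straight line (response proportional to intensity); yielding introduces the characteristic flattening that IDA uses to identify collapse capacity.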
Federal Register 2010, 2011, 2012, 2013, 2014
2010-03-22
... Staff Guidance COL/ISG-020, ``Implementation of a Seismic Margin Analysis for New Reactors Based on Probabilistic Risk Assessment'' (Agencywide Documents ...) ...
Detection and analysis of a transient energy burst with beamforming of multiple teleseismic phases
NASA Astrophysics Data System (ADS)
Retailleau, Lise; Landès, Matthieu; Gualtieri, Lucia; Shapiro, Nikolai M.; Campillo, Michel; Roux, Philippe; Guilbert, Jocelyn
2018-01-01
Seismological detection methods are traditionally based on picking techniques. These methods cannot be used to analyse emergent signals whose arrivals cannot be picked. Here, we detect and locate seismic events by applying a beamforming method that combines multiple body-wave phases to USArray data. This method exploits the consistency and characteristic behaviour of teleseismic body waves recorded by a large-scale, yet dense, seismic network. We perform time-slowness analysis of the signals and correlate it with the time-slowness signatures of the different body-wave phases predicted by a global traveltime calculator, to determine the occurrence of an event with no a priori information about it. We apply this method continuously to one year of data to analyse the different events that generate signals reaching the USArray network. In particular, we analyse in detail a low-frequency secondary microseismic event that occurred on 2010 February 1. This event, which lasted one day, had a narrow frequency band around 0.1 Hz and occurred at a distance of 150° from the USArray network, south of Australia. We show that the most energetic phase observed is the PKPab phase. Direct amplitude analysis of regional seismograms confirms the occurrence of this event. We compare the seismic observations with models of the spectral density of the pressure field generated by the interference between oceanic waves. We attribute the observed signals to a storm-generated microseismic event that occurred along the South East Indian Ridge.
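The time-slowness analysis can be illustrated with a minimal delay-and-sum beamformer: stack the array traces after plane-wave delay corrections over a grid of horizontal slowness vectors and pick the most coherent stack. This is a generic single-phase sketch, not the authors' multi-phase correlation method; the geometry and units below are assumptions.

```python
import numpy as np

def beamform(traces, coords, dt, slow_grid):
    """Grid search over horizontal slowness: shift-and-stack energy for a
    plane wave crossing the array.

    traces : (n_sta, n_samp), coords : (n_sta, 2) in km,
    slow_grid : iterable of (sx, sy) in s/km."""
    n_sta, n_samp = traces.shape
    t = np.arange(n_samp) * dt
    best, best_s = -np.inf, None
    for sx, sy in slow_grid:
        delays = coords @ np.array([sx, sy])   # plane-wave delays (s)
        stack = np.zeros(n_samp)
        for tr, d in zip(traces, delays):
            # evaluate each trace advanced by its delay, i.e. tr(t + d)
            stack += np.interp(t, t - d, tr, left=0.0, right=0.0)
        e = (stack**2).sum()
        if e > best:
            best, best_s = e, (sx, sy)
    return best_s, best
```

The winning slowness vector is then compared against the time-slowness predictions of candidate body-wave phases to associate the detection with a phase and a source distance.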
Excavatability Assessment of Weathered Sedimentary Rock Mass Using Seismic Velocity Method
NASA Astrophysics Data System (ADS)
Bin Mohamad, Edy Tonnizam; Saad, Rosli; Noor, Muhazian Md; Isa, Mohamed Fauzi Bin Md.; Mazlan, Ain Naadia
2010-12-01
The seismic refraction method is one of the most popular methods for assessing surface excavation. The main objective of the seismic data acquisition is to delineate the subsurface into velocity profiles, as different velocities can be correlated with different materials. The physical principle used for the determination of excavatability is that seismic waves travel faster through denser material than through less consolidated material. In general, a lower velocity indicates softer material, whereas a higher velocity indicates material that is more difficult to excavate. However, a few researchers have noted that the seismic velocity method alone does not correlate well with the excavatability of the material. In this study, the seismic velocity method was used in Nusajaya, Johor, to assess how well it correlates with the excavatability of the weathered sedimentary rock mass. A direct ripping run, in which the actual ripping production was monitored, was carried out at a later stage and compared with the ripper manufacturer's recommendation. This paper presents the findings of the seismic velocity tests in the weathered sedimentary area. The reliability of the method against the actual rippability trials is also presented.
Tomographic imaging of the shallow crustal structure of the East Pacific Rise at 9 deg 30 min N
NASA Astrophysics Data System (ADS)
Toomey, Douglas R.; Solomon, Sean C.; Purdy, G. M.
1994-12-01
Compressional wave travel times from a seismic tomography experiment at 9 deg 30 min N on the East Pacific Rise are analyzed by a new tomographic method to determine the three-dimensional seismic velocity structure of the upper 2.5 km of oceanic crust within a 20 x 18 km area centered on the rise axis. The data comprise the travel times and associated uncertainties of 1459 compressional waves that have propagated above the axial magma chamber. A careful analysis of source and receiver parameters, in conjunction with an automated method of picking P wave onsets and assigning uncertainties, constrains the prior uncertainty in the data to 5 to 20 ms. The new tomographic method employs graph theory to estimate ray paths and travel times through strongly heterogeneous and densely parameterized seismic velocity models. The nonlinear inverse method uses a jumping strategy to minimize a functional that includes the penalty function, horizontal and vertical smoothing constraints, and prior model assumptions; all constraints applied to model perturbations are normalized to remove bias. We use the tomographic method to reject the null hypothesis that the axial seismic structure is two-dimensional. Three-dimensional models reveal a seismic structure that correlates well with cross- and along-axis variations in seafloor morphology, the location of the axial summit caldera, and the distribution of seafloor hydrothermal activity. The along-axis segmentation of the seismic structure above the axial magma chamber is consistent with the hypothesis that mantle-derived melt is preferentially injected midway along a locally linear segment of the rise and that the architecture of the crustal section is characterized by an en echelon series of elongate axial volcanoes approximately 10 km in length. 
The seismic data are compatible with a 300- to 500-m-thick thermal anomaly above a midcrustal melt lens; such an interpretation suggests that hydrothermal fluids may not have penetrated this region in the last 10^3 years. Asymmetries in the seismic structure across the rise support the inferences that the thickness of seismic layer 2 and the average midcrustal temperature increase to the west of the rise axis. These anomalies may be the result of off-axis magmatism; alternatively, the asymmetric thermal anomaly may be the consequence of differences in the depth extent of hydrothermal cooling.
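The graph-theoretic ray tracing mentioned above can be sketched as a shortest-path computation on a gridded slowness model; the minimal version below uses Dijkstra's algorithm with 8-connected neighbours. It illustrates the principle only and is far coarser than the densely parameterized scheme used in the study.

```python
import heapq
import numpy as np

def shortest_path_traveltimes(velocity, h, src):
    """First-arrival traveltimes on a 2-D grid via Dijkstra's algorithm
    (shortest paths through the slowness graph, 8-connected).

    velocity : (ny, nx) grid of velocities, h : node spacing,
    src : (iy, ix) source node."""
    ny, nx = velocity.shape
    slow = 1.0 / velocity
    tt = np.full((ny, nx), np.inf)
    tt[src] = 0.0
    pq = [(0.0, src)]
    nbrs = [(-1,-1), (-1,0), (-1,1), (0,-1), (0,1), (1,-1), (1,0), (1,1)]
    while pq:
        t, (iy, ix) = heapq.heappop(pq)
        if t > tt[iy, ix]:
            continue                      # stale queue entry
        for dy, dx in nbrs:
            jy, jx = iy + dy, ix + dx
            if 0 <= jy < ny and 0 <= jx < nx:
                seg = h * np.hypot(dy, dx)
                # segment time from the average slowness of the two nodes
                tj = t + seg * 0.5 * (slow[iy, ix] + slow[jy, jx])
                if tj < tt[jy, jx]:
                    tt[jy, jx] = tj
                    heapq.heappush(pq, (tj, (jy, jx)))
    return tt
```

In a uniform medium the computed times along grid axes and diagonals reduce exactly to distance divided by velocity; oblique paths carry a small angular discretization error that finer connectivity stencils reduce.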
NASA Astrophysics Data System (ADS)
Huang, Shieh-Kung; Loh, Chin-Hsiung; Chen, Chin-Tsun
2016-04-01
Seismic records from large-magnitude, distant earthquakes may contain long-period seismic waves that have small amplitudes but dominant periods of up to 10 s. In general, long-period seismic waves will not endanger the safety of a structural system or cause any discomfort to human activity. For distant earthquakes, however, this type of seismic wave may cause glitches, or even breakdowns, in important equipment and facilities (such as the high-precision facilities in high-tech fabs) and eventually damage the interests of the company if the amplitude becomes significant. A previous study showed that ground motion features such as time-variant dominant frequencies extracted using moving window singular spectrum analysis (MWSSA), and amplitude characteristics of long-period waves identified from the slope change of the ground motion Arias intensity, can efficiently indicate the damage severity to high-precision facilities. However, embedding a large Hankel matrix to extract long-period seismic waves makes MWSSA a time-consuming process. In this study, seismic ground motion data collected from the broadband seismometer network located in Taiwan were used (with epicenter distances over 1000 km). To monitor significant long-period waves, the low-frequency components of these data are extracted using the wavelet packet transform (WPT), and the wavelet entropy of the coefficients is used to identify the amplitude characteristics of long-period waves. The proposed method is timesaving compared to MWSSA and can easily be implemented for real-time detection. A comparison and discussion of this method among different seismic events and the damage severity to high-precision facilities in high-tech fabs is presented.
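The wavelet-entropy idea can be sketched with an orthonormal Haar decomposition: compute the energy in each subband and take the Shannon entropy of the normalized energy distribution. Energy concentrated in one (e.g. long-period) band gives low entropy; broadband noise gives high entropy. The study uses a wavelet packet transform; the Haar filter, level count, and entropy definition below are illustrative assumptions.

```python
import numpy as np

def haar_subband_energies(x, levels):
    """Energies of the detail subbands (plus the final approximation)
    from an orthonormal Haar wavelet decomposition."""
    a = np.asarray(x, float)
    energies = []
    for _ in range(levels):
        d = (a[0::2] - a[1::2]) / np.sqrt(2.0)   # detail coefficients
        a = (a[0::2] + a[1::2]) / np.sqrt(2.0)   # approximation
        energies.append((d**2).sum())
    energies.append((a**2).sum())
    return np.array(energies)

def wavelet_entropy(x, levels=5):
    """Shannon entropy of the normalized subband energy distribution:
    low when energy is concentrated in one band, high when spread out."""
    e = haar_subband_energies(x, levels)
    p = e / e.sum()
    p = p[p > 0]
    return -(p * np.log(p)).sum()
```

Because the Haar transform is orthonormal, the subband energies sum to the signal energy (Parseval), so the entropy depends only on how that energy is distributed across scales.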
Improving fault image by determination of optimum seismic survey parameters using ray-based modeling
NASA Astrophysics Data System (ADS)
Saffarzadeh, Sadegh; Javaherian, Abdolrahim; Hasani, Hossein; Talebi, Mohammad Ali
2018-06-01
In complex structures such as faults, salt domes and reefs, specifying the survey parameters is more challenging and critical owing to the complicated wave-field behavior involved in such structures. In the petroleum industry, detecting faults is crucial for assessing reservoir potential, since faults can act as traps for hydrocarbons. In this regard, seismic survey modeling is employed to construct a model close to the real structure and obtain realistic synthetic seismic data. Seismic modeling software, the velocity model and parameters pre-determined by conventional methods enable a seismic survey designer to run a shot-by-shot virtual survey operation. A reliable velocity model of the structures can be constructed by integrating 2D seismic data, geological reports and well information. The effects of various survey designs can be investigated through the analysis of illumination maps and flower plots. Furthermore, seismic processing of the synthetic data output can show how the target image changes with different survey parameters. Therefore, seismic modeling is one of the most economical ways to establish and test the optimum acquisition parameters for obtaining the best image when dealing with complex geological structures. The primary objective of this study is to design a proper 3D seismic survey orientation for imaging fault zone structures through ray-tracing seismic modeling. The results prove that a seismic survey designer can enhance the image of fault planes in a seismic section by utilizing the proposed modeling and processing approach.
Parsons, T.; Blakely, R.J.; Brocher, T.M.
2001-01-01
The geologic structure of the Earth's upper crust can be revealed by modeling variation in seismic arrival times and in potential field measurements. We demonstrate a simple method for sequentially satisfying seismic traveltime and observed gravity residuals in an iterative 3-D inversion. The algorithm is portable to any seismic analysis method that uses a gridded representation of velocity structure. Our technique calculates the gravity anomaly resulting from a velocity model by converting to density with Gardner's rule. The residual between calculated and observed gravity is minimized by weighted adjustments to the model velocity-depth gradient where the gradient is steepest and where seismic coverage is least. The adjustments are scaled by the sign and magnitude of the gravity residuals, and a smoothing step is performed to minimize vertical streaking. The adjusted model is then used as a starting model in the next seismic traveltime iteration. The process is repeated until one velocity model can simultaneously satisfy both the gravity anomaly and seismic traveltime observations within acceptable misfits. We test our algorithm with data gathered in the Puget Lowland of Washington state, USA (Seismic Hazards Investigation in Puget Sound [SHIPS] experiment). We perform resolution tests with synthetic traveltime and gravity observations calculated with a checkerboard velocity model using the SHIPS experiment geometry, and show that the addition of gravity significantly enhances resolution. We calculate a new velocity model for the region using SHIPS traveltimes and observed gravity, and show examples where correlation between surface geology and modeled subsurface velocity structure is enhanced.
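The velocity-to-density step can be sketched with Gardner's rule together with a Bouguer-slab forward model. The coefficients below are the commonly published ones (rho[g/cc] = 0.31 Vp^0.25, Vp in m/s), which may differ from those used in the study, and the slab approximation is far simpler than the study's gravity calculation.

```python
import numpy as np

G_2PI = 4.193e-10   # 2*pi*G in SI units (m^3 kg^-1 s^-2)

def gardner_density(vp):
    """Gardner's rule: density (kg/m^3) from P-wave velocity (m/s),
    using the common coefficients rho[g/cc] = 0.31 * Vp**0.25."""
    return 0.31 * vp**0.25 * 1000.0

def slab_gravity_anomaly(vp_column, thickness, vp_ref):
    """Bouguer-slab gravity anomaly (mGal) of a stack of layers whose
    densities deviate from a reference velocity.

    vp_column, thickness : per-layer arrays (m/s, m)."""
    drho = gardner_density(np.asarray(vp_column)) - gardner_density(vp_ref)
    dg = G_2PI * (drho * thickness).sum()   # m/s^2, infinite-slab formula
    return dg * 1e5                          # 1 mGal = 1e-5 m/s^2
```

In the iterative scheme described above, the residual between such a computed anomaly and the observed gravity drives the weighted adjustment of the velocity model before the next traveltime iteration.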
Effects of volcano topography on seismic broad-band waveforms
NASA Astrophysics Data System (ADS)
Neuberg, Jürgen; Pointer, Tim
2000-10-01
Volcano seismology often deals with rather shallow seismic sources and seismic stations deployed in their near field. The complex stratigraphy on volcanoes and near-field source effects have a strong impact on the seismic wavefield, complicating the interpretation techniques that are usually employed in earthquake seismology. In addition, as most volcanoes have a pronounced topography, the interference of the seismic wavefield with the stress-free surface results in severe waveform perturbations that affect seismic interpretation methods. In this study we deal predominantly with the surface effects, but take into account the impact of a typical volcano stratigraphy as well as near-field source effects. We derive a correction term for plane seismic waves and a plane-free surface such that for smooth topographies the effect of the free surface can be totally removed. Seismo-volcanic sources radiate energy in a broad frequency range with a correspondingly wide range of different Fresnel zones. A 2-D boundary element method is employed to study how the size of the Fresnel zone is dependent on source depth, dominant wavelength and topography in order to estimate the limits of the plane wave approximation. This approximation remains valid if the dominant wavelength does not exceed twice the source depth. Further aspects of this study concern particle motion analysis to locate point sources and the influence of the stratigraphy on particle motions. Furthermore, the deployment strategy of seismic instruments on volcanoes, as well as the direct interpretation of the broad-band waveforms in terms of pressure fluctuations in the volcanic plumbing system, are discussed.
Seismic signature of turbulence during the 2017 Oroville Dam spillway erosion crisis
NASA Astrophysics Data System (ADS)
Goodling, Phillip J.; Lekic, Vedran; Prestegaard, Karen
2018-05-01
Knowing the location of large-scale turbulent eddies during catastrophic flooding events improves predictions of erosive scour. The erosion damage to the Oroville Dam flood control spillway in early 2017 is an example of the erosive power of turbulent flow. During this event, a defect in the simple concrete channel quickly eroded into a 47 m deep chasm. Erosion by turbulent flow is difficult to evaluate in real time, but near-channel seismic monitoring provides a tool to evaluate flow dynamics from a safe distance. Previous studies have had limited ability to identify source location or the type of surface wave (i.e., Love or Rayleigh wave) excited by different river processes. Here we use a single three-component seismometer method (frequency-dependent polarization analysis) to characterize the dominant seismic source location and seismic surface waves produced by the Oroville Dam flood control spillway, using the abrupt change in spillway geometry as a natural experiment. We find that the scaling exponent between seismic power and release discharge is greater following damage to the spillway, suggesting additional sources of turbulent energy dissipation excite more seismic energy. The mean azimuth in the 5-10 Hz frequency band was used to resolve the location of spillway damage. Observed polarization attributes deviate from those expected for a Rayleigh wave, though numerical modeling indicates these deviations may be explained by propagation up the uneven hillside topography. Our results suggest frequency-dependent polarization analysis is a promising approach for locating areas of increased flow turbulence. This method could be applied to other erosion problems near engineered structures as well as to understanding energy dissipation, erosion, and channel morphology development in natural rivers, particularly at high discharges.
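The core of a polarization measurement can be sketched as an eigen-analysis of the covariance of the horizontal components: the principal eigenvector gives the dominant azimuth, with an inherent 180° ambiguity. Frequency-dependent polarization analysis applies this per frequency band and also uses the vertical component; the minimal broadband version below is an illustration only.

```python
import numpy as np

def dominant_azimuth(north, east):
    """Azimuth (degrees clockwise from north, modulo 180) of the dominant
    polarization, from the principal eigenvector of the 2x2 covariance
    matrix of the horizontal components."""
    C = np.cov(np.vstack([north, east]))
    vals, vecs = np.linalg.eigh(C)
    n, e = vecs[:, np.argmax(vals)]        # principal axis
    return np.degrees(np.arctan2(e, n)) % 180.0
```

Applied per band (e.g. 5-10 Hz, as in the study), the azimuth estimate points along the dominant source direction, which is how the location of the spillway damage was resolved.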
Bayesian Inference for Signal-Based Seismic Monitoring
NASA Astrophysics Data System (ADS)
Moore, D.
2015-12-01
Traditional seismic monitoring systems rely on discrete detections produced by station processing software, discarding significant information present in the original recorded signal. SIG-VISA (Signal-based Vertically Integrated Seismic Analysis) is a system for global seismic monitoring through Bayesian inference on seismic signals. By modeling signals directly, our forward model is able to incorporate a rich representation of the physics underlying the signal generation process, including source mechanisms, wave propagation, and station response. This allows inference in the model to recover the qualitative behavior of recent geophysical methods including waveform matching and double-differencing, all as part of a unified Bayesian monitoring system that simultaneously detects and locates events from a global network of stations. We demonstrate recent progress in scaling up SIG-VISA to efficiently process the data stream of global signals recorded by the International Monitoring System (IMS), including comparisons against existing processing methods that show increased sensitivity from our signal-based model and in particular the ability to locate events (including aftershock sequences that can tax analyst processing) precisely from waveform correlation effects. We also provide a Bayesian analysis of an alleged low-magnitude event near the DPRK test site in May 2010 [1] [2], investigating whether such an event could plausibly be detected through automated processing in a signal-based monitoring system. [1] Zhang, Miao and Wen, Lianxing. "Seismological Evidence for a Low-Yield Nuclear Test on 12 May 2010 in North Korea". Seismological Research Letters, January/February 2015. [2] Richards, Paul. "A Seismic Event in North Korea on 12 May 2010". CTBTO SnT 2015 oral presentation, video at https://video-archive.ctbto.org/index.php/kmc/preview/partner_id/103/uiconf_id/4421629/entry_id/0_ymmtpps0/delivery/http
Building the Community Online Resource for Statistical Seismicity Analysis (CORSSA)
NASA Astrophysics Data System (ADS)
Michael, A. J.; Wiemer, S.; Zechar, J. D.; Hardebeck, J. L.; Naylor, M.; Zhuang, J.; Steacy, S.; Corssa Executive Committee
2010-12-01
Statistical seismology is critical to the understanding of seismicity, the testing of proposed earthquake prediction and forecasting methods, and the assessment of seismic hazard. Unfortunately, despite its importance to seismology - especially to those aspects with great impact on public policy - statistical seismology is mostly ignored in the education of seismologists, and there is no central repository for the existing open-source software tools. To remedy these deficiencies, and with the broader goal to enhance the quality of statistical seismology research, we have begun building the Community Online Resource for Statistical Seismicity Analysis (CORSSA). CORSSA is a web-based educational platform that is authoritative, up-to-date, prominent, and user-friendly. We anticipate that the users of CORSSA will range from beginning graduate students to experienced researchers. More than 20 scientists from around the world met for a week in Zurich in May 2010 to kick-start the creation of CORSSA: the format and initial table of contents were defined; a governing structure was organized; and workshop participants began drafting articles. CORSSA materials are organized with respect to six themes, each containing between four and eight articles. The CORSSA web page, www.corssa.org, officially unveiled on September 6, 2010, debuts with an initial set of approximately 10 to 15 articles available online for viewing and commenting with additional articles to be added over the coming months. Each article will be peer-reviewed and will present a balanced discussion, including illustrative examples and code snippets. Topics in the initial set of articles will include: introductions to both CORSSA and statistical seismology, basic statistical tests and their role in seismology; understanding seismicity catalogs and their problems; basic techniques for modeling seismicity; and methods for testing earthquake predictability hypotheses. 
A special article will compare and review available statistical seismology software packages.
Possibility of Earthquake-prediction by analyzing VLF signals
NASA Astrophysics Data System (ADS)
Ray, Suman; Chakrabarti, Sandip Kumar; Sasmal, Sudipta
2016-07-01
Prediction of seismic events is one of the most challenging jobs for the scientific community. The conventional way to predict earthquakes is to monitor crustal structure movements, though this method has not yet yielded satisfactory results and fails to give any short-term prediction. Recently, it has been noticed that prior to a seismic event a huge amount of energy is released, which may create disturbances in the lower part of the D-layer/E-layer of the ionosphere. This ionospheric disturbance may be used as a precursor of earthquakes. Since VLF radio waves propagate inside the waveguide formed by the lower ionosphere and the Earth's surface, these signals may be used to identify ionospheric disturbances due to seismic activity. We have analyzed VLF signals to find out the correlations, if any, between VLF signal anomalies and seismic activities. We have carried out both case-by-case studies and a statistical analysis using a whole year of data. In both approaches we found that the night-time amplitude of VLF signals fluctuated anomalously three days before the seismic events. We also found that the terminator time of the VLF signals shifted anomalously towards night time a few days before major seismic events. We calculate the D-layer preparation time and D-layer disappearance time from the VLF signals and observe that both become anomalously high 1-2 days before seismic events. We also found strong evidence indicating that it may be possible to predict the locations of the epicenters of earthquakes in the future by analyzing VLF signals for multiple propagation paths.
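The night-time amplitude anomaly criterion can be sketched as a running-baseline test: flag a day when its amplitude departs from the mean of the preceding window by more than a few standard deviations. The window length and threshold below are illustrative assumptions, not the criteria used in the study.

```python
import numpy as np

def flag_anomalies(amplitude, window=30, nsigma=2.0):
    """Flag days whose night-time VLF amplitude deviates from the mean of
    the preceding `window` days by more than `nsigma` standard deviations."""
    amplitude = np.asarray(amplitude, float)
    flags = np.zeros(len(amplitude), dtype=bool)
    for i in range(window, len(amplitude)):
        past = amplitude[i - window:i]
        mu, sigma = past.mean(), past.std()
        if sigma > 0 and abs(amplitude[i] - mu) > nsigma * sigma:
            flags[i] = True
    return flags
```

Using only the preceding window keeps the test causal, which is what a short-term precursor search requires.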
Blind Source Separation of Seismic Events with Independent Component Analysis: CTBT related exercise
NASA Astrophysics Data System (ADS)
Rozhkov, Mikhail; Kitov, Ivan
2015-04-01
Blind Source Separation (BSS) methods used in signal recovery applications are attractive because they require minimal a priori information about the signals they deal with. Homomorphic deconvolution and cepstrum estimation are probably the only methods used to any extent in CTBT applications that can be attributed to this branch of technology. However, Expert Technical Analysis (ETA) conducted in the CTBTO to improve the estimated values of the standard signal and event parameters according to the Protocol to the CTBT may face problems which cannot be resolved with certified CTBTO applications and may demand specific techniques not presently used. The problem considered within the ETA framework is the unambiguous separation of signals with close arrival times. Here, we examine two scenarios of interest: (1) separation of two almost co-located explosions conducted within fractions of a second of each other, and (2) extraction of explosion signals merged with wavetrains from a strong earthquake. Resolving case 1 is important for correct explosion yield estimation. Case 2 is a well-known scenario for conducting clandestine nuclear tests. While the first case can to some degree be approached with cepstral methods, the second case can hardly be resolved with the conventional methods implemented at the International Data Centre, especially if the signals have close slowness and azimuth. Independent Component Analysis (in its FastICA implementation), which assumes non-Gaussianity of the underlying processes in the signal mixture, is the blind source separation method that we apply to resolve the problems mentioned above. We have tested this technique with synthetic waveforms, seismic data from DPRK explosions and mining blasts conducted within the East European platform, as well as with signals from strong teleseismic events (the Sumatra, April 2012, Mw = 8.6, and Tohoku, March 2011, Mw = 9.0 earthquakes).
The data was recorded by seismic arrays of the International Monitoring System of CTBTO and by small-aperture seismic array Mikhnevo (MHVAR) operated by the Institute of Geosphere Dynamics, Russian Academy of Sciences. Our approach demonstrated a good ability of separation of seismic sources with very close origin times and locations (hundreds of meters), and/or having close arrival times (fractions of seconds), and recovering their waveforms from the mixture. Perspectives and limitations of the method are discussed.
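A numpy sketch of symmetric FastICA with the tanh nonlinearity, the same family of algorithm named above: whiten the mixtures, iterate the fixed-point update, and re-orthonormalize each step. This minimal version omits the refinements of production FastICA implementations and, like all ICA, recovers sources only up to permutation, sign, and scale.

```python
import numpy as np

def fastica(X, n_iter=500, tol=1e-12, seed=0):
    """Symmetric FastICA with tanh nonlinearity.

    X : (n_mixtures, n_samples).  Returns estimated sources
    (same shape, up to permutation, sign and scale)."""
    X = X - X.mean(axis=1, keepdims=True)
    # Whiten via eigen-decomposition of the covariance
    d, E = np.linalg.eigh(np.cov(X))
    Z = (E @ np.diag(d**-0.5) @ E.T) @ X
    n = Z.shape[0]
    W = np.linalg.qr(np.random.default_rng(seed).standard_normal((n, n)))[0]
    for _ in range(n_iter):
        G = np.tanh(W @ Z)
        # Fixed-point update: E[g(y)z^T] - diag(E[g'(y)]) W
        W_new = (G @ Z.T) / Z.shape[1] - np.diag((1 - G**2).mean(axis=1)) @ W
        U, _, Vt = np.linalg.svd(W_new)    # symmetric decorrelation
        W_new = U @ Vt
        if np.max(np.abs(np.abs(np.diag(W_new @ W.T)) - 1)) < tol:
            W = W_new
            break
        W = W_new
    return W @ Z
```

On the classic sine-plus-square mixture the recovered components correlate almost perfectly with the true sources, which is the behaviour the abstract reports for mixed explosion and earthquake wavetrains.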
Report of the Workshop on Extreme Ground Motions at Yucca Mountain, August 23-25, 2004
Hanks, T.C.; Abrahamson, N.A.; Board, M.; Boore, D.M.; Brune, J.N.; Cornell, C.A.
2006-01-01
This Workshop has its origins in the probabilistic seismic hazard analysis (PSHA) for Yucca Mountain, the designated site of the underground repository for the nation's high-level radioactive waste. In 1998 the Nuclear Regulatory Commission's Senior Seismic Hazard Analysis Committee (SSHAC) developed guidelines for PSHA which were published as NUREG/CR-6372, 'Recommendations for probabilistic seismic hazard analysis: guidance on uncertainty and the use of experts' (SSHAC, 1997). This Level-4 study was the most complex PSHA ever undertaken at the time. The procedures, methods, and results of this PSHA are described in Stepp et al. (2001), mostly in the context of a probability of exceedance (hazard) of 10^-4/yr for ground motion at Site A, a hypothetical reference rock outcrop site at the elevation of the proposed emplacement drifts within the mountain. Analysis and inclusion of both aleatory and epistemic uncertainty were significant and time-consuming aspects of the study, which took place over three years and involved several dozen scientists, engineers, and analysts.
Bayesian Estimation of the Spatially Varying Completeness Magnitude of Earthquake Catalogs
NASA Astrophysics Data System (ADS)
Mignan, A.; Werner, M.; Wiemer, S.; Chen, C.; Wu, Y.
2010-12-01
Assessing the completeness magnitude Mc of earthquake catalogs is an essential prerequisite for any seismicity analysis. We employ a simple model to compute Mc in space, based on the proximity to seismic stations in a network. We show that a relationship of the form Mc_pred(d) = a d^b + c, with d the distance to the 5th nearest seismic station, fits the observations well. We then propose a new Mc mapping approach, the Bayesian Magnitude of Completeness (BMC) method, based on a 2-step procedure: (1) a spatial resolution optimization to minimize spatial heterogeneities and uncertainties in Mc estimates and (2) a Bayesian approach that merges prior information about Mc based on the proximity to seismic stations with locally observed values weighted by their respective uncertainties. This new methodology eliminates most weaknesses associated with current Mc mapping procedures: the radius that defines which earthquakes to include in the local magnitude distribution is chosen according to an objective criterion and there are no gaps in the spatial estimation of Mc. The method solely requires the coordinates of seismic stations. Here, we investigate the Taiwan Central Weather Bureau (CWB) earthquake catalog by computing a Mc map for the period 1994-2010.
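The distance-completeness relationship can be illustrated with a small fit: for a fixed exponent b, the model Mc(d) = a d^b + c is linear in (a, c), so a one-dimensional scan over b with a least-squares solve at each step recovers all three parameters. The grid range and synthetic values below are assumptions, not the BMC calibration.

```python
import numpy as np

def fit_mc_distance(d, mc, b_grid=None):
    """Fit Mc(d) = a * d**b + c by scanning the exponent b and solving a
    linear least-squares problem for (a, c) at each trial value."""
    if b_grid is None:
        b_grid = np.linspace(0.05, 2.0, 40)
    best = None
    for b in b_grid:
        G = np.column_stack([d**b, np.ones_like(d)])
        (a, c), *_ = np.linalg.lstsq(G, mc, rcond=None)
        misfit = ((G @ np.array([a, c]) - mc)**2).sum()
        if best is None or misfit < best[0]:
            best = (misfit, a, b, c)
    return best[1], best[2], best[3]   # a, b, c
```

In the BMC framework a relationship fitted this way serves as the prior, which is then merged with locally observed Mc values weighted by their uncertainties.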
Facies analysis of an Upper Jurassic carbonate platform for geothermal reservoir characterization
NASA Astrophysics Data System (ADS)
von Hartmann, Hartwig; Buness, Hermann; Dussel, Michael
2017-04-01
The Upper Jurassic carbonate platform in Southern Germany is an important aquifer for the production of geothermal energy. Several successful projects were realized during the last years. 3D-seismic surveying has been established as a standard method for reservoir analysis and the definition of well paths. A project funded by the federal ministry of economic affairs and energy (BMWi), started in 2015, is a milestone for an exclusively regenerative heat energy supply of Munich. A 3D-seismic survey of 170 square kilometers was acquired and a scientific program was established to analyze the facies distribution within the area (http://www.liag-hannover.de/en/fsp/ge/geoparamol.html). The targets are primarily fault zones, where one expects higher flow rates than within the undisturbed carbonate sediments. However, since a dense net of geothermal plants and wells will not always find appropriate fault areas, the reservoir properties should be analyzed in more detail, e.g. by shifting the viewpoint to karst features and facies distribution. Current facies interpretation concepts are based on the alternation of massive and layered carbonates. Because of the successive erosion of the ancient land surfaces, the interpretation of reefs, an important target, is often difficult. We found that seismic sequence stratigraphy can explain the distribution of seismic patterns and improves the analysis of the different facies. We supported this method with a wavelet transformation of the seismic data, which splits the seismic signal into successive parts of different bandwidths; in particular, the frequency content of the seismic signal, as changed by tuning or dispersion, is extracted. The combination of different frequencies reveals a partition of the platform laterally as well as vertically. A cluster analysis of the wavelet coefficients further improves this picture.
The interpretation shows a division into ramp, inner platform and trough, which were shifted locally and overprinted in time by other objects, like lagoons or reefs and reef mounds. Faults within this area seem to be influenced by the facies distribution; conversely, the deformation along the faults also depended on the different lithologies. The reconstruction of the development of the carbonate platform can also give hints to erosional and karst processes. The results will be included in a numerical model of the geothermal reservoir to analyze the interaction of geothermal wells.
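The band-splitting and coefficient-clustering workflow can be sketched in a few lines of Python. This is a minimal illustration with made-up traces; FFT band energies stand in for a true wavelet transform, and the tiny k-means is for demonstration only, not the authors' actual processing.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical stand-in for waveform windows (n_traces x n_samples);
# real input would be traces extracted around a mapped horizon.
traces = rng.standard_normal((60, 128))
traces[:20] += np.sin(np.linspace(0, 8 * np.pi, 128))   # injected "facies"

def band_energies(trace, n_bands=4):
    """Relative energy in n_bands spectral bands -- a crude stand-in
    for the sub-band energies of a wavelet decomposition."""
    spec = np.abs(np.fft.rfft(trace)) ** 2
    e = np.array([b.sum() for b in np.array_split(spec, n_bands)])
    return e / e.sum()

features = np.array([band_energies(t) for t in traces])

def kmeans(X, k, iters=50, seed=0):
    """Minimal k-means, for illustration only."""
    r = np.random.default_rng(seed)
    centers = X[r.choice(len(X), size=k, replace=False)]
    for _ in range(iters):
        labels = np.argmin(((X[:, None, :] - centers) ** 2).sum(-1), axis=1)
        for j in range(k):
            if np.any(labels == j):          # keep old center if cluster empty
                centers[j] = X[labels == j].mean(axis=0)
    return labels

labels = kmeans(features, k=2)
print(np.bincount(labels))   # trace counts per facies cluster
```

Each trace then carries one facies label, which can be mapped laterally across the survey.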
NASA Astrophysics Data System (ADS)
Faizah Bawadi, Nor; Anuar, Shamilah; Rahim, Mustaqqim A.; Mansor, A. Faizal
2018-03-01
Conventional and seismic methods for determining the ultimate pile bearing capacity were proposed and compared. The Spectral Analysis of Surface Wave (SASW) method, a non-destructive seismic technique that does not require drilling and sampling of soils, was used to determine the shear wave velocity (Vs) and damping (D) profiles of the soil. The soil strength was found to be directly proportional to Vs, and its value has been successfully applied to obtain shallow bearing capacity empirically. A method is proposed in this study to determine the pile bearing capacity using Vs and D measurements for the design of piles, and also as an alternative way to verify the bearing capacity obtained from other conventional methods of evaluation. The objectives of this study are to determine the Vs and D profiles from frequency response data of SASW measurements and to compare the pile bearing capacities obtained from the proposed method and from conventional methods. All SASW test arrays were conducted near the boreholes and locations of the conventional pile load tests. In obtaining the skin and end bearing pile resistance, the Hardin and Drnevich equation was used with reference strains obtained from the method proposed by Abbiss. Back-analysis results of pile bearing capacities from SASW were found to be 18981 kN and 4947 kN, compared to 18014 kN and 4633 kN from IPLT, with differences of 5% and 6% for the Damansara and Kuala Lumpur test sites, respectively. The results of this study indicate that the proposed seismic method has the potential to be used in estimating pile bearing capacity.
Method of migrating seismic records
Ober, Curtis C.; Romero, Louis A.; Ghiglia, Dennis C.
2000-01-01
The present invention provides a method of migrating seismic records that retains the information in the seismic records and allows migration with significant reductions in computing cost. The present invention comprises phase encoding seismic records and combining the encoded seismic records before migration. Phase encoding can minimize the effect of unwanted cross terms while still allowing significant reductions in the cost to migrate a number of seismic records.
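The combining step can be sketched as follows. This is a schematic with made-up array sizes, not the patented algorithm: each frequency-domain shot record is multiplied by a random phase factor and the records are summed, so a single encoded "supershot" is migrated in place of many separate records, with the random phases suppressing the unwanted cross terms on average.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical frequency-domain shot records: n_shots x n_freqs x n_traces.
n_shots, n_freqs, n_traces = 8, 64, 32
records = (rng.standard_normal((n_shots, n_freqs, n_traces))
           + 1j * rng.standard_normal((n_shots, n_freqs, n_traces)))

# Random phase encoding: weight each shot by exp(i * phi_s) and sum, so one
# encoded record replaces n_shots records in the (expensive) migration step.
phases = np.exp(1j * rng.uniform(0.0, 2.0 * np.pi, n_shots))
supershot = np.tensordot(phases, records, axes=1)

print(supershot.shape)   # one (n_freqs x n_traces) record instead of eight
```

Migrating `supershot` once then costs roughly 1/n_shots of migrating every record individually, at the price of residual crosstalk that the phase encoding keeps small.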
NASA Astrophysics Data System (ADS)
Králik, Juraj; Králik, Juraj
2017-07-01
The paper presents results from the deterministic and probabilistic analysis of the accidental torsional effect on reinforced concrete tall buildings due to earthquake events. A core-column structural system with various configurations in plan was considered. The methodology of the seismic analysis of building structures in Eurocode 8 and JCSS 2000 is discussed. The possibility of utilizing the LHS method to analyze extensive and robust tasks in FEM is presented. The influence of various input parameters (material, geometry, soil, masses and others) is considered. The deterministic and probabilistic analysis of the seismic resistance of the structure was carried out in the ANSYS program.
Using seismic derived lithology parameters for hydrocarbon indication
DOE Office of Scientific and Technical Information (OSTI.GOV)
Van Riel, P.; Sisk, M.
1996-08-01
The last two decades have shown a strong increase in the use of seismic amplitude information for direct hydrocarbon indication. However, working with seismic amplitudes (and seismic attributes) has several drawbacks: tuning effects must be handled; quantitative analysis is difficult because seismic amplitudes are not directly related to lithology; and seismic amplitudes are reflection events, making it unclear whether amplitude changes relate to lithology variations above or below the interface. These drawbacks are overcome by working directly on seismic-derived lithology data, lithology being a layer property rather than an interface property. Technology to extract lithology from seismic data has made great strides, and a large range of methods is now available to users, including: (1) bandlimited acoustic impedance (AI) inversion; (2) reconstruction of the low AI frequencies from seismic velocities, from spatial well log interpolation, and using constrained sparse spike inversion techniques; (3) full bandwidth reconstruction of multiple lithology properties (porosity, sand fraction, density, etc.) in time and depth using inverse modeling. For these technologies to be fully leveraged, accessibility by end users is critical. All these technologies are available as interactive 2D and 3D workstation applications, integrated with seismic interpretation functionality. Using field data examples, we demonstrate the impact of these different approaches on deriving lithology, and in particular show how accuracy and resolution are increased as more geologic and well information is added.
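The core of bandlimited acoustic impedance inversion is the layer recursion Z_{k+1} = Z_k (1 + r_k) / (1 - r_k), which turns an interface property (reflectivity r) into a layer property (impedance Z). A toy round trip, with made-up reflectivity values:

```python
import numpy as np

# Toy reflectivity series and starting impedance (illustrative values only).
r = np.array([0.05, -0.02, 0.10, 0.00, -0.04])
z0 = 5000.0   # impedance of the first layer

# Recursive impedance reconstruction: Z_{k+1} = Z_k * (1 + r_k) / (1 - r_k).
z = [z0]
for rk in r:
    z.append(z[-1] * (1 + rk) / (1 - rk))
z = np.array(z)

# Round-trip check: reflectivity recovered from the impedance profile via
# r_k = (Z_{k+1} - Z_k) / (Z_{k+1} + Z_k).
r_back = (z[1:] - z[:-1]) / (z[1:] + z[:-1])
print(np.allclose(r_back, r))   # True
```

In practice the seismic trace only supplies the bandlimited part of r, which is why the low AI frequencies must be reconstructed from velocities or well logs, as the abstract notes.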
Seismic detection and analysis of icequakes at Columbia Glacier, Alaska
O'Neel, Shad; Marshall, Hans P.; McNamara, Daniel E.; Pfeffer, William Tad
2007-01-01
Contributions to sea level rise from rapidly retreating marine-terminating glaciers are large and increasing. Strong increases in iceberg calving occur during retreat, which allows mass transfer to the ocean at a much higher rate than possible through surface melt alone. To study this process, we deployed an 11-sensor passive seismic network at Columbia Glacier, Alaska, during 2004–2005. We show that calving events generate narrow-band seismic signals, allowing frequency domain detections. Detection parameters were determined using direct observations of calving and validated using three statistical methods and hypocenter locations. The 1–3 Hz detections provide a good measure of the temporal distribution and size of calving events. Possible source mechanisms for the unique waveforms are discussed, and we analyze potential forcings for the observed seismicity.
Global regionalized seismicity in view of Non-Extensive Statistical Physics
NASA Astrophysics Data System (ADS)
Chochlaki, Kalliopi; Vallianatos, Filippos; Michas, Georgios
2018-03-01
In the present work we study the distribution of the Earth's shallow seismicity in different seismic zones, as it occurred from 1981 to 2011, extracted from the Centroid Moment Tensor (CMT) catalog. Our analysis is based on the subdivision of the Earth's surface into seismic zones that are homogeneous with regard to seismic activity and the orientation of the predominant stress field. For this, we use the Flinn-Engdahl (FE) regionalization (Flinn and Engdahl, 1965), which consists of fifty seismic zones, as modified by Lombardi and Marzocchi (2007). The latter authors grouped the 50 FE zones into larger, tectonically homogeneous ones, utilizing the cumulative moment tensor method, resulting in thirty-nine seismic zones. In each of these seismic zones we study the distribution of seismicity in terms of the frequency-magnitude distribution and the inter-event time distribution between successive earthquakes, a task that is essential for hazard assessment and for better understanding global and regional geodynamics. In our analysis we use non-extensive statistical physics (NESP), which seems to be one of the most adequate and promising methodological tools for analyzing complex systems such as the Earth's seismicity, introducing the q-exponential formulation as the probability distribution function that maximizes the Sq entropy defined by Tsallis (1988). The qE parameter is significantly greater than one for all the seismic regions analyzed, with values ranging from 1.294 to 1.504, indicating that magnitude correlations are particularly strong. Furthermore, the qT parameter shows some temporal correlations, and its variation with cut-off magnitude shows greater temporal correlations when the smaller-magnitude earthquakes are included.
The qT for earthquakes with magnitude greater than 5 takes values from 1.043 to 1.353, and as we increase the cut-off magnitude to 5.5 and 6 the qT value ranges from 1.001 to 1.242 and from 1.001 to 1.181, respectively, presenting a significant decrease. Our findings support the idea of universality within the Tsallis approach to describing the Earth's seismicity and present strong evidence of temporal clustering and long-range correlations of seismicity in each of the tectonic zones analyzed.
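The q-exponential at the heart of the NESP description can be written down directly. The parameter values below are illustrative, not the fitted zone values from the study:

```python
import numpy as np

def q_exp(x, q):
    """Tsallis q-exponential: exp_q(x) = [1 + (1-q) x]^(1/(1-q)),
    which reduces to the ordinary exp(x) as q -> 1."""
    if abs(q - 1.0) < 1e-12:
        return np.exp(x)
    base = 1.0 + (1.0 - q) * np.asarray(x, dtype=float)
    return np.where(base > 0, base ** (1.0 / (1.0 - q)), 0.0)

# Survival function of inter-event times in the NESP description:
# P(>tau) = exp_q(-tau / tau0), with q_T and tau0 fitted per seismic zone.
tau = np.linspace(0.0, 10.0, 6)
print(np.round(q_exp(-tau / 2.0, 1.2), 4))
```

For q > 1 the tail decays as a power law rather than exponentially, which is what encodes the long-range temporal correlations the abstract reports.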
Automatic classification of seismic events within a regional seismograph network
NASA Astrophysics Data System (ADS)
Tiira, Timo; Kortström, Jari; Uski, Marja
2015-04-01
A fully automatic method for seismic event classification within a sparse regional seismograph network is presented. The tool is based on a supervised pattern recognition technique, the Support Vector Machine (SVM), trained here to distinguish weak local earthquakes from a bulk of human-made or spurious seismic events. The classification rules rely on differences in signal energy distribution between natural and artificial seismic sources. Seismic records are divided into four windows: P, P coda, S, and S coda. For each signal window the short-term average (STA) is computed in 20 narrow frequency bands between 1 and 41 Hz. The resulting 80 discrimination parameters are used as training data for the SVM. SVM models are calculated for 19 on-line seismic stations in Finland. The event data are compiled mainly from fully automatic event solutions that are manually classified after the automatic location process. The station-specific SVM training sets include 11-302 positive (earthquake) and 227-1048 negative (non-earthquake) examples. The best voting rules for combining results from different stations are determined during an independent testing period. Finally, the network processing rules are applied to an independent evaluation period comprising 4681 fully automatic event determinations, of which 98 % have been manually identified as explosions or noise and 2 % as earthquakes. The SVM method correctly identifies 94 % of the non-earthquakes and all of the earthquakes. The results imply that the SVM tool can identify and filter out blasts and spurious events from fully automatic event solutions with a high level of confidence. The tool helps to reduce the workload in manual seismic analysis by leaving only ~5 % of the automatic event determinations, i.e. the probable earthquakes, for more detailed seismological analysis. The approach presented is easy to adjust to the requirements of a denser or wider high-frequency network, once enough training examples for building a station-specific data set are available.
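The 80-parameter feature vector (four signal windows times 20 narrow bands) can be sketched as follows. The synthetic record, sampling rate and equal-quarter windowing are stand-ins; in the real workflow the windows come from the P and S picks of the automatic locator.

```python
import numpy as np

fs = 100.0
rng = np.random.default_rng(3)
record = rng.standard_normal(int(20 * fs))   # hypothetical 20 s record

# Four signal windows (P, P coda, S, S coda); equal quarters here stand in
# for the real windows picked around the phase arrivals.
windows = np.array_split(record, 4)

def sta_in_bands(seg, fs, n_bands=20, f_lo=1.0, f_hi=41.0):
    """Average spectral amplitude in n_bands narrow bands between f_lo and
    f_hi Hz -- a simple proxy for the per-band short-term average (STA)."""
    freqs = np.fft.rfftfreq(seg.size, 1.0 / fs)
    amp = np.abs(np.fft.rfft(seg))
    edges = np.linspace(f_lo, f_hi, n_bands + 1)
    return np.array([amp[(freqs >= lo) & (freqs < hi)].mean()
                     for lo, hi in zip(edges[:-1], edges[1:])])

# 4 windows x 20 bands = the 80 discrimination parameters per event/station.
features = np.concatenate([sta_in_bands(w, fs) for w in windows])
print(features.shape)   # (80,)
```

One such vector per event and station is what the station-specific SVM is trained on.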
Time Analysis of Building Dynamic Response Under Seismic Action. Part 2: Example of Calculation
NASA Astrophysics Data System (ADS)
Ufimtcev, E. M.
2017-11-01
The second part of the article illustrates the use of the time analysis method (TAM) through the calculation of a 3-storey building, whose design dynamic model (DDM) is adopted in the form of a flat vertical cantilever rod with 3 horizontal degrees of freedom associated with the floor and roof levels. The parameters of the natural oscillations (frequencies and modes) are presented, together with the results of the calculation of the elastic forced oscillations of the building's DDM: oscillograms of the reaction parameters over the time interval t ∈ [0; 131.25] s. The obtained results are analyzed on the basis of the computed residual of the DDM motion equation and a comparison of the results calculated with the numerical approach (FEM) and with the normative method set out in SP 14.13330.2014 "Construction in Seismic Regions". The analysis testifies to the correctness of the computational model as well as the high accuracy of the results obtained. In conclusion, it is argued that the use of the TAM will improve the strength of buildings and structures subject to seismic influences when designing them.
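For a lumped-mass cantilever model like this, the natural frequencies follow from the generalized eigenproblem K v = omega^2 M v. A sketch with made-up masses and storey stiffnesses (not the article's values):

```python
import numpy as np

# Hypothetical 3-storey shear building: lumped floor masses (kg) and
# storey stiffnesses (N/m); the numbers are illustrative only.
M = np.diag([2.0e5, 2.0e5, 1.5e5])
k1, k2, k3 = 3.0e8, 2.5e8, 2.0e8
K = np.array([[k1 + k2, -k2,     0.0],
              [-k2,     k2 + k3, -k3],
              [0.0,     -k3,     k3]])

# Reduce K v = w^2 M v to a standard symmetric eigenproblem with M^(-1/2),
# then recover the natural frequencies in Hz.
M_inv_sqrt = np.diag(1.0 / np.sqrt(np.diag(M)))
w2 = np.linalg.eigh(M_inv_sqrt @ K @ M_inv_sqrt)[0]
freqs_hz = np.sqrt(w2) / (2.0 * np.pi)
print(np.round(freqs_hz, 2))   # ascending natural frequencies in Hz
```

The eigenvectors of the same problem give the mode shapes used for the forced-oscillation response.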
Picozzi, Matteo; Milkereit, Claus; Parolai, Stefano; Jaeckel, Karl-Heinz; Veit, Ingo; Fischer, Joachim; Zschau, Jochen
2010-01-01
Over the last few years, the analysis of seismic noise recorded by two-dimensional arrays has been confirmed to be capable of deriving the subsoil shear-wave velocity structure down to several hundred meters depth. In fact, using just a few minutes of seismic noise recordings and combining this with the well-known horizontal-to-vertical method, it has also been shown that it is possible to investigate the average one-dimensional velocity structure below an array of stations in urban areas, with sufficient resolution, to depths that would be prohibitive with active-source array surveys, while in addition reducing the number of boreholes required to be drilled for site-effect analysis. However, the high cost of standard seismological instrumentation limits the number of sensors generally available for two-dimensional array measurements (i.e., of the order of 10), limiting the resolution of the estimated shear-wave velocity profiles. Therefore, new themes in site-effect estimation research by two-dimensional arrays involve the development and application of low-cost instrumentation, which potentially allows the performance of dense-array measurements, and the development of dedicated signal-analysis procedures for rapid and robust estimation of shear-wave velocity profiles. In this work, we present novel low-cost wireless instrumentation for dense two-dimensional ambient seismic noise array measurements that allows the real-time analysis of the surface wavefield and the rapid estimation of the local shear-wave velocity structure for site response studies. We first introduce the general philosophy of the new system, as well as the hardware and software that form the novel instrument, which we have tested in laboratory and field studies. PMID:22319298
Seismic modeling of complex stratified reservoirs
NASA Astrophysics Data System (ADS)
Lai, Hung-Liang
Turbidite reservoirs in deep-water depositional systems, such as the oil fields in the offshore Gulf of Mexico and North Sea, are becoming an important exploration target in the petroleum industry. Accurate seismic reservoir characterization, however, is complicated by the heterogeneity of the sand and shale distribution and also by the lack of resolution when imaging thin channel deposits. Amplitude variation with offset (AVO) is a very important technique that is widely applied to locate hydrocarbons. Because of these problems, inaccurate estimates of seismic reflection amplitudes may result in misleading interpretations when AVO is applied to turbidite reservoirs. Therefore, an efficient, accurate, and robust method of modeling seismic responses for such complex reservoirs is crucial and necessary to reduce exploration risk. A fast and accurate approach to generating synthetic seismograms for such reservoir models combines wavefront-construction ray tracing with composite reflection coefficients in a hybrid modeling algorithm. The wavefront-construction approach is a modern, fast implementation of ray tracing that I have extended to model quasi-shear wave propagation in anisotropic media. Composite reflection coefficients, which are computed using propagator matrix methods, provide the exact seismic reflection amplitude for a stratified reservoir model. This is a distinct improvement over conventional AVO analysis based on a model with only two homogeneous half-spaces. I combine the two methods to compute synthetic seismograms for test models of turbidite reservoirs in the Ursa field, Gulf of Mexico, validating the new results against exact calculations using the discrete wavenumber method. The new method, however, can also be used to generate synthetic seismograms for laterally heterogeneous, complex stratified reservoir models. The results show an important frequency dependence that may be useful for exploration.
Because turbidite channel systems often display complex vertical and lateral heterogeneity that is difficult to measure directly, stochastic modeling is often used to predict the range of possible seismic responses. Though binary models containing mixtures of sands and shales have been proposed in previous work, log measurements show that these are not good representations of real seismic properties. Therefore, I develop a new, more realistic approach for generating stochastic turbidite models (STM) from a combination of geological interpretation and well log measurements. Calculations of the composite reflection coefficient and synthetic seismograms predict direct hydrocarbon indicators associated with such turbidite sequences. The STMs provide important insights into predicting the seismic responses of complex turbidite reservoirs. The AVO responses predict the presence of gas saturation in the sand beds. For example, as the source frequency increases, the uncertainty in the AVO responses for brine and gas sands indicates the possibility of false interpretations in AVO analysis.
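As a baseline against the exact propagator-matrix amplitudes described above, the simplest possible synthetic is the 1-D convolutional model: reflectivity convolved with a wavelet. The impedance values, interface positions and wavelet frequency below are made up for illustration.

```python
import numpy as np

dt = 0.002                                # 2 ms sampling
t = np.arange(-0.1, 0.1, dt)

def ricker(t, f0=30.0):
    """Zero-phase Ricker wavelet with peak frequency f0 (Hz)."""
    a = (np.pi * f0 * t) ** 2
    return (1.0 - 2.0 * a) * np.exp(-a)

# Reflectivity from a toy stratified impedance model (illustrative values).
z = np.array([4000.0, 4400.0, 4100.0, 5000.0, 4800.0])
r = (z[1:] - z[:-1]) / (z[1:] + z[:-1])
refl = np.zeros(200)
refl[[40, 80, 120, 160]] = r              # interface positions in samples

# 1-D convolutional synthetic: the crude stand-in that the composite
# reflection coefficients improve upon for thin, stratified layers.
synthetic = np.convolve(refl, ricker(t), mode="same")
print(synthetic.shape)
```

The convolutional model treats each interface independently, which is exactly what breaks down for thin beds and is why the propagator-matrix composite coefficients matter here.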
NASA Astrophysics Data System (ADS)
Wolbang, Daniel; Biernat, Helfried; Schwingenschuh, Konrad; Eichelberger, Hans; Prattes, Gustav; Besser, Bruno; Boudjada, Mohammed Y.; Rozhnoi, Alexander; Solovieva, Maria; Biagi, Pier Francesco; Friedrich, Martin
2013-04-01
We present a comparative study of seismic and non-seismic sub-ionospheric VLF anomalies. Our method is based on parameter variations of the sub-ionospheric VLF waveguide formed by the surface and the lower ionosphere. The radio links used operate in the frequency range between 10 and 50 kHz, and the receivers are part of the European and Russian networks. Various authors have investigated lithospheric-atmospheric-ionospheric coupling and predicted a lowering of the ionosphere over earthquake preparation zones [1]. The received nighttime signal of a sub-ionospheric waveguide depends strongly on the height of the ionospheric E-layer, typically 80 to 85 km. This height is characterized by a typical gradient of the electron density near the atmospheric-ionospheric boundary [2]. In recent years it has turned out that one of the major issues in sub-ionospheric seismo-electromagnetic VLF studies is the non-seismic influences on the links, which have to be carefully characterized. Among others, these can be traveling ionospheric disturbances, geomagnetic storms, and electron precipitation. Our emphasis is on the analysis of daily, monthly and annual variations of the VLF amplitude. To improve the statistics we investigate the behavior and typical variations of the VLF amplitude and phase over a period of more than 2 years. One important parameter considered is the rate at which the fluctuations fall below a significance level derived from a mean value. The temporal variations and amplitudes of these depressions are studied over several years for sub-ionospheric VLF radio links with receivers in Graz and Kamchatka. In order to study the difference between seismic and non-seismic disturbances in the lower ionosphere, a power spectrum analysis of the received signal is also performed. We are especially interested in variations with periods T > 6 min, which are typical for atmospheric gravity waves causing the lithospheric-atmospheric-ionospheric coupling [3].
All measured and derived VLF parameters are compared with VLF observations several weeks before an earthquake (e.g. L'Aquila, Italy, April 6, 2009) and with co- and post-seismic phenomena. It is shown that this comparative study will improve the one-parameter seismo-electromagnetic VLF methods. References: [1] A. Molchanov, M. Hayakawa: Seismo-Electromagnetics and Related Phenomena: History and Latest Results, Terrapub, 2008. [2] S. Pulinets, K. Boyarchuk: Ionospheric Precursors of Earthquakes, Springer, 2004. [3] A. Rozhnoi et al.: Observation evidences of atmospheric Gravity Waves induced by seismic activity from analysis of subionospheric LF signal spectra, Natural Hazards and Earth System Sciences, 7, 625-628, 2007.
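A power-spectrum search for gravity-wave-period (T > 6 min) variations can be sketched with a plain periodogram. The sampling interval, record length and injected 10-minute oscillation are hypothetical, chosen only to show the mechanics.

```python
import numpy as np

# Hypothetical VLF amplitude samples: one value every 20 s over one night.
dt = 20.0                                  # seconds per sample
n = 1024
t = np.arange(n) * dt
rng = np.random.default_rng(4)
amp = rng.standard_normal(n) + 0.5 * np.sin(2 * np.pi * t / 600.0)  # 10 min wave

# Simple periodogram of the demeaned amplitude record.
freqs = np.fft.rfftfreq(n, dt)
psd = np.abs(np.fft.rfft(amp - amp.mean())) ** 2 / n
peak_idx = 1 + np.argmax(psd[1:])          # skip the DC bin
peak_period = 1.0 / freqs[peak_idx]
print(peak_period > 6 * 60)   # True: dominant variation has period > 6 min
```

In the study's setting, persistent spectral power at periods above 6 minutes in the pre-earthquake interval is the signature attributed to atmospheric gravity waves.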
Finite element analyses for seismic shear wall international standard problem
DOE Office of Scientific and Technical Information (OSTI.GOV)
Park, Y.J.; Hofmayer, C.H.
Two identical reinforced concrete (RC) shear walls, which consist of a web, flanges and massive top and bottom slabs, were tested up to ultimate failure under earthquake motions at the Nuclear Power Engineering Corporation's (NUPEC) Tadotsu Engineering Laboratory, Japan. NUPEC provided the dynamic test results to the OECD (Organization for Economic Cooperation and Development) Nuclear Energy Agency (NEA) for use as an International Standard Problem (ISP). The shear walls were intended to be part of a typical reactor building. One of the major objectives of the Seismic Shear Wall ISP (SSWISP) was to evaluate various seismic analysis methods for concrete structures used for design and seismic margin assessment. It also offered a unique opportunity to assess the state of the art in nonlinear dynamic analysis of reinforced concrete shear wall structures under severe earthquake loadings. As a participant of the SSWISP workshops, Brookhaven National Laboratory (BNL) performed finite element analyses under the sponsorship of the U.S. Nuclear Regulatory Commission (USNRC). Three types of analysis were performed, i.e., monotonic static (push-over), cyclic static and dynamic analyses. Additional monotonic static analyses were performed by two consultants, F. Vecchio of the University of Toronto (UT) and F. Filippou of the University of California at Berkeley (UCB). The analysis results by BNL and the consultants were presented during the second workshop in Yokohama, Japan in 1996. A total of 55 analyses were presented during the workshop by 30 participants from 11 different countries. The major findings on the presented analysis methods, as well as engineering insights regarding the applicability and reliability of the FEM codes, are described in detail in this report. 16 refs., 60 figs., 16 tabs.
NASA Astrophysics Data System (ADS)
Wapenaar, Kees; van der Neut, Joost; Ruigrok, Elmer; Draganov, Deyan; Hunziker, Jürg; Slob, Evert; Thorbecke, Jan; Snieder, Roel
2011-06-01
Seismic interferometry, also known as Green's function retrieval by crosscorrelation, has a wide range of applications, ranging from surface-wave tomography using ambient noise, to creating virtual sources for improved reflection seismology. Despite its successful applications, the crosscorrelation approach also has its limitations. The main underlying assumptions are that the medium is lossless and that the wavefield is equipartitioned. These assumptions are in practice often violated: the medium of interest is often illuminated from one side only, the sources may be irregularly distributed, and losses may be significant. These limitations may partly be overcome by reformulating seismic interferometry as a multidimensional deconvolution (MDD) process. We present a systematic analysis of seismic interferometry by crosscorrelation and by MDD. We show that for the non-ideal situations mentioned above, the correlation function is proportional to a Green's function with a blurred source. The source blurring is quantified by a so-called interferometric point-spread function which, like the correlation function, can be derived from the observed data (i.e. without the need to know the sources and the medium). The source of the Green's function obtained by the correlation method can be deblurred by deconvolving the correlation function for the point-spread function. This is the essence of seismic interferometry by MDD. We illustrate the crosscorrelation and MDD methods for controlled-source and passive-data applications with numerical examples and discuss the advantages and limitations of both methods.
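The crosscorrelation half of the comparison can be illustrated in a few lines: two receivers record the same (idealized single) noise source with different travel times, and their crosscorrelation peaks at the inter-receiver delay, i.e. the traveltime of the virtual-source Green's function. The delays and record length are made up.

```python
import numpy as np

rng = np.random.default_rng(5)
n = 2048
src = rng.standard_normal(n)          # one noise source (strong idealization)

# Two receivers record the same noise with different travel times.
d1, d2 = 40, 100                      # delays in samples (hypothetical)
rec1 = np.roll(src, d1)
rec2 = np.roll(src, d2)

# The crosscorrelation of the two records peaks at the inter-receiver
# delay: the essence of Green's function retrieval by crosscorrelation.
xc = np.correlate(rec2, rec1, mode="full")
lag = int(np.argmax(xc)) - (n - 1)
print(lag)   # 60 = d2 - d1
```

With one-sided illumination or irregular sources, this peak acquires the "blurred source" the abstract describes, and deconvolving by the interferometric point-spread function (the MDD step) is what removes that blur.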
NASA Astrophysics Data System (ADS)
Hashemi, H.; Tax, D. M. J.; Duin, R. P. W.; Javaherian, A.; de Groot, P.
2008-11-01
Seismic object detection is a relatively new field in which 3-D bodies are visualized and spatial relationships between objects of different origins are studied in order to extract geologic information. In this paper, we propose a method for finding an optimal classifier with the help of a statistical feature ranking technique and by combining different classifiers. The method, which has general applicability, is demonstrated here on a gas chimney detection problem. First, we evaluate a set of input seismic attributes, extracted at locations labeled by a human expert, using regularized discriminant analysis (RDA). In order to find the RDA score for each seismic attribute, forward and backward search strategies are used. Subsequently, two non-linear classifiers, a multilayer perceptron (MLP) and a support vector classifier (SVC), are run on the ranked seismic attributes. Finally, to capitalize on the intrinsic differences between the two classifiers, the MLP and SVC results are combined using the logical rules of maximum, minimum and mean. The proposed method optimizes the ranked feature space size and yields the lowest classification error in the final combined result. We show that the logical minimum reveals gas chimneys that exhibit both the softness of the MLP and the resolution of the SVC classifier.
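The three combination rules are simple element-wise operations on the classifiers' posterior outputs. The probability values below are invented for illustration; in the paper they would be the per-voxel chimney probabilities of the trained MLP and SVC.

```python
import numpy as np

# Hypothetical per-sample "chimney probability" outputs of two classifiers.
p_mlp = np.array([0.9, 0.2, 0.6, 0.8])
p_svc = np.array([0.7, 0.4, 0.3, 0.9])

combined = {
    "max":  np.maximum(p_mlp, p_svc),   # permissive: either may fire
    "min":  np.minimum(p_mlp, p_svc),   # conservative: both must agree
    "mean": (p_mlp + p_svc) / 2.0,      # averaging rule
}
labels = {rule: (p > 0.5).astype(int) for rule, p in combined.items()}
print(labels["min"])   # [1 0 0 1]
```

The minimum rule only flags a sample when both classifiers are confident, which is why it combines the MLP's smoothness with the SVC's resolution in the final chimney bodies.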
Determining the effective system damping of highway bridges.
DOT National Transportation Integrated Search
2009-06-01
This project investigates four methods for modeling modal damping ratios of short-span and isolated concrete bridges subjected to strong ground motion, which can be used for bridge seismic analysis and design based on the response spectrum method...
NASA Astrophysics Data System (ADS)
Schaefer, A. M.; Daniell, J. E.; Wenzel, F.
2014-12-01
Earthquake clustering is an increasingly important part of general earthquake research, especially in terms of seismic hazard assessment and earthquake forecasting and prediction approaches. The distinct identification and definition of foreshocks, aftershocks, mainshocks and secondary mainshocks is carried out using a point-based spatio-temporal clustering algorithm originating from the field of classic machine learning. This can further be applied for declustering purposes, to separate background seismicity from triggered seismicity. The results are interpreted and processed to assemble 3D (x, y, t) earthquake clustering maps, which are based on smoothed seismicity records in space and time. In addition, multi-dimensional Gaussian functions are used to capture clustering parameters for the spatial distribution and dominant orientations. Clusters are further processed using methodologies originating from geostatistics, which have mostly been applied and developed in mining projects during the last decades. A 2.5D variogram analysis is applied to identify spatio-temporal homogeneity in terms of earthquake density and energy output. The results are interpolated using Kriging to provide an accurate mapping solution for clustering features. As a case study, seismic data of New Zealand and the United States are used, covering events since the 1950s, from which an earthquake cluster catalogue is assembled for most of the major events, including a detailed analysis of the Landers and Christchurch sequences.
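A point-based spatio-temporal clustering of this kind can be sketched with a minimal DBSCAN-style algorithm. The toy catalog, the scaling of time against distance, and the eps/min_pts values are all invented for illustration; the paper's actual algorithm and parameters may differ.

```python
import numpy as np
from collections import deque

rng = np.random.default_rng(6)

# Toy catalog in (x km, y km, t days), with time pre-scaled so that one
# day is treated as comparable to one kilometre (a made-up choice).
cluster_a = rng.normal([0.0, 0.0, 0.0], [2.0, 2.0, 1.0], (30, 3))
cluster_b = rng.normal([50.0, 50.0, 100.0], [2.0, 2.0, 1.0], (30, 3))
background = rng.uniform([-100, -100, -50], [150, 150, 200], (20, 3))
events = np.vstack([cluster_a, cluster_b, background])

def st_dbscan(X, eps, min_pts):
    """Minimal DBSCAN-style spatio-temporal clustering; label -1 marks
    background (unclustered) seismicity."""
    n = len(X)
    labels = np.full(n, -1)
    nbrs = [np.flatnonzero(np.linalg.norm(X - X[i], axis=1) <= eps)
            for i in range(n)]
    cid = 0
    for i in range(n):
        if labels[i] != -1 or len(nbrs[i]) < min_pts:
            continue                       # already clustered, or not a core
        labels[i] = cid
        queue = deque([i])
        while queue:                       # grow the cluster from core points
            for j in nbrs[queue.popleft()]:
                if labels[j] == -1:
                    labels[j] = cid
                    if len(nbrs[j]) >= min_pts:
                        queue.append(j)
        cid += 1
    return labels

labels = st_dbscan(events, eps=8.0, min_pts=5)
print(len(set(labels.tolist()) - {-1}))   # dense sequences found (expect 2)
```

Points labeled -1 form the declustered background catalog; the labeled groups are the mainshock-aftershock sequences that feed the subsequent variogram and Kriging steps.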
2.5D S-wave velocity model of the TESZ area in northern Poland from receiver function analysis
NASA Astrophysics Data System (ADS)
Wilde-Piorko, Monika; Polkowski, Marcin; Grad, Marek
2016-04-01
A receiver function (RF) locally provides the signature of sharp seismic discontinuities and information about the shear-wave (S-wave) velocity distribution beneath a seismic station. The data recorded by the "13 BB Star" broadband seismic stations (Grad et al., 2015) and by a few PASSEQ broadband seismic stations (Wilde-Piórko et al., 2008) are analysed to investigate the crustal and upper mantle structure in the Trans-European Suture Zone (TESZ) in northern Poland. The TESZ is one of the most prominent suture zones in Europe, separating the young Palaeozoic platform from the much older Precambrian East European craton. The compilation of over thirty deep seismic refraction and wide-angle reflection profiles, vertical seismic profiling in over one hundred thousand boreholes, and magnetic, gravity, magnetotelluric and thermal methods allowed for the creation of a high-resolution 3D P-wave velocity model down to 60 km depth in the area of Poland (Grad et al. 2016). On the other hand, receiver function methods give an opportunity to create an S-wave velocity model. A modified ray-tracing method (Langston, 1977) is used to calculate the response of a structure with dipping interfaces to an incoming plane wave with fixed slowness and back-azimuth. The 3D P-wave velocity model is interpolated to a 2.5D P-wave velocity model beneath each seismic station, and synthetic back-azimuthal sections of receiver functions are calculated for different Vp/Vs ratios. Densities are calculated with the combined formulas of Berteussen (1977) and Gardner et al. (1974). Next, the synthetic back-azimuthal sections of RF are compared with the observed back-azimuthal sections of RF for the "13 BB Star" and PASSEQ seismic stations to find the best 2.5D S-wave models down to 60 km depth. The National Science Centre Poland provided financial support for this work through NCN grant DEC-2011/02/A/ST10/00284.
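Two of the ingredients above are simple closed-form relations: S-wave velocity from a trial Vp/Vs ratio, and density from Gardner's rule. The sketch below shows only the Gardner et al. (1974) part of the density estimate (the study combines it with Berteussen's formula); the layer velocities are illustrative.

```python
import numpy as np

def gardner_density(vp_m_s):
    """Gardner et al. (1974): rho = 0.31 * Vp^0.25, with Vp in m/s and
    rho in g/cm^3; an empirical rule for typical sedimentary rocks."""
    return 0.31 * vp_m_s ** 0.25

# Convert a 1-D P-wave profile to S-wave velocity and density for one
# trial Vp/Vs ratio, as done per station before the RF forward modelling.
vp = np.array([3000.0, 4500.0, 6000.0, 8000.0])   # illustrative layer Vp
vp_vs = 1.73                                       # trial Vp/Vs ratio
vs = vp / vp_vs
rho = gardner_density(vp)
print(np.round(rho, 2))   # densities in g/cm^3
```

Sweeping `vp_vs` over a range of trial values and regenerating the synthetic RF sections is what lets the misfit against the observed sections pick the best S-wave model.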
NASA Astrophysics Data System (ADS)
Gaudiosi, Germana; Nappi, Rosa; Alessio, Giuliana; Cella, Federico; Fedi, Maurizio; Florio, Giovanni
2014-05-01
The Southern Apennines is one of Italy's most active areas from a geodynamic point of view, since it is characterized by the occurrence of intense and widespread seismic activity. Most seismicity of the area is concentrated along the chain, affecting mainly the Irpinia and Sannio-Matese areas. The seismogenic sources responsible for the destructive events of 1456, 1688, 1694, 1702, 1732, 1805, 1930, 1962 and 1980 (Io = X-XI MCS) occurred mostly on NW-SE faults, and the corresponding hypocenters are concentrated within the upper 20 km of the crust. Structural observations on the Pleistocene faults suggest normal to sinistral movements for the NW-SE trending faults and normal to dextral movements for the NE-SW trending structures. The available focal mechanisms of the largest events show normal solutions consistent with NE-SW extension of the chain. After the large 1980 Irpinia earthquake, the release of seismic energy in the Southern Apennines has been characterized by the occurrence of moderate-energy sequences of mainshock-aftershock type and swarm-type activity with low-magnitude sequences. Low-magnitude (Md<5) historical and recent earthquakes, generally clustered in swarms, have commonly occurred along the NE-SW faults. This paper deals with the integrated analysis of geological and geophysical data in a GIS environment to identify surface, buried and hidden active faults and to characterize their geometry. In particular, we have analyzed structural data, the earthquake space distribution and gravimetric data. The main results of the combined analysis indicate a good correlation between seismicity and Multiscale Derivative Analysis (MDA) lineaments from gravity data. Furthermore, 2D seismic hypocentral locations together with a high-resolution analysis of gravity anomalies have been correlated to estimate the fault system parameters (strike, dip direction and dip angle) through the application of the DEXP method (Depth from Extreme Points).
Seismic Analysis of the 2017 Oroville Dam Spillway Erosion Crisis
NASA Astrophysics Data System (ADS)
Goodling, P.; Lekic, V.; Prestegaard, K. L.
2017-12-01
The outflow channel of the northern California (USA) Oroville Dam suffered catastrophic erosion damage in February and March 2017. High discharges released through the spillway (up to 3,000 m3/s) caused rapid spillway erosion, forming a deep chasm. A repeat LiDAR survey obtained from the California Department of Water Resources indicates that the chasm eroded to a depth of 48 meters. A three-component broadband seismometer (STS-1) operated by the Berkeley Digital Seismological Network recorded microseismic energy produced by the flowing water, providing a natural laboratory to test methods for seismically monitoring sudden catastrophic floods and erosion. In this study, we evaluate the three-component waveforms recorded during five constant-discharge periods - before, during, and after the spillway crisis - each of which had a different channel geometry. We apply frequency-dependent polarization analysis (FDPA; following Park, 1987), which characterizes particle motion at each frequency. The method performs principal component analysis on a spectral covariance matrix computed in one-hour windows and yields the horizontal azimuth, vertical tilt, horizontal phase, and vertical phase of the dominant particle motion. The results indicate a greater vertical component (perhaps roughness-induced) of power over a broad range of frequencies at a given discharge after the formation of the chasm. As the outflow crater developed, the back-azimuth of the primary source of seismic energy changed from the nearby Thermalito Diversion Pool (188 degrees) to the center of the outflow channel (170 degrees). To further analyze the FDPA results, we apply the 2D spectral-element solver package SPECFEM2D (Tromp et al. 2008) and find that local topography should be considered when interpreting the inferred surface waveforms. This research suggests that seismic FDPA analysis may enhance monitoring of changing channel geometry and erosion in large-scale flood events.
The results of this work are compared and contrasted with 3-component seismic observations of cobble-bed stream floods in Maryland.
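As an illustration of the FDPA procedure described above, the following Python sketch builds a spectral covariance matrix from windowed FFTs of a synthetic three-component record and takes the leading eigenvector as the dominant particle motion. The signal parameters, window length, and back-azimuth are illustrative assumptions, not values from the study, and a real implementation would resolve the 180-degree azimuth ambiguity that the absolute values below leave open.

```python
import numpy as np

rng = np.random.default_rng(0)
fs, nt = 100.0, 4096
t = np.arange(nt) / fs

# Synthetic 3-component record (Z, N, E): a 5 Hz wave packet polarized
# at a back-azimuth of ~170 degrees, plus white noise (all illustrative).
baz = np.deg2rad(170.0)
sig = np.sin(2 * np.pi * 5.0 * t) * np.exp(-((t - 20) ** 2) / 50)
data = np.vstack([0.3 * sig,
                  np.cos(baz) * sig,
                  np.sin(baz) * sig]) + 0.05 * rng.normal(size=(3, nt))

# Spectral covariance at one frequency from windowed FFT segments,
# then PCA: the leading eigenvector gives the dominant particle motion.
win = 512
segs = data[:, : (nt // win) * win].reshape(3, -1, win)
S = np.fft.rfft(segs * np.hanning(win), axis=2)            # (3, nseg, nfreq)
f5 = int(5.0 * win / fs)                                   # bin nearest 5 Hz
C = np.einsum('is,js->ij', S[:, :, f5], S[:, :, f5].conj())  # 3x3 Hermitian
vals, vecs = np.linalg.eigh(C)
v = vecs[:, -1]                                            # dominant motion
# Horizontal azimuth of the dominant motion (folded into 0-90 degrees
# by the absolute values; 170 deg folds to 10 deg here).
az = np.degrees(np.arctan2(abs(v[2]), abs(v[1])))
print(f"dominant horizontal azimuth ~ {az:.0f} deg (mod 180)")
```

The same covariance-plus-eigendecomposition step, repeated at every frequency bin, yields the frequency-dependent polarization attributes discussed in the abstract.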
NASA Astrophysics Data System (ADS)
Nakashima, Yoshito; Komatsubara, Junko
Unconsolidated soft sediments deform and mix in complex ways through seismically induced fluidization. Such geological soft-sediment deformation structures (SSDSs) recorded in boring cores were imaged by X-ray computed tomography (CT), which enables visualization of the inhomogeneous spatial distribution of iron-bearing mineral grains, strong X-ray absorbers, in the deformed strata. Multifractal analysis was applied to two-dimensional (2D) CT images with various degrees of deformation and mixing. The results show that the distribution of the iron-bearing mineral grains is multifractal for less deformed/mixed strata and almost monofractal for fully mixed (i.e. almost homogenized) strata. Computer simulations of deformation of real and synthetic digital images were performed using the egg-beater flow model. The simulations successfully reproduced the transformation from multifractal spectra into almost monofractal spectra (i.e. almost convergence on a single point) with increasing deformation/mixing intensity. The present study demonstrates that multifractal analysis coupled with X-ray CT and the mixing flow model is useful for quantifying the complexity of seismically induced SSDSs, and it stands as a novel method for the evaluation of cores for seismic risk assessment.
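The multifractal measurement underlying this kind of analysis can be sketched by box counting: box sums of a normalized 2D image are raised to powers q, and the generalized dimensions D_q are read from log-log slopes. This is a minimal numpy sketch (function name and scale choices are illustrative); in the monofractal, fully homogenized limit described above, every D_q approaches the Euclidean dimension 2.

```python
import numpy as np

def generalized_dimensions(image, qs, sizes):
    """Estimate the generalized (Renyi) dimensions D_q of a 2D measure
    by box counting: slope of the partition-function sum vs log box size."""
    total = image.sum()
    dims = []
    for q in qs:
        ys, xs = [], []
        for s in sizes:
            n = image.shape[0] // s
            boxes = image[:n * s, :n * s].reshape(n, s, n, s).sum(axis=(1, 3))
            p = boxes[boxes > 0] / total            # box probabilities p_i
            if abs(q - 1.0) < 1e-9:                 # D_1, information dimension
                ys.append((p * np.log(p)).sum())
            else:
                ys.append(np.log((p ** q).sum()) / (q - 1))
            xs.append(np.log(s))
        dims.append(np.polyfit(xs, ys, 1)[0])       # log-log slope
    return np.array(dims)

# A homogeneous ("fully mixed") image gives D_q ~ 2 for all q.
uniform = np.ones((256, 256))
dq = generalized_dimensions(uniform, qs=[0.0, 1.0, 2.0], sizes=[4, 8, 16, 32])
```

A heterogeneous grain distribution would instead produce a q-dependent (decreasing) D_q curve, which is the multifractal signature reported for the less deformed strata.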
NASA Astrophysics Data System (ADS)
Goldgruber, Markus; Shahriari, Shervin; Zenz, Gerald
2015-11-01
To reduce natural hazard risks due to, e.g., earthquake excitation, seismic safety assessments are carried out. Especially under severe loading, due to the maximum credible or so-called safety evaluation earthquake, critical infrastructure such as high dams must not fail. However, under high loading, local failure might be allowed as long as the entire structure does not collapse. Hence, for a dam, the loss of sliding stability during a short time period might be acceptable if the cumulative displacements after an event remain below an acceptable value. This performance criterion applies not only to gravity dams but also to rock blocks, as sliding is even more likely in zones of higher seismic activity. Sliding can occur not only along the dam-foundation contact but also on sliding planes formed by geological conditions. This work compares the possible and critical displacements obtained with two methods: the well-known Newmark sliding block analysis and a fluid-foundation-structure interaction simulation with the finite element method. Comparison of the maximum displacements at the end of the seismic event shows that the two methods agree fairly closely for high friction angles, whereas for low friction angles the results differ more. The conclusion is that the commonly used Newmark sliding block analysis and the finite element simulation are only comparable for high friction angles, where this factor dominates the behaviour of the structure. It is worth mentioning that the proposed simulation methods are also applicable to dynamic rock wedge problems and not only to dams.
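Newmark's sliding block analysis mentioned above reduces to a simple integration: the block gains relative velocity whenever the ground acceleration exceeds the yield acceleration, and it accumulates displacement while that velocity remains positive. A minimal one-directional Python sketch (synthetic pulse and yield values are illustrative, not from the study):

```python
import numpy as np

def newmark_displacement(accel, dt, a_yield):
    """Cumulative sliding displacement of a rigid block (one-directional).
    accel: ground acceleration history (m/s^2); a_yield: yield acceleration."""
    v = d = 0.0
    for a in accel:
        if v > 0.0 or a > a_yield:
            v += (a - a_yield) * dt   # relative acceleration -> velocity
            if v <= 0.0:
                v = 0.0               # block re-locks
            else:
                d += v * dt           # velocity -> displacement
    return d

dt = 0.001
t = np.arange(0.0, 4.0, dt)
accel = 3.0 * np.sin(2 * np.pi * 1.0 * t)      # synthetic 1 Hz motion, peak 3 m/s^2
d_low = newmark_displacement(accel, dt, 1.0)   # low friction: block slides
d_high = newmark_displacement(accel, dt, 5.0)  # yield above peak: no sliding
```

Consistent with the comparison in the abstract, raising the yield (friction) level drives the cumulative displacement toward zero, which is the regime where the simple rigid-block model and the full simulation agree best.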
NASA Astrophysics Data System (ADS)
Raziperchikolaee, Samin
The pore pressure variation in an underground formation during hydraulic stimulation of low-permeability formations or CO2 sequestration into saline aquifers can induce microseismicity through fracture generation or pre-existing fracture activation. While the analysis of microseismic data mainly focuses on mapping the location of fractures, the seismic waves generated by microseismic events also contain information for understanding fracture mechanisms through microseismic source analysis. We developed a micro-scale geomechanical, fluid-flow and seismic model that can predict transport and seismic source behavior during rock failure. This model incorporates microseismic source analysis together with the transport properties of fractured and intact rock during damage and failure. The modeling method considers comprehensive grain and cement interaction through a bonded-particle model. As grains deform and microcracks develop in the rock sample, the forces and displacements of the grains involved in bond breakage are measured to determine the seismic moment tensor. In addition, a geometric description of the complex pore structure is regenerated to predict the fluid flow behavior of fractured samples. Numerical experiments are conducted for different intact and fractured digital rock samples, representing various mechanical behaviors of rocks and fracture surface properties, to assess their roles in the seismic and transport properties of rocks during deformation. Studying rock deformation in detail provides an opportunity to understand the relationship between the source mechanisms of microseismic events and the transport properties of damaged rocks, and thereby to better characterize fluid flow behavior in subsurface formations.
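The moment-tensor step described above, assembling M from the forces and positions of grains involved in a bond breakage, can be sketched as a symmetrized sum of force-position dyads, in the spirit of bonded-particle seismicity analyses (e.g., Hazzard and Young). The numbers below are illustrative, not values from the study.

```python
import numpy as np

def bond_breakage_moment_tensor(forces, positions, centroid):
    """Moment tensor from forces acting at contact points around a
    microcrack: M_ij = sum_k f_i^(k) r_j^(k), symmetrized."""
    r = np.asarray(positions) - np.asarray(centroid)
    M = np.asarray(forces).T @ r
    return 0.5 * (M + M.T)

# Opposing contact forces across a broken bond form a linear dipole.
forces = np.array([[1.0, 0.0, 0.0], [-1.0, 0.0, 0.0]])         # N
positions = np.array([[0.001, 0.0, 0.0], [-0.001, 0.0, 0.0]])  # m
M = bond_breakage_moment_tensor(forces, positions, np.zeros(3))
M0 = np.sqrt((M ** 2).sum() / 2.0)   # scalar moment from the tensor norm
```

Decomposing such tensors (isotropic vs. deviatoric parts) is what links the simulated bond breakages to tensile or shear source mechanisms.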
NASA Astrophysics Data System (ADS)
Lee, J.; Kim, T. K.; Kim, W.; Hong, T. K.
2017-12-01
The Korean Peninsula is located in a stable intraplate regime with relatively low seismicity. The seismicity of the Korean Peninsula, however, changed significantly after the 11 March 2011 M9.0 Tohoku-Oki megathrust earthquake. An M5.0 earthquake occurred in 2016 in the region off the southeastern Korean Peninsula. It was the largest event in the region since national seismic monitoring began in 1978. Several nuclear power plants are located near the region, so understanding its seismo-tectonic structures is crucial for mitigation of seismic hazards. Analysis of seismicity may be useful for illuminating fault structures. We investigate the focal mechanism solutions, ambient stress field, and spatial distribution of earthquakes. It is intriguing to note that the number of earthquakes has increased since the 2011 Tohoku-Oki earthquake. We refined the hypocenters of 52 events using a velocity-searching hypocentral inversion method (VELHYPO). We determined the focal mechanism solutions of 25 events using P-wave polarity analysis and long-period waveform inversion. The ambient stress field was inferred from the focal mechanism solutions. Strike-slip events occurred dominantly, although the paleo-tectonic structures suggest the presence of thrust faults in the region. We observe that the compressional stress field is oriented ENE-WSW, which may reflect a combination of lateral compression from the Pacific and Philippine Sea plates. The active strike-slip events and compressional stress field suggest reactivation of paleo-tectonic structures.
Seismic risk analysis for the Babcock and Wilcox facility, Leechburg, Pennsylvania
DOE Office of Scientific and Technical Information (OSTI.GOV)
Not Available
1977-10-21
The results of a detailed seismic risk analysis of the Babcock and Wilcox Plutonium Fuel Fabrication facility at Leechburg, Pennsylvania, are presented. This report focuses on earthquakes; the other natural hazards, addressed in separate reports, are severe weather (strong winds and tornados) and floods. The calculational method used is based on Cornell's work (1968); it has previously been applied to safety evaluations of major projects. The historical seismic record was established after a review of the available literature, consultation with operators of local seismic arrays, and examination of appropriate seismic data bases. Because of the aseismicity of the region around the site, an analysis different from the conventional closest approach in a tectonic province was adopted. Earthquakes as far from the site as 1,000 km were included, as was the possibility of earthquakes at the site. In addition, various uncertainties in the input were explicitly considered in the analysis. The results of the risk analysis, which include a Bayesian estimate of the uncertainties, are presented as return period accelerations. The best estimate curve indicates that the Babcock and Wilcox facility will experience 0.05 g every 220 years and 0.10 g every 1,400 years. The bounding curves roughly represent the one standard deviation confidence limits about the best estimate, reflecting the uncertainty in certain of the inputs. Detailed examination of the results shows that the accelerations are very insensitive to the details of the source region geometries or the historical earthquake statistics in each region, and that each of the source regions contributes almost equally to the cumulative risk at the site. If required for structural analysis, acceleration response spectra for the site can be constructed by scaling the mean response spectrum for alluvium in WASH 1255 by these peak accelerations.
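Under the Poisson occurrence model implicit in return-period hazard curves, the best-estimate values quoted above translate directly into exceedance probabilities over a facility lifetime. A small Python sketch (the 50-year exposure period is an illustrative choice, not from the report):

```python
import math

def exceedance_probability(return_period_years, exposure_years):
    """P(at least one exceedance) under a Poisson occurrence model:
    P = 1 - exp(-t / T), where T is the return period."""
    return 1.0 - math.exp(-exposure_years / return_period_years)

# Best-estimate return periods from the risk curves in the report,
# evaluated over an assumed 50-year facility life.
p_005g = exceedance_probability(220, 50)    # 0.05 g
p_010g = exceedance_probability(1400, 50)   # 0.10 g
print(f"50-yr exceedance: 0.05 g -> {p_005g:.2f}, 0.10 g -> {p_010g:.2f}")
```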
Dimensional Representation and Gradient Boosting for Seismic Event Classification
NASA Astrophysics Data System (ADS)
Semmelmayer, F. C.; Kappedal, R. D.; Magana-Zook, S. A.
2017-12-01
In this research, we conducted experiments with representational structures for 5009 seismic signals, with the intent of finding a method to classify signals as either an explosion or an earthquake in an automated fashion. We also applied a gradient boosted classifier. While perfect classification was not attained (our best model reached approximately 88%), some cases demonstrate that many events can be filtered out as having a very high probability of being explosions or earthquakes, diminishing subject-matter experts' (SME) workload for first-stage analysis. It is our hope that these methods can be refined, further increasing the classification probability.
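Gradient boosting of the kind applied here fits a sequence of weak learners to the pseudo-residuals of a logistic loss. The study used 5009 real signals and richer representations; the following self-contained numpy sketch, using decision stumps on synthetic two-feature data, is only meant to show the mechanics.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy "events": one feature imperfectly separates the two classes.
n = 400
X = rng.normal(size=(n, 2))
y = (X[:, 0] + 0.5 * rng.normal(size=n) > 0).astype(float)  # labels in {0,1}

def fit_stump(X, g):
    """Axis-aligned split minimizing squared error against targets g."""
    best = None
    for j in range(X.shape[1]):
        for thr in np.quantile(X[:, j], np.linspace(0.1, 0.9, 9)):
            left = X[:, j] <= thr
            if left.all() or (~left).all():
                continue
            vl, vr = g[left].mean(), g[~left].mean()
            err = ((g[left] - vl) ** 2).sum() + ((g[~left] - vr) ** 2).sum()
            if best is None or err < best[0]:
                best = (err, j, thr, vl, vr)
    return best[1:]

def predict_stump(stump, X):
    j, thr, vl, vr = stump
    return np.where(X[:, j] <= thr, vl, vr)

# Boosting loop: each stump fits the negative gradient of the logistic
# loss (the residual y - p) and is added with a small learning rate.
F = np.zeros(n)
stumps, lr = [], 0.3
for _ in range(50):
    p = 1.0 / (1.0 + np.exp(-F))
    stump = fit_stump(X, y - p)          # pseudo-residuals
    F += lr * predict_stump(stump, X)
    stumps.append(stump)

acc = ((F > 0) == (y == 1)).mean()
print(f"training accuracy: {acc:.2f}")
```

The sigmoid of the final score F also gives the per-event class probability, which is what allows high-confidence events to be filtered out before SME review.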
DOE Office of Scientific and Technical Information (OSTI.GOV)
Spears, Robert Edward; Coleman, Justin Leigh
Currently the Department of Energy (DOE) and the nuclear industry perform seismic soil-structure interaction (SSI) analysis using equivalent linear numerical analysis tools. For lower levels of ground motion, these tools should produce reasonable in-structure response values for evaluation of existing and new facilities. For larger levels of ground motion, these tools likely overestimate the in-structure response (and therefore structural demand) since they do not consider geometric nonlinearities (such as gapping and sliding between the soil and structure) and are limited in their ability to model nonlinear soil behavior. The current equivalent linear SSI (SASSI) analysis approach either joins the soil and structure together in both tension and compression or releases the soil from the structure for both tension and compression. It also makes linear approximations for material nonlinearities and generalizes energy absorption with viscous damping. This produces the potential for inaccurately establishing where the structural concerns exist and/or inaccurately establishing the amplitude of the in-structure responses. Seismic hazard curves at nuclear facilities have continued to increase over the years as more information has been developed on seismic sources (i.e., faults), additional information has been gathered on seismic events, and additional research has been performed to determine local site effects. Seismic hazard curves are used to develop design basis earthquakes (DBE) that are used to evaluate nuclear facility response. As the seismic hazard curves increase, the input ground motions (DBEs) used to numerically evaluate nuclear facility response increase, causing larger in-structure response. As ground motions increase, so does the importance of including nonlinear effects in numerical SSI models. To include material nonlinearity in the soil and geometric nonlinearity through contact (gapping and sliding), it is necessary to develop a nonlinear time-domain methodology.
This methodology will be known as NonLinear Soil-Structure Interaction (NLSSI). In general, NLSSI analysis should provide a more accurate representation of the seismic demands on nuclear facilities, their systems, and components. INL, in collaboration with a Nuclear Power Plant Vendor (NPP-V), will develop a generic Nuclear Power Plant (NPP) structural design to be used in development of the methodology and for comparison with SASSI. This generic NPP design has been evaluated for the INL soil site because of the ease of access and quality of the site-specific data. It is now being evaluated for a second site at Vogtle, which is located approximately 15 miles east-northeast of Waynesboro, Georgia, adjacent to the Savannah River. The Vogtle site consists of many soil layers spanning down to a depth of 1058 feet. Two soil sites are chosen in order to demonstrate the methodology across multiple soil sites. The project will drive the models (soil and structure) using acceleration time histories of successively increasing amplitude. The models will be run in time-domain codes such as ABAQUS, LS-DYNA, and/or ESSI and compared with the same models run in SASSI. The project is focused on developing and documenting a method for performing time-domain, nonlinear seismic soil-structure interaction (SSI) analysis. Development of this method will provide the Department of Energy (DOE) and industry with another tool to perform seismic SSI analysis.
NASA Astrophysics Data System (ADS)
Liu, Xiaofei; Zhang, Qiuwen
2016-11-01
Studies have considered the many factors involved in the mechanism of reservoir seismicity. Focusing on the correlation between reservoir-induced seismicity and the water level, this study proposes to use copula theory to build a correlation model, analyze their relationships, and perform a risk analysis. The sequences of reservoir-induced seismicity events from 2003 to 2011 in the Three Gorges reservoir in China are used as a case study to test this methodology. We construct four correlation models based on the Gumbel, Clayton, Frank copula and M-copula functions and employ four methods to test the goodness of fit: Q-Q plots, the Kolmogorov-Smirnov (K-S) test, the minimum distance (MD) test and the Akaike Information Criterion (AIC) test. A comparison of the four models shows that the M-copula model fits the sample better than the other three. Based on the M-copula model, we find that for a sudden drawdown of the water level, the probability that the seismic frequency decreases rises markedly, whereas for a sudden rise of the water level, the probability that the seismic frequency increases rises markedly, with the former being greater than the latter. The seismic frequency is mainly distributed in the low-frequency region (Y ≤ 20) for the low water level and in the middle-frequency region (20 < Y ≤ 80) for both the medium and high water levels; the seismic frequency in the high-frequency region (Y > 80) is the least likely. For the conditional return period, the period of the high-frequency seismicity is much longer than those of the normal- and medium-frequency seismicity, and the high water level shortens the periods.
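Of the copula families compared above, the Clayton copula has a simple conditional sampling formula, which makes the coupling between two dependent variables (here, water level and seismic frequency in the abstract's setting) easy to illustrate. A numpy sketch (theta = 2 is an arbitrary illustrative choice; for Clayton, Kendall's tau equals theta/(theta + 2)):

```python
import numpy as np

rng = np.random.default_rng(0)

def sample_clayton(n, theta, rng):
    """Sample (u, v) from a Clayton copula via the conditional method:
    v = (u^-theta * (w^(-theta/(1+theta)) - 1) + 1)^(-1/theta)."""
    u = rng.uniform(size=n)
    w = rng.uniform(size=n)
    v = (u ** -theta * (w ** (-theta / (1 + theta)) - 1) + 1) ** (-1 / theta)
    return u, v

theta = 2.0                      # implies Kendall's tau = 2 / (2 + 2) = 0.5
u, v = sample_clayton(500, theta, rng)

# Empirical Kendall's tau: mean sign of concordance over all pairs.
du = np.sign(np.subtract.outer(u, u))
dv = np.sign(np.subtract.outer(v, v))
iu = np.triu_indices(len(u), k=1)
tau = (du * dv)[iu].mean()
```

Fitting theta to data (and comparing families with K-S, MD, and AIC criteria, as in the study) follows the same pattern with the empirical margins plugged in for u and v.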
DOE Office of Scientific and Technical Information (OSTI.GOV)
Toprak, A. Emre; Guelay, F. Guelten; Ruge, Peter
2008-07-08
Determination of the seismic performance of existing buildings has become one of the key topics in structural analysis after recent earthquakes (i.e. the Izmit and Duzce Earthquakes in 1999, the Kobe Earthquake in 1995 and the Northridge Earthquake in 1994). Considering the need for precise assessment tools to determine seismic performance level, most earthquake-prone countries try to include performance-based assessment in their seismic codes. Recently, the Turkish Earthquake Code 2007 (TEC'07), which was put into effect in March 2007, also introduced linear and non-linear assessment procedures to be applied prior to building retrofitting. In this paper, a comparative study is performed on the code-based seismic assessment of RC buildings with linear static methods of analysis, selecting an existing RC building. The basic principles of the seismic performance evaluation procedures for existing RC buildings according to Eurocode 8 and TEC'07 are outlined and compared. The procedures are then applied to a real case-study building, which was exposed to the 1998 Adana-Ceyhan Earthquake in Turkey, a seismic action of Ms = 6.3 with a maximum ground acceleration of 0.28 g. It is a six-storey RC residential building with a total height of 14.65 m, composed of orthogonal frames, symmetric in the y direction, and it does not have any significant structural irregularities. The rectangular plan dimensions are 16.40 m x 7.80 m = 127.90 m2, with five spans in the x and two spans in the y direction. It was reported that the building had been moderately damaged during the 1998 earthquake, and a retrofit adding shear walls to the system was suggested by the authorities. The computations show that linear methods of analysis using either Eurocode 8 or TEC'07 independently produce similar performance levels of collapse for the critical storey of the structure.
The computed base shear value according to Eurocode 8 is much higher than that required by the Turkish Earthquake Code, even though the selected ground conditions have the same characteristics. The main reason is that the ordinate of the horizontal elastic response spectrum in Eurocode 8 is increased by the soil factor. In the TEC'07 force-based linear assessment, the seismic demands at cross-sections are checked against residual moment capacities, whereas the chord rotations of primary ductile elements must be checked for the Eurocode safety verifications. On the other hand, the demand curvatures from the linear methods of analysis of Eurocode 8 and TEC'07 are very similar.
NASA Astrophysics Data System (ADS)
Shao, Xupeng
2017-04-01
Glutenite bodies are widely developed in the northern Minfeng zone of the Dongying Sag. Their litho-electric relationship is not clear. In addition, because the conventional sequence stratigraphic research method has the drawback of involving too many subjective human factors, it has limited the deepening of regional sequence stratigraphic research. The wavelet transform technique based on logging data and the time-frequency analysis technique based on seismic data have the advantage of dividing sequence stratigraphy quantitatively compared with the conventional methods. Building on the conventional sequence research method, this paper used the above techniques to divide the fourth-order sequences of the upper Es4 in the northern Minfeng zone of the Dongying Sag. The research shows that the wavelet transform technique based on logging data and the time-frequency analysis technique based on seismic data are essentially consistent, both dividing sequence stratigraphy quantitatively in the frequency domain. The wavelet transform technique has a high resolution and is suitable for areas with wells; the seismic time-frequency analysis technique has wide applicability but a low resolution, so the two techniques should be combined. The upper Es4 in the northern Minfeng zone of the Dongying Sag is a complete third-order sequence, which can be further subdivided into 5 fourth-order sequences with fining-upward depositional characteristics. Key words: Dongying Sag, northern Minfeng zone, wavelet transform technique, time-frequency analysis technique, the upper Es4, sequence stratigraphy
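The wavelet-transform technique for quantitative sequence division rests on a continuous wavelet transform of the log curve, whose dominant scale tracks the dominant cyclicity. A minimal Morlet CWT sketch in numpy (a pure sine stands in for a logging curve; the sampling rate and scale range are illustrative assumptions):

```python
import numpy as np

fs = 200.0
t = np.arange(0, 4, 1 / fs)
curve = np.sin(2 * np.pi * 10.0 * t)   # stand-in for a cyclic log curve

def morlet_cwt(signal, scales, w0=6.0):
    """Continuous wavelet transform with a complex Morlet wavelet.
    Scales are in samples; the centre frequency is ~ w0 * fs / (2*pi*s)."""
    out = np.empty((len(scales), len(signal)), dtype=complex)
    for i, s in enumerate(scales):
        x = np.arange(-4 * s, 4 * s + 1) / s
        # L2-normalized Morlet wavelet at scale s
        psi = np.exp(1j * w0 * x) * np.exp(-x ** 2 / 2) / np.sqrt(s)
        out[i] = np.convolve(signal, psi, mode='same')
    return out

scales = np.arange(5, 60)
power = np.abs(morlet_cwt(curve, scales)) ** 2
dominant = scales[power.mean(axis=1).argmax()]   # scale of peak mean power
```

For a 10 Hz cycle at 200 Hz sampling, the peak should sit near scale w0*fs/(2*pi*f) ~ 19 samples; on a real log, shifts of this dominant scale with depth are what mark candidate sequence boundaries.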
NASA Astrophysics Data System (ADS)
Han, Xiaolei; Li, Yaokun; Ji, Jing; Ying, Junhao; Li, Weichen; Dai, Baicheng
2016-06-01
In order to quantitatively study the seismic absorption effect of the cushion on a superstructure, a numerical simulation and parametric study are carried out on an overall FEA model of a rigid-pile composite foundation in ABAQUS. A simulation of a shaking table test on a rigid mass block is first completed with ABAQUS and EERA, demonstrating the effectiveness of the Drucker-Prager constitutive model and the finite-infinite element coupling method. Dynamic time-history analysis of the overall model under frequent and rare earthquakes is carried out using seismic waves from the El Centro, Kobe, and Bonds earthquakes. The different responses of rigid-pile composite foundations and pile-raft foundations are discussed. Furthermore, the influence of cushion thickness, cushion modulus, and ground acceleration on the seismic absorption effect of the cushion is analyzed. The results show that: 1) the seismic absorption effect of a cushion is good under rare earthquakes, with an absorption ratio of about 0.85; and 2) the seismic absorption effect is strongly affected by cushion thickness and ground acceleration.
Fractal analysis of GPS time series for early detection of disastrous seismic events
NASA Astrophysics Data System (ADS)
Filatov, Denis M.; Lyubushin, Alexey A.
2017-03-01
A new method of fractal analysis of time series for estimating the chaoticity of behaviour of open stochastic dynamical systems is developed. The method is a modification of the conventional detrended fluctuation analysis (DFA) technique. We begin by analysing both methods from the physical point of view and demonstrate the difference between them, which results in a higher accuracy of the new method compared to conventional DFA. Then, applying the developed method to estimate the measure of chaoticity of a real dynamical system - the Earth's crust - we reveal that the latter exhibits two distinct mechanisms of transition to a critical state: the first mechanism is already known from numerous studies of other dynamical systems, while the second is new and has not previously been described. Using GPS time series, we demonstrate the efficiency of the developed method in identifying critical states of the Earth's crust. Finally, we employ the method to solve a practically important task: we show how the developed measure of chaoticity can be used for early detection of disastrous seismic events, and we provide a detailed discussion of the numerical results, which are shown to be consistent with the outcomes of other research on the topic.
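Conventional DFA, the starting point of the modification above, can be written in a few lines: integrate the demeaned series, detrend it linearly in windows of size n, and read the scaling exponent from the log-log slope of the fluctuation function. A numpy sketch with illustrative scales (white noise should give an exponent near 0.5, a random walk near 1.5):

```python
import numpy as np

def dfa_exponent(x, scales):
    """Detrended fluctuation analysis: slope of log F(n) vs log n."""
    y = np.cumsum(x - x.mean())               # integrated profile
    F = []
    for n in scales:
        nseg = len(y) // n
        seg = y[:nseg * n].reshape(nseg, n)
        t = np.arange(n)
        resid = []
        for s in seg:                          # linear detrend per window
            c = np.polyfit(t, s, 1)
            resid.append(s - np.polyval(c, t))
        F.append(np.sqrt(np.mean(np.concatenate(resid) ** 2)))
    return np.polyfit(np.log(scales), np.log(F), 1)[0]

rng = np.random.default_rng(0)
scales = [16, 32, 64, 128, 256]
alpha_noise = dfa_exponent(rng.normal(size=4000), scales)          # ~0.5
alpha_walk = dfa_exponent(np.cumsum(rng.normal(size=4000)), scales)  # ~1.5
```

It is deviations of this exponent from its baseline, tracked over sliding windows of GPS displacement series, that serve as the chaoticity measure in analyses of this kind.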
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lines, L.; Burton, A.; Lu, H.X.
Accurate velocity models are a necessity for reliable migration results. Velocity analysis generally involves methods such as normal moveout (NMO) analysis, seismic traveltime tomography, or iterative prestack migration. These techniques can be effective, and each has its own advantages and disadvantages. Conventional NMO methods are relatively inexpensive but require simplifying assumptions about the geology. Tomography is a more general method but requires traveltime interpretation of prestack data. Iterative prestack depth migration is very general but computationally expensive. In some cases, there is the opportunity to estimate vertical velocities from well information. The well information can be used to optimize poststack migrations, thereby eliminating some of the time and expense of iterative prestack migration. The optimized poststack migration procedure defined here computes the velocity model that minimizes the depth differences between seismic images and formation depths at the wells, using a least squares inversion method. The optimization methods described in this paper will hopefully produce "migrations without migraines."
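At its simplest, the least-squares idea described here reduces to finding the velocity that best maps image times to well depths. A deliberately simplified one-parameter numpy sketch (constant velocity and synthetic well ties are illustrative assumptions; the actual procedure inverts for a velocity model, not a single scalar):

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical well ties: two-way times t (s) picked on the seismic image
# and formation depths z (m) from well logs, with some scatter.
v_true = 2500.0                                  # m/s, to be recovered
t = np.linspace(0.4, 2.0, 12)                    # two-way times at 12 wells
z = v_true * t / 2 + rng.normal(0, 10, t.size)   # depths with 10 m scatter

# Least-squares velocity minimizing sum of (v*t/2 - z)^2.
A = (t / 2)[:, None]
v_est = np.linalg.lstsq(A, z, rcond=None)[0][0]
print(f"estimated velocity: {v_est:.0f} m/s")
```

Replacing the single unknown with layer or grid velocities turns the same depth-misfit minimization into the optimized poststack migration procedure of the abstract.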
NASA Astrophysics Data System (ADS)
Velez Gonzalez, Jose A.
The development of preferred crystal orientation fabrics (COF) within the ice column can have a strong influence on the flow behavior of an ice sheet or glacier. Typically, COF information comes from ice cores. Observations of anisotropic seismic wave propagation and of backscatter variation as a function of antenna orientation in GPR measurements have been proposed as methods to detect COF. For this investigation I evaluate the effectiveness of the GPR and seismic methods to detect COF by conducting a seismic and GPR experiment at the North Greenland Eemian Ice Drilling facility (NEEM) ice core location, where COF data are available. The seismic experiment was conducted 6.5 km northwest of the NEEM facility and consisted of three multi-offset seismic gathers. The anisotropy analysis conducted at NEEM yielded mean c-axes distributed over a conical region with an opening angle of 30 to 32 degrees. No internal ice reflectors were imaged. Direct COF measurements collected in the ice core agree with the results of the seismic anisotropy analysis. The GPR experiment covered an area of 100 km2 and consisted of parallel, perpendicular, oblique and circular (radius: 35 m) acquisition patterns. Results show evidence for COF over the entire 100 km2 area. Furthermore, for the first time it was possible to image three different COF regimes (random, disk and single maxima) and their respective transition zones. The interpretation of the GPR experiment showed a strong correlation with the ice core measurements. Glacier basal drag is another important, and difficult to predict, property that influences glacier flow. For this investigation I re-processed a 10 km-long high-resolution reflection seismic line at Jakobshavn Isbrae, Greenland, using an iterative velocity determination approach to optimize sub-glacier imaging. The resulting line imaged a sub-glacier sediment layer ranging in thickness between 35 and 200 meters.
I interpret three distinct seismic facies based on the geometry of the reflectors as a basal till layer, accreted sediments and re-worked till. The basal till and accreted sediments vary in thickness between 4 and 93 meters and are thought to be water-saturated, actively deforming sub-glacier sediments. A polarity reversal observed at one location along the ice-sediment interface suggests the presence of water-saturated sediments or ponded water 2-4 m thick extending approximately 240 m across. Using information from the seismic line (bed geometry, ice thickness, till thickness) as well as information available for the area of study (ice surface elevation and ice flow velocity), we evaluate the effect of sub-glacier sediment viscosity on the basal drag using a linearly viscous model and the assumption of a deforming bed. Basal drag values estimated for the study area fall within the range of physically acceptable values. However, the analysis revealed that the assumption of a deforming bed might not be compatible with the area of study, given the presence of water at the ice/bed interface.
NASA Astrophysics Data System (ADS)
Gu, N.; Zhang, H.
2017-12-01
Seismic imaging of fault zones generally involves seismic velocity tomography using first-arrival times or full waveforms from earthquakes occurring around the fault zones. In most cases, however, seismic velocity tomography only gives a smooth image of the fault zone structure. To obtain a high-resolution structure of the fault zones, seismic migration using active seismic data must be used, but active seismic surveys are generally too expensive to conduct, even in 2D. Here we propose to apply a passive seismic imaging method based on seismic interferometry to image detailed fault zone structures. Seismic interferometry generally refers to the construction of new seismic records for virtual sources and receivers by cross-correlating and stacking the seismic records from physical sources on physical receivers. In this study, we utilize seismic waveforms recorded on surface seismic stations to construct a zero-offset seismic record at each earthquake location, as if a virtual receiver were placed there. We have applied this method to image the fault zone structure around the 2013 Mw6.6 Lushan earthquake. After the occurrence of the mainshock, a 29-station temporary array was installed to monitor aftershocks. We first select aftershocks along several vertical cross sections approximately normal to the fault strike. We then create several zero-offset seismic reflection sections by seismic interferometry using seismic waveforms from aftershocks around each section. Finally, we migrate these zero-offset sections to image the structures around the fault zones. In the migration images we can clearly identify strong reflectors, which correspond to the major reverse fault on which the mainshock occurred. This application shows that it is possible to image detailed fault zone structures with passive seismic sources.
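The core interferometric operation, cross-correlating two records of the same source to extract a differential traveltime, can be shown with a toy example. Here a noise-like source is recorded with different delays at a surface station and at a hypothetical virtual-receiver location; the delay values (30 and 75 samples) are purely illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)
nt = 2048
src = rng.normal(size=nt)               # random (noise-like) source signal

def delayed(sig, lag):
    """Shift a trace later by an integer number of samples (zero-padded)."""
    out = np.zeros_like(sig)
    out[lag:] = sig[:len(sig) - lag]
    return out

# Traveltimes (in samples) from the source to a surface station and to
# the virtual receiver at an aftershock location.
r_station = delayed(src, 30)
r_virtual = delayed(src, 75)

# The cross-correlation of the two records peaks at the differential
# traveltime, which is the basis of the virtual zero-offset trace.
xc = np.correlate(r_virtual, r_station, mode='full')
lag = xc.argmax() - (nt - 1)            # recovered differential delay
```

Stacking such correlations over many aftershocks stabilizes the virtual traces that are then migrated into the reflection sections described above.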
Estimating Local and Near-Regional Velocity and Attenuation Structure from Seismic Noise
2008-09-30
Mean-phase velocity-dispersion curves are calculated for the TUCAN seismic array in Costa Rica and Nicaragua from ambient seismic noise using two independent methods, noise cross correlation and beamforming. Cross correlations between stations of the TUCAN seismic array (Figure 4c) are computed using a method similar to Harmon et al. (2007).
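Beamforming, one of the two methods named above, can be sketched as frequency-domain delay-and-sum over a slowness grid: steering phases that match the true plane-wave moveout maximize the stacked power. A synthetic numpy example (station geometry, pulse shape, and slowness values are all illustrative):

```python
import numpy as np

rng = np.random.default_rng(1)
fs, nt = 100.0, 512
t = np.arange(nt) / fs
r = rng.uniform(-2, 2, size=(5, 2))       # 5 station positions (km)
u_true = np.array([0.2, -0.1])            # true slowness vector (s/km)

# Gaussian pulse arriving with plane-wave delays u.r at each station.
def pulse(t0):
    return np.exp(-((t - 1.5 - t0) ** 2) / (2 * 0.05 ** 2))

data = np.array([pulse(r[j] @ u_true) for j in range(len(r))])

# Frequency-domain delay-and-sum: steer with exp(+i*2*pi*f*u.r), stack.
freqs = np.fft.rfftfreq(nt, 1 / fs)
D = np.fft.rfft(data, axis=1)
grid = np.linspace(-0.4, 0.4, 41)         # slowness grid (s/km)
power = np.zeros((41, 41))
for a, ux in enumerate(grid):
    for b, uy in enumerate(grid):
        delays = r @ np.array([ux, uy])
        steer = np.exp(2j * np.pi * freqs[None, :] * delays[:, None])
        power[a, b] = np.abs((D * steer).sum(axis=0)).sum()

ibest = np.unravel_index(power.argmax(), power.shape)
u_est = np.array([grid[ibest[0]], grid[ibest[1]]])
```

For ambient noise the same stack is applied to long noise windows, and the peak's radius and azimuth give the phase slowness and propagation direction used to build dispersion curves.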
NASA Astrophysics Data System (ADS)
Nuñez-Cornu, F. J.; Barba, D. C., Sr.; Danobeitia, J.; Bandy, W. L.; Zamora-Camacho, A.; Marquez-Ramirez, V. H.; Ambros, M.; Gomez, A.; Sandoval, J. M.; Mortera-Gutierrez, C. A.
2016-12-01
The second stage of the TsuJal Project includes the study of passive seismic activity in the region of the Rivera plate and Jalisco block by anchoring OBSs and densifying the network of seismic stations on land for at least four months. This stage began in April 2016 with the deployment of 25 Obsidian stations with Le-3D MkIII sensors from the northern part of Nayarit state to the south of Colima state, including the Marias Islands. This temporary seismic network complements the Jalisco Seismic Network (RESAJ) for a total of 50 stations. Offshore, ten four-channel LCHEAPO 2000 OBSs (three short-period seismic channels and one pressure channel) were deployed between 19 and 30 April 2016 using the BO El Puma of UNAM. The OBSs were deployed in an array from the Marias Islands to off the coast of the Colima-Michoacan state border. On May 4, an earthquake with Ml = 4.2 took place in the contact area of the Rivera Plate, the Cocos Plate and the Middle America Trench; a seismic swarm with over 200 earthquakes followed until May 16, including an earthquake with Ml = 5.0 on May 7. A second swarm took place between May 28 and June 4, including an earthquake with Ml = 4.8 on June 1. An analysis of the quality of different location methods is presented: automatic preliminary RESAJ location using Antelope; location with revised RESAJ phases in Antelope; relocation of RESAJ data with hypo and a regional velocity model; relocation of RESAJ data with hypo adding data from the temporary seismic network stations; and finally relocation adding the data from the OBS network. The tectonic implications of these earthquakes are also discussed.
Semiautomatic and Automatic Cooperative Inversion of Seismic and Magnetotelluric Data
NASA Astrophysics Data System (ADS)
Le, Cuong V. A.; Harris, Brett D.; Pethick, Andrew M.; Takam Takougang, Eric M.; Howe, Brendan
2016-09-01
Natural source electromagnetic methods have the potential to recover rock property distributions from the surface to great depths. Unfortunately, results in complex 3D geo-electrical settings can be disappointing, especially where significant near-surface conductivity variations exist. In such settings, unconstrained inversion of magnetotelluric data is inexorably non-unique. We believe that: (1) correctly introduced information from seismic reflection can substantially improve MT inversion, (2) a cooperative inversion approach can be automated, and (3) massively parallel computing can make such a process viable. Nine inversion strategies including baseline unconstrained inversion and new automated/semiautomated cooperative inversion approaches are applied to industry-scale co-located 3D seismic and magnetotelluric data sets. These data sets were acquired in one of the Carlin gold deposit districts in north-central Nevada, USA. In our approach, seismic information feeds directly into the creation of sets of prior conductivity models and covariance coefficient distributions. We demonstrate how statistical analysis of the distribution of selected seismic attributes can be used to automatically extract subvolumes that form the framework for the prior 3D conductivity distribution. Our cooperative inversion strategies result in detailed subsurface conductivity distributions that are consistent with seismic, electrical logs and geochemical analysis of cores. Such 3D conductivity distributions would be expected to provide clues to 3D velocity structures that could feed back into full seismic inversion for an iterative, practical and truly cooperative inversion process. We anticipate that, with the aid of parallel computing, cooperative inversion of seismic and magnetotelluric data can be fully automated, and we hold confidence that significant and practical advances in this direction have been accomplished.
NASA Astrophysics Data System (ADS)
Schaefer, Andreas M.; Daniell, James E.; Wenzel, Friedemann
2017-07-01
Earthquake clustering is an essential part of almost any statistical analysis of spatial and temporal properties of seismic activity. The nature of earthquake clusters and subsequent declustering of earthquake catalogues plays a crucial role in determining the magnitude-dependent earthquake return period and its respective spatial variation for probabilistic seismic hazard assessment. This study introduces the Smart Cluster Method (SCM), a new methodology to identify earthquake clusters, which uses an adaptive point process for spatio-temporal cluster identification. It utilises the magnitude-dependent spatio-temporal earthquake density to adjust the search properties, subsequently analyses the identified clusters to determine directional variation and adjusts its search space with respect to directional properties. In the case of rapid subsequent ruptures like the 1992 Landers sequence or the 2010-2011 Darfield-Christchurch sequence, a reclassification procedure is applied to disassemble subsequent ruptures using near-field searches, nearest neighbour classification and temporal splitting. The method is capable of identifying and classifying earthquake clusters in space and time. It has been tested and validated using earthquake data from California and New Zealand. A total of more than 1500 clusters have been found in both regions since 1980 with Mmin = 2.0. Utilising the knowledge of cluster classification, the method has been adjusted to provide an earthquake declustering algorithm, which has been compared to existing methods. Its performance is comparable to established methodologies. The analysis of earthquake clustering statistics leads to various new and updated correlation functions, e.g. for ratios between mainshock and strongest aftershock and for general aftershock activity metrics.
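A greatly simplified sketch of magnitude-dependent spatio-temporal cluster identification: two events are linked into the same cluster when they fall inside the space-time window of the larger one. The SCM's adaptive point process, directional search and reclassification steps are not reproduced here, and the window functions `r_km` and `t_days` are placeholder assumptions, not the paper's calibrated relations.

```python
import math

def link_clusters(events,
                  r_km=lambda m: 10 * 2 ** (m - 4),
                  t_days=lambda m: 10 * 2 ** (m - 4)):
    """Greedy single-link clustering: events i and j join the same cluster
    if they fall within the magnitude-dependent space-time window of the
    larger event. `events` is a list of (t_days, x_km, y_km, magnitude)."""
    labels = [None] * len(events)
    next_label = 0
    for i, (ti, xi, yi, mi) in enumerate(events):
        for j in range(i):
            tj, xj, yj, mj = events[j]
            m = max(mi, mj)
            dist = math.hypot(xi - xj, yi - yj)
            if abs(ti - tj) <= t_days(m) and dist <= r_km(m):
                if labels[i] is None:
                    labels[i] = labels[j]
                elif labels[i] != labels[j]:
                    # merge the two groups under one label
                    old = labels[j]
                    labels = [labels[i] if l == old else l for l in labels]
        if labels[i] is None:
            labels[i] = next_label
            next_label += 1
    return labels

# A mainshock with two nearby aftershocks, plus one distant isolated event.
catalog = [(0.0, 0.0, 0.0, 5.0), (1.0, 2.0, 1.0, 3.0),
           (2.5, 1.0, 0.5, 2.5), (3.0, 300.0, 0.0, 4.0)]
labels = link_clusters(catalog)
```

With the toy windows above, the three events near the origin form one cluster and the distant event stays isolated.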
NASA Astrophysics Data System (ADS)
Liang, Li; Takaaki, Ohkubo; Guang-hui, Li
2018-03-01
In recent years, earthquakes have occurred frequently, and the seismic performance of existing school buildings has become particularly important. The main method for improving the seismic resistance of existing buildings is reinforcement. However, there are few effective methods to evaluate the effect of reinforcement. Ambient vibration measurements were conducted before and after seismic retrofitting using a wireless measurement system, and the changes in vibration characteristics were compared. The changes in acceleration response spectra, natural periods and vibration modes indicate that the wireless vibration measurement system can be effectively applied to evaluate the effect of seismic retrofitting. Although the method can evaluate the effect of seismic retrofitting qualitatively, it is still difficult to evaluate the effect quantitatively at this stage.
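The shift in natural period that such measurements look for can be illustrated with a toy spectral estimate: retrofitting stiffens the building, so its dominant frequency rises (period shortens). This is a hypothetical sketch, not the wireless system's processing chain; the sampling rate and the synthetic 2.0 Hz to 2.5 Hz stiffening are assumed for illustration, and a brute-force DFT stands in for a proper spectral estimator.

```python
import math

def dominant_frequency(signal, dt):
    """Return the frequency (Hz) of the largest DFT amplitude peak
    (excluding the zero-frequency term) of an evenly sampled record."""
    n = len(signal)
    best_k, best_amp = 1, 0.0
    for k in range(1, n // 2):
        re = sum(s * math.cos(2 * math.pi * k * i / n) for i, s in enumerate(signal))
        im = sum(-s * math.sin(2 * math.pi * k * i / n) for i, s in enumerate(signal))
        amp = math.hypot(re, im)
        if amp > best_amp:
            best_k, best_amp = k, amp
    return best_k / (n * dt)  # bin index -> Hz

dt = 0.01  # assumed 100 Hz sampling
t = [i * dt for i in range(400)]
# Synthetic ambient-vibration records: retrofit raises the natural frequency.
before = [math.sin(2 * math.pi * 2.0 * ti) for ti in t]  # 2.0 Hz -> T = 0.50 s
after = [math.sin(2 * math.pi * 2.5 * ti) for ti in t]   # 2.5 Hz -> T = 0.40 s
f_before = dominant_frequency(before, dt)
f_after = dominant_frequency(after, dt)
```

The natural period is then simply `1 / f`, and the before/after comparison mirrors the qualitative evaluation described above.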
NASA Astrophysics Data System (ADS)
Moschetti, M. P.; Mueller, C. S.; Boyd, O. S.; Petersen, M. D.
2013-12-01
In anticipation of the update of the Alaska seismic hazard maps (ASHMs) by the U. S. Geological Survey, we report progress on the comparison of smoothed seismicity models developed using fixed and adaptive smoothing algorithms, and investigate the sensitivity of seismic hazard to the models. While fault-based sources, such as those for great earthquakes in the Alaska-Aleutian subduction zone and for the ~10 shallow crustal faults within Alaska, dominate the seismic hazard estimates for locations near the sources, smoothed seismicity rates make important contributions to seismic hazard away from fault-based sources and where knowledge of recurrence and magnitude is not sufficient for use in hazard studies. Recent developments in adaptive smoothing methods and statistical tests for evaluating and comparing rate models prompt us to investigate the appropriateness of adaptive smoothing for the ASHMs. We develop smoothed seismicity models for Alaska using fixed and adaptive smoothing methods and compare the resulting models by calculating and evaluating the joint likelihood test. We use the earthquake catalog, and associated completeness levels, developed for the 2007 ASHM to produce fixed-bandwidth-smoothed models with smoothing distances varying from 10 to 100 km and adaptively smoothed models. Adaptive smoothing follows the method of Helmstetter et al. and defines a unique smoothing distance for each earthquake epicenter from the distance to the nth nearest neighbor. The adaptive smoothing method reduces smoothing distances where seismicity rates are high, causing locally increased seismicity rates, and increases smoothing distances where seismicity is sparse. We follow guidance from previous studies to optimize the neighbor number (n-value) by comparing model likelihood values, which estimate the likelihood that the observed earthquake epicenters from the recent catalog are derived from the smoothed rate models.
We compare likelihood values from all rate models to rank the smoothing methods. We find that adaptively smoothed seismicity models yield better likelihood values than the fixed smoothing models. Holding all other (source and ground motion) models constant, we calculate seismic hazard curves for all points across Alaska on a 0.1 degree grid, using the adaptively smoothed and fixed smoothed seismicity models separately. Because adaptively smoothed models concentrate seismicity near the earthquake epicenters where seismicity rates are high, the corresponding hazard values are higher, locally, but reduced with distance from observed seismicity, relative to the hazard from fixed-bandwidth models. We suggest that adaptively smoothed seismicity models be considered for implementation in the update to the ASHMs because of their improved likelihood estimates relative to fixed smoothing methods; however, concomitant increases in seismic hazard will cause significant changes in regions of high seismicity, such as near the subduction zone, northeast of Kotzebue, and along the NNE trending zone of seismicity in the Alaskan interior.
Moschetti, Morgan P.; Mueller, Charles S.; Boyd, Oliver S.; Petersen, Mark D.
2014-01-01
In anticipation of the update of the Alaska seismic hazard maps (ASHMs) by the U. S. Geological Survey, we report progress on the comparison of smoothed seismicity models developed using fixed and adaptive smoothing algorithms, and investigate the sensitivity of seismic hazard to the models. While fault-based sources, such as those for great earthquakes in the Alaska-Aleutian subduction zone and for the ~10 shallow crustal faults within Alaska, dominate the seismic hazard estimates for locations near the sources, smoothed seismicity rates make important contributions to seismic hazard away from fault-based sources and where knowledge of recurrence and magnitude is not sufficient for use in hazard studies. Recent developments in adaptive smoothing methods and statistical tests for evaluating and comparing rate models prompt us to investigate the appropriateness of adaptive smoothing for the ASHMs. We develop smoothed seismicity models for Alaska using fixed and adaptive smoothing methods and compare the resulting models by calculating and evaluating the joint likelihood test. We use the earthquake catalog, and associated completeness levels, developed for the 2007 ASHM to produce fixed-bandwidth-smoothed models with smoothing distances varying from 10 to 100 km and adaptively smoothed models. Adaptive smoothing follows the method of Helmstetter et al. and defines a unique smoothing distance for each earthquake epicenter from the distance to the nth nearest neighbor. The adaptive smoothing method reduces smoothing distances where seismicity rates are high, causing locally increased seismicity rates, and increases smoothing distances where seismicity is sparse. We follow guidance from previous studies to optimize the neighbor number (n-value) by comparing model likelihood values, which estimate the likelihood that the observed earthquake epicenters from the recent catalog are derived from the smoothed rate models.
We compare likelihood values from all rate models to rank the smoothing methods. We find that adaptively smoothed seismicity models yield better likelihood values than the fixed smoothing models. Holding all other (source and ground motion) models constant, we calculate seismic hazard curves for all points across Alaska on a 0.1 degree grid, using the adaptively smoothed and fixed smoothed seismicity models separately. Because adaptively smoothed models concentrate seismicity near the earthquake epicenters where seismicity rates are high, the corresponding hazard values are higher, locally, but reduced with distance from observed seismicity, relative to the hazard from fixed-bandwidth models. We suggest that adaptively smoothed seismicity models be considered for implementation in the update to the ASHMs because of their improved likelihood estimates relative to fixed smoothing methods; however, concomitant increases in seismic hazard will cause significant changes in regions of high seismicity, such as near the subduction zone, northeast of Kotzebue, and along the NNE trending zone of seismicity in the Alaskan interior.
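The core of the adaptive smoothing step described above, a per-event bandwidth equal to the distance to the nth nearest epicenter fed into a kernel rate model, can be sketched as follows. This is a minimal illustration, not the ASHM implementation: the Gaussian kernel, the n-value of 2 and the `d_min` floor are placeholder assumptions.

```python
import math

def adaptive_bandwidths(epicenters, n_neighbor=2, d_min=0.5):
    """Smoothing distance for each epicenter = distance (km) to its
    n-th nearest neighbour, floored at d_min km."""
    bw = []
    for i, (xi, yi) in enumerate(epicenters):
        dists = sorted(math.hypot(xi - xj, yi - yj)
                       for j, (xj, yj) in enumerate(epicenters) if j != i)
        bw.append(max(dists[n_neighbor - 1], d_min))
    return bw

def smoothed_rate(point, epicenters, bw):
    """2-D Gaussian kernel sum: each event spreads over its own bandwidth,
    so dense clusters stay sharp and sparse events spread widely."""
    x, y = point
    total = 0.0
    for (xe, ye), d in zip(epicenters, bw):
        r2 = (x - xe) ** 2 + (y - ye) ** 2
        total += math.exp(-r2 / (2 * d * d)) / (2 * math.pi * d * d)
    return total

# Dense cluster near the origin, one isolated event far away (coords in km).
eq = [(0.0, 0.0), (1.0, 0.0), (0.0, 1.0), (1.0, 1.0), (50.0, 50.0)]
bw = adaptive_bandwidths(eq)
```

The clustered events receive ~1 km bandwidths while the isolated event's bandwidth is tens of kilometres, which is exactly the behaviour the abstract describes: concentrated rates where seismicity is high, broad smoothing where it is sparse.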
NASA Astrophysics Data System (ADS)
Gao, Lingli; Pan, Yudi
2018-05-01
The correct estimation of the seismic source signature is crucial to exploration geophysics. Based on seismic interferometry, the virtual real source (VRS) method provides a model-independent way for source signature estimation. However, when encountering multimode surface waves, which are commonly seen in the shallow seismic survey, strong spurious events appear in seismic interferometric results. These spurious events introduce errors in the virtual-source recordings and reduce the accuracy of the source signature estimated by the VRS method. In order to estimate a correct source signature from multimode surface waves, we propose a mode-separated VRS method. In this method, multimode surface waves are mode separated before seismic interferometry. Virtual-source recordings are then obtained by applying seismic interferometry to each mode individually. Therefore, artefacts caused by cross-mode correlation are excluded in the virtual-source recordings and the estimated source signatures. A synthetic example showed that a correct source signature can be estimated with the proposed method, while strong spurious oscillation occurs in the estimated source signature if we do not apply mode separation first. We also applied the proposed method to a field example, which verified its validity and effectiveness in estimating seismic source signature from shallow seismic shot gathers containing multimode surface waves.
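The interferometric step underlying the VRS method, cross-correlating recordings at two receivers to retrieve the inter-receiver traveltime without picking arrivals, can be sketched for a single mode. Mode separation itself is not shown; the Ricker wavelet, sampling rate and delays below are illustrative assumptions, not the paper's field geometry.

```python
import math

def cross_correlate(a, b):
    """Full cross-correlation c[lag] = sum_t a[t + lag] * b[t],
    for lag in [-(n-1), n-1]. Returns (lags, values)."""
    n = len(a)
    lags = list(range(-(n - 1), n))
    vals = []
    for lag in lags:
        s = 0.0
        for t in range(n):
            if 0 <= t + lag < n:
                s += a[t + lag] * b[t]
        vals.append(s)
    return lags, vals

def ricker(t, f0=5.0):
    """Ricker wavelet of peak frequency f0 (Hz)."""
    a = (math.pi * f0 * t) ** 2
    return (1 - 2 * a) * math.exp(-a)

dt = 0.004
n = 500  # 2 s records
# One wave passing receiver A at 0.4 s and receiver B 0.1 s later:
rec_a = [ricker(i * dt - 0.4) for i in range(n)]
rec_b = [ricker(i * dt - 0.5) for i in range(n)]
lags, vals = cross_correlate(rec_b, rec_a)
best_lag = lags[max(range(len(vals)), key=vals.__getitem__)]
delay = best_lag * dt  # recovered inter-receiver traveltime
```

The correlation peak sits at the 0.1 s inter-receiver delay; when several modes are present, cross-mode terms produce the spurious peaks that the proposed mode-separated processing removes.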
Seismic methods are the most commonly conducted geophysical surveys for engineering investigations. Seismic refraction provides engineers and geologists with the most basic of geologic data via simple procedures with common equipment.
Effect of a Near Fault on the Seismic Response of a Base-Isolated Structure with a Soft Storey
NASA Astrophysics Data System (ADS)
Athamnia, B.; Ounis, A.; Abdeddaim, M.
2017-12-01
This study focuses on the soft-storey behavior of RC structures with lead core rubber bearing (LRB) isolation systems under near- and far-fault motions. Under near-fault ground motions, seismic isolation devices might perform poorly because of large isolator displacements caused by the large velocity and displacement pulses associated with such strong motions. In this study, four different structural models have been designed to study the effect of soft-storey behavior under near-fault and far-fault motions. The seismic analysis of the isolated reinforced concrete buildings is carried out using a nonlinear time history analysis method. Inter-story drifts, absolute accelerations, displacements, base shear forces, hysteretic loops and the distribution of plastic hinges are examined as a result of the analysis. These results show that the performance of a base-isolated RC structure is more affected by increasing the height of a story under near-fault motion than under far-fault motion.
Seismic hazard assessment: Issues and alternatives
Wang, Z.
2011-01-01
Seismic hazard and risk are two very important concepts in engineering design and other policy considerations. Although seismic hazard and risk have often been used interchangeably, they are fundamentally different. Furthermore, seismic risk is more important in engineering design and other policy considerations. Seismic hazard assessment is an effort by earth scientists to quantify seismic hazard and its associated uncertainty in time and space and to provide seismic hazard estimates for seismic risk assessment and other applications. Although seismic hazard assessment is more a scientific issue, it deserves special attention because of its significant implications for society. Two approaches, probabilistic seismic hazard analysis (PSHA) and deterministic seismic hazard analysis (DSHA), are commonly used for seismic hazard assessment. Although PSHA has been proclaimed as the best approach for seismic hazard assessment, it is scientifically flawed (i.e., the physics and mathematics that PSHA is based on are not valid). Use of PSHA could lead to either unsafe or overly conservative engineering design or public policy, each of which has dire consequences for society. On the other hand, DSHA is a viable approach for seismic hazard assessment even though it has been labeled as unreliable. The biggest drawback of DSHA is that the temporal characteristics (i.e., earthquake frequency of occurrence and the associated uncertainty) are often neglected. An alternative, seismic hazard analysis (SHA), utilizes earthquake science and statistics directly and provides a seismic hazard estimate that can be readily used for seismic risk assessment and other applications. © 2010 Springer Basel AG.
NASA Astrophysics Data System (ADS)
Gaudiosi, Germana; Nappi, Rosa; Alessio, Giuliana; Porfido, Sabina; Cella, Federico; Fedi, Maurizio; Florio, Giovanni
2015-04-01
This paper deals with interdisciplinary research carried out to better constrain the active faults and their geometry in the Abruzzo-Molise areas (Central-Southern Apennines), two of the most geodynamically active areas of the Italian Apennines, characterized by intense and widespread seismic activity. An integrated analysis of structural, seismic and gravimetric data of the area (Gaudiosi et al., 2012) has been carried out through a Geographic Information System (GIS), which has provided the capability for storing and managing large amounts of spatial data from different sources. In particular, the analysis has consisted of these main steps: (a) collection and acquisition of aerial photos, numeric cartography, Digital Terrain Model (DTM) data and geophysical data; (b) generation of the vector cartographic database and alphanumerical data; (c) image processing and feature classification; (d) cartographic restitution and multi-layer representation. In detail, three thematic data sets have been generated: "fault", "earthquake" and "gravimetric". The fault data set has been compiled by examining and merging the available structural maps and many recent geological and geophysical papers from the literature. The earthquake data set has been implemented by collecting seismic data from the available historical and instrumental catalogues, together with new precise earthquake locations for better constraining the existence and activity of some outcropping and buried tectonic structures. Seismic data have been standardized in the same format in the GIS and merged into a final catalogue. For the gravimetric data set, the Multiscale Derivative Analysis (MDA) of the gravity field of the area has been performed, relying on the good resolution properties of the Enhanced Horizontal Derivative (EHD) (Fedi et al., 2005).
MDA of gravity data has allowed localization of several trends identifying anomaly sources whose presence was not previously detected. The main results of our integrated analysis show a strong correlation among faults, hypocentral locations of earthquakes and MDA lineaments from gravity data. Furthermore, 2D seismic hypocentral locations together with high-resolution analysis of gravity anomalies have been correlated to estimate the fault-system parameters (strike, dip direction and dip angle) of some structures of the areas, through the application of the DEXP method (Fedi and Pilkington, 2012). References: Fedi M., Cella F., Florio G., Rapolla A.; 2005: Multiscale Derivative Analysis of the gravity and magnetic fields of the Southern Apennines (Italy). In: Finetti I.R. (ed), CROP PROJECT: Deep Seismic Exploration of the Central Mediterranean and Italy, pp. 281-318. Fedi M., Pilkington M.; 2012: Understanding imaging methods for potential field data. Geophysics, 77: G13-G24. Gaudiosi G., Alessio G., Cella F., Fedi M., Florio G., Nappi R.; 2012: Multiparametric data analysis for seismic sources identification in the Campanian area: merging of seismological, structural and gravimetric data. BGTA, Vol. 53, n. 3, pp. 283-298.
NASA Astrophysics Data System (ADS)
Caudron, Corentin; White, Robert S.; Green, Robert G.; Woods, Jennifer; Ágústsdóttir, Thorbjörg; Donaldson, Clare; Greenfield, Tim; Rivalta, Eleonora; Brandsdóttir, Bryndís.
2018-01-01
Magma is transported in brittle rock through dikes and sills. This movement may be accompanied by the release of seismic energy that can be tracked from the Earth's surface. Locating dikes and deciphering their dynamics is therefore of prime importance in understanding and potentially forecasting volcanic eruptions. The Seismic Amplitude Ratio Analysis (SARA) method aims to track melt propagation using the amplitudes recorded across a seismic network without picking the arrival times of individual earthquake phases. This study validates this methodology by comparing SARA locations (filtered between 2 and 16 Hz) with the earthquake locations (same frequency band) recorded during the 2014-2015 Bárðarbunga-Holuhraun dike intrusion and eruption in Iceland. Integrating both approaches also provides the opportunity to investigate the spatiotemporal characteristics of magma migration during the dike intrusion and ensuing eruption. During the intrusion SARA locations correspond remarkably well to the locations of earthquakes. Several exceptions are, however, observed. (1) A low-frequency signal was possibly associated with a subglacial eruption on 23 August. (2) A systematic retreat of the seismicity to the back of each active segment was observed during stalled phases and was associated with a larger spatial extent of the seismic energy source; this behavior may be controlled by the dike's shape and/or by dike inflation. (3) During the eruption SARA locations consistently focused at the eruptive site. (4) Tremor-rich signal close to ice cauldrons occurred on 3 September. This study demonstrates the power of the SARA methodology, provided that robust site amplifications, quality factors and seismic velocities are available.
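A toy version of amplitude-based location in the SARA spirit: under an assumed 1/r geometric-spreading model (site amplification and attenuation ignored), inter-station amplitude ratios alone constrain the source position via a grid search, with no phase picking. The stations, amplitudes and spreading law below are illustrative assumptions, not the SARA implementation.

```python
import math

def locate_by_amplitude_ratio(stations, amplitudes, grid):
    """Grid search: pick the node whose predicted inter-station amplitude
    ratios (pure 1/r spreading) best match the observed ones, in a
    least-squares sense on log ratios. Source strength cancels out."""
    best, best_misfit = None, float("inf")
    for gx, gy in grid:
        r = [max(math.hypot(gx - sx, gy - sy), 1e-6) for sx, sy in stations]
        misfit = 0.0
        for i in range(len(stations)):
            for j in range(i + 1, len(stations)):
                obs = math.log(amplitudes[i] / amplitudes[j])
                pred = math.log(r[j] / r[i])  # A_i/A_j = r_j/r_i for A = S/r
                misfit += (obs - pred) ** 2
        if misfit < best_misfit:
            best, best_misfit = (gx, gy), misfit
    return best

stations = [(0.0, 0.0), (10.0, 0.0), (0.0, 10.0), (10.0, 10.0)]  # km
true_src = (3.0, 4.0)
# Noise-free synthetic amplitudes from the assumed 1/r model:
amps = [1.0 / math.hypot(true_src[0] - sx, true_src[1] - sy)
        for sx, sy in stations]
grid = [(x * 0.5, y * 0.5) for x in range(21) for y in range(21)]
est = locate_by_amplitude_ratio(stations, amps, grid)
```

A real SARA workflow additionally corrects for the site amplifications, quality factors and velocities mentioned above before forming the ratios.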
NASA Astrophysics Data System (ADS)
Mustac, M.; Kim, S.; Tkalcic, H.; Rhie, J.; Chen, Y.; Ford, S. R.; Sebastian, N.
2015-12-01
Conventional approaches to inverse problems suffer from non-linearity and non-uniqueness in estimations of seismic structures and source properties. Estimated results and associated uncertainties are often biased by applied regularizations and additional constraints, which are commonly introduced to solve such problems. Bayesian methods, however, provide statistically meaningful estimations of models and their uncertainties constrained by data information. In addition, hierarchical and trans-dimensional (trans-D) techniques are inherently implemented in the Bayesian framework to account for involved error statistics and model parameterizations, and, in turn, allow more rigorous estimations of the same. Here, we apply Bayesian methods throughout the entire inference process to estimate seismic structures and source properties in Northeast Asia including east China, the Korean peninsula, and the Japanese islands. Ambient noise analysis is first performed to obtain a base three-dimensional (3-D) heterogeneity model using continuous broadband waveforms from more than 300 stations. As for the tomography of surface wave group and phase velocities in the 5-70 s band, we adopt a hierarchical and trans-D Bayesian inversion method using Voronoi partition. The 3-D heterogeneity model is further improved by joint inversions of teleseismic receiver functions and dispersion data using a newly developed high-efficiency Bayesian technique. The obtained model is subsequently used to prepare 3-D structural Green's functions for the source characterization. A hierarchical Bayesian method for point source inversion using regional complete waveform data is applied to selected events from the region. The seismic structure and source characteristics with rigorously estimated uncertainties from the novel Bayesian methods provide enhanced monitoring and discrimination of seismic events in northeast Asia.
NASA Astrophysics Data System (ADS)
Han, S. M.; Hahm, I.
2015-12-01
We evaluated the background noise level of seismic stations in order to collect high-quality observation data and produce accurate seismic information. The background noise level was determined using the PSD (Power Spectral Density) method of McNamara and Buland (2004). This method, which uses long-term data, is influenced not only by the innate electronic noise of the sensor and pulse waves produced during stabilization, but also by missing data, and it is dominated at certain frequencies by irregular signals unrelated to site characteristics. It is hard and inefficient to implement a process that filters out such abnormal signals within an automated system. To solve these problems, we devised a method that extracts, at each period, only the data that are normally distributed within 90 to 99% confidence intervals. The applicability of the method was verified using 62 seismic stations with broadband and short-period sensors operated by the KMA (Korea Meteorological Administration). The evaluation standards were the NHNM (New High Noise Model) and NLNM (New Low Noise Model) published by the USGS (United States Geological Survey). Those models were designed based on the western United States, whereas the Korean Peninsula, surrounded by the ocean on three sides, has a complicated geological structure and a high population density. We therefore re-designed an appropriate model for the Korean Peninsula from the statistically combined results. Its important feature is that the secondary-microseism peak appears at a higher frequency band. Acknowledgements: This research was carried out as a part of "Research for the Meteorological and Earthquake Observation Technology and Its Application" supported by the 2015 National Institute of Meteorological Research (NIMR) in the Korea Meteorological Administration.
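The outlier-trimming idea, keeping only the estimates inside a central confidence interval at each period before summarizing the noise level, can be sketched as follows. This is a schematic stand-in for the PSD pipeline: the dB values are made up, and a simple symmetric trim replaces the paper's normality-based selection.

```python
def robust_noise_level(power_db, keep=0.95):
    """From repeated power estimates (dB) at one period, discard the
    outer (1 - keep) fraction (split between both tails) and return
    the mean of the remaining central values."""
    vals = sorted(power_db)
    n = len(vals)
    cut = int(round(n * (1.0 - keep) / 2.0))
    trimmed = vals[cut:n - cut] if cut > 0 else vals
    return sum(trimmed) / len(trimmed)

# 100 ordinary hourly estimates oscillating around -140 dB ...
import math
samples = [-140.0 + 0.5 * math.sin(i) for i in range(100)]
# ... plus a few disturbed segments: pulses and a dead-channel artifact.
samples += [-90.0, -85.0, -200.0]

raw_mean = sum(samples) / len(samples)
robust = robust_noise_level(samples, keep=0.95)
```

The trimmed estimate stays near the true -140 dB floor while the plain mean is pulled off by the disturbed segments, which is the failure mode the automated selection is designed to avoid.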
DOE Office of Scientific and Technical Information (OSTI.GOV)
Erlangga, Mokhammad Puput
Separation between signal and noise, incoherent or coherent, is important in seismic data processing. Even after processing, coherent noise can remain mixed with the primary signal. Multiple reflections are a kind of coherent noise. In this research, we processed seismic data to attenuate multiple reflections in both synthetic and real seismic data from Mentawai. There are several methods to attenuate multiple reflections; one of them is the Radon filter method, which discriminates between primary and multiple reflections in the τ-p domain based on the moveout difference between them. However, in cases where the moveout difference is too small, the Radon filter is not sufficient to attenuate the multiple reflections, and it also produces artifacts in the gathers. In addition to the Radon filter, we use the Wave Equation Multiple Elimination (WEMR) method to attenuate long-period multiple reflections. The WEMR method attenuates long-period multiples based on wave-equation inversion. From the inversion of the wave equation and the magnitude of the seismic wave amplitude observed at the free surface, we obtain the water-bottom reflectivity, which is used to eliminate the multiple reflections. Because the WEMR method does not depend on the moveout difference, it can be applied to seismic data with a small moveout difference, such as the Mentawai seismic data, where the small moveout difference is caused by the limited far offset of only 705 meters. We compared the multiple-free stacked data after Radon filtering and after WEMR processing. The conclusion is that the WEMR method attenuates long-period multiple reflections better than the Radon filter method on the real (Mentawai) seismic data.
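The τ-p discrimination that the Radon filter relies on can be illustrated with a bare-bones slant stack: an event with linear moveout focuses at its own slowness in the transform domain, where it could then be muted. This sketch (nearest-sample interpolation, a single spike event, a hypothetical 12-trace geometry) is far simpler than a production Radon demultiple.

```python
def slant_stack(gather, offsets, dt, slownesses):
    """Linear tau-p transform by slant stacking:
    m(tau, p) = sum over offsets x of d(tau + p * x, x),
    using nearest-sample interpolation. `gather` is [trace][sample]."""
    nt = len(gather[0])
    out = []
    for p in slownesses:
        trace = []
        for it in range(nt):
            s = 0.0
            for ix, x in enumerate(offsets):
                shifted = it + int(round(p * x / dt))
                if 0 <= shifted < nt:
                    s += gather[ix][shifted]
            trace.append(s)
        out.append(trace)
    return out

dt = 0.004
nt = 300
offsets = [100.0 * i for i in range(12)]  # 0 to 1100 m
# Build a gather with one linear event of slowness 0.4 ms/m from tau = 0.2 s:
p_true = 0.0004  # s/m
gather = [[0.0] * nt for _ in offsets]
for ix, x in enumerate(offsets):
    gather[ix][int(round((0.2 + p_true * x) / dt))] = 1.0

slownesses = [0.0001 * k for k in range(9)]  # 0 to 0.0008 s/m
taup = slant_stack(gather, offsets, dt, slownesses)
# The event focuses (stacks coherently across all 12 traces) at p = p_true:
peak_p_index = max(range(len(slownesses)), key=lambda k: max(taup[k]))
```

In a demultiple workflow the multiples, which arrive with larger slowness than primaries, would be muted in this domain before an inverse transform; when the moveout (slowness) difference is tiny, primaries and multiples overlap here, which is exactly the limitation that motivates WEMR.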
Effect of Different Groundwater Levels on Seismic Dynamic Response and Failure Mode of Sandy Slope
Huang, Shuai; Lv, Yuejun; Peng, Yanju; Zhang, Lifang; Xiu, Liwei
2015-01-01
Heavy seismic damage tends to occur in slopes when groundwater is present. The main objectives of this paper are to determine the dynamic response and failure mode of a sandy slope subjected simultaneously to seismic forces and variable groundwater conditions. This paper applies the finite element method, which is a fast and efficient design tool in modern engineering analysis, to evaluate the dynamic response of the slope subjected simultaneously to seismic forces and variable groundwater conditions. A shaking table test is conducted to analyze the failure mode and verify the accuracy of the finite element results. The research results show that the dynamic response values of the slope follow different variation rules under near-field and far-field earthquakes, and that the damage location and pattern of the slope differ under varying groundwater conditions. The destruction starts at the top of the slope when there is no groundwater, which shows that the slope exhibits an obvious whipping effect under the earthquake. The destruction starts at the toe of the slope when groundwater levels are high. Meanwhile, the top of the slope shows an obvious seismic subsidence phenomenon after the earthquake. Furthermore, the presence of groundwater has a certain damping effect. PMID:26560103
Robust method to detect and locate local earthquakes by means of amplitude measurements.
NASA Astrophysics Data System (ADS)
del Puy Papí Isaba, María; Brückl, Ewald
2016-04-01
In this study we present a robust new method to detect and locate medium and low magnitude local earthquakes. This method is based on an empirical model of the ground motion obtained from amplitude data of earthquakes in the area of interest, which were located using traditional methods. The first step of our method is the computation of maximum resultant ground velocities in sliding time windows covering the whole period of interest. In the second step, these maximum resultant ground velocities are back-projected to every point of a grid covering the whole area of interest while applying the empirical amplitude - distance relations. We refer to these back-projected ground velocities as pseudo-magnitudes. The number of operating seismic stations in the local network equals the number of pseudo-magnitudes at each grid-point. Our method introduces the new idea of selecting the minimum pseudo-magnitude at each grid-point for further analysis instead of searching for a minimum of the L2 or L1 norm. In case no detectable earthquake occurred, the spatial distribution of the minimum pseudo-magnitudes constrains the magnitude of weak earthquakes hidden in the ambient noise. In the case of a detectable local earthquake, the spatial distribution of the minimum pseudo-magnitudes shows a significant maximum at the grid-point nearest to the actual epicenter. The application of our method is restricted to the area confined by the convex hull of the seismic station network. Additionally, one must ensure that there are no dead traces involved in the processing. Compared to methods based on L2 and even L1 norms, our new method is almost wholly insensitive to outliers (data from locally disturbed seismic stations). A further advantage is the fast determination of the epicenter and magnitude of a seismic event located within a seismic network. 
This is possible due to the method of obtaining and storing a back-projected matrix, independent of the registered amplitude, for each seismic station. As a direct consequence, we are able to save computing time in the calculation of the final back-projected maximum resultant amplitude at every grid-point. The capability of the method was demonstrated first using synthetic data. In the next step, the method was applied to data from 43 local earthquakes of low and medium magnitude (1.7 < magnitude < 4.3). These earthquakes were recorded and detected by the seismic network ALPAACT (seismological and geodetic monitoring of Alpine PAnnonian ACtive Tectonics) in the period 2010/06/11 to 2013/09/20. Data provided by the ALPAACT network are used to understand seismic activity in the Mürz Valley - Semmering - Vienna Basin transfer fault system in Austria and what makes it a relatively high earthquake hazard and risk area. The method will substantially support our efforts to involve scholars from polytechnic schools in seismological work within the Sparkling Science project Schools & Quakes.
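The minimum-pseudo-magnitude idea maps naturally to a small grid search. The amplitude-distance relation used here, log10(A) = M - k·log10(r) with k = 1.5, is a generic placeholder for the empirical relations the authors derive from previously located earthquakes; the stations and the event are synthetic.

```python
import math

def min_pseudo_magnitude_grid(stations, peak_amps, grid, k=1.5):
    """For each grid node, back-project every station's peak amplitude to a
    pseudo-magnitude via the assumed relation log10(A) = M - k*log10(r),
    then keep the MINIMUM over stations. Taking the minimum (instead of an
    L2/L1 fit) makes single-station outliers essentially irrelevant."""
    field = []
    for gx, gy in grid:
        pmags = []
        for (sx, sy), a in zip(stations, peak_amps):
            r = max(math.hypot(gx - sx, gy - sy), 0.1)  # km, avoid log(0)
            pmags.append(math.log10(a) + k * math.log10(r))
        field.append(min(pmags))
    return field

stations = [(0.0, 0.0), (20.0, 0.0), (0.0, 20.0), (20.0, 20.0), (10.0, 0.0)]
true_epi, true_mag = (8.0, 12.0), 3.0
# Synthetic peak velocities consistent with the assumed relation:
peak_amps = [10 ** (true_mag - 1.5 * math.log10(math.hypot(true_epi[0] - sx,
                                                           true_epi[1] - sy)))
             for sx, sy in stations]
grid = [(x * 1.0, y * 1.0) for x in range(21) for y in range(21)]
field = min_pseudo_magnitude_grid(stations, peak_amps, grid)
best = grid[max(range(len(field)), key=field.__getitem__)]
```

The field of minimum pseudo-magnitudes peaks at the node nearest the true epicenter, and its peak value recovers the event magnitude; with no event present, the same field bounds the magnitude of any earthquake hidden in the noise, as described above.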
A Numerical and Theoretical Study of Seismic Wave Diffraction in Complex Geologic Structure
1989-04-14
element methods for analyzing linear and nonlinear seismic effects in the surficial geologies relevant to several Air Force missions. The second...exact solution evaluated here indicates that edge-diffracted seismic wave fields calculated by discrete numerical methods probably exhibit significant...study is to demonstrate and validate some discrete numerical methods essential for analyzing linear and nonlinear seismic effects in the surficial
Dominant seismic sources for the cities in South Sumatra
NASA Astrophysics Data System (ADS)
Sunardi, Bambang; Sakya, Andi Eka; Masturyono, Murjaya, Jaya; Rohadi, Supriyanto; Sulastri, Putra, Ade Surya
2017-07-01
The subduction zone west of Sumatra and the Sumatran fault zone are active seismic sources. Seismotectonically, South Sumatra can be affected by earthquakes triggered by these sources. This paper discusses the contribution of each seismic source to the earthquake hazard for the cities of Palembang, Prabumulih, Banyuasin, Ogan Ilir, Ogan Komering Ilir, South Oku, Musi Rawas and Empat Lawang. These hazards are presented in the form of seismic hazard curves. The study was conducted using Probabilistic Seismic Hazard Analysis (PSHA) for a 2% probability of exceedance in 50 years. The seismic sources used in the analysis included the megathrust zone M2 of Sumatra and South Sumatra, background seismic sources, and shallow crustal seismic sources consisting of the Ketaun, Musi, Manna and Kumering faults. The results showed that for cities relatively far from the seismic sources, the subduction/megathrust source at depths ≤ 50 km contributed greatly to the seismic hazard, while the other areas were dominated by deep background seismic sources at depths of more than 100 km.
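The "2% probability of exceedance in 50 years" criterion maps to an annual exceedance rate through the usual Poisson assumption. A small worked example of that standard PSHA arithmetic (generic convention, not code from the study):

```python
import math

def exceedance_prob(rate_per_year, years):
    """Poisson probability that a ground-motion level is exceeded at least
    once in `years`, given its annual exceedance rate."""
    return 1.0 - math.exp(-rate_per_year * years)

def return_period(prob, years):
    """Return period corresponding to probability `prob` of at least one
    exceedance within `years`."""
    return -years / math.log(1.0 - prob)

# 2% in 50 years corresponds to a return period of about 2475 years,
# i.e. an annual exceedance rate of roughly 1/2475.
T = return_period(0.02, 50)
```

Hazard curves such as those in the study plot ground-motion level against this annual exceedance rate; the design level is read off where the curve crosses 1/T.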
Is 3D true non-linear traveltime tomography reasonable?
NASA Astrophysics Data System (ADS)
Herrero, A.; Virieux, J.
2003-04-01
Data sets requiring 3D analysis tools, whether from seismic exploration (both onshore and offshore experiments) or natural seismicity (microseismicity surveys or post-event measurements), are more and more numerous. Classical linearized tomographies, as well as earthquake localisation codes, need an accurate 3D background velocity model. However, if the medium is complex and a priori information is not available, a 1D analysis cannot provide an adequate background velocity image. Moreover, the design of acquisition layouts is often intrinsically 3D and makes even 2D approaches difficult, especially in natural seismicity cases. Thus, the solution relies on a 3D true non-linear approach, which allows one to explore the model space and identify an optimal velocity image. The problem then becomes practical, and its feasibility depends on the available computing resources (memory and time). In this presentation, we show that tackling a 3D traveltime tomography problem with an extensive non-linear approach, combining fast traveltime estimators based on level set methods with optimisation techniques such as a multiscale strategy, is feasible. Moreover, because handling inhomogeneous inversion parameters is easier in a non-linear approach, we describe how to perform a joint non-linear inversion for the seismic velocities and the source locations.
Comparison of Shear-wave Profiles for a Compacted Fill in a Geotechnical Test Pit
NASA Astrophysics Data System (ADS)
Sylvain, M. B.; Pando, M. A.; Whelan, M.; Bents, D.; Park, C.; Ogunro, V.
2014-12-01
This paper investigates the use of common methods for geological seismic site characterization, including: i) multichannel analysis of surface waves (MASW), ii) crosshole seismic surveys, and iii) seismic cone penetrometer tests. The in-situ tests were performed in a geotechnical test pit located at the University of North Carolina at Charlotte High Bay Laboratory. The test pit has dimensions of 12 feet wide by 12 feet long by 10 feet deep. The pit was filled with a silty sand (SW-SM) soil, which was compacted in lifts using a vibratory plate compactor. The shear wave velocity values from the three techniques are compared in terms of magnitude versus depth as well as spatially. The comparison was carried out before and after inducing soil disturbance at controlled locations to evaluate which methods were better suited to capture the induced disturbance.
Seismic noise attenuation using an online subspace tracking algorithm
NASA Astrophysics Data System (ADS)
Zhou, Yatong; Li, Shuhua; Zhang, Dong; Chen, Yangkang
2018-02-01
We propose a new low-rank-based noise attenuation method using an efficient algorithm for tracking subspaces from highly corrupted seismic observations. The subspace tracking algorithm requires only basic linear-algebraic manipulations and is derived by analysing incremental gradient descent on the Grassmannian manifold of subspaces. When the multidimensional seismic data are mapped to a low-rank space, the subspace tracking algorithm can be applied directly to the input low-rank matrix to estimate the useful signals. Because the subspace tracking algorithm is an online algorithm, it is more robust to random noise than the traditional truncated singular value decomposition (TSVD) based subspace tracking algorithm. Compared with state-of-the-art algorithms, the proposed denoising method obtains better performance; more specifically, it outperforms the TSVD-based singular spectrum analysis method by leaving less residual noise while halving the computational cost. Several synthetic and field data examples with different levels of complexity demonstrate the effectiveness and robustness of the presented algorithm in rejecting different types of noise, including random noise, spiky noise, blending noise, and coherent noise.
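The abstract does not reproduce the algorithm's equations, but incremental gradient descent on the Grassmannian is the core of GROUSE-style subspace tracking. A simplified full-observation sketch (illustrative step size, no missing-data handling, and not necessarily the exact variant used in the paper):

```python
import numpy as np

def grouse_step(U, v, step=0.05):
    """One GROUSE-style incremental gradient step on the Grassmannian:
    rotate the orthonormal basis U (n x r) toward the new observation v.
    Simplified full-observation sketch; the step size is illustrative."""
    w = U.T @ v                      # coefficients of v in the subspace
    p = U @ w                        # projection of v onto the subspace
    r = v - p                        # residual, orthogonal to the subspace
    pn, rn, wn = np.linalg.norm(p), np.linalg.norm(r), np.linalg.norm(w)
    if rn < 1e-12 or wn < 1e-12:
        return U                     # nothing to learn from this sample
    t = step * rn * pn               # rotation angle along the geodesic
    update = (np.cos(t) - 1.0) * p / pn + np.sin(t) * r / rn
    return U + update[:, None] @ (w / wn)[None, :]

# Track a rank-1 signal subspace from a stream of noisy observations.
rng = np.random.default_rng(1)
n = 64
signal = rng.standard_normal(n)
signal /= np.linalg.norm(signal)
U = np.linalg.qr(rng.standard_normal((n, 1)))[0]
for _ in range(500):
    v = 5.0 * rng.standard_normal() * signal + 0.01 * rng.standard_normal(n)
    U = grouse_step(U, v)
align = abs(float(U[:, 0] @ signal))   # close to 1 once recovered
```

The online character is visible here: each observation updates the basis with O(nr) work, with no SVD of the accumulated data matrix, which is what makes the approach cheaper than TSVD-based tracking.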
NASA Astrophysics Data System (ADS)
Liu, Chuncheng; Wang, Chongyang; Mao, Long; Zha, Chuanming
2016-11-01
Substation high-voltage electrical equipment such as instrument transformers, circuit breakers and disconnecting switches plays a key role in maintaining the normal operation of the power system. During an earthquake, the porcelain components of substation equipment are the most easily damaged, causing great economic losses. In this paper, a three-dimensional finite element model of typical high-voltage electrical equipment is established by numerical analysis to study the seismic response of a typical SF6 circuit breaker. At the same time, the vibration control effect of installing a ring-shaped tuned mass damper (TMD) is analysed by varying the damper's damping coefficient and mass block. The results of the study provide a valuable reference for guiding the seismic design of high-voltage electrical equipment.
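For context on how a TMD's damping coefficient and mass are related, the classical Den Hartog tuning rule gives optimal frequency and damping ratios as functions of the mass ratio. This is a textbook formula shown for illustration, not necessarily the design procedure used in the study:

```python
import math

def den_hartog_tmd(mass_ratio):
    """Classical Den Hartog optimal tuning of a tuned mass damper attached
    to an undamped main structure, as functions of the mass ratio mu
    (TMD mass / modal mass). Returns (frequency ratio, damping ratio)."""
    mu = mass_ratio
    f_opt = 1.0 / (1.0 + mu)                          # f_TMD / f_structure
    zeta_opt = math.sqrt(3.0 * mu / (8.0 * (1.0 + mu) ** 3))
    return f_opt, zeta_opt

# A TMD carrying 5% of the modal mass:
f, z = den_hartog_tmd(0.05)
```

Parametric studies like the one described, varying damper mass and damping, typically bracket such analytical optima to check how sensitive the response reduction is to detuning.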
NASA Astrophysics Data System (ADS)
Panzera, Francesco; Mignan, Arnaud; Vogfjörð, Kristin S.
2017-07-01
In 1991, a digital seismic monitoring network with automatic operation was installed in Iceland. After 20 years of operation, we explore for the first time its nationwide performance by analysing the spatiotemporal variations of the completeness magnitude. We use the Bayesian magnitude of completeness (BMC) method, which combines local completeness magnitude observations with prior information based on the density of seismic stations. Additionally, we test the impact of earthquake location uncertainties on the BMC results by filtering the catalogue using a multivariate analysis that identifies outliers in the hypocentre error distribution. We find that the entire north-to-south active rift zone shows a relatively low magnitude of completeness Mc in the range 0.5-1.0, highlighting the ability of the Icelandic network to detect small earthquakes. This work also demonstrates the influence of earthquake location uncertainties on the spatiotemporal magnitude of completeness analysis.
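The BMC prior itself is not reproduced here, but the local completeness observations it builds on are commonly computed with the maximum-curvature method. A hedged sketch on a synthetic catalogue; the +0.2 correction is a common heuristic, not a value from this study:

```python
import numpy as np

def mc_maxc(mags, binw=0.1, correction=0.2):
    """Completeness magnitude via the maximum-curvature method: the mode of
    the binned frequency-magnitude distribution plus a common empirical
    correction. (BMC combines such local estimates with a station-density
    prior, which is not reproduced in this sketch.)"""
    bins = np.arange(mags.min(), mags.max() + binw, binw)
    counts, edges = np.histogram(mags, bins=bins)
    return edges[np.argmax(counts)] + correction

# Synthetic Gutenberg-Richter catalogue (b = 1) thinned by a detection
# probability that drops off below magnitude ~1.0.
rng = np.random.default_rng(2)
m_all = rng.exponential(scale=1.0 / np.log(10.0), size=100_000)
p_detect = 1.0 / (1.0 + np.exp(-(m_all - 1.0) / 0.1))
m_obs = m_all[rng.random(m_all.size) < p_detect]
mc = mc_maxc(m_obs)
```

The recovered Mc sits near the magnitude at which the detection probability rolls off, which is the quantity being mapped spatially in the study.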
Schenberg microwave cabling seismic isolation.
NASA Astrophysics Data System (ADS)
Bortoli, F. S.; Frajuca, C.; Aguiar, O. D.
2018-02-01
SCHENBERG is a resonant-mass gravitational wave detector with a resonance frequency of about 3.2 kHz. Its spherical antenna, weighing 1.15 metric tons, is connected to the external world by a system that must attenuate seismic noise. When a gravitational wave passes, the antenna vibrates and its motion is monitored by transducers. These parametric transducers use microwaves carried by coaxial cables that are also connected to the external world and therefore also carry seismic noise. In this analysis the system was modeled using the finite element method. This work shows that the addition of masses along these cables can decrease this noise, so that it falls below the thermal noise of the detector when operating at 50 mK.
Back to the Future: Long-Term Seismic Archives Revisited
NASA Astrophysics Data System (ADS)
Waldhauser, F.; Schaff, D. P.
2007-12-01
Archives of digital seismic data recorded by seismometer networks around the world have grown tremendously over the last several decades helped by the deployment of seismic stations and their continued operation within the framework of monitoring seismic activity. These archives typically consist of waveforms of seismic events and associated parametric data such as phase arrival time picks and the location of hypocenters. Catalogs of earthquake locations are fundamental data in seismology, and even in the Earth sciences in general. Yet, these locations have notoriously low spatial resolution because of errors in both the picks and the models commonly used to locate events one at a time. This limits their potential to address fundamental questions concerning the physics of earthquakes, the structure and composition of the Earth's interior, and the seismic hazards associated with active faults. We report on the comprehensive use of modern waveform cross-correlation based methodologies for high-resolution earthquake location - as applied to regional and global long-term seismic databases. By simultaneous re-analysis of two decades of the digital seismic archive of Northern California, reducing pick errors via cross-correlation and model errors via double-differencing, we achieve up to three orders of magnitude resolution improvement over existing hypocenter locations. The relocated events image networks of discrete faults at seismogenic depths across various tectonic settings that until now have been hidden in location uncertainties. Similar location improvements are obtained for earthquakes recorded at global networks by re-processing 40 years of parametric data from the ISC and corresponding waveforms archived at IRIS. Since our methods are scaleable and run on inexpensive Beowulf clusters, periodic re-analysis of entire archives may thus become a routine procedure to continuously improve resolution in existing catalogs.
We demonstrate the role of seismic archives in obtaining the precise location of new events in real-time. Such information has considerable social and economic impact in the evaluation and mitigation of seismic hazards, for example, and highlights the need for consistent long-term seismic monitoring and archiving of records.
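The pick-error reduction mentioned above rests on measuring differential arrival times between similar waveforms from the peak of their cross-correlation. A minimal integer-lag sketch (real workflows add sub-sample refinement and quality weighting):

```python
import numpy as np

def cc_delay(tr1, tr2, dt):
    """Differential time between two similar waveforms from the peak of
    their cross-correlation: positive when tr1 arrives later than tr2.
    Integer-lag version of the measurement that reduces pick error in
    double-difference relocation."""
    cc = np.correlate(tr1, tr2, mode="full")
    lag = int(np.argmax(cc)) - (len(tr2) - 1)
    return lag * dt

# Two identical 10 Hz wavelets sampled at 100 Hz, offset by 0.05 s.
dt = 0.01
t = np.arange(0.0, 2.0, dt)

def wavelet(t0):
    return np.exp(-((t - t0) / 0.05) ** 2) * np.sin(2 * np.pi * 10 * (t - t0))

delay = cc_delay(wavelet(1.05), wavelet(1.00), dt)
```

Because correlated event pairs share most of the path effects, such relative times can be measured to a small fraction of the dominant period, which is the source of the orders-of-magnitude location improvement described above.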
Seismic clusters analysis in Northeastern Italy by the nearest-neighbor approach
NASA Astrophysics Data System (ADS)
Peresan, Antonella; Gentili, Stefania
2018-01-01
The main features of earthquake clusters in Northeastern Italy are explored, with the aim of gaining new insights into local-scale patterns of seismicity in the area. The study is based on a systematic analysis of robustly and uniformly detected seismic clusters, which are identified by a statistical method based on nearest-neighbor distances of events in the space-time-energy domain. The method permits us to highlight and investigate the internal structure of earthquake sequences, and to differentiate the spatial properties of seismicity according to the different topological features of the cluster structure. To analyze the seismicity of Northeastern Italy, we use information from local OGS bulletins, compiled at the National Institute of Oceanography and Experimental Geophysics since 1977. A preliminary reappraisal of the earthquake bulletins is carried out and the area of sufficient completeness is outlined. Various techniques are considered to estimate the scaling parameters that characterize earthquake occurrence in the region, namely the b-value and the fractal dimension of the epicenter distribution, required for the application of the nearest-neighbor technique. Specifically, average robust estimates of the parameters of the Unified Scaling Law for Earthquakes (USLE) are assessed for the whole outlined region and are used to compute the nearest-neighbor distances. Cluster identification by the nearest-neighbor method turns out to be quite reliable and robust with respect to the minimum magnitude cutoff of the input catalog; the identified clusters are well consistent with those obtained from manual aftershock identification of selected sequences. We demonstrate that the earthquake clusters have distinct preferred geographic locations, and we identify two areas that differ substantially in the examined clustering properties.
Specifically, burst-like sequences are associated with the north-western part and swarm-like sequences with the south-eastern part of the study region. The territorial heterogeneity of earthquake clustering is in good agreement with the spatial variability of the scaling parameters identified by the USLE. In particular, the fractal dimension is higher to the west (about 1.2-1.4), suggesting a spatially more distributed seismicity, compared to the eastern part of the investigated territory, where the fractal dimension is very low (about 0.8-1.0).
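The nearest-neighbor distance in the space-time-energy domain has a compact form in the Zaliapin-style formulation that this class of methods uses. A sketch with illustrative parameter values; the study estimates the b-value and fractal dimension regionally through the USLE rather than using the defaults below:

```python
import numpy as np

def nn_distance(t_parent, t_child, r_km, m_parent, b=1.0, df=1.3):
    """Nearest-neighbor proximity in the space-time-energy domain,
    eta = dt * r^df * 10^(-b * m_parent), following the rescaled-distance
    idea behind the clustering method (b and df here are illustrative)."""
    dt = t_child - t_parent
    if dt <= 0:
        return np.inf                 # only earlier events can be parents
    return dt * (r_km ** df) * 10.0 ** (-b * m_parent)

# An aftershock close in space and time to a M5 parent scores a much
# smaller (stronger) proximity than an unrelated distant, later event.
eta_aftershock = nn_distance(0.0, 0.01, 2.0, 5.0)    # 0.01 yr later, 2 km
eta_background = nn_distance(0.0, 0.50, 100.0, 5.0)  # 0.5 yr later, 100 km
```

Events whose minimum eta over all earlier events falls below a threshold are linked to their parent, and the resulting trees are the clusters whose burst-like versus swarm-like topology is analyzed above.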
NASA Astrophysics Data System (ADS)
Mangeney, A.; Kuehnert, J.; Capdeville, Y.; Durand, V.; Stutzmann, E.; Kone, E. H.; Sethi, S.
2017-12-01
During their flow along the topography, landslides generate seismic waves in a wide frequency range. These so-called landquakes can be recorded at very large distances (a few hundred km for large landslides). The recorded signals depend on the landslide seismic source and on the seismic wave propagation. If the wave propagation is well understood, the seismic signals can be inverted for the seismic source and thus used to obtain information on the landslide properties and dynamics. Analysis and modeling of long-period seismic signals (10-150 s) have helped in this way to discriminate between different landslide scenarios and to constrain rheological parameters (e.g. Favreau et al., 2010). This was possible because topography only weakly affects wave propagation at these long periods and the landslide seismic source can be approximated as a point source. In the near field and at higher frequencies (> 1 Hz), the spatial extent of the source has to be taken into account and the influence of the topography on the recorded seismic signal should be quantified in order to extract information on the landslide properties and dynamics. The characteristic signature of distributed sources and varying topographies is studied as a function of frequency and recording distance. The time-dependent spatial distribution of the forces applied to the ground by the landslide is obtained using granular flow numerical modeling on 3D topography. The generated seismic waves are simulated using the spectral element method. The simulated seismic signal is compared to observed seismic data from rockfalls at the Dolomieu Crater of Piton de la Fournaise (La Réunion). Favreau, P., Mangeney, A., Lucas, A., Crosta, G., and Bouchut, F. (2010). Numerical modeling of landquakes. Geophysical Research Letters, 37(15):1-5.
NASA Astrophysics Data System (ADS)
Krkošková, Katarína; Papán, Daniel; Papánová, Zuzana
2017-10-01
Technical seismicity negatively affects the environment, buildings and structures. Technical seismicity refers to seismic shaking of unnatural origin, caused by force impulses and random processes. In the Slovak Republic, the influence of vibration on buildings is evaluated according to Eurocode 8, while the Slovak Technical Standard STN 73 0036 covers technical seismicity; this standard also places bridges in the group of structures significant with respect to technical seismicity (group "U"). Because the standard treats this issue only briefly, case-study analysis by FEM simulation and comparison is necessary. In this article, dynamic parameters determined by experimental measurement and by numerical methods are compared for two real bridges. The first bridge (D201 - 00), an eleven-span concrete road bridge spanning a railway, is situated in the city of Žilina on road I/11 leading to the city of Čadca. The second bridge (M5973 Brodno), a concrete three-span box-girder road bridge, is situated in a part of Žilina on road I/11. The computational part includes 3D models of both bridges. The first bridge (D201 - 00) was modelled in the IDA Nexis software as a slab-wall model; the model outputs are natural frequencies and natural vibration modes. The second bridge (M5973 Brodno) was modelled in the VisualFEA software; the technical seismicity corresponds to the force impulse applied to this model, and the model outputs are vibration displacements, velocities and accelerations. The aim of the experiments was to measure the vibration acceleration time records of the bridges, which required systematic placement of the accelerometers.
For the first bridge (D201 - 00), the vibration acceleration time record during train crossings under the bridge is of interest; for the second bridge (M5973 Brodno), the record is used for deducing the force impulse under the bridge. The analysis was done in the Sigview software. For the first bridge (D201 - 00), the analysis output was the power spectral density values and their associated frequencies; these frequencies were compared with the natural frequencies from the computational model, thereby determining the influence of technical seismicity on the bridge's natural frequencies. For the second bridge (M5973 Brodno), the recorded vibration velocity time history displayed in Sigview was compared with the final vibration velocity time history from the computational model, and the two were found to agree.
NASA Astrophysics Data System (ADS)
Naito, K.; Park, J.
2012-12-01
The Nankai Trough off southwest Japan is one of the best subduction zones in which to study megathrust earthquake mechanisms. Huge earthquakes have recurred there in cycles of 100-150 years, and the next occurrence has become one of the most serious issues in Japan. Therefore, detailed descriptions of the geological structure are urgently needed. The IODP (Integrated Ocean Drilling Program) has investigated this area under the NanTroSEIZE science plan: seismic reflection, core sampling and borehole logging surveys have been executed during the NanTroSEIZE expeditions. Core-log-seismic data integration (CLSI) is useful for understanding the Nankai seismogenic zone, and we use the seismic inversion method to perform it. Seismic inversion (acoustic impedance inversion) is a method to estimate rock physical properties using seismic reflection and logging data: an acoustic impedance volume is inverted from the seismic data together with density and P-wave velocity logs from several boreholes. We use high-resolution 3D multi-channel seismic (MCS) reflection data obtained during the KR06-02 cruise in 2006, and core sample properties measured during IODP Expeditions 322 and 333. P-wave velocities missing for some core samples are interpolated using the relationship between acoustic impedance and P-wave velocity. We used the Hampson-Russell software for the seismic inversion. A 3D porosity model is derived from the 3D acoustic impedance model to characterize the rock physical properties of the incoming sedimentary sequence in the Nankai Trough off the Kumano Basin. The result of our inversion analysis clearly shows the heterogeneity of the sediments: relatively high-porosity sediments in the shallow layer of the Kashinosaki Knoll, and many physical anomaly bands distributed in the volcanic and turbidite sediment layers around the 3D MCS survey area.
In this talk, we will show 3D MCS, acoustic impedance, and porosity data for the incoming sedimentary sequence and discuss its possible implications for the Nankai seismogenic behavior.
Finite-Difference Numerical Simulation of Seismic Gradiometry
NASA Astrophysics Data System (ADS)
Aldridge, D. F.; Symons, N. P.; Haney, M. M.
2006-12-01
We use the phrase seismic gradiometry to refer to the developing research area involving measurement, modeling, analysis, and interpretation of spatial derivatives (or differences) of a seismic wavefield. In analogy with gradiometric methods used in gravity and magnetic exploration, seismic gradiometry offers the potential for enhancing resolution, and revealing new (or hitherto obscure) information about the subsurface. For example, measurement of pressure and rotation enables the decomposition of recorded seismic data into compressional (P) and shear (S) components. Additionally, a complete observation of the total seismic wavefield at a single receiver (including both rectilinear and rotational motions) offers the possibility of inferring the type, speed, and direction of an incident seismic wave. Spatially extended receiver arrays, conventionally used for such directional and phase speed determinations, may be dispensed with. Seismic wave propagation algorithms based on the explicit, time-domain, finite-difference (FD) numerical method are well-suited for investigating gradiometric effects. We have implemented in our acoustic, elastic, and poroelastic algorithms a point receiver that records the 9 components of the particle velocity gradient tensor. Pressure and particle rotation are obtained by forming particular linear combinations of these tensor components, and integrating with respect to time. All algorithms entail 3D O(2,4) FD solutions of coupled, first-order systems of partial differential equations on uniformly-spaced staggered spatial and temporal grids. Numerical tests with a 1D model composed of homogeneous and isotropic elastic layers show isolation of P, SV, and SH phases recorded in a multiple borehole configuration, even in the case of interfering events.
Synthetic traces recorded by geophones and rotation receivers in a shallow crosswell geometry with randomly heterogeneous poroelastic models also illustrate clear P (fast and slow) and S separation. Finally, numerical tests of the "point seismic array" concept are oriented toward understanding its potential and limitations. Sandia National Laboratories is a multiprogram science and engineering facility operated by Sandia Corporation, a Lockheed-Martin company, for the United States Department of Energy under contract DE- AC04-94AL85000.
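The linear combinations of velocity-gradient tensor components mentioned above are, in essence, the curl (rotation) and trace (dilatation, proportional to pressure rate in an acoustic medium). A minimal sketch of that decomposition for a single tensor sample (the study's receivers record this tensor at every time step and integrate in time):

```python
import numpy as np

def rotation_and_dilatation(G):
    """Given the 3x3 particle-velocity gradient tensor G[i, j] = dv_i/dx_j,
    return the rotation rate (half the curl of the velocity field) and the
    dilatation rate (the divergence, i.e. the trace of G)."""
    curl = np.array([G[2, 1] - G[1, 2],
                     G[0, 2] - G[2, 0],
                     G[1, 0] - G[0, 1]])
    rotation_rate = 0.5 * curl
    dilatation_rate = np.trace(G)
    return rotation_rate, dilatation_rate

# A pure rigid rotation about the z-axis: nonzero rotation, zero dilatation,
# which is how S energy is separated from P energy.
omega = 2.0
G = np.array([[0.0, -omega, 0.0],
              [omega, 0.0, 0.0],
              [0.0, 0.0, 0.0]])
rot, dil = rotation_and_dilatation(G)
```

P waves carry divergence but no curl, and S waves the reverse, which is why these two scalars (plus the rotation vector's direction) suffice to separate the wavefield at a single point.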
Seismic joint analysis for non-destructive testing of asphalt and concrete slabs
Ryden, N.; Park, C.B.
2005-01-01
A seismic approach is used to estimate the thickness and elastic stiffness constants of asphalt or concrete slabs. The overall concept of the approach utilizes the robustness of the multichannel seismic method. A multichannel-equivalent data set is compiled from multiple time series recorded from multiple hammer impacts at progressively different offsets from a fixed receiver. This multichannel simulation with one receiver (MSOR) replaces the true multichannel recording in a cost-effective and convenient manner. A recorded data set is first processed to evaluate the shear wave velocity through a wave field transformation, normally used in the multichannel analysis of surface waves (MASW) method, followed by a Lamb-wave inversion. Then, the same data set is used to evaluate compression wave velocity from a combined processing of first-arrival picking and a linear regression. Finally, the amplitude spectra of the time series are used to evaluate the thickness by following the concepts utilized in the Impact Echo (IE) method. Due to the powerful signal extraction capabilities ensured by the multichannel processing schemes used, the entire procedure for all three evaluations can be fully automated and results can be obtained directly in the field. A field data set is used to demonstrate the proposed approach.
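The compression-wave step, regressing first-arrival picks against offset so that the slope gives the slowness, can be sketched directly. A synthetic illustration with assumed values (3500 m/s slab velocity and the pick jitter are placeholders, not numbers from the paper):

```python
import numpy as np

def p_velocity_from_picks(offsets_m, picks_s):
    """Estimate compression-wave velocity from first-arrival picks by
    linear regression of traveltime on offset; the fitted slope is the
    slowness 1/Vp, so Vp is its reciprocal."""
    slope, intercept = np.polyfit(offsets_m, picks_s, 1)
    return 1.0 / slope

# Synthetic direct P arrivals through a slab at 3500 m/s, with small
# random picking jitter (both values are illustrative assumptions).
rng = np.random.default_rng(3)
offsets = np.arange(0.2, 2.2, 0.2)            # hammer-impact offsets, m
picks = offsets / 3500.0 + rng.normal(0.0, 2e-6, offsets.size)
vp = p_velocity_from_picks(offsets, picks)
```

Using many offsets and a regression, rather than a single pick pair, is what makes this step robust enough to automate in the field, in the same spirit as the multichannel processing used for the surface-wave step.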
Oil Sands Characteristics and Time-Lapse and P-SV Seismic Steam Monitoring, Athabasca, Canada
NASA Astrophysics Data System (ADS)
Takahashi, A.; Nakayama, T.; Kashihara, K.; Skinner, L.; Kato, A.
2008-12-01
A vast amount of oil sands exists in the Athabasca area, Alberta, Canada. These oil sands consist of bitumen (extra-heavy oil) and unconsolidated sand distributed from the surface to a depth of 750 meters. Including conventional crude oil, the total proved remaining oil reserves of Canada rank second in the world after Saudi Arabia. For the production of bitumen from reservoirs 200 to 500 meters deep, the Steam Assisted Gravity Drainage (SAGD) method (steam-injection EOR) has been adopted, as bitumen is not mobile at original reservoir temperatures. It is essential to understand the detailed reservoir distribution and the extent of steam chamber development in order to optimize field development. Oil sands reservoir characterization is conducted using 3D seismic data acquired in February 2002. Conducting acoustic impedance inversion to improve resolution, followed by multi-attribute analysis integrating seismic data with well data, facilitates an understanding of the detailed reservoir distribution. These analyses enable the basement shale to be imaged and allow identification, to a certain degree, of thin shale within the reservoir. Top and bottom depths of the reservoir are estimated to within 2.0 meters near the existing wells, even in such a complex channel-sand environment characterized by abrupt lateral sedimentary facies changes. In March 2006, monitoring 3D seismic data were acquired to delineate steam-affected areas. The 2002 baseline data are used as a reference and the 2006 monitoring data are calibrated to them; apparent differences between the two 3D seismic data sets, other than production-related response changes, are removed during the calibration process. P-wave and S-wave velocities of oil sands core samples are also measured at various pressures and temperatures, and the laboratory measurement results are combined to construct a rock physics model used to predict velocity changes induced by steam injection.
The differences in seismic response between the time-lapse seismic volumes can be quantitatively explained by the P-wave velocity decrease of the oil sands layers due to steam injection. In addition, the data suggest that a larger area is influenced by pressure than by temperature. We calculate several seismic attributes such as RMS values of the amplitude difference, maximum cross-correlations, and interval velocity differences. These attributes are integrated using self-organizing maps (SOM) and K-means methods. By this analysis, we are able to distinguish areas of steam chamber growth from transitional and non-affected areas. In addition, 3D P-SV converted-wave processing and analysis are applied to the second 3D data set (recorded with three-component digital sensors). Low Vp/Vs values in the P-SV volume show areas of steam chamber development, and high Vp/Vs values indicate transitional zones. Our analysis of both the time-lapse 3D seismic and 3D P-SV data, along with the rock physics model, can be used to monitor qualitatively and quantitatively the rock property changes of the inter-well reservoir sands in the field.
Worldwide seismicity in view of non-extensive statistical physics
NASA Astrophysics Data System (ADS)
Chochlaki, Kaliopi; Vallianatos, Filippos; Michas, George
2014-05-01
In the present work we study the distribution of worldwide shallow seismic events that occurred from 1981 to 2011, extracted from the CMT catalog, with magnitude equal to or greater than Mw 5.0. Our analysis is based on the subdivision of the Earth's surface into seismic zones that are homogeneous with regard to seismic activity and the orientation of the predominant stress field. To this end we use the Flinn-Engdahl regionalization (Flinn and Engdahl, 1965), which consists of 50 seismic zones, as modified by Lombardi and Marzocchi (2007), who grouped the 50 FE zones into larger tectonically homogeneous ones using the cumulative moment tensor method. Lombardi and Marzocchi (2007) thereby reduced the initial 50 regions to 39, to which we apply the non-extensive statistical physics approach. Non-extensive statistical physics seems to be the most adequate and promising methodological tool for analyzing complex systems such as the Earth's interior. In this frame, we introduce the q-exponential formulation as the expression of the probability distribution function that maximizes the Sq entropy as defined by Tsallis (1988). We analyze the interevent time distribution between successive earthquakes with a q-exponential function in each of the seismic zones defined by Lombardi and Marzocchi (2007), confirming the importance of long-range interactions and the existence of a power-law approximation in the distribution of the interevent times. Our findings support the ideas of universality within the Tsallis approach to describing the Earth's seismicity and present strong evidence of temporal clustering of seismic activity in each of the tectonic zones analyzed. Our analysis as applied to worldwide seismicity with magnitude equal to or greater than Mw 5.5 and 6.0 is also presented, and the dependence of our results on the cut-off magnitude is discussed.
This research has been funded by the European Union (European Social Fund) and Greek national resources under the framework of the "THALES Program: SEISMO FEAR HELLARC" project of the "Education & Lifelong Learning" Operational Programme.
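The q-exponential used to fit the interevent times has a simple closed form. A minimal sketch of the function itself (the q and tau values below are illustrative, not the fitted zone parameters from the study):

```python
import numpy as np

def q_exponential(x, q, tau):
    """Tsallis q-exponential, exp_q(-x/tau) = [1 - (1-q) x/tau]^(1/(1-q)),
    which reduces to the ordinary exponential as q -> 1 and develops a
    power-law tail for q > 1."""
    if abs(q - 1.0) < 1e-9:
        return np.exp(-x / tau)
    base = 1.0 - (1.0 - q) * x / tau
    return np.where(base > 0.0, base, 0.0) ** (1.0 / (1.0 - q))

# For q > 1 the tail decays as a power law, far more slowly than exp(-x),
# which is the signature of temporal clustering in interevent times.
x = np.array([0.0, 1.0, 10.0, 100.0])
heavy = q_exponential(x, q=1.3, tau=1.0)
```

Fitting q > 1 to the observed interevent-time distributions is what signals long-range temporal correlations: a Poissonian (memoryless) catalog would instead be well described by the q → 1 limit.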
Seismic surveys test on Innerhytta Pingo, Adventdalen, Svalbard Islands
NASA Astrophysics Data System (ADS)
Boaga, Jacopo; Rossi, Giuliana; Petronio, Lorenzo; Accaino, Flavio; Romeo, Roberto; Wheeler, Walter
2015-04-01
We present the preliminary results of an experimental full-wave seismic survey test conducted on the Innerhytta Pingo, located in Adventdalen, Svalbard Islands, Norway. Several seismic survey methods were adopted in order to study the pingo's inner structure, from classical reflection/refraction arrays to seismic tomography and surface wave analysis. The aim of the project IMPERVIA, funded by the Italian PNRA, was to evaluate the characteristics of the permafrost beneath this open-system pingo by means of seismic investigation, establishing best practice in terms of logistical deployment. The survey was done in April-May 2014: we collected 3 seismic lines with different receiver spacings (from 2.5 m to 5 m), for a total length of more than 1 km. We collected data with different vertical geophones (with natural frequencies of 4.5 Hz and 14 Hz) as well as with a seismic snow-streamer. We tested different seismic sources (hammer, seismic gun, firecrackers and heavy weight drop), and we carefully verified geophone coupling in order to evaluate the different responses. Under these peculiar conditions we found that firecrackers give the best signal-to-noise ratio for refraction/reflection surveys. To ensure the best geophone coupling with the frozen soil, we dug snow pits to remove the snow-cover effect. On the other hand, for the surface wave methods, the very high velocity of the permafrost strongly limits the generation of long wavelengths, with the explosive sources as with the common sledgehammer. The only source capable of generating low frequencies was a heavy drop-weight system, which allowed analysis of surface wave dispersion below 10 Hz. Preliminary data analysis evidences marked velocity inversions and strong velocity contrasts at depth. The combined use of surface and body waves highlights the presence of a heterogeneous soil deposit level beneath a thick layer of permafrost.
This is the level that hosts the water circulation from depth controlling the Pingo structure evolution.
Lane, John W.; Ivanov, Julian M.; Day-Lewis, Frederick D.; Clemens, Drew; Patev, Robert; Miller, Richard D.
2008-01-01
The utility of the multi-channel analysis of surface waves (MASW) seismic method for non-invasive assessment of earthen levees was evaluated for a section of the Citrus Lakefront Levee, New Orleans, Louisiana. This test was conducted after the New Orleans area levee system had been stressed by Hurricane Katrina in 2005. The MASW data were acquired in a seismically noisy, urban environment using an accelerated weight-drop seismic source and a towed seismic land streamer. Much of the seismic data were contaminated with higher-order mode guided waves, requiring application of muting filtering techniques to improve interpretability of the dispersion curves. Comparison of shear-wave velocity sections with boring logs suggests the existence of four distinct horizontal layers within and beneath the levee: (1) the levee core, (2) the levee basal layer of fat clay, (3) a sub-levee layer of silty sand, and (4) underlying Pleistocene deposits of sandy lean clay. Along the surveyed section of levee, lateral variations in shear-wave velocity are interpreted as changes in material rigidity, suggestive of construction or geologic heterogeneity, or possibly, that dynamic processes (such as differential settlement) are affecting discrete levee areas. The results of this study suggest that the MASW method is a geophysical tool with significant potential for non-invasive characterization of vertical and horizontal variations in levee material shear strength. Additional work, however, is needed to fully understand and address the complex seismic wave propagation in levee structures.
Crustal Fracturing Field and Presence of Fluid as Revealed by Seismic Anisotropy
NASA Astrophysics Data System (ADS)
Pastori, M.; Piccinini, D.; de Gori, P.; Margheriti, L.; Barchi, M. R.; di Bucci, D.
2010-12-01
In the last three years, we developed, tested and improved an automatic analysis code (Anisomat+) to calculate the shear-wave splitting parameters: fast polarization direction (φ) and delay time (δt). The code is a set of MatLab scripts able to retrieve crustal anisotropy parameters from three-component seismic recordings of local earthquakes using the horizontal-component cross-correlation method. The analysis procedure consists of choosing an appropriate frequency range, one that best highlights the signal containing the shear waves, and the length of a time window on the seismogram centered on the S arrival (the temporal window contains at least one cycle of the S wave). The code was compared to two other automatic analysis codes (SPY and SHEBA) and tested on three Italian areas (Val d'Agri, Tiber Valley and the L'Aquila region) along the Apennine mountains. For each region we used the anisotropic parameters resulting from the automatic computation as a tool to determine the fracture field geometries connected with the active stress field. We compared the temporal variations of the anisotropic parameters to the evolution of the vp/vs ratio for the same seismicity. The anisotropic fast directions are used to define the active stress field (EDA model), finding general consistency between fast directions and main stress indicators (focal mechanisms and borehole break-outs). The magnitude of the delay time is used to define the fracture field intensity, finding higher values in the volume where micro-seismicity occurs. Furthermore, we studied temporal variations of the anisotropic parameters and the vp/vs ratio in order to assess whether fluids play an important role in the earthquake generation process. The close association of variations in the anisotropic and vp/vs parameters with seismicity rate changes supports the hypothesis that the background seismicity is influenced by fluctuations of pore fluid pressure in the rocks.
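The horizontal-component cross-correlation method at the core of such codes can be sketched as a grid search over trial fast directions and lags (a minimal NumPy illustration, not the Anisomat+ code; the 5° angle step, the Ricker wavelet and all names are our own assumptions):

```python
import numpy as np

def splitting_parameters(east, north, dt, max_lag=40):
    """Grid search for the fast direction phi (degrees) and delay time
    (seconds): rotate the horizontals to trial fast/slow axes and keep the
    rotation/lag pair that maximizes the normalized cross-correlation."""
    best_c, best_phi, best_lag = -1.0, 0.0, 0
    for phi in np.arange(0.0, 180.0, 5.0):
        a = np.deg2rad(phi)
        fast = north * np.cos(a) + east * np.sin(a)
        slow = -north * np.sin(a) + east * np.cos(a)
        for lag in range(1, max_lag):
            f, s = fast[:-lag], slow[lag:]
            denom = np.linalg.norm(f) * np.linalg.norm(s)
            if denom == 0.0:
                continue
            c = abs(np.dot(f, s)) / denom
            if c > best_c:
                best_c, best_phi, best_lag = c, phi, lag
    return best_phi, best_lag * dt

# Synthetic check: a Ricker wavelet split at 30 degrees with a 0.1 s delay.
dt = 0.01
t = np.linspace(-0.5, 0.5, 200)
w = (1 - 2 * (np.pi * 8 * t) ** 2) * np.exp(-(np.pi * 8 * t) ** 2)
fast_sig, slow_sig = w, np.roll(w, 10)          # slow arrival delayed 10 samples
a0 = np.deg2rad(30.0)
north = fast_sig * np.cos(a0) - slow_sig * np.sin(a0)
east = fast_sig * np.sin(a0) + slow_sig * np.cos(a0)
phi_est, dt_est = splitting_parameters(east, north, dt)
```

The grid search recovers φ = 30° and δt = 0.1 s on this noise-free synthetic; real codes refine this with eigenvalue or cluster analyses.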
Adaptive phase k-means algorithm for waveform classification
NASA Astrophysics Data System (ADS)
Song, Chengyun; Liu, Zhining; Wang, Yaojun; Xu, Feng; Li, Xingming; Hu, Guangmin
2018-01-01
Waveform classification is a powerful technique for seismic facies analysis that describes the heterogeneity and compartments within a reservoir. Horizon interpretation is a critical step in waveform classification. However, the interpreted horizon often introduces inconsistent waveform phase, and thus results in an unsatisfactory classification. To alleviate this problem, an adaptive phase waveform classification method called the adaptive phase k-means is introduced in this paper. Our method improves the traditional k-means algorithm by using an adaptive phase distance as the waveform similarity measure. The proposed distance is a measure with variable phase as it moves from sample to sample along the traces. Model traces are also updated with the best phase interference in the iterative process. Therefore, our method is robust to phase variations caused by the interpretation horizon. We tested the effectiveness of our algorithm by applying it to synthetic and real data. The satisfactory results reveal that the proposed method tolerates a certain amount of waveform phase variation and is a good tool for seismic facies analysis.
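The core idea, a waveform distance that is insensitive to phase rotation, can be sketched with the analytic signal (a simplified constant-phase version of the paper's per-sample variable phase distance; the 72-angle grid and all names are our own choices):

```python
import numpy as np

def analytic_signal(x):
    """Analytic signal via the FFT (the standard Hilbert-transform construction)."""
    n = x.size
    spec = np.fft.fft(x)
    h = np.zeros(n)
    h[0] = 1.0
    if n % 2 == 0:
        h[n // 2] = 1.0
        h[1:n // 2] = 2.0
    else:
        h[1:(n + 1) // 2] = 2.0
    return np.fft.ifft(spec * h)

def phase_rotate(trace, theta):
    """Rotate the phase of a trace by theta radians."""
    return np.real(analytic_signal(trace) * np.exp(1j * theta))

def adaptive_phase_distance(trace, model, n_angles=72):
    """Distance between a trace and a model trace, minimized over a constant
    phase rotation of the model; returns (distance, best angle)."""
    angles = np.linspace(0.0, 2.0 * np.pi, n_angles, endpoint=False)
    dists = [np.linalg.norm(trace - phase_rotate(model, a)) for a in angles]
    k = int(np.argmin(dists))
    return dists[k], angles[k]

# A 90-degree phase-rotated wavelet is far from the original in Euclidean
# terms, but has (near) zero phase distance.
t = np.linspace(-0.5, 0.5, 200)
w = (1 - 2 * (np.pi * 8 * t) ** 2) * np.exp(-(np.pi * 8 * t) ** 2)
rotated = phase_rotate(w, np.pi / 2)
d_phase, theta = adaptive_phase_distance(rotated, w)
d_plain = np.linalg.norm(rotated - w)
```

Swapping this distance into the k-means assignment and update steps yields a phase-tolerant clustering in the spirit of the paper's method.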
NASA Astrophysics Data System (ADS)
Gross, L.; Shaw, S.
2016-04-01
Mapping the horizontal distribution of permeability is a key problem for the coal seam gas industry. Poststack seismic data with anisotropy attributes provide estimates of fracture density and orientation, which are then interpreted in terms of permeability. This approach delivers an indirect measure of permeability and can fail if other sources of anisotropy (for instance stress) come into play. Seismo-electric methods, based on recording the electric signal generated by pore-fluid movements stimulated by a seismic wave, measure permeability directly. In this paper we use numerical simulations to demonstrate that the seismo-electric method is potentially suitable for mapping the horizontal distribution of permeability changes across coal seams. We propose the use of an amplitude-versus-offset (AVO) analysis of the electrical signal in combination with poststack seismic data collected during the exploration phase. Recording of electrical signals from a simple seismic source can also be carried out closer to production planning and operations. The numerical model is based on a sonic wave propagation model under the low-frequency, saturated-media assumption and uses a coupled high-order spectral element and low-order finite element solver. We investigate the impact of seam thickness, coal seam layering, layering in the overburden and horizontal heterogeneity of permeability.
NASA Astrophysics Data System (ADS)
Williams, D. M.; Lopez, A. M.; Huerfano, V.; Lugo, J.; Cancel, J.
2011-12-01
Seismic networks need quick and efficient ways to obtain information about seismic events for purposes including seismic activity monitoring, risk assessment, and scientific research. As part of an IRIS summer internship program, two projects were carried out to provide a tool for quick faulting-mechanism determination and to improve seismic data quality at the Puerto Rico Seismic Network (PRSN). First, a simple routine to obtain focal mechanisms (the geometry of the fault) from first motions was developed and implemented for the routine operations of PRSN data analysts. The new tool gives the analyst a quick way to assess the probable faulting mechanism while performing the interactive earthquake location procedure. The focal mechanism is generated on-the-fly as data analysts pick P-wave arrival onsets and first motions. Once first motions have been identified, an in-house PRSN utility is employed to obtain the double-couple representation, which is then plotted using GMT's psmeca utility. Second, we addressed the issue of seismic noise related to thermal fluctuations inside seismic vaults. Seismic sites can be extremely noisy due to proximity to cultural activity, and unattended thermal fluctuations inside sensor housings result in skewed readings. In the past, seismologists have used different insulation techniques to reduce the unwanted noise that seismometers experience due to these thermal changes, using materials such as Styrofoam and fiberglass. PRSN traditionally uses Styrofoam boxes to cover its seismic sensors; however, how this method compares to newer techniques had never been properly tested. Such testing matters in the Caribbean, and especially in Puerto Rico, where thermal fluctuations are driven by the intense sun and humidity. 
We conducted a test based on the methods employed by the IRIS Transportable Array, in which the sensor is insulated by sand burial. Two Guralp CMG-3T sensors connected to RefTek 150 digitizers were deployed in the seismic vault at PRSN's MPR site to compare the two types of insulation. Two temperature loggers were placed alongside each seismic sensor for a period of one week to measure the thermal fluctuations in each insulation method and to compare their capability for reducing thermally induced noise. With only a single degree Celsius of fluctuation inside the sand (compared to almost twice that value for the foam), the sensor buried in sand provided the best insulation for the seismic vault. In addition, the quality of the data was analyzed by comparing both sensors using PQLX. We show the results of this analysis and also provide site characterization for new stations to be included in the daily earthquake location operations at the PRSN.
Julian, B.R.; Prisk, A.; Foulger, G.R.; Evans, J.R.; ,
1993-01-01
Local earthquake tomography - the use of earthquake signals to form a 3-dimensional structural image - is now a mature geophysical analysis method, particularly suited to the study of geothermal reservoirs, which are often seismically active and severely laterally inhomogeneous. Studies have been conducted of the Hengill (Iceland), Krafla (Iceland) and The Geysers (California) geothermal areas. All three systems are exploited for electricity and/or heat production, and all are highly seismically active. Tomographic studies of volumes a few km in dimension were conducted for each area using the method of Thurber (1983).
Putting the slab back: First steps of creating a synthetic seismic section of subducted lithosphere
NASA Astrophysics Data System (ADS)
Zertani, S.; John, T.; Tilmann, F. J.; Leiss, B.; Labrousse, L.; Andersen, T. B.
2016-12-01
Imaging subducted lithosphere is a difficult task that is usually tackled with geophysical methods. To date, the most promising method is receiver function (RF) imaging, which relies on first-order conversions from P- to S-waves at boundaries (e.g. lithological and structural) with contrasting seismic velocities. The resolution is high for the upper parts of the subducting material. However, at greater depths (40-80 km) the image of the subducted slab becomes increasingly blurry, until the slab can no longer be distinguished from the Earth's mantle, rendering visualization impossible. This blurry zone is thought to result from advancing eclogitization of the subducting slab. However, it is not well understood how micro- to macro-scale structures related to progressive eclogitization affect RF signals. The island of Holsnoy in the Bergen Arcs of western Norway exposes a partially eclogitized, formerly subducted block of lower crust and serves as an analogue to the aforementioned blurry zone in RF images. The eclogitization can be observed in static fluid-induced eclogitization patches or fingers, but is mainly present in localized shear zones of variable size (mm to hundreds of meters). We mapped the area to gain a better understanding of the geometries of such shear zones, which could possibly act as seismic reflectors. Further, we calculated seismic velocities from thermodynamic modelling on the basis of XRF whole-rock analysis and compared these results to velocities calculated from a combination of thin-section information, EMPA and physical mineral properties (Voigt-Reuss-Hill averaging). Both methods yield consistent results for P- and S-wave velocities of eclogites and granulites from Holsnoy. Combined with X-ray measurements of the microtextures of characteristic samples, which let us incorporate the seismic anisotropy caused by e.g. 
foliation or lineation, these seismic velocities are used as input for seismic models that reconstruct the progressive eclogitization of a subducting slab as seen in many RF images (i.e. the blurry zone).
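The Voigt-Reuss-Hill averaging step can be sketched as follows (a generic mineral-physics utility, not the authors' workflow; the quartz constants in the demo are textbook values we supply for illustration):

```python
import numpy as np

def voigt_reuss_hill(moduli, fractions):
    """Voigt-Reuss-Hill average of an elastic modulus (bulk or shear)
    for an aggregate with the given volume fractions."""
    m = np.asarray(moduli, dtype=float)
    f = np.asarray(fractions, dtype=float)
    voigt = np.sum(f * m)               # iso-strain (upper) bound
    reuss = 1.0 / np.sum(f / m)         # iso-stress (lower) bound
    return 0.5 * (voigt + reuss)

def velocities(k, g, rho):
    """Isotropic Vp, Vs (km/s) from bulk/shear moduli (GPa) and density (g/cc)."""
    vp = np.sqrt((k + 4.0 * g / 3.0) / rho)
    vs = np.sqrt(g / rho)
    return vp, vs

# Single-phase sanity check with textbook quartz constants (K=37, G=44 GPa, rho=2.65 g/cc).
vp_q, vs_q = velocities(37.0, 44.0, 2.65)

# Two-phase aggregate: the VRH average sits midway between the Reuss and Voigt bounds.
k_mix = voigt_reuss_hill([37.0, 170.0], [0.5, 0.5])
```

Applying the same average to the shear modulus, with modal mineralogy from thin sections, yields the isotropic velocity estimates compared in the abstract; anisotropy requires the full stiffness tensors instead.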
The Investigation of a Sinkhole Area in Germany by Near-Surface Active Seismic Tomography
NASA Astrophysics Data System (ADS)
Tschache, S.; Becker, D.; Wadas, S. H.; Polom, U.; Krawczyk, C. M.
2017-12-01
In November 2010, a 30 m wide and 17 m deep sinkhole occurred in a residential area of Schmalkalden, Germany; fortunately it did not harm anyone, but it damaged buildings and property. Subsequent geoscientific investigations showed that the collapse was naturally caused by the subrosion of sulfates at a depth of about 80 m. In 2012, an early warning system was established, including 3C borehole geophones deployed at 50 m depth around the backfilled sinkhole. During the acquisition of two shallow 2D shear-wave seismic profiles, the signals generated by a micro-vibrator at the surface were additionally recorded by the four borehole geophones of the early warning system and a VSP probe in a fifth borehole. Travel-time analysis of the direct arrivals enhanced the understanding of wave propagation in the area. Seismic velocity anomalies were detected and related to structural seismic images of the 2D profiles. Owing to the promising first results, the experiment was extended by distributing vibration points throughout the whole area around the sinkhole. This time, micro-vibrators for P- and S-wave generation were used. The signals were recorded by the borehole geophones and by temporarily installed seismometers at surface positions close to the boreholes. The travel times and signal attenuations are evaluated to detect potentially unstable zones. Furthermore, array analyses are performed. The first results reveal features in the active tomography datasets consistent with structures observed in the 2D seismic images. The advantages of the presented method are the low effort and good repeatability afforded by the permanently installed borehole geophones. It has the potential to determine P-wave and S-wave velocities in 3D, and it supports the interpretation of established investigation methods such as 2D surface seismics and VSP. 
In our further research we propose to evaluate the suitability of the method for time-lapse monitoring of changes in seismic wave propagation, which could be related to subrosion processes.
Analysis and Simulation of Far-Field Seismic Data from the Source Physics Experiment
2012-09-01
Arben Pitarka, Robert J. Mellors, Arthur J. Rodgers, Sean... Security Site (NNSS) provides new data for investigating the excitation and propagation of seismic waves generated by buried explosions. A particular... seismic model. The 3D seismic model includes surface topography. It is based on regional geological data, with material properties constrained by shallow
NASA Astrophysics Data System (ADS)
Torres-Verdin, C.
2007-05-01
This paper describes the successful implementation of a new 3D AVA stochastic inversion algorithm to quantitatively integrate pre-stack seismic amplitude data and well logs. The stochastic inversion algorithm is used to characterize flow units of a deepwater reservoir located in the central Gulf of Mexico. Conventional fluid/lithology sensitivity analysis indicates that the shale/sand interface represented by the top of the hydrocarbon-bearing turbidite deposits generates typical Class III AVA responses. On the other hand, layer-dependent Biot-Gassmann analysis shows significant sensitivity of the P-wave velocity and density to fluid substitution. Accordingly, AVA stochastic inversion, which combines the advantages of AVA analysis with those of geostatistical inversion, provided quantitative information about the lateral continuity of the turbidite reservoirs based on the interpretation of inverted acoustic properties (P-velocity, S-velocity, density) and lithotype (sand-shale) distributions. The quantitative use of rock/fluid information through AVA seismic amplitude data, coupled with the implementation of co-simulation via lithotype-dependent multidimensional joint probability distributions of acoustic/petrophysical properties, yields accurate 3D models of petrophysical properties such as porosity and permeability. Finally, by fully integrating pre-stack seismic amplitude data and well logs, the vertical resolution of inverted products is higher than that of deterministic inversion methods.
NASA Astrophysics Data System (ADS)
Matuła, Rafał; Lewińska, Paulina
2018-01-01
This paper describes a newly designed and constructed system for 2D seismic measurements in natural subsoil conditions, and the role of land surveying in obtaining accurate results and linking them to 3D surface maps. A new type of land streamer, designed for shallow subsurface exploration, is presented. In land seismic data acquisition, a vehicle tows a line of seismic cable mounted on a construction called a streamer. The measurements of points and shots are taken while the line is stationary, arbitrarily placed on the seismic profile. The presented land streamer consists of 24 innovative gimballed 10 Hz geophones, eliminating the need for hand 'planting' of geophones and reducing time and costs. With the use of current survey techniques, all data obtained with this instrument are transferred into 2D and 3D maps, and this process is becoming increasingly automatic.
High-resolution lithospheric imaging with seismic interferometry
NASA Astrophysics Data System (ADS)
Ruigrok, Elmer; Campman, Xander; Draganov, Deyan; Wapenaar, Kees
2010-10-01
In recent years, there has been an increase in the deployment of relatively dense arrays of seismic stations. The availability of spatially densely sampled global and regional seismic data has stimulated the adoption of industry-style imaging algorithms applied to converted- and scattered-wave energy from distant earthquakes, leading to relatively high-resolution images of the lower crust and upper mantle. We use seismic interferometry to extract reflection responses from the coda of transmitted energy from distant earthquakes. In theory, higher-resolution images can be obtained by migrating reflections obtained with seismic interferometry rather than the conversions traditionally used in lithospheric imaging. Moreover, reflection data allow the straightforward application of algorithms previously developed in exploration seismology. In particular, the availability of reflection data allows us to extract a velocity model using standard multichannel data-processing methods. However, the success of our approach relies mainly on a favourable distribution of earthquakes. In this paper, we investigate how the quality of the reflection response obtained with interferometry is influenced by the distribution of earthquakes and the complexity of the transmitted wavefields. Our analysis shows that a reasonable reflection response can be extracted if (1) the array is approximately aligned with an active zone of earthquakes, (2) different phase responses are used to gather adequate angular illumination of the array and (3) the illumination directions are properly accounted for during processing. We illustrate our analysis using a synthetic data set with illumination and source-side reverberation characteristics similar to field data recorded during the 2000-2001 Laramie broad-band experiment. Finally, we apply our method to the Laramie data, retrieving reflection data. 
We extract a 2-D velocity model from the reflections and use this model to migrate the data. On the final reflectivity image, we observe a discontinuity in the reflections. We interpret this discontinuity as the Cheyenne Belt, a suture zone between Archean and Proterozoic terranes.
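The retrieval step, cross-correlating receiver pairs and summing over sources, can be sketched as follows (a bare-bones illustration of the interferometric principle, not the authors' processing flow; the array shapes and the plane-wave demo are our assumptions):

```python
import numpy as np

def retrieve_response(transmission, ref=0):
    """Sum over sources of the cross-correlation of every receiver with a
    reference receiver. transmission has shape (n_sources, n_receivers,
    n_samples); returns (n_receivers, n_samples) circular correlations,
    an estimate of the inter-receiver (pseudo reflection) response."""
    n_t = transmission.shape[2]
    spec = np.fft.rfft(transmission, axis=2)
    cross = spec * np.conj(spec[:, ref:ref + 1, :])   # pair each receiver with ref
    return np.fft.irfft(cross.sum(axis=0), n=n_t, axis=1)

# Impulsive plane waves from 5 "earthquakes": the unknown source times cancel
# in the correlation, leaving only the 4-sample inter-receiver moveout.
n_src, n_rec, n_t = 5, 3, 128
transmission = np.zeros((n_src, n_rec, n_t))
for s in range(n_src):
    for j in range(n_rec):
        transmission[s, j, 10 + 3 * s + 4 * j] = 1.0
response = retrieve_response(transmission)
```

The peak of `response[j]` sits at the inter-receiver traveltime regardless of the source times, which is the essence of turning transmission coda into reflection-like data.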
NASA Astrophysics Data System (ADS)
Pratama Wahyu Hidayat, Putra; Hary Murti, Antonius; Sudarmaji; Shirly, Agung; Tiofan, Bani; Damayanti, Shinta
2018-03-01
Geometry is an important parameter in hydrocarbon exploration and exploitation, as it has a significant effect on the amount of resources or reserves, rock spreading, and risk analysis. The existence of geological structures, or faults, is one factor affecting geometry. This study was conducted as an effort to enhance seismic image quality in a fault-dominated area, namely the offshore Madura Strait. For the past 10 years, Oligo-Miocene carbonate rock has been only slightly explored in the Madura Strait area, mainly because migration and trap geometry remain risks of concern. This study tries to determine the boundary of each fault zone in the subsurface image generated by converting seismic data into the variance attribute. The variance attribute is a multitrace seismic attribute derived from seismic amplitude data. The resulting variance section of the Madura Strait area has values of zero (0) for seismic continuity and one (1) for discontinuity of the seismic data. The variance section shows the boundary between the RMKS fault zone and the Kendeng zone distinctly. Geological structure and subsurface geometry of the Oligo-Miocene carbonate rock could be clearly identified using this method. In general, structural interpretation to identify the boundaries of fault zones is well supported by the variance attribute.
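The variance attribute computation can be sketched as follows (a naive sliding-window version for illustration; the window sizes and energy normalisation are generic choices, not those of the software used in the study):

```python
import numpy as np

def variance_attribute(seismic, t_win=5, x_win=3):
    """Naive variance attribute: for each sample, the energy-normalized
    variance across neighbouring traces in a small time/trace window.
    seismic: 2D array of shape (n_samples, n_traces). Values fall in
    [0, 1]: ~0 for laterally continuous reflectors, high at discontinuities."""
    n_s, n_t = seismic.shape
    out = np.zeros((n_s, n_t))
    ht, hx = t_win // 2, x_win // 2
    for i in range(n_s):
        for j in range(n_t):
            w = seismic[max(0, i - ht):i + ht + 1, max(0, j - hx):j + hx + 1]
            num = np.sum((w - w.mean(axis=1, keepdims=True)) ** 2)
            den = np.sum(w ** 2)
            out[i, j] = num / den if den > 0 else 0.0
    return out

# Two laterally uniform blocks offset by a 5-sample "fault" at trace 10.
t = np.arange(100)
trace = np.sin(2 * np.pi * t / 20.0)
section = np.tile(trace[:, None], (1, 20))
section[:, 10:] = np.roll(trace, 5)[:, None]
v = variance_attribute(section)
```

Inside each block the attribute is zero (perfect continuity), while windows straddling the fault column light up, which is how fault-zone boundaries stand out in a variance section.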
NASA Astrophysics Data System (ADS)
Taisne, B.; Caudron, C.; Kugaenko, Y.; Saltykov, V.
2015-12-01
In contrast to the 1975-76 Tolbachik eruption, the 2012-2013 Tolbachik eruption was not preceded by any striking change in seismic activity. By processing the Klyuchevskoy volcano group seismic data with the Seismic Amplitude Ratio Analysis (SARA) method, we gain insights into the dynamics of magma transfer prior to this important eruption. We highlight a clear migration of the source of the microseismicity within the seismic swarm, starting 20 hours before the reported eruption onset (05:15 UTC, 26 November 2012). This migration proceeded in different phases and ended when eruptive tremor, corresponding to lava extrusion, was recorded (at ~11:00 UTC, 27 November 2012). In order to obtain a first-order approximation of the location of the magma, we compare the calculated seismic intensity ratios with theoretical ones. As expected, the observations suggest a migration toward the eruptive vent. However, we explain the pre-eruptive observed ratios by a vertical migration under the northern slope of Plosky Tolbachik volcano that interacted at shallower depth with an intermediate storage region before initiating the lateral migration toward the eruptive vents. Another migration is also captured by this technique and coincides with a seismic swarm that started 16-20 km to the south of Plosky Tolbachik at 20:31 UTC on 28 November and lasted for more than 2 days. This seismic swarm is very similar to the seismicity preceding the 1975-76 Tolbachik eruption and can be considered a possible aborted eruption.
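The SARA measurement itself is simple, a smoothed amplitude ratio between station pairs compared against a theoretical ratio for trial source positions. A minimal sketch (our own function names and parameter values, not the authors' implementation):

```python
import numpy as np

def sara_ratio(amp_a, amp_b, win=50):
    """Seismic amplitude ratio between two stations: the ratio of RMS
    amplitudes in a sliding window. Its evolution in time tracks the
    migration of the seismicity source."""
    kernel = np.ones(win) / win
    rms_a = np.sqrt(np.convolve(amp_a ** 2, kernel, mode="valid"))
    rms_b = np.sqrt(np.convolve(amp_b ** 2, kernel, mode="valid"))
    return rms_a / rms_b

def theoretical_ratio(r_a, r_b, freq=5.0, q=50.0, v=1500.0):
    """Expected body-wave amplitude ratio for a source at distances r_a, r_b
    (m) from stations A and B: geometrical spreading (1/r) plus anelastic
    attenuation exp(-pi*f*r/(q*v)). Parameter values are illustrative only."""
    b = np.pi * freq / (q * v)
    return (r_b / r_a) * np.exp(-b * (r_a - r_b))

# If station A records twice the amplitude of station B, the ratio is 2.
x = np.abs(np.sin(np.arange(500) * 0.05)) + 1.0
ratio = sara_ratio(2.0 * x, x)
```

Comparing observed ratio time series with `theoretical_ratio` over a grid of candidate source positions gives the first-order source locations discussed in the abstract.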
Obermeier, S.F.
1996-01-01
Liquefaction features can be used in many field settings to estimate the recurrence interval and magnitude of strong earthquakes through much of the Holocene. These features include dikes, craters, vented sand, sills, and laterally spreading landslides. The relatively high seismic shaking level required for their formation makes them particularly valuable as records of strong paleo-earthquakes. This state-of-the-art summary for using liquefaction-induced features for paleoseismic interpretation and analysis takes into account both geological and geotechnical engineering perspectives. The driving mechanism for formation of the features is primarily the increased pore-water pressure associated with liquefaction of sand-rich sediment. The role of this mechanism is often supplemented greatly by the direct action of seismic shaking at the ground surface, which strains and breaks the clay-rich cap that lies immediately above the sediment that liquefied. Discussed in the text are the processes involved in formation of the features, as well as their morphology and characteristics in field settings. Whether liquefaction occurs is controlled mainly by sediment grain size, sediment packing, depth to the water table, and strength and duration of seismic shaking. Formation of recognizable features in the field generally requires a low-permeability cap above the sediment that liquefied. Field manifestations are controlled largely by the severity of liquefaction and the thickness and properties of the low-permeability cap. Criteria are presented for determining whether observed sediment deformation in the field originated by seismically induced liquefaction. These criteria have been developed mainly by observing historic effects of liquefaction in varied field settings. The most important criterion is that a seismic liquefaction origin requires widespread, regional development of features around a core area where the effects are most severe. 
In addition, the features must have a morphology that is consistent with a very sudden application of a large hydraulic force. This article discusses case studies in widely separated and different geological settings: coastal South Carolina, the New Madrid seismic zone, the Wabash Valley seismic zone, and coastal Washington State. These studies encompass most of the range of settings and the types of liquefaction-induced features likely to be encountered anywhere. The case studies describe the observed features and the logic for assigning a seismic liquefaction origin to them. Also discussed are some types of sediment deformations that can be misinterpreted as having a seismic origin. Two independent methods for estimating prehistoric magnitude are discussed briefly. One method is based on determination of the maximum distance from the epicenter over which liquefaction-induced effects have formed. The other method is based on use of geotechnical engineering techniques at sites of marginal liquefaction, in order to bracket the peak accelerations as a function of epicentral distance; these accelerations can then be compared with predictions from seismological models.
A frozen Gaussian approximation-based multi-level particle swarm optimization for seismic inversion
DOE Office of Scientific and Technical Information (OSTI.GOV)
Li, Jinglai, E-mail: jinglaili@sjtu.edu.cn; Lin, Guang, E-mail: lin491@purdue.edu; Computational Sciences and Mathematics Division, Pacific Northwest National Laboratory, Richland, WA 99352
2015-09-01
In this paper, we propose a frozen Gaussian approximation (FGA)-based multi-level particle swarm optimization (MLPSO) method for seismic inversion of high-frequency wave data. The method addresses two challenges. First, the optimization problem is highly non-convex, which makes it hard for gradient-based methods to reach global minima. This is tackled by MLPSO, which can escape from undesired local minima. Second, the high-frequency character of seismic waves requires a large number of grid points in direct computational methods, and thus places an extremely high computational demand on the simulation of each sample in MLPSO. We overcome this difficulty in three steps. First, we use FGA to compute high-frequency wave propagation based on asymptotic analysis on the phase plane. Then we design a constrained full waveform inversion problem to prevent the optimization search from entering regions of velocity where FGA is not accurate. Last, we solve the constrained optimization problem by MLPSO employing FGA solvers of different fidelity. The performance of the proposed method is demonstrated by a two-dimensional full-waveform inversion example on the smoothed Marmousi model.
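The particle swarm building block of MLPSO can be sketched as follows (plain single-level PSO with textbook parameter values; the multi-level method additionally swaps in FGA forward solvers of increasing fidelity, which we do not reproduce here; the quadratic demo misfit is our stand-in for a full-waveform misfit):

```python
import numpy as np

def pso(f, lo, hi, n_particles=30, iters=200, w=0.7, c1=1.5, c2=1.5, seed=0):
    """Plain particle swarm optimization over a box [lo, hi]: particles move
    under inertia, a pull toward their personal best and a pull toward the
    swarm's global best, which lets the swarm escape local minima."""
    rng = np.random.default_rng(seed)
    lo, hi = np.asarray(lo, float), np.asarray(hi, float)
    x = rng.uniform(lo, hi, size=(n_particles, lo.size))
    v = np.zeros_like(x)
    pbest = x.copy()
    pval = np.array([f(p) for p in x])
    g = pbest[np.argmin(pval)].copy()
    for _ in range(iters):
        r1 = rng.random(x.shape)
        r2 = rng.random(x.shape)
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (g - x)  # inertia + memory + social
        x = np.clip(x + v, lo, hi)
        fx = np.array([f(p) for p in x])
        improved = fx < pval
        pbest[improved], pval[improved] = x[improved], fx[improved]
        g = pbest[np.argmin(pval)].copy()
    return g, float(pval.min())

# Demo on a simple quadratic misfit (each f-evaluation would be an FGA
# forward simulation in the actual inversion).
best, best_val = pso(lambda p: float(np.sum(p ** 2)), [-5.0, -5.0], [5.0, 5.0])
```

In the multi-level variant, early iterations would call a cheap low-fidelity solver inside `f`, reserving high-fidelity FGA evaluations for the final refinement.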
Earthquake Complex Network applied along the Chilean Subduction Zone.
NASA Astrophysics Data System (ADS)
Martin, F.; Pasten, D.; Comte, D.
2017-12-01
In recent years, earthquake complex networks have been used as a tool to describe and characterize the behavior of seismicity. The earthquake complex network is built in space by dividing the three-dimensional volume into cubic cells. If a cubic cell contains a hypocenter, we designate that cell as a node. The connections between nodes follow the time sequence of occurrence of the seismic events. In this sense, we obtain a spatio-temporal configuration of a specific region using the seismicity in that zone. In this work, we apply complex networks to characterize the subduction zone along the coast of Chile using two networks: a directed and an undirected network. The directed network takes into account the time direction of the connections, which is very important for the connectivity of the network: we define the connectivity ki of the i-th node as the number of connections going out from node i, including self-connections (if two seismic events occur successively in time in the same cubic cell, we have a self-connection). The undirected network is the result of removing the directions of the connections and the self-connections from the directed network. These two networks were built using seismic events recorded by the CSN (Chilean Seismological Center) in Chile. This analysis includes the largest recent earthquakes, which occurred in Iquique (April 2014) and Illapel (September 2015). The result for the directed network shows a change in the value of the critical exponent along the Chilean coast. The result for the undirected network shows small-world behavior without important changes in the topology of the network. Therefore, complex network analysis offers a new way to characterize the Chilean subduction zone with a simple method whose results can be compared with those of other methods to obtain more detail about the behavior of seismicity in this region.
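The directed-network construction described above can be sketched as follows (a minimal illustration with made-up coordinates; the 10 km cell size and array layout are our assumptions, not those of the study):

```python
import numpy as np
from collections import defaultdict

def build_directed_network(hypocenters, cell=10.0):
    """Directed earthquake network: grid the volume into cubic cells of the
    given size (km), make a node of each occupied cell, and connect the
    cells of successive events in time order (self-connections included)."""
    nodes = [tuple(np.floor(h / cell).astype(int)) for h in hypocenters]
    out_degree = defaultdict(int)
    edges = []
    for a, b in zip(nodes[:-1], nodes[1:]):
        edges.append((a, b))
        out_degree[a] += 1          # counts self-connections (a == b) too
    return edges, dict(out_degree)

# Four events in time order (x, y, z in km): two fall in one cell, two in another.
events = np.array([[1.0, 1.0, 1.0],
                   [2.0, 2.0, 2.0],
                   [15.0, 1.0, 1.0],
                   [16.0, 2.0, 2.0]])
edges, out_degree = build_directed_network(events)
```

The undirected network of the abstract is obtained from `edges` by dropping self-loops and edge directions; the connectivity distribution of either network is what the critical-exponent and small-world analyses operate on.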
Geophysics in Mejillones Basin, Chile: Dynamic analysis and associated seismic hazard
NASA Astrophysics Data System (ADS)
Maringue, J. I.; Yanez, G. A.; Lira, E.; Podestá, L., Sr.; Figueroa, R.; Estay, N. P.; Saez, E.
2016-12-01
The active margin of South America has a high seismogenic potential. In particular, the Mejillones peninsula, located in northern Chile, is a site of interest for seismic hazard due to a 100-year seismic gap, potentially large site effects, and the presence of the most important port in the region. We perform a dynamic analysis of the zone based on a spatial and petrophysical model of the Mejillones Basin, to understand its behavior under realistic seismic scenarios. The geometry and petrophysics of the basin were obtained from integrated modeling of geophysical observations (gravity, seismic and electromagnetic data) distributed mainly in Pampa Mejillones, whose western edge is bounded by the north-south-oriented Mejillones Fault. This regional-scale normal fault shows a half-graben geometry which controls the development of the Mejillones basin eastwards. The gravimetric and magnetotelluric methods allow us to define the geometry of the basin, through a cover/basement density contrast and the transition from very low to moderate electrical resistivities, respectively. The seismic method complements the petrophysics in terms of the shear-wave velocity-depth profile. The results show soil thicknesses of up to 700 meters in the deepest zone, with steeper slopes to the west and gentler slopes to the east, in agreement with the normal-fault half-graben basin geometry. Along the N-S direction there are no great differences in basin depth, making it an almost 2D problem. In terms of petrophysics, the sedimentary stratum is characterized by shear velocities between 300-700 m/s, extremely low electrical resistivities (below 1 ohm-m), and densities from 1.4 to 1.8 g/cc. Numerical simulation of seismic wave amplification gives values on the order of 0.8 g, which implies large surface damage. The results demonstrate a potential risk to Mejillones bay from future events; it is therefore very important to generate mitigation policies for infrastructure and human settlements.
Impact of magnitude uncertainties on seismic catalogue properties
NASA Astrophysics Data System (ADS)
Leptokaropoulos, K. M.; Adamaki, A. K.; Roberts, R. G.; Gkarlaouni, C. G.; Paradisopoulou, P. M.
2018-05-01
Catalogue-based studies are of central importance in seismological research for investigating the temporal, spatial and size distribution of earthquakes in specified study areas. Methods for estimating fundamental catalogue parameters like the Gutenberg-Richter (G-R) b-value and the completeness magnitude (Mc) are well established and routinely applied. However, the magnitudes reported in seismicity catalogues contain measurement uncertainties which may significantly distort the estimation of the derived parameters. In this study, we use numerical simulations of synthetic data sets to assess the reliability of different methods for determining the b-value and Mc, assuming the validity of the G-R law. After contaminating the synthetic catalogues with Gaussian noise (with selected standard deviations), the analysis is performed for numerous data sets of different sample size (N). The noise introduced to the data generally leads to a systematic overestimation of magnitudes close to and above Mc. This causes an increase in the average number of events above Mc, which in turn leads to an apparent decrease of the b-value. This may result in a significant overestimation of the seismicity rate even well above the actual completeness level. The b-value can in general be reliably estimated even for relatively small data sets (N < 1000) when only magnitudes higher than the actual completeness level are used. Nevertheless, a correction of the total number of events belonging to each magnitude class (i.e. 0.1 unit) should be considered to deal with the magnitude uncertainty effect. Because magnitude uncertainties (here in the form of Gaussian noise) are inevitable in all instrumental catalogues, this finding is fundamental for seismicity rate and seismic hazard assessment analyses. Also important is that for some data analyses significant bias cannot necessarily be avoided by choosing a high Mc value for analysis. In such cases, there may be a risk of severe miscalculation of the seismicity rate regardless of the selected magnitude threshold, unless possible bias is properly assessed.
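The noise-induced bias described above is easy to reproduce in a small Monte Carlo sketch (illustrative parameter choices, not the authors' simulation setup): magnitudes drawn from a G-R (exponential) distribution are contaminated with Gaussian noise, and the b-value is then estimated with the standard Aki-Utsu maximum-likelihood formula above a cutoff Mc.

```python
import numpy as np

rng = np.random.default_rng(0)
b_true, m_min, n, sigma = 1.0, 2.0, 200_000, 0.3
beta = b_true * np.log(10.0)

# G-R magnitudes: exponential above the true completeness level m_min
mags = m_min + rng.exponential(1.0 / beta, n)
noisy = mags + rng.normal(0.0, sigma, n)   # Gaussian magnitude uncertainty

def aki_b(m, mc):
    """Aki-Utsu maximum-likelihood b-value for magnitudes >= mc."""
    m = m[m >= mc]
    return np.log10(np.e) / (m - mc).mean()

mc = 2.5                                   # cutoff above the completeness level
b_clean, b_noisy = aki_b(mags, mc), aki_b(noisy, mc)
n_clean, n_noisy = int((mags >= mc).sum()), int((noisy >= mc).sum())
```

With these settings the noisy catalogue contains more events above Mc and yields a smaller apparent b-value, reproducing the two systematic effects the abstract describes.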
Mini-Sosie high-resolution seismic method aids hazards studies
Stephenson, W.J.; Odum, J.; Shedlock, K.M.; Pratt, T.L.; Williams, R.A.
1992-01-01
The Mini-Sosie high-resolution seismic method has been effective in imaging shallow structural and stratigraphic features that aid seismic-hazard and neotectonic studies. The method is not an alternative to Vibroseis acquisition for large-scale studies. However, it has two major advantages over Vibroseis as it is being used by the USGS in its seismic-hazards program. First, the sources are extremely portable and can be used in both rural and urban environments. Second, the shifting-and-summation process during acquisition improves the signal-to-noise ratio and cancels out seismic noise sources such as cars and pedestrians. -from Authors
Modeling and Evaluation of Geophysical Methods for Monitoring and Tracking CO2 Migration
DOE Office of Scientific and Technical Information (OSTI.GOV)
Daniels, Jeff
2012-11-30
Geological sequestration has been proposed as a viable option for mitigating the vast amount of CO{sub 2} being released into the atmosphere daily. Test sites for CO{sub 2} injection have been appearing across the world to ascertain the feasibility of capturing and sequestering carbon dioxide. A major concern with full-scale implementation is monitoring and verifying the permanence of injected CO{sub 2}. Geophysical methods, an exploration industry standard, are non-invasive imaging techniques that can be implemented to address that concern. Geophysical methods, seismic and electromagnetic, play a crucial role in monitoring the subsurface pre- and post-injection. Seismic techniques have been the most popular, but electromagnetic methods are gaining interest. The primary goal of this project was to develop a new geophysical tool, a software program called GphyzCO2, to investigate the implementation of geophysical monitoring for detecting injected CO{sub 2} at test sites. The GphyzCO2 software consists of interconnected programs that encompass well logging, seismic, and electromagnetic methods. The software enables users to design and execute 3D surface-to-surface (conventional surface seismic) and borehole-to-borehole (cross-hole seismic and electromagnetic methods) numerical modeling surveys. The generalized flow of the program begins with building a complex 3D subsurface geological model, assigning properties to the model that mimic a potential CO{sub 2} injection site, numerically forward modeling a geophysical survey, and analyzing the results. A site located in Warren County, Ohio was selected as the test site for the full implementation of GphyzCO2. Specific interest was placed on a potential reservoir target, the Mount Simon Sandstone, and cap rock, the Eau Claire Formation. Analysis of the test site included well log data, physical property measurements (porosity), core sample resistivity measurements, calculation of electrical permittivity values, seismic data collection, and seismic interpretation. These data were input into GphyzCO2 to demonstrate a full implementation of the software's capabilities. Part of the implementation investigated the limits of using geophysical methods to monitor CO{sub 2} injection sites. The results show that cross-hole EM numerical surveys are limited to borehole separations under 100 meters. Those results were utilized in executing numerical EM surveys that contain hypothetical CO{sub 2} injections. The outcome of the forward modeling shows that EM methods can detect the presence of CO{sub 2}.
1991-03-21
discussion of spectral factorability and motivations for broadband analysis, the report is subdivided into four main sections. In Section 1.0, we ... estimates. The motivation for developing our multi-channel deconvolution method was to gain information about seismic sources, most notably, nuclear ... with complex constraints for estimating the rupture history. Such methods (applied mostly to data sets that also include strong-motion data), were
NASA Astrophysics Data System (ADS)
Huang, Duruo; Du, Wenqi; Zhu, Hong
2017-10-01
In performance-based seismic design, ground-motion time histories are needed for analyzing the dynamic responses of nonlinear structural systems. However, the number of ground-motion records at the design level is often limited. In order to analyze the seismic performance of structures, ground-motion time histories need to be either selected from a recorded strong-motion database or numerically simulated using stochastic approaches. In this paper, a detailed procedure to select proper acceleration time histories from the Next Generation Attenuation (NGA) database for several cities in Taiwan is presented. Target response spectra are initially determined based on a local ground-motion prediction equation under representative deterministic seismic hazard analyses. Then several suites of ground motions are selected for these cities using the Design Ground Motion Library (DGML), a recently proposed interactive ground-motion selection tool. The selected time histories are representative of the regional seismic hazard and should be beneficial to earthquake studies when comprehensive seismic hazard assessments and site investigations are unavailable. Note that this method is also applicable to site-specific motion selections with target spectra near the ground surface that consider the site effect.
SIG-VISA: Signal-based Vertically Integrated Seismic Monitoring
NASA Astrophysics Data System (ADS)
Moore, D.; Mayeda, K. M.; Myers, S. C.; Russell, S.
2013-12-01
Traditional seismic monitoring systems rely on discrete detections produced by station processing software; however, while such detections may constitute a useful summary of station activity, they discard large amounts of information present in the original recorded signal. We present SIG-VISA (Signal-based Vertically Integrated Seismic Analysis), a system for seismic monitoring through Bayesian inference on seismic signals. By directly modeling the recorded signal, our approach incorporates additional information unavailable to detection-based methods, enabling higher sensitivity and more accurate localization using techniques such as waveform matching. SIG-VISA's Bayesian forward model of seismic signal envelopes includes physically derived models of travel times and source characteristics as well as Gaussian process (kriging) statistical models of signal properties that combine interpolation of historical data with extrapolation of learned physical trends. Applying Bayesian inference, we evaluate the model on earthquakes as well as the 2009 DPRK test event, demonstrating a waveform matching effect as part of the probabilistic inference, along with results on event localization and sensitivity. In particular, we demonstrate increased sensitivity from signal-based modeling, in which the SIG-VISA signal model finds statistical evidence for arrivals even at stations for which the IMS station processing failed to register any detection.
78 FR 13911 - Proposed Revision to Design of Structures, Components, Equipment and Systems
Federal Register 2010, 2011, 2012, 2013, 2014
2013-03-01
... Analysis Reports for Nuclear Power Plants: LWR Edition,'' Section 3.7.1, ``Seismic Design Parameters,'' Section 3.7.2, ``Seismic System Analysis,'' Section 3.7.3, ``Seismic Subsystem Analysis,'' Section 3.8.1... and analysis issues, (2) updates to review interfaces to improve the efficiency and consistency of...
Seismic hazard assessment and pattern recognition of earthquake prone areas in the Po Plain (Italy)
NASA Astrophysics Data System (ADS)
Gorshkov, Alexander; Peresan, Antonella; Soloviev, Alexander; Panza, Giuliano F.
2014-05-01
A systematic and quantitative assessment, capable of providing first-order consistent information about the sites where large earthquakes may occur, is crucial for knowledgeable seismic hazard evaluation. The methodology for the pattern recognition of areas prone to large earthquakes is based on the morphostructural zoning method (MSZ), which employs topographic data and present-day tectonic structures for the mapping of earthquake-controlling structures (i.e. the nodes formed around lineament intersections) and does not require knowledge of past seismicity. The nodes are assumed to be characterized by a uniform set of topographic, geologic, and geophysical parameters; on the basis of such parameters the pattern recognition algorithm defines a classification rule to discriminate between seismogenic and non-seismogenic nodes. This methodology has been successfully applied since the early 1970s in a number of regions worldwide, including California, where it permitted the identification of areas that were subsequently struck by strong events and that previously had not been considered prone to strong earthquakes. Recent studies on the Iberian Peninsula and the Rhone Valley have demonstrated the applicability of MSZ to basins with relatively flat topography. In this study, the analysis is applied to the Po Plain (Northern Italy), an area characterized by flat topography, to allow for the systematic identification of nodes prone to earthquakes with magnitude greater than or equal to M = 5.0. The MSZ method differs from standard morphostructural analysis, where the term "lineament" is used to denote the complex of alignments detectable on topographic maps or satellite images. According to that definition the lineament is locally defined, and its existence does not depend on the surrounding areas. In MSZ, the primary element is the block - a relatively homogeneous area - while the lineament is a secondary element of the morphostructure. The identified earthquake-prone areas provide first-order systematic information that may significantly contribute to seismic hazard assessment in the Italian territory. The information about the possible location of strong earthquakes provided by the morphostructural analysis can, in fact, be naturally incorporated in the neo-deterministic procedure for seismic hazard assessment (NDSHA), so as to fill in possible gaps in known seismicity. Moreover, the spatial information about earthquake-prone areas can be fruitfully combined with the space-time information provided by the quantitative analysis of seismic flow, so as to identify the priority areas (with linear dimensions of a few tens of kilometers), where the probability of a strong earthquake is relatively high, for detailed local-scale studies. The new indications about seismogenic potential obtained from this study, although less accurate than detailed fault studies, have the advantage of being independent of past seismicity information, since they rely on the systematic and quantitative analysis of the available geological and morphostructural data. Thus, this analysis appears particularly useful in areas where historical information is scarce; special attention should be paid to seismogenic nodes that are not related to known active faults or past earthquakes.
NASA Astrophysics Data System (ADS)
Garcia, A.; Berrocoso, M.; Marrero, J. M.; Ortiz, R.
2012-04-01
The FFM (Failure Forecast Method) was developed from the eruption of Mount St. Helens and has been repeatedly applied to forecast eruptions and, recently, to the prediction of seismic activity in active volcanic areas. The underwater eruption at El Hierro Island was monitored from three months before its onset (October 10, 2011). This yielded a large catalogue of seismic events (over 11000) and continuous seismic recordings covering the entire period. Since the beginning of the seismic-volcanic crisis (July 2011), the FFM was applied to the SSEM signal of the seismic records. Mainly because El Hierro is a very small island, the SSEM has high noise levels (traffic and oceanic noise). A Kalman filter was used to improve the signal-to-noise ratio. The Kalman filter coefficients are adjusted using an inversion process based on the forecasting errors that occurred in the preceding twenty days. The application of this filter has brought a significant improvement in the reliability of the forecasts. The analysis of the results shows that, before the start of the eruption, 90% of the forecasts are obtained with errors of less than 10 minutes and with more than 24 hours of advance warning. It is noteworthy that the method predicts the events of greater magnitude and especially the beginning of each swarm of seismic events. Once the eruption started, the efficiency of the forecasts dropped to 50%, with a dispersion of more than one hour. This is probably due to decreased detectability caused by the saturation of some of the seismic stations and a decrease in the average magnitude. However, the events of magnitude greater than 4 were predicted with an error of less than 20 minutes.
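The filtering step can be illustrated with a minimal scalar local-level Kalman filter (a generic sketch; the coefficient inversion against past forecasting errors used by the authors is not reproduced here):

```python
import numpy as np

def kalman_smooth(z, q=1e-4, r=1.0):
    """Scalar local-level Kalman filter: state x_k = x_{k-1} + w_k (var q),
    measurement z_k = x_k + v_k (var r). Returns the filtered estimates."""
    x, p, out = z[0], 1.0, []
    for zk in z:
        p = p + q                 # predict: state variance grows by q
        k = p / (p + r)           # Kalman gain
        x = x + k * (zk - x)      # update with the innovation
        p = (1.0 - k) * p
        out.append(x)
    return np.array(out)

rng = np.random.default_rng(1)
raw = 1.0 + rng.normal(0.0, 0.5, 500)   # noisy stand-in for an SSEM trace
smooth = kalman_smooth(raw)
```

After a short burn-in the filtered trace tracks the underlying level with much lower variance than the raw signal, which is the effect exploited to stabilize the FFM forecasts.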
Measuring the seismic velocity in the top 15 km of Earth's inner core
NASA Astrophysics Data System (ADS)
Godwin, Harriet; Waszek, Lauren; Deuss, Arwen
2018-01-01
We present seismic observations of the uppermost layer of the inner core. This was formed most recently, thus its seismic features are related to current solidification processes. Previous studies have only constrained the east-west hemispherical seismic velocity structure in the Earth's inner core at depths greater than 15 km below the inner core boundary. The properties of shallower structure have not yet been determined, because the seismic waves PKIKP and PKiKP used for differential travel time analysis arrive close together and start to interfere. Here, we present a method to make differential travel time measurements for waves that turn in the top 15 km of the inner core, and measure the corresponding seismic velocity anomalies. We achieve this by generating synthetic seismograms to model the overlapping signals of the inner core phase PKIKP and the inner core boundary phase PKiKP. We then use a waveform comparison to attribute different parts of the signal to each phase. By measuring the same parts of the signal in both observed and synthetic data, we are able to calculate differential travel time residuals. We apply our method to data with ray paths which traverse the Pacific hemisphere boundary. We generate a velocity model for this region, finding lower velocity for deeper, more easterly ray paths. Forward modelling suggests that this region contains either a high velocity upper layer, or variation in the location of the hemisphere boundary with depth and/or latitude. Our study presents the first direct seismic observation of the uppermost 15 km of the inner core, opening new possibilities for further investigating the inner core boundary region.
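A common way to quantify the misalignment between an observed and a synthetic phase, as in the waveform comparison described above, is a cross-correlation lag; the sketch below recovers a known delay on a toy pulse (illustrative only, not the authors' measurement procedure):

```python
import numpy as np

def cc_lag(obs, syn, dt):
    """Time shift (s) that best aligns syn with obs, via cross-correlation."""
    cc = np.correlate(obs, syn, mode='full')
    return (np.argmax(cc) - (len(syn) - 1)) * dt

# synthetic test: a Gaussian pulse and a copy delayed by 5 samples
t = np.arange(200)
syn = np.exp(-0.5 * ((t - 80) / 6.0) ** 2)
obs = np.roll(syn, 5)               # "observed" arrives 5 samples later
residual = cc_lag(obs, syn, dt=0.01)   # expected differential time: 0.05 s
```

For interfering PKIKP and PKiKP arrivals the full method must first attribute parts of the signal to each phase, but the final residual measurement reduces to an alignment estimate of this kind.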
NASA Astrophysics Data System (ADS)
Matcharashvili, Teimuraz N.; Chelidze, Tamaz L.; Zhukova, Natalia N.
2015-07-01
Investigation of the dynamical features of the seismic process, as well as the possible influence of different natural and man-made impacts on it, remains one of the main interdisciplinary research challenges. The question of external influences (forcings) acquires new importance in the light of known facts about the essential changes that can occur in the behavior of complex systems due to relatively weak external impacts. Seismic processes in a complicated tectonic system are no exception to this general rule. In the present research we continued the investigation of the dynamical features of seismic activity in Central Asia around the Bishkek (Kyrgyzstan) test area, where strong electromagnetic (EM) soundings were performed in the 1980s. The unexpected result of these experiments was that they revealed the impact of strong electromagnetic discharges on the microseismic activity of the investigated area. We used an earthquake catalogue of this area to investigate the dynamical features of seismic activity in the periods before, during, and after the mentioned man-made EM forcings. Different methods of modern time series analysis have been used, such as wavelet transformation, Hilbert-Huang transformation, detrended fluctuation analysis, and recurrence quantification analysis. Namely, inter-event (waiting) time intervals, inter-earthquake distances and magnitude sequences, as well as time series of the number of daily occurring earthquakes, have been analyzed. We concluded that man-made high-energy EM irradiation essentially affects the dynamics of the seismic process in the investigated area in its temporal and spatial domains; namely, the extent of order in the earthquake time and space distributions increases. At the same time, the EM influence on the energetic distribution is not clear from the present analysis. It was also shown that the influence of EM impulses on the dynamical features of seismicity differs in different areas of the examined territory around the test site. Clear changes have been indicated only in areas which, according to previous research, have been characterized by an anomalous increase of average rates of strain release and thus can be regarded as close to the critical state.
Nonlinear seismic analysis of a reactor structure impact between core components
NASA Technical Reports Server (NTRS)
Hill, R. G.
1975-01-01
The seismic analysis of the FFTF-PIOTA (Fast Flux Test Facility-Postirradiation Open Test Assembly), subjected to a horizontal DBE (Design Base Earthquake), is presented. The PIOTA is the first in a set of open test assemblies to be designed for the FFTF. Employing the direct method of transient analysis, the governing differential equations describing the motion of the system are set up directly and are implicitly integrated numerically in time. A simple lumped-mass beam model of the FFTF which includes small clearances between core components is used as a "driver" for a fine-mesh model of the PIOTA. The nonlinear forces due to the impact of the core components and their effect on the PIOTA are computed.
Spots of Seismic Danger Extracted by Properties of Low-Frequency Seismic Noise
NASA Astrophysics Data System (ADS)
Lyubushin, Alexey
2013-04-01
A new method of seismic danger estimation is presented, based on properties of low-frequency seismic noise from broadband networks. Two statistics of the noise waveforms are considered: the multi-fractal singularity spectrum support width D and the minimum normalized entropy En of squared orthogonal wavelet coefficients. Maps of D and En are plotted in a moving time window. We call the regions extracted by low values of D and high values of En "spots of seismic danger" (SSD). Mean values of D and En are strongly anti-correlated, which is why the statistics D and En extract the same SSD. Nevertheless, considering them jointly is expedient because the two parameters are based on different approaches. The physical mechanism underlying the method is the consolidation of small blocks of the Earth's crust into a large one before a strong earthquake. A consequence of this effect is that the seismic noise no longer includes the spikes connected with mutual movements of small blocks. The absence of irregular spikes in the noise leads to a decrease of D and an increase of the entropy En. The stability in space and size of the SSD provides estimates of the place and energy of the probable future earthquake. The growth or shrinkage of the SSD, and the minimum and maximum values of D and En within the SSD, allow estimating the trend of seismic danger. The method is illustrated by the analysis of seismic noise from the broadband seismic network F-net in Japan [1-5]. A statistically significant decrease of D allowed the hypothesis that Japan was approaching a future seismic catastrophe to be formulated in the middle of 2008. The peculiarities of the correlation coefficient estimate, within a 1-year time window, between median values of D and the generalized Hurst exponent led to the conclusion that, starting from July 2010, Japan had entered a state of waiting for a strong earthquake [3].
The method extracted a huge SSD near Japan which includes the region of the future Tohoku mega-earthquake and the region of the Nankai Trough. The analysis of seismic noise after March 2011 indicates an increasing probability, starting from the middle of 2013, of a second mega-earthquake within the region of the Nankai Trough, which remains an SSD. References: 1. Lyubushin A. Multifractal Parameters of Low-Frequency Microseisms // V. de Rubeis et al. (eds.), Synchronization and Triggering: from Fracture to Earthquake Processes, GeoPlanet: Earth and Planetary Sciences 1, DOI 10.1007/978-3-642-12300-9_15, Springer-Verlag Berlin Heidelberg, 2010, Chapter 15, pp. 253-272. http://www.springerlink.com/content/hj2l211577533261/ 2. Lyubushin A.A. Synchronization of multifractal parameters of regional and global low-frequency microseisms. European Geosciences Union General Assembly 2010, Vienna, 02-07 May 2010, Geophysical Research Abstracts, Vol. 12, EGU2010-696, 2010. http://meetingorganizer.copernicus.org/EGU2010/EGU2010-696.pdf 3. Lyubushin A.A. Synchronization phenomena of low-frequency microseisms. European Seismological Commission, 32nd General Assembly, September 06-10, 2010, Montpellier, France. Book of abstracts, p. 124, session ES6. http://alexeylyubushin.narod.ru/ESC-2010_Book_of_abstracts.pdf 4. Lyubushin A.A. Seismic Catastrophe in Japan on March 11, 2011: Long-Term Prediction on the Basis of Low-Frequency Microseisms. Izvestiya, Atmospheric and Oceanic Physics, 2011, Vol. 46, No. 8, pp. 904-921. http://www.springerlink.com/content/kq53j2667024w715/ 5. Lyubushin A. Prognostic properties of low-frequency seismic noise. Natural Science, 4, 659-666. doi: 10.4236/ns.2012.428087. http://www.scirp.org/journal/PaperInformation.aspx?paperID=21656
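The entropy statistic can be sketched as follows: En is the normalized Shannon entropy of the squared orthogonal wavelet coefficients, so noise whose energy spreads evenly across coefficients (no spikes) gives high En, while spiky noise concentrates energy and lowers it. The Haar transform below is a minimal stand-in for the orthogonal wavelet; parameters are illustrative, not the author's configuration.

```python
import numpy as np

def haar_coeffs(x):
    """Full orthonormal Haar decomposition (len(x) must be a power of 2)."""
    x = np.asarray(x, dtype=float)
    out = []
    while len(x) > 1:
        out.append((x[0::2] - x[1::2]) / np.sqrt(2.0))   # detail coefficients
        x = (x[0::2] + x[1::2]) / np.sqrt(2.0)           # approximation
    out.append(x)
    return np.concatenate(out)

def norm_entropy(x):
    """Normalized entropy En of squared wavelet coefficients, in [0, 1]."""
    c2 = haar_coeffs(x) ** 2
    p = c2 / c2.sum()
    p = p[p > 0]
    return float(-(p * np.log(p)).sum() / np.log(len(c2)))

rng = np.random.default_rng(2)
smooth_noise = rng.normal(0.0, 1.0, 1024)   # spike-free background noise
spiky = 0.1 * rng.normal(0.0, 1.0, 1024)
spiky[::128] += 20.0                        # strong irregular spikes
en_smooth, en_spiky = norm_entropy(smooth_noise), norm_entropy(spiky)
```

The spike-free trace yields the higher entropy, matching the paper's argument that crust consolidation (fewer spikes) raises En.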
NASA Astrophysics Data System (ADS)
Itzá Balam, Reymundo; Iturrarán-Viveros, Ursula; Parra, Jorge O.
2018-03-01
Two main stages of seismic modeling are geological model building and numerical computation of the seismic response for the model. The quality of the computed seismic response is partly related to the type of model that is built; therefore, the model-building approach becomes as important as the numerical seismic forward-modeling method. For this purpose, three petrophysical facies (sands, shales and limestones) are extracted from reflection seismic data and selected seismic attributes via the clustering method called Self-Organizing Maps (SOM), which, in this context, serves as a geological model-building tool. This model, with all its properties, is the input to the Optimal Implicit Staggered Finite Difference (OISFD) algorithm to create synthetic seismograms for poroelastic, poroacoustic and elastic media. The results show good agreement between observed and 2-D synthetic seismograms. This demonstrates that the SOM classification method enables us to extract facies from seismic data and allows us to integrate the lithology at the borehole scale with the 2-D seismic data.
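A minimal 1D-grid SOM of the kind used here for facies extraction can be sketched as follows (the toy attribute data and hyper-parameters are assumptions for illustration, not the authors' configuration or the OISFD solver):

```python
import numpy as np

def train_som_1d(data, n_nodes=3, epochs=60, lr0=0.5, sig0=1.0, seed=0):
    """Train a 1D-grid SOM on attribute vectors (rows of data)."""
    rng = np.random.default_rng(seed)
    w = data[rng.choice(len(data), n_nodes, replace=False)].astype(float)
    grid = np.arange(n_nodes)
    for t in range(epochs):
        frac = 1.0 - t / epochs            # decaying learning rate and radius
        lr, sig = lr0 * frac, sig0 * frac + 1e-3
        for x in data[rng.permutation(len(data))]:
            bmu = int(np.argmin(((w - x) ** 2).sum(axis=1)))  # best-matching unit
            h = np.exp(-((grid - bmu) ** 2) / (2.0 * sig ** 2))
            w += lr * h[:, None] * (x - w)                    # pull neighbourhood
    return w

def classify(data, w):
    return np.argmin(((data[:, None, :] - w[None]) ** 2).sum(axis=2), axis=1)

# toy "facies": three separated clusters in a 2-attribute space
rng = np.random.default_rng(3)
centers = np.array([[0.0, 0.0], [10.0, 0.0], [0.0, 10.0]])
data = np.vstack([c + 0.3 * rng.normal(size=(50, 2)) for c in centers])
w = train_som_1d(data)
labels = classify(data, w)
```

Each trace (here, each attribute vector) is assigned the index of its best-matching node, which plays the role of a facies label in the model-building step.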
Seismic Analysis of Three Bomb Explosions in Turkey
NASA Astrophysics Data System (ADS)
Necmioglu, O.; Semin, K. U.; Kocak, S.; Destici, C.; Teoman, U.; Ozel, N. M.
2016-12-01
Seismic analyses of three vehicle-borne bomb explosions that occurred on 13 March 2016 in Ankara, 12 May 2016 in Diyarbakır and 9 July 2016 in Mardin have been conducted using data from the nearest stations (LOD, DYBB and MAZI) of the Boğaziçi University - Kandilli Observatory and Earthquake Research Institute (KOERI) seismic network and compared with low-magnitude earthquakes at similar distances, based on phase readings and frequency content. Amplitude spectra have been compared through Fourier transformation, and earthquake-explosion frequency discrimination has been performed using various filter bands. Time-domain and spectral analyses have been performed using the Geotool software provided by the CTBTO. Local magnitude (ML) values have been calculated for each explosion by removing the instrument response and adding a Wood-Anderson type instrument response. The approximate amounts of explosives used in these explosions have been determined using the empirical methods of Koper (2002). Preliminary results indicated that 16 tons of TNT-equivalent explosives were used in the 12 May 2016 Diyarbakır explosion, which is very much in accordance with the media reports claiming 15 tons of TNT. Our analysis for the 9 July 2016 Mardin explosion matched the reported 5 tons of explosives. Results concerning the 13 March 2016 Ankara explosion indicated that approximately 1.7 tons of TNT-equivalent explosives were used in the attack, whereas security and intelligence reports claimed 300 kg of explosives as a combination of TNT, RDX and ammonium nitrate. The overestimate obtained in our analysis for the Ankara explosion may be due to i) the high relative effectiveness factor of the RDX component of the explosive, ii) the inefficiency of the Koper (2002) method at lower yields (since the method was developed using explosions with yields of 3-12 tons of TNT), or iii) a combination of both.
NASA Astrophysics Data System (ADS)
Maskar, A. D.; Madhekar, S. N.; Phatak, D. R.
2017-11-01
Knowledge of the seismic active earth pressure behind a rigid retaining wall is essential in the design of retaining walls in earthquake-prone regions. The commonly used Mononobe-Okabe (MO) method adopts a pseudo-static approach. Recently, many pseudo-dynamic methods have been used to evaluate the seismic earth pressure. However, the available pseudo-static and pseudo-dynamic methods do not incorporate the effect of wall movement on the earth pressure distribution. Dubrova (Interaction between soils and structures, Rechnoi Transport, Moscow, 1963) was the first to consider such an effect, and to date the model has been used for cohesionless soil without considering the effect of seismicity. In this paper, Dubrova's model based on the redistribution principle has been extended to include the seismic effect. It is further used to compute the distribution of seismic active earth pressure in a more realistic manner, by considering the effect of wall movement on the earth pressure, as it is a displacement-based method. The effects of a wide range of parameters, such as the soil friction angle (ϕ), wall friction angle (δ), and horizontal and vertical seismic acceleration coefficients (kh and kv), on the seismic active earth pressure coefficient (Kae) have been studied. Results are presented comparing the pseudo-static and pseudo-dynamic methods, to highlight the realistic non-linearity of the seismic active earth pressure distribution. The current study finds that Kae varies with kh in the same manner as in the MO method and in the study of Choudhury and Nimbalkar (Geotech Geol Eng 24(5):1103-1113, 2006). As ϕ increases, there is a reduction in both static and seismic earth pressure. Also, keeping ϕ constant, as kh increases from 0 to 0.3, the earth pressure increases, whereas as δ increases, the active earth pressure decreases. The seismic active earth pressure coefficient (Kae) obtained from the present study is approximately the same as that obtained by previous researchers. Though the seismic earth pressure obtained by the pseudo-dynamic approach and that obtained by the redistribution principle have different formulation backgrounds, the final earth pressure distributions are approximately the same.
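For reference, the classical pseudo-static MO coefficient that the paper takes as its baseline can be computed directly (a textbook sketch; the redistribution-based pressure distribution itself is not reproduced here):

```python
import math

def k_ae(phi, delta, kh, kv=0.0, beta=0.0, i=0.0):
    """Mononobe-Okabe seismic active earth pressure coefficient.
    Angles in degrees: phi = soil friction, delta = wall friction,
    beta = wall batter from vertical, i = backfill slope."""
    phi, delta, beta, i = (math.radians(a) for a in (phi, delta, beta, i))
    theta = math.atan2(kh, 1.0 - kv)        # seismic inertia angle
    num = math.cos(phi - theta - beta) ** 2
    root = math.sqrt(math.sin(phi + delta) * math.sin(phi - theta - i)
                     / (math.cos(delta + beta + theta) * math.cos(i - beta)))
    den = (math.cos(theta) * math.cos(beta) ** 2
           * math.cos(delta + beta + theta) * (1.0 + root) ** 2)
    return num / den

ka_static = k_ae(30.0, 0.0, 0.0)     # no wall friction, no earthquake
ka_seismic = k_ae(30.0, 15.0, 0.2)   # kh = 0.2
```

With kh = kv = δ = 0 the expression collapses to the Coulomb/Rankine active coefficient tan²(45° − ϕ/2), and Kae grows with kh while decreasing with δ, the trends highlighted in the abstract.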
NASA Astrophysics Data System (ADS)
Hibert, Clément; Provost, Floriane; Malet, Jean-Philippe; Bourrier, Franck; Berger, Frédéric; Bornemann, Pierrick; Borgniet, Laurent; Tardif, Pascal; Mermin, Eric
2016-04-01
Understanding the dynamics of rockfalls is critical to mitigating the associated hazards, but is made very difficult by the nature of these natural disasters, which makes them hard to observe directly. Recent advances in seismology make it possible to determine the dynamics of the largest landslides on Earth from the very low-frequency seismic waves they generate. However, the vast majority of rockfalls that occur worldwide are too small to generate such low-frequency seismic waves, so these methods cannot be used to reconstruct their dynamics. If seismic sensors are close enough, however, these events will generate high-frequency seismic signals. Unfortunately, we cannot yet use these high-frequency seismic records to infer parameters synthesizing the rockfall dynamics, because the source of these waves is not well understood. One of the first steps towards understanding the physical processes involved in the generation of high-frequency seismic waves by rockfalls is to study the link between the dynamics of a single block propagating along a well-known path and the features of the seismic signal generated. We conducted controlled releases of single blocks of limestone in a gully of clay-shales (i.e. black marls) in the Rioux Bourdoux torrent (French Alps). 28 blocks, with masses ranging from 76 kg to 472 kg, were released. A monitoring network combining high-velocity cameras, a broadband seismometer and an array of 4 high-frequency seismometers was deployed near the release area and along the travel path. The high-velocity cameras allow us to reconstruct the 3D trajectories of the blocks, to estimate their velocities, and to locate the impacts with the slope surface. These data are compared to the recorded seismic signals. As the distance between the block and the seismic sensors at the time of each impact is known, we can determine the associated seismic signal amplitude corrected for propagation and attenuation effects.
We can further compare the velocity, the energy and the momentum of the block at each impact to the true amplitude and the energy of the corresponding part of the seismic signal. Finding potential correlations and scaling laws between the dynamics of the source and the high-frequency seismic signal features constitutes an important breakthrough to understand more complex slope movements that involve multiple blocks or granular flows. This approach may lead to future developments of methods able to determine the dynamics of a large variety of slope movements directly from the seismic signals they generate.
Zhang, Heng; Pan, Zhongming; Zhang, Wenna
2018-06-07
An acoustic-seismic mixed feature extraction method based on the wavelet coefficient energy ratio (WCER) of the target signal is proposed in this study for classifying vehicle targets in wireless sensor networks. The signal was decomposed into a set of wavelet coefficients using the à trous algorithm, which is a concise method used to implement the wavelet transform of a discrete signal sequence. After the wavelet coefficients of the target acoustic and seismic signals were obtained, the energy ratio of each layer coefficient was calculated as the feature vector of the target signals. Subsequently, the acoustic and seismic features were merged into an acoustic-seismic mixed feature to improve the target classification accuracy after the acoustic and seismic WCER features of the target signal were simplified using the hierarchical clustering method. We selected the support vector machine method for classification and utilized the data acquired from a real-world experiment to validate the proposed method. The calculated results show that the WCER feature extraction method can effectively extract the target features from target signals. Feature simplification can reduce the time consumption of feature extraction and classification, with no effect on the target classification accuracy. The use of acoustic-seismic mixed features effectively improved target classification accuracy by approximately 12% compared with either acoustic signal or seismic signal alone.
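A WCER feature vector of the kind described can be sketched with an à trous (stationary wavelet) decomposition followed by per-layer energy ratios. This sketch assumes the common B3-spline kernel and wrap-around boundaries; the paper's exact filter and decomposition depth are not specified here:

```python
import numpy as np

def atrous_wcer(x, levels=4):
    """Wavelet-coefficient energy-ratio (WCER) feature via the a trous algorithm.

    At each scale j the signal is smoothed with the B3-spline kernel dilated
    by 2**j (inserting "holes"), the detail layer is the difference, and the
    feature is the fraction of total energy in each layer.
    """
    h = np.array([1.0, 4.0, 6.0, 4.0, 1.0]) / 16.0   # B3-spline kernel (assumed)
    c = np.asarray(x, dtype=float)
    n = len(c)
    idx = np.arange(n)
    energies = []
    for j in range(levels):
        step = 2 ** j
        # convolution with the kernel dilated by inserting 2**j - 1 zeros
        c_next = sum(h[m] * c[(idx + step * (m - 2)) % n] for m in range(5))
        w = c - c_next                    # detail (wavelet) coefficients at scale j
        energies.append(np.sum(w ** 2))
        c = c_next
    energies.append(np.sum(c ** 2))       # residual smooth layer
    e = np.array(energies)
    return e / e.sum()                    # energy ratio of each layer
```

Concatenating the acoustic and seismic ratio vectors would give the mixed feature fed to the classifier.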
Stochastic seismic inversion based on an improved local gradual deformation method
NASA Astrophysics Data System (ADS)
Yang, Xiuwei; Zhu, Peimin
2017-12-01
A new stochastic seismic inversion method based on the local gradual deformation method is proposed, which can incorporate seismic data, well data, geology and their spatial correlations into the inversion process. Geological information, such as sedimentary facies and structures, can provide significant a priori information to constrain an inversion and arrive at reasonable solutions. The local a priori conditional cumulative distributions at each node of the model to be inverted are first established by indicator cokriging, which integrates well data as hard data and geological information as soft data. Probability field simulation is used to simulate different realizations consistent with the spatial correlations and the local conditional cumulative distributions; the corresponding probability field is generated by the fast Fourier transform moving average method. Optimization is then performed to match the seismic data via an improved local gradual deformation method, for which two strategies adapted to seismic inversion are proposed. The first is to select and update local areas where the fit between synthetic and real seismic data is poor. The second is to divide each seismic trace into several parts and obtain the optimal parameters for each part individually. Applications to a synthetic example and a real case study demonstrate that our approach can effectively find fine-scale acoustic impedance models and provide uncertainty estimates.
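The core gradual-deformation step can be sketched in a few lines. This is the basic global scheme, shown for illustration; the paper's local/improved variants optimize such deformation parameters region by region and trace-part by trace-part:

```python
import numpy as np

def gradual_deformation(z1, z2, t):
    """Combine two independent Gaussian realizations into a new one.

    For any t, z(t) = z1*cos(pi*t) + z2*sin(pi*t) is again Gaussian with the
    same mean (zero) and covariance, because cos^2 + sin^2 = 1. The scalar t
    can therefore be varied continuously to improve the match to seismic
    data while preserving the geostatistical model.
    """
    return z1 * np.cos(np.pi * t) + z2 * np.sin(np.pi * t)
```

At t = 0 the chain reproduces the first realization exactly, so successive optimized steps form a continuous path through realization space.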
Local spatiotemporal time-frequency peak filtering method for seismic random noise reduction
NASA Astrophysics Data System (ADS)
Liu, Yanping; Dang, Bo; Li, Yue; Lin, Hongbo
2014-12-01
To achieve a higher level of seismic random noise suppression, the Radon transform was adopted to implement spatiotemporal time-frequency peak filtering (TFPF) in our previous studies, which performed TFPF in the full-aperture Radon domain (both linear and parabolic Radon). Although the superiority of this method over conventional TFPF has been demonstrated on synthetic seismic models and field seismic data, it still has limitations. Full-aperture linear and parabolic Radon transforms are applicable and effective in relatively simple situations (e.g., curved reflection events with regular geometry), but not in complicated ones, such as reflection events with irregular shapes or interlaced events with very different slope or curvature parameters. A localized application of the Radon transform is therefore needed: the filtering is better served by adapting the transform to the local character of the data variations. In this article, we propose adopting a local Radon transform, referred to as piecewise full-aperture Radon, to realize spatiotemporal TFPF; we call the result local spatiotemporal TFPF. Experiments on synthetic seismic models and field seismic data demonstrate the advantage of our method in seismic random noise reduction and reflection event recovery for relatively complicated seismic data.
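The building block underlying the linear Radon step is the slant stack (linear tau-p transform). Below is a minimal integer-slope, nearest-sample sketch of that transform on a small gather; production codes use interpolation and least-squares (sparse) inversion rather than plain stacking, and the piecewise variant applies this within local windows:

```python
import numpy as np

def slant_stack(d, slopes):
    """Slant stack (linear tau-p transform) of a 2D gather d[trace, time].

    r[p, tau] = sum over traces ix of d[ix, tau + p*ix]; a coherent linear
    event with slope p0 stacks constructively at (p0, its intercept tau0).
    Slopes are in samples per trace for simplicity.
    """
    nx, nt = d.shape
    r = np.zeros((len(slopes), nt))
    for ip, p in enumerate(slopes):
        for ix in range(nx):
            shift = int(round(p * ix))        # moveout of trace ix for slope p
            if 0 <= shift < nt:
                r[ip, :nt - shift] += d[ix, shift:]
    return r
```

Filtering in this domain and transforming back is what localizes the TFPF to events of a given local slope.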
NASA Astrophysics Data System (ADS)
Maeda, T.; Nishida, K.; Takagi, R.; Obara, K.
2015-12-01
The high-sensitivity seismograph network of Japan (Hi-net), operated by the National Research Institute for Earth Science and Disaster Prevention (NIED), has about 800 stations with an average separation of 20 km. Long-period seismic wave propagation can be observed as a 2D wavefield because the station separation is shorter than the wavelength. In contrast, short-period waves are quite incoherent between stations; however, their envelope shapes are similar at neighboring stations. Therefore, we may be able to extract seismic wave energy propagation by seismogram envelope analysis. We attempted to characterize long-period seismic waveforms and short-period envelopes as 2D wavefields by applying seismic gradiometry. We applied seismic gradiometry to a synthetic long-period (20-50 s) dataset prepared by numerical simulation in a realistic 3D medium at the Hi-net station layout. The wave amplitude and its spatial derivatives are estimated using data at nearby stations, and the slowness vector, the radiation pattern and the geometrical spreading are extracted from the estimated velocity, displacement and spatial derivatives. At short periods (shorter than 1 s), seismogram envelopes show temporal and spatial broadening through scattering by medium heterogeneity, so the envelope shape is expected to be coherent among nearby stations. Based on this idea, we applied the same method to the time-integration of the seismogram envelope to estimate its spatial derivatives. In synthetic tests, we succeeded in estimating the slowness vector from seismogram envelopes as well as from long-period waveforms, without using phase information. Our preliminary results show that seismic gradiometry suits Hi-net for extracting wave propagation characteristics at both long and short periods. The method is appealing in that it estimates waves on a homogeneous grid, allowing the seismic wavefield to be monitored as such. By applying seismic gradiometry to Hi-net, it should be possible to obtain phase-velocity variations from direct waves and to capture scattered wave packets in the coda.
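The core gradiometry step, estimating the wavefield amplitude and its spatial derivatives from nearby stations, is a least-squares plane fit. The sketch below is a minimal version under that assumption; real implementations weight stations and may include curvature terms. Dividing the (negated) spatial gradient by the time derivative then yields the slowness vector, as the test illustrates with a synthetic plane wave:

```python
import numpy as np

def gradiometry_plane_fit(xy, u):
    """Estimate wavefield amplitude and spatial gradient at an array center.

    Fits u_i ~ u0 + ux*(x_i - x0) + uy*(y_i - y0) over nearby stations by
    least squares (the core step of seismic gradiometry).
    xy : (nsta, 2) station coordinates; u : (nsta,) amplitudes at one time.
    Returns [u0, du/dx, du/dy] at the array centroid.
    """
    xy = np.asarray(xy, float)
    x0 = xy.mean(axis=0)
    g = np.column_stack([np.ones(len(xy)), xy[:, 0] - x0[0], xy[:, 1] - x0[1]])
    coef, *_ = np.linalg.lstsq(g, u, rcond=None)
    return coef
```

For a plane wave u(x, t) = f(t - p·x), the slowness follows as p = -(spatial gradient)/(time derivative), which is exactly what makes the method work on phase-free envelopes as well.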
Uncertainty analysis of depth predictions from seismic reflection data using Bayesian statistics
NASA Astrophysics Data System (ADS)
Michelioudakis, Dimitrios G.; Hobbs, Richard W.; Caiado, Camila C. S.
2018-03-01
Estimating the depths of target horizons from seismic reflection data is an important task in exploration geophysics. To constrain these depths we need a reliable and accurate velocity model. Here, we build an optimal 2D seismic reflection data processing flow, focused on pre-stack deghosting filters and velocity model building, and apply Bayesian methods, including Gaussian process emulation and Bayesian History Matching (BHM), to estimate the uncertainties of the depths of key horizons near borehole DSDP-258, located in the Mentelle Basin, southwest of Australia, and compare the results with the drilled core from that well. Following this strategy, the tie between the modelled and observed depths from the DSDP-258 core was in accordance with the ±2σ posterior credibility intervals, and predictions of the depths to key horizons were made for two new drill sites adjacent to the existing borehole. The probabilistic analysis allowed us to generate multiple realizations of pre-stack depth-migrated images; these can be used directly to better constrain interpretation and to identify potential risk at drill sites. The method will be applied to constrain the drilling targets for the upcoming International Ocean Discovery Program (IODP) Leg 369.
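The history-matching step hinges on an implausibility measure that compares an observation with an emulator prediction under a variance budget. The schematic below shows that standard measure; the emulator mean/variance are passed in as plain numbers here, whereas the paper builds full Gaussian-process emulators over the velocity-model inputs:

```python
import numpy as np

def implausibility(z, mu_em, var_em, var_obs, var_disc):
    """History-matching implausibility of a candidate model input x.

    I(x) = |z - E[f(x)]| / sqrt(Var_em(x) + Var_obs + Var_disc), where
    E[f(x)] and Var_em(x) come from a Gaussian-process emulator of the
    forward model, Var_obs is observational error and Var_disc is model
    discrepancy. Inputs with I(x) > 3 are conventionally ruled out.
    """
    return np.abs(z - mu_em) / np.sqrt(var_em + var_obs + var_disc)
```

Iterating this over waves of emulator refits shrinks the not-implausible input space, which is what ultimately tightens the posterior depth intervals.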
NASA Astrophysics Data System (ADS)
Chan, Chun-Kai; Loh, Chin-Hsiung; Wu, Tzu-Hsiu
2015-04-01
In civil engineering, health monitoring and damage detection are typically carried out using a large number of sensors. Most methods require global measurements to extract the properties of the structure. However, some sensors, such as LVDTs, cannot be used because of in situ limitations, so the global deformation remains unknown. An experiment is used to demonstrate the proposed algorithms: a one-story, two-bay reinforced concrete frame under weak and strong seismic excitation. In this paper, signal processing and nonlinear identification techniques are applied to the measured seismic responses of reinforced concrete structures subjected to different levels of earthquake excitation. Both modal-based and signal-based system identification and feature extraction techniques are used to study the nonlinear inelastic response of the RC frame, using either input and output response data or output-only measurements. The signal-based damage identification methods include enhanced time-frequency analysis of the acceleration responses and estimation of the permanent deformation directly from acceleration response data. Finally, local deformation measurements from a dense optical tracker are also used to quantify the damage of the RC frame structure.
NASA Astrophysics Data System (ADS)
Dietrich, Carola; Wölbern, Ingo; Faria, Bruno; Rümpker, Georg
2017-04-01
Fogo is the only island of the Cape Verde archipelago with regularly occurring volcanic eruptions since its discovery in the 15th century. The volcanism of the archipelago originates from a mantle plume beneath an almost stationary tectonic plate. With an eruption interval of approximately 20 years, Fogo is among the most active oceanic volcanoes. The latest eruption started in November 2014 and ceased in February 2015. This study aims to characterize the seismic activity and the magmatic plumbing system of Fogo, which is believed to be related to a magmatic source close to the neighboring island of Brava. According to previous studies using conventional seismic network configurations, most of the seismic activity occurs offshore. Seismological array techniques therefore represent powerful tools for investigating earthquakes and other volcano-related events located outside the network. Another advantage of seismic arrays is their ability to detect events of relatively small magnitude and to locate seismic signals without clear phase onsets, such as volcanic tremor. Since October 2015 we have been operating a test array on Fogo as part of a pilot study. This array consists of 10 seismic stations distributed in a circle with an aperture of 700 m. The stations are equipped with Omnirecs CUBE dataloggers and either 4.5 Hz geophones (7 stations) or Trillium Compact broadband seismometers (3 stations). In January 2016 we installed three additional broadband stations across the island of Fogo to improve event localization. The data from the pilot study are dominated by seismic activity around Brava, but also exhibit tremor and hybrid events of unknown origin within the caldera of Fogo volcano. The preliminary analysis of these events includes the characterization and localization of the different event types using seismic array processing in combination with conventional localization methods. At the beginning of August 2016, a "seismic crisis" occurred on the island of Brava and led to the evacuation of a village; our instruments on Fogo recorded more than 40 earthquakes during this period. Locations and magnitudes of these events will be presented. In January 2017 the pilot project will be complemented by three additional seismic arrays (two on Fogo, one on Brava) to improve seismic event localization and structural imaging based on scattered seismic phases using multi-array techniques. Initial recordings from the new arrays are expected to be available by April 2017.
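The array-processing step for locating events outside the network typically reduces to delay-and-sum beamforming over a grid of trial slowness vectors. The sketch below shows that generic technique (not this project's specific code), with integer-sample shifts and station coordinates in km as simplifying assumptions:

```python
import numpy as np

def plane_wave_beam_power(data, dt, xy, sx, sy):
    """Delay-and-sum beam power for one trial slowness vector (sx, sy).

    data : (nsta, nt) array of waveforms
    dt   : sampling interval (s)
    xy   : (nsta, 2) station coordinates (km); slowness in s/km
    Each trace is shifted by its predicted plane-wave delay and stacked;
    the power of the stack peaks at the true slowness (and hence gives
    back-azimuth and apparent velocity of the arrival).
    """
    nsta, nt = data.shape
    beam = np.zeros(nt)
    for i in range(nsta):
        delay = int(round((sx * xy[i, 0] + sy * xy[i, 1]) / dt))
        beam += np.roll(data[i], -delay)
    return np.sum((beam / nsta) ** 2)
```

Scanning (sx, sy) over a grid and taking the maximum is the standard way such an array estimates where tremor or offshore events come from.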
NASA Astrophysics Data System (ADS)
Schulte-Pelkum, V.; Condit, C.; Brownlee, S. J.; Mahan, K. H.; Raju, A.
2016-12-01
We investigate shear zone-related deformation fabric from field samples, its dependence on conditions during fabric formation, and its detection in situ using seismic data. We present a compilation of published rock elasticity tensors measured in the lab or calculated from middle and deep crustal samples and compare the strength and symmetry of seismic anisotropy as a function of location within a shear zone, pressure-temperature conditions during formation, and composition. Common strengths of seismic anisotropy range from a few to 10 percent. Apart from the typically considered fabric in mica, amphibole and quartz also display fabrics that induce seismic anisotropy, although the interaction between different minerals can result in destructive interference in the total measured anisotropy. The availability of full elasticity tensors enables us to predict the seismic signal from rock fabric at depth. A method particularly sensitive to anisotropy of a few percent in localized zones of strain at depth is the analysis of azimuthally dependent amplitude and polarity variations in teleseismic receiver functions. We present seismic results from California and Colorado. In California, strikes of seismically detected fabric show a strong alignment with current strike-slip motion between the Pacific and North American plates, with high signal strength near faults and from depths below the brittle-ductile transition. These results suggest that the faults have roots in the ductile crust; determining the degree of localization, i.e., the width of the fault-associated shear zones, would require an analysis with denser station coverage, which now exists in some areas. In Colorado, strikes of seismically detected fabric show a broad NW-SE to NNW-SSE alignment that may be related to Proterozoic fabric developed at high temperatures, but locally may also show isotropic dipping contrasts associated with Laramide faulting. 
The broad trend is punctuated with NE-SW-trending strikes parallel to exhumed and highly localized structures such as the Idaho Springs-Ralston and Black Canyon shear zones. In either case, denser seismic studies should elucidate the width of the deep seismic expression of the shear zones.
Time lapse (4D) and AVO analysis: A case study of Gullfaks field, Northern North Sea
NASA Astrophysics Data System (ADS)
Umoren, Emmanuel Bassey; George, Nyakno Jimmy
2018-06-01
A 4D (time-lapse) seismic survey has been used to investigate amplitude versus offset (AVO) effects on seismic data in order to identify anomalies in the Gullfaks field for three reservoir intervals, namely the Tarbert, Cook and Statfjord reservoirs. Repeatability analysis has shown that the earlier seismic vintages are the least reliable for amplitude anomaly analysis, as their normalised root-mean-square (NRMS) values are greater than 50%, above the threshold of good and medium repeatability. Fluid substitution models show increases in both P-wave velocity and density with increasing water saturation, with a maximum change of 7.33% in the P-wave velocity, in line with predictions from previous work using the Biot-Gassmann equations. AVO modelling for the top Tarbert Formation interface produced scenarios of increasing amplitude with offset in the presence of hydrocarbons, which dim out at 100% brine saturation. This corresponds to class III gas sands for different situations of varying Poisson's ratio across an interface, as previously modelled. Two anomalies were identified: the first is related to increasing pressure due to water injection, correlating with poor permeability around injector well 34/10-B-33; the second is a case of potential unswept hydrocarbons that displayed a consistent bright spot throughout all of the seismic vintages (in inlines and crosslines). AVO attribute analysis of this event produced a class II anomaly. However, when comparing near- and far-offset seismic data, a dimming effect was observed, providing contrasting evidence; this dimming with offset is attributed to poor repeatability at far offsets. The agreement of the modelled fluid contents in the studied formations with the existing literature supports the efficacy of the method.
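The NRMS repeatability metric quoted above has a simple standard definition, sketched here for a single pair of repeat traces:

```python
import numpy as np

def nrms(a, b):
    """Normalised RMS difference (in %) between two repeat traces.

    NRMS = 200 * RMS(a - b) / (RMS(a) + RMS(b)). Identical traces give 0,
    and uncorrelated traces of equal energy approach ~141%; values above
    ~50% (as for the early Gullfaks vintages) indicate poor repeatability.
    """
    rms = lambda x: np.sqrt(np.mean(np.square(x)))
    return 200.0 * rms(a - b) / (rms(a) + rms(b))
```

In practice the metric is computed in sliding windows over co-located traces of the two vintages, then mapped to assess where amplitude differences can be trusted.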
New seismic array solution for earthquake observations and hydropower plant health monitoring
NASA Astrophysics Data System (ADS)
Antonovskaya, Galina N.; Kapustian, Natalya K.; Moshkunov, Alexander I.; Danilov, Alexey V.; Moshkunov, Konstantin A.
2017-09-01
We present a novel fusion of seismic safety monitoring data from the hydropower plant in Chirkey (Caucasus Mountains, Russia). This includes new hardware solutions and observation methods, along with their technical limitations, for three types of applications: (a) seismic monitoring of the Chirkey reservoir area, (b) structural monitoring of the dam, and (c) monitoring of turbine vibrations. Previous observation and data-processing schemes for health monitoring did not include complex data analysis; the new system is more rational and less expensive. Its key new feature is remote monitoring of turbine vibration: a comparison of the data obtained at test facilities and by hydropower plant inspection with remote sensors enables early detection of hazardous hydrodynamic phenomena.
NASA Astrophysics Data System (ADS)
Grasso, S.; Maugeri, M.
After the Summit held in Washington on August 20-22, 2001 to plan the first World Conference on the mitigation of Natural Hazards, a Group for the analysis of Natural Hazards within the Mediterranean area was formed. The Group has so far identified the following hazards: (1) seismic hazard (including hazard to historical buildings); (2) hazard linked to the quantity and quality of water; (3) landslide hazard; (4) volcanic hazard. The analysis of such hazards implies the creation and management of data banks, which can only be used if the data are properly geo-referenced to allow their crossed use. The results obtained must therefore be represented on geo-referenced maps. The present study is part of a research programme, namely "Detailed Scenarios and Actions for Seismic Prevention of Damage in the Urban Area of Catania", financed by the National Department for Civil Protection and the National Research Council-National Group for the Defence Against Earthquakes (CNR-GNDT). The south-eastern area of Sicily, called the "Iblea" seismic area, is considered one of the most intense seismic zones in Italy, based on its past and current seismic history and on the typology of its civil buildings. Safety against earthquake hazards has two aspects: structural safety against potentially destructive dynamic forces, and site safety related to geotechnical phenomena such as amplification, landsliding and soil liquefaction. The correct evaluation of seismic hazard is therefore strongly affected by risk factors due to the geological nature and geotechnical properties of soils. The effect of local geotechnical conditions on the damage suffered by buildings under seismic loading has been widely recognized, as demonstrated by the Manual for Zonation on Seismic Geotechnical Hazards edited by the International Society for Soil Mechanics and Geotechnical Engineering (TC4, 1999). The evaluation of local amplification effects may be carried out by means of either rigorous complex methods of analysis or qualitative procedures. A semi-quantitative procedure based on the definition of a geotechnical hazard index has been applied for the zonation of the seismic geotechnical hazard of the city of Catania. In particular, this procedure has been applied to define the influence of the geotechnical properties of soil in a central area of the city, where some historical buildings of great importance are located. An investigation was also performed, based on the inspection of more than one hundred important historical ecclesiastical buildings in the city. Then, in order to identify the amplification effects due to site conditions, a geotechnical survey form was prepared to allow a semi-quantitative evaluation of the seismic geotechnical hazard for all these historical buildings. In addition, to evaluate the time-history response of the foundation soil, a 1-D dynamic soil model accounting for the nonlinearity of soil behaviour was employed for all these buildings. Using a GIS, a map of the seismic geotechnical hazard, a map of the liquefaction hazard and a preliminary map of the seismic hazard for the city of Catania have been obtained. The analysis of the results shows that the high-hazard zones are mainly clayey sites.
NASA Astrophysics Data System (ADS)
Gu, Chen; Marzouk, Youssef M.; Toksöz, M. Nafi
2018-03-01
Small earthquakes occur due to natural tectonic motions and are also induced by oil and gas production processes. In many oil/gas fields and hydrofracking operations, induced earthquakes result from fluid extraction or injection. The locations and source mechanisms of these earthquakes provide valuable information about the reservoirs. Analyses of induced seismic events have mostly assumed a double-couple source mechanism. However, recent studies assuming a full moment tensor source mechanism have shown a non-negligible percentage of non-double-couple components in the source moment tensors of hydraulic fracturing events. Without uncertainty quantification of the moment tensor solution, it is difficult to determine the reliability of these source models. This study develops a Bayesian method to perform waveform-based full moment tensor inversion and uncertainty quantification for induced seismic events, accounting for both location and velocity model uncertainties. We conduct tests with synthetic events to validate the method, and then apply our Bayesian inversion approach to real induced seismicity in an oil/gas field in the Sultanate of Oman, determining the uncertainties in the source mechanism and in the location of the event.
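The double-couple versus non-double-couple split referred to above comes from a standard eigenvalue decomposition of the moment tensor. The sketch below uses one common convention (there are several); it is illustrative, not the paper's inversion code:

```python
import numpy as np

def decompose_moment_tensor(m):
    """Split a symmetric 3x3 moment tensor into ISO, DC and CLVD parts.

    The isotropic part is tr(M)/3. For the deviatoric remainder, with
    eigenvalues sorted by absolute value, eps = -m_min/|m_max|; then
    CLVD% = 200*|eps| and DC% = 100 - CLVD% (of the deviatoric part).
    """
    m = np.asarray(m, float)
    iso = np.trace(m) / 3.0
    ev = np.linalg.eigvalsh(m - iso * np.eye(3))
    ev = ev[np.argsort(np.abs(ev))]          # |smallest| ... |largest|
    eps = -ev[0] / abs(ev[2]) if ev[2] != 0 else 0.0
    clvd = 200.0 * abs(eps)
    return {"iso": iso, "dc_pct": 100.0 - clvd, "clvd_pct": clvd}
```

In a Bayesian inversion these percentages are computed for every posterior sample, so the non-double-couple fraction carries an uncertainty rather than being a single number.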
NASA Astrophysics Data System (ADS)
Zeng, Zhi-Ping; Zhao, Yan-Gang; Xu, Wen-Tao; Yu, Zhi-Wu; Chen, Ling-Kun; Lou, Ping
2015-04-01
The frequent use of bridges in high-speed railway lines greatly increases the probability that trains are running on bridges when earthquakes occur. This paper investigates the random vibrations of a high-speed train traversing a slab track on a continuous girder bridge subjected to track irregularities and traveling seismic waves, using the pseudo-excitation method (PEM). To derive the equations of motion of the train-slab track-bridge interaction system, multibody dynamics and finite element models are used for the train and for the track and bridge, respectively. Track irregularities are assumed to be fully coherent random excitations with time lags between different wheels, and seismic accelerations are assumed to be uniformly modulated, non-stationary random excitations with time lags between different foundations; on this basis, the random load vectors of the equations of motion are transformed into a series of deterministic pseudo-excitations using PEM and the wheel-rail contact relationship. A computer code is developed to obtain the time-dependent random responses of the entire system. As a case study, the random vibration characteristics of an ICE-3 high-speed train traversing a seven-span continuous girder bridge simultaneously excited by track irregularities and traveling seismic waves are analyzed, and the influence of train speed and seismic wave propagation velocity on the random vibration characteristics of the bridge and train is discussed.
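The idea behind PEM can be shown on a single-degree-of-freedom oscillator: a stationary random excitation with power spectral density S(w) is replaced by the deterministic harmonic load sqrt(S(w))*exp(iwt), and the squared modulus of the resulting pseudo-response is exactly the response PSD. The natural frequency and damping below are illustrative assumptions, not values from the paper's train-track-bridge model:

```python
import numpy as np

def pem_response_psd(omega, s_excitation, omega_n=2 * np.pi, zeta=0.05):
    """Response PSD of an SDOF oscillator via the pseudo-excitation method.

    omega        : excitation frequency (rad/s)
    s_excitation : excitation PSD value S(omega)
    omega_n, zeta: natural frequency and damping ratio (assumed)
    """
    # frequency response function of the oscillator
    h = 1.0 / (omega_n ** 2 - omega ** 2 + 2j * zeta * omega_n * omega)
    y_pseudo = h * np.sqrt(s_excitation)      # deterministic pseudo-response
    return np.abs(y_pseudo) ** 2              # = |H|^2 * S, the response PSD
```

The payoff in large coupled systems is that one deterministic harmonic analysis per frequency replaces the full input-PSD matrix algebra, which is what makes the non-stationary, multi-support case tractable.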
NASA Astrophysics Data System (ADS)
Nakatani, Y.; Mochizuki, K.; Shinohara, M.; Yamada, T.; Hino, R.; Ito, Y.; Murai, Y.; Sato, T.
2013-12-01
A subducting seamount about 3 km high was revealed off Ibaraki in the Japan Trench by a seismic survey (Mochizuki et al., 2008). Mochizuki et al. (2008) also interpreted interplate coupling to be weak over the seamount, because seismicity there was low and the slip of the recent large earthquake did not propagate over it. To investigate further, we deployed a dense ocean-bottom seismometer (OBS) array around the seamount for about a year. During the observation period, seismicity off Ibaraki was activated by the occurrence of the 2011 Tohoku earthquake. Many source analyses place the southern edge of the mainshock rupture area off Ibaraki. Moreover, Kubo et al. (2013) propose that the seamount played an important role in terminating the rupture of the largest aftershock. In this study, we therefore examine the spatiotemporal variation of seismicity around the seamount before and after the Mw 9.0 event as a first step toward elucidating the relationship between the subducting seamount and seismogenic behavior. We used velocity waveforms from 1 Hz long-term OBSs densely deployed at station intervals of about 6 km; the sampling rate is 200 Hz and the observation period is from October 16, 2010 to September 19, 2011. Because of ambient noise and the effects of thick seafloor sediments, it is difficult to apply seismicity-detection methods developed for on-land observational data to OBS data, or to handle the continuous waveforms automatically. We therefore apply a back-projection method (e.g., Kiser and Ishii, 2012), which estimates the energy-release source by stacking waveforms, to the OBS data. Among the many back-projection variants, we adopt a semblance analysis (e.g., Honda et al., 2008), which can detect weak arrivals. First, we constructed a 3-D velocity structure model off Ibaraki by compiling the results of marine seismic surveys (e.g., Nakahigashi et al., 2012). Then, we divided the target area into small cells and calculated P-wave traveltimes between each station and every cell by the fast marching method (Rawlinson et al., 2006). After constructing the theoretical traveltime tables, we applied a suitable frequency filter to the observed waveforms and estimated the seismic energy release by projecting semblance values. Applying this method, we successfully detected magnitude 2-3 earthquakes.
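The semblance measure at the heart of this detection scheme has a compact standard form; in the back-projection it is evaluated for each candidate source cell after shifting every trace by its predicted traveltime (the shifting step is omitted in this minimal sketch):

```python
import numpy as np

def semblance(traces):
    """Semblance of a set of time-aligned traces, shape (nsta, nt).

    S = sum_t (sum_i f_i)^2 / (N * sum_t sum_i f_i^2). Equals 1 for
    perfectly coherent traces and is near 1/N for incoherent noise, which
    is why it can pull weak but coherent arrivals out of noisy OBS data.
    """
    traces = np.asarray(traces, float)
    n = traces.shape[0]
    num = np.sum(np.sum(traces, axis=0) ** 2)
    den = n * np.sum(traces ** 2)
    return num / den
```

Mapping the maximum semblance over cells and time windows then yields both detections and rough locations without picking phase onsets.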
Seismic signal time-frequency analysis based on multi-directional window using greedy strategy
NASA Astrophysics Data System (ADS)
Chen, Yingpin; Peng, Zhenming; Cheng, Zhuyuan; Tian, Lin
2017-08-01
The Wigner-Ville distribution (WVD) is an important time-frequency analysis tool with high energy concentration for seismic signal processing. However, it suffers from many cross terms. To suppress the cross terms of the WVD while preserving the concentration of its energy distribution, an adaptive multi-directional filtering window in the ambiguity domain is proposed. Starting from the relationship between the Cohen class distribution and the Gabor transform, and combining the greedy strategy with the rotational invariance of the fractional Fourier transform, we extend the one-dimensional, one-directional optimal window function of the optimal fractional Gabor transform (OFrGT) to a two-dimensional, multi-directional window in the ambiguity domain. In this way, the window matches the main auto terms of the WVD more precisely. Using the greedy strategy, the proposed window takes into account not only the optimal direction but also other suboptimal directions, which resolves the OFrGT's so-called local concentration phenomenon when encountering multi-component signals. Experiments on several synthetic signal models and on real seismic signals show that the proposed window overcomes the drawbacks of both the WVD and the OFrGT mentioned above. Finally, the proposed method is applied to the spectral decomposition of a seismic signal; the results show that it can delineate the spatial distribution of a reservoir more precisely.
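The distribution being filtered can be written down directly. Below is a textbook sketch of the discrete WVD with no anti-aliasing or cross-term smoothing (the smoothing-window design is exactly what the paper improves); note that in this naive form a tone at normalized frequency f0 appears at FFT bin ≈ 2·f0·N, the well-known frequency doubling of the lag product:

```python
import numpy as np

def wigner_ville(x):
    """Discrete Wigner-Ville distribution of an analytic signal x[n].

    W[n, k] = FFT over lag m of x[n+m] * conj(x[n-m]); the instantaneous
    autocorrelation is built symmetrically around each time sample, so
    cross terms appear midway between any two signal components.
    """
    x = np.asarray(x, complex)
    n = len(x)
    kernel = np.zeros((n, n), complex)
    for t in range(n):
        mmax = min(t, n - 1 - t)                 # largest symmetric lag at time t
        m = np.arange(-mmax, mmax + 1)
        kernel[t, m % n] = x[t + m] * np.conj(x[t - m])
    return np.fft.fft(kernel, axis=1).real
```

Multiplying the corresponding ambiguity-domain kernel by a window before this FFT is where the proposed multi-directional window acts.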
Determination of Paleoseismic Ground Motions from Inversion of Block Failures in Masonry Structures
NASA Astrophysics Data System (ADS)
Yagoda-Biran, G.; Hatzor, Y. H.
2010-12-01
Accurate estimation of ground motion parameters such as expected peak ground acceleration (PGA), predominant frequency and duration of motion in seismically active regions, is crucial for hazard preparedness and sound engineering design. The best way to estimate quantitatively these parameters would be to investigate long term recorded data of past strong earthquakes in a studied region. In some regions of the world however recorded data are scarce due to lack of seismic network infrastructure, and in all regions the availability of recorded data is restricted to the late 19th century and onwards. Therefore, existing instrumental data are hardly representative of the true seismicity of a region. When recorded data are scarce or not available, alternative methods may be applied, for example adopting a quantitative paleoseismic approach. In this research we suggest the use of seismically damaged masonry structures as paleoseismic indicators. Visitors to archeological sites all over the world are often struck by structural failure features which seem to be "seismically driven", particularly when inspecting old masonry structures. While it is widely accepted that no other loading mechanism can explain the preserved damage, the actual driving mechanism remains enigmatic even now. In this research we wish to explore how such failures may be triggered by earthquake induced ground motions and use observed block displacements to determine the characteristic parameters of the paleoseismic earthquake motion, namely duration, frequency, and amplitude. This is performed utilizing a 3D, fully dynamic, numerical analysis performed with the Discontinuous Deformation Analysis (DDA) method. Several case studies are selected for 3D numerical analysis. First we study a simple structure in the old city of L'Aquila, Italy. L'Aquila was hit by an earthquake on April 6th, 2009, with over 300 casualties and many of its medieval buildings damaged. 
This case study is an excellent opportunity to validate our method, since in the case of L'Aquila both the damaged structure and the ground motions are recorded. The 3D modeling of the structure is rather complicated, and is performed by first modeling the structure with CAD software and then "translating" the model to the numerical code used. In the future, several more case studies will be analyzed, such as Kedesh and Avdat in Israel, and, in collaboration with Hough and Bilham, the Temple of Shiva at Pandrethan, Kashmir. Establishing a numerical 3D dynamic analysis for back analysis of stone displacements in masonry structures as a paleoseismic tool can provide much-needed data on ground motion parameters in regions where instrumental data are scarce or completely absent.
NASA Astrophysics Data System (ADS)
Giammanco, S.; Ferrera, E.; Cannata, A.; Montalto, P.; Neri, M.
2013-12-01
From November 2009 to April 2011, soil radon activity was continuously monitored using a Barasol probe located on the upper NE flank of Mt. Etna volcano (Italy), close both to the Piano Provenzana fault and to the NE-Rift. Seismic, volcanological and radon data were analysed together with data on environmental parameters, such as air and soil temperature, barometric pressure, snowfall and rainfall. In order to find possible correlations among the above parameters, and hence to reveal possible anomalous trends in the radon time series, we used different statistical methods: i) multivariate linear regression; ii) cross-correlation; iii) coherence analysis through the wavelet transform. Multivariate regression indicated a modest influence of environmental parameters on soil radon (R2 = 0.31). When using 100-day time windows, the R2 values showed wide variations in time, reaching their maxima (~0.63-0.66) during summer. Cross-correlation analysis over 100-day moving averages showed that, as with multivariate linear regression, the summer period was characterised by the best correlation between radon data and environmental parameters. Lastly, wavelet coherence analysis allowed a multi-resolution coherence analysis of the acquired time series. This approach allowed us to study the relations among different signals in both the time and the frequency domains. It confirmed the results of the previous methods, but also made it possible to recognize correlations between radon and environmental parameters at different observation scales (e.g., radon activity changed during strong precipitation, but also during anomalous variations of soil temperature uncorrelated with seasonal fluctuations). Using the above analysis, two periods were recognized when radon variations were significantly correlated with marked soil-temperature changes and also with local seismic or volcanic activity.
This allowed us to produce two different physical models of soil-gas transport that explain the observed anomalies. Our work suggests that an accurate analysis of the relations among different signals requires the use of different techniques that give complementary analytical information. In particular, the wavelet analysis proved to be the most effective in discriminating radon changes due to environmental influences from those correlated with impending seismic or volcanic events.
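The windowed multivariate regression described above can be sketched as follows; the window length, the predictor set and the synthetic data are illustrative assumptions of this sketch, not the study's actual radon and environmental time series.

```python
import numpy as np

def sliding_r2(y, X, window):
    """R^2 of a multivariate linear fit of y on X over sliding windows.

    y : (n,) response (e.g. radon activity)
    X : (n, p) predictors (e.g. soil temperature, pressure, rainfall)
    window : number of samples per window (e.g. 100 daily samples)
    Returns an array of R^2 values, one per window position.
    """
    n = len(y)
    r2 = np.full(n - window + 1, np.nan)
    for i in range(n - window + 1):
        yw = y[i:i + window]
        # design matrix with an intercept column
        Xw = np.column_stack([np.ones(window), X[i:i + window]])
        beta, *_ = np.linalg.lstsq(Xw, yw, rcond=None)
        resid = yw - Xw @ beta
        ss_res = resid @ resid
        ss_tot = ((yw - yw.mean()) ** 2).sum()
        r2[i] = 1.0 - ss_res / ss_tot
    return r2
```

Plotting the resulting R^2 series against time is what reveals seasonal swings of the environmental influence, such as the summer maxima reported above.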
NASA Astrophysics Data System (ADS)
Bassett, D.; Watts, A. B.; Sandwell, D. T.; Fialko, Y. A.
2016-12-01
We performed shear wave splitting analysis at 203 permanent (French RLPB, CEA and Catalonian networks) and temporary (PYROPE and IberArray experiments) broad-band stations around the Pyrenees. These measurements considerably enhance the spatial resolution and coverage of seismic anisotropy in the region. In particular, we use different shear wave splitting analysis methods to characterize the small-scale variations of the splitting parameters φ and δt along three dense transects crossing the western and central Pyrenees, with an interstation spacing of about 7 km. While we find a relatively coherent seismic anisotropy pattern in the Pyrenean domain, we observe abrupt changes of the splitting parameters in the Aquitaine Basin and of the delay times along the Pyrenees. We moreover observe coherent fast directions despite complex lithospheric structures in Iberia and the Massif Central. This suggests that two main sources of anisotropy are required to interpret seismic anisotropy in this region: (i) lithospheric fabrics in the Aquitaine Basin (probably frozen-in Hercynian anisotropy) and in the Pyrenees (early and late Pyrenean dynamics); (ii) asthenospheric mantle flow beneath the entire region (an imprint of the western Mediterranean dynamics since the Oligocene).
Kayen, R.; Moss, R.E.S.; Thompson, E.M.; Seed, R.B.; Cetin, K.O.; Der Kiureghian, A.; Tanaka, Y.; Tokimatsu, K.
2013-01-01
Shear-wave velocity (Vs) offers a means to determine the seismic resistance of soil to liquefaction by a fundamental soil property. This paper presents the results of an 11-year international project to gather new Vs site data and develop probabilistic correlations for seismic soil liquefaction occurrence. Toward that objective, shear-wave velocity test sites were identified, and measurements made for 301 new liquefaction field case histories in China, Japan, Taiwan, Greece, and the United States over a decade. The majority of these new case histories reoccupy those previously investigated by penetration testing. These new data are combined with previously published case histories to build a global catalog of 422 case histories of Vs liquefaction performance. Bayesian regression and structural reliability methods facilitate a probabilistic treatment of the Vs catalog for performance-based engineering applications. Where possible, uncertainties of the variables comprising both the seismic demand and the soil capacity were estimated and included in the analysis, resulting in greatly reduced overall model uncertainty relative to previous studies. The presented data set and probabilistic analysis also help resolve the ancillary issues of adjustment for soil fines content and magnitude scaling factors.
NASA Astrophysics Data System (ADS)
Wei, Jia; Liu, Huaishan; Xing, Lei; Du, Dong
2018-02-01
The stability of submarine geological structures has a crucial influence on the construction of offshore engineering projects and the exploitation of seabed resources, so marine geologists should possess a detailed understanding of common submarine geological hazards. Marine seismic exploration is among the most effective detection technologies currently available, and current research therefore focuses on improving the resolution and precision of shallow stratum structure detection. In this article, the feasibility of shallow seismic structure imaging is assessed by building a complex model, and the differences between the seismic interferometry imaging method and the traditional imaging method are discussed. The imaging effect of the model is better for shallow layers than for deep layers, because coherent noise produced by this method can result in an unsatisfactory imaging effect at depth. The seismic interferometry method has certain advantages for structural imaging of shallow submarine strata, yielding continuous horizontal events, high resolution, clear faults, and distinct structural boundaries. Application to field data from the Shenhu area further illustrates the advantages of the method. Thus, this method has the potential to provide new insights for shallow submarine strata imaging in the area.
Proceedings of the 11th Annual DARPA/AFGL Seismic Research symposium
NASA Astrophysics Data System (ADS)
Lewkowicz, James F.; McPhetres, Jeanne M.
1990-11-01
The following subjects are covered: near source observations of quarry explosions; small explosion discrimination and yield estimation; Rg as a depth discriminant for earthquakes and explosions: a case study in New England; a comparative study of high frequency seismic noise at selected sites in the USSR and USA; chemical explosions and the discrimination problem; application of simulated annealing to joint hypocenter determination; frequency dependence of Q(sub Lg) and Q in the continental crust; statistical approaches to testing for compliance with a threshold test ban treaty; broad-band studies of seismic sources at regional and teleseismic distances using advanced time series analysis methods; effects of depth of burial and tectonic release on regional and teleseismic explosion waveforms; finite difference simulations of seismic wave excitation at Soviet test sites with deterministic structures; stochastic geologic effects on near-field ground motions; the damage mechanics of porous rock; nonlinear attenuation mechanism in salt at moderate strain; compressional- and shear-wave polarizations at the Anza seismic array; and a generalized beamforming approach to real time network detection and phase association.
A Study of Regional Waveform Calibration in the Eastern Mediterranean Region.
NASA Astrophysics Data System (ADS)
di Luccio, F.; Pino, A.; Thio, H.
2002-12-01
We modeled Pnl phases from several moderate-magnitude events in the eastern Mediterranean to test methods and to develop path calibrations for source determination. The study region, spanning from the eastern part of the Hellenic arc to the eastern Anatolian fault, is affected mostly by moderate earthquakes, which can nevertheless produce significant damage. The selected area comprises several tectonic environments, which increases the difficulty of waveform modeling. The results of this study are useful for the analysis of regional seismicity and for seismic hazard as well, particularly because very few broadband seismic stations are available in the selected area. The obtained velocity model gives a 30 km crustal thickness and low upper-mantle velocities. The inversion procedure applied to determine the source mechanism was successful, also in terms of depth discrimination, for the entire range of selected paths. We conclude that with a true calibration of the seismic structure and high-quality broadband data, it is possible to determine the seismic source mechanism even with a single station.
NASA Astrophysics Data System (ADS)
Colombero, Chiara; Baillet, Laurent; Comina, Cesare; Jongmans, Denis; Vinciguerra, Sergio
2016-04-01
Appropriate characterization and monitoring of potentially unstable rock masses can provide better knowledge of the active processes and help to forecast the evolution to failure. Among the available geophysical methods, active seismic surveys are often suitable for inferring the internal structure and fracturing conditions of the unstable body. For monitoring purposes, although remote-sensing techniques and in-situ geotechnical measurements have been successfully tested on landslides, they may not be suitable for the early forecasting of sudden, rapid rockslides. Passive seismic monitoring can help for this purpose. Detection, classification and localization of microseismic events within the prone-to-fall rock mass can provide information about the incipient failure of internal rock bridges, and acceleration to failure can be detected from an increasing microseismic event rate. The latter can be compared with meteorological data to understand the external factors controlling stability. On the other hand, seismic noise recorded on prone-to-fall rock slopes shows that temporal variations in the spectral content and correlation of ambient vibrations can be related to both reversible and irreversible changes within the rock mass. We present the results of active and passive seismic data acquired at the potentially unstable granitic cliff of Madonna del Sasso (NW Italy). Down-hole tests, surface refraction and cross-hole tomography were carried out to characterize the fracturing state of the site. Field surveys were complemented with laboratory determination of physico-mechanical properties on rock samples and measurements of the ultrasonic pulse velocity. This multi-scale approach led to a lithological interpretation of the seismic velocity field obtained at the site and to a systematic correlation of the measured velocities with physical properties (density and porosity) and macroscopic features of the granitic cliff (fracturing, weathering and anisotropy).
Continuous passive seismic monitoring at the site, from October 2013 to the present, has systematically highlighted clear energy peaks in the spectral content of seismic noise on the unstable sector, interpreted as resonant frequencies of the investigated volume. Both spectral analysis and cross-correlation of seismic noise showed seasonal reversible variation trends related to air-temperature fluctuations. No irreversible changes resulting from serious damage processes within the rock mass have been detected so far. Modal analysis and geomechanical modeling of the unstable cliff are currently being investigated to better understand the vibration modes that could explain the measured amplitude and orientation of ground motion at the first resonant frequencies. Classification and location of microseismic events remain the most challenging tasks, owing to the complex structural and morphological setting of the site.
A proposal for seismic evaluation index of mid-rise existing RC buildings in Afghanistan
NASA Astrophysics Data System (ADS)
Naqi, Ahmad; Saito, Taiki
2017-10-01
Mid-rise RC buildings gradually rise in Kabul and entire Afghanistan since 2001 due to rapid increase of population. To protect the safety of resident, Afghan Structure Code was issued in 2012. But the building constructed before 2012 failed to conform the code requirements. In Japan, new sets of rules and law for seismic design of buildings had been issued in 1981 and severe earthquake damage was disclosed for the buildings designed before 1981. Hence, the Standard for Seismic Evaluation of RC Building published in 1977 has been widely used in Japan to evaluate the seismic capacity of existing buildings designed before 1981. Currently similar problem existed in Afghanistan, therefore, this research examined the seismic capacity of six RC buildings which were built before 2012 in Kabul by applying the seismic screening procedure presented by Japanese standard. Among three screening procedures with different capability, the less detailed screening procedure, the first level of screening, is applied. The study founds an average seismic index (IS-average=0.21) of target buildings. Then, the results were compared with those of more accurate seismic evaluation procedures of Capacity Spectrum Method (CSM) and Time History Analysis (THA). The results for CSM and THA show poor seismic performance of target buildings not able to satisfy the safety design limit (1/100) of the maximum story drift. The target buildings are then improved by installing RC shear walls. The seismic indices of these retrofitted buildings were recalculated and the maximum story drifts were analyzed by CSM and THA. The seismic indices and CSM and THA results are compared and found that building with seismic index larger than (IS-average =0.4) are able to satisfy the safety design limit. Finally, to screen and minimize the earthquake damage over the existing buildings, the judgement seismic index (IS-Judgment=0.5) for the first level of screening is proposed.
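The first level of screening computes a seismic index of the form Is = E0 x SD x T. A minimal sketch is given below; the form of E0 and the nominal shear strengths are illustrative assumptions based on common descriptions of the 1977 Japanese standard (which tabulates these quantities in detail), not values taken from this study.

```python
def seismic_index_first_level(n_storeys, i_storey, a_col, a_wall, weight,
                              f=1.0, sd=1.0, t=1.0,
                              tau_col=1.0, tau_wall=2.0):
    """Sketch of a first-level screening index Is = E0 * SD * T.

    a_col, a_wall : total column / wall cross-section areas at storey i (mm^2)
    weight        : building weight supported by storey i (N)
    tau_col, tau_wall : assumed nominal shear strengths (N/mm^2), illustrative
    f, sd, t      : ductility, irregularity and deterioration sub-indices
    """
    # strength index C: lateral strength normalized by supported weight
    c = (tau_col * a_col + tau_wall * a_wall) / weight
    # basic structural index E0, reduced for upper storeys
    e0 = (n_storeys + 1) / (n_storeys + i_storey) * c * f
    return e0 * sd * t
```

Adding shear-wall area raises the index, which is the mechanism behind the retrofit by RC shear walls described in the abstract.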
Quantitative risk analysis of oil storage facilities in seismic areas.
Fabbrocino, Giovanni; Iervolino, Iunio; Orlando, Francesca; Salzano, Ernesto
2005-08-31
Quantitative risk analysis (QRA) of industrial facilities has to take into account multiple hazards threatening critical equipment. Nevertheless, engineering procedures able to quantitatively evaluate the effect of seismic action are not well established. Indeed, relevant industrial accidents may be triggered by loss of containment following ground shaking or other natural hazards, either directly or through cascade ('domino') effects. The issue of integrating structural seismic risk into quantitative probabilistic seismic risk analysis (QpsRA) is addressed in this paper by a representative case study of an oil storage plant with a number of atmospheric steel tanks containing flammable substances. Empirical seismic fragility curves and probit functions, properly defined for both building-like and non-building-like industrial components, have been crossed with the outcomes of probabilistic seismic hazard analysis (PSHA) for a test site located in southern Italy. Once the seismic failure probabilities have been quantified, consequence analysis has been performed for those events which may be triggered by the loss of containment following seismic action. Results are combined by means of a specifically developed code in terms of local risk contour plots, i.e. the contour line of the probability of fatal injury at any point (x, y) in the analysed area. Finally, a comparison with a QRA obtained by considering only process-related top events is reported for reference.
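Crossing a fragility curve with a PSHA hazard curve, as described above, amounts to a discrete convolution over ground-motion bins. A minimal sketch, assuming a lognormal fragility and a hazard curve given as annual exceedance rates (all numbers illustrative):

```python
import math

def lognormal_fragility(pga, median, beta):
    """P(failure | PGA) as a lognormal CDF with median and log-std beta."""
    return 0.5 * (1.0 + math.erf(math.log(pga / median) / (beta * math.sqrt(2.0))))

def annual_failure_rate(pga_grid, hazard_rates, median, beta):
    """Convolve a fragility curve with a PSHA hazard curve.

    pga_grid[i]     : ground-motion level (g)
    hazard_rates[i] : annual rate of exceeding pga_grid[i] (from PSHA)
    Returns the mean annual rate of seismically induced failure.
    """
    rate = 0.0
    for i in range(len(pga_grid) - 1):
        # annual rate of ground motions falling in this PGA bin
        d_rate = hazard_rates[i] - hazard_rates[i + 1]
        pga_mid = 0.5 * (pga_grid[i] + pga_grid[i + 1])
        rate += lognormal_fragility(pga_mid, median, beta) * d_rate
    return rate
```

The failure rates obtained this way feed the consequence analysis; a more fragile component (lower median capacity) yields a higher annual failure rate from the same hazard curve.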
Probabilistic seismic loss estimation via endurance time method
NASA Astrophysics Data System (ADS)
Tafakori, Ehsan; Pourzeynali, Saeid; Estekanchi, Homayoon E.
2017-01-01
Probabilistic Seismic Loss Estimation is a methodology used as a quantitative and explicit expression of the performance of buildings using terms that address the interests of both owners and insurance companies. Applying the ATC 58 approach for seismic loss assessment of buildings requires using Incremental Dynamic Analysis (IDA), which needs hundreds of time-consuming analyses, which in turn hinders its wide application. The Endurance Time Method (ETM) is proposed herein as part of a demand propagation prediction procedure and is shown to be an economical alternative to IDA. Various scenarios were considered to achieve this purpose and their appropriateness has been evaluated using statistical methods. The most precise and efficient scenario was validated through comparison against IDA driven response predictions of 34 code conforming benchmark structures and was proven to be sufficiently precise while offering a great deal of efficiency. The loss values were estimated by replacing IDA with the proposed ETM-based procedure in the ATC 58 procedure and it was found that these values suffer from varying inaccuracies, which were attributed to the discretized nature of damage and loss prediction functions provided by ATC 58.
Application of random seismic inversion method based on tectonic model in thin sand body research
NASA Astrophysics Data System (ADS)
Dianju, W.; Jianghai, L.; Qingkai, F.
2017-12-01
Oil and gas exploitation in the Songliao Basin, Northeast China, has already entered a period of high water production. Previous detailed reservoir descriptions based on seismic images, sediment cores, and borehole logging have great limitations for small-scale structural interpretation and thin sand body characterization, so precise guidance for petroleum exploration badly needs a more advanced method. To this end, we derived a random seismic inversion method constrained by a tectonic model. Combined with numerical simulation techniques, it can effectively improve the ability to depict thin sand bodies and credibly reduce the blindness of reservoir analysis, from the whole to the local and from the macroscopic to the microscopic. At the same time, it can reduce the limitations of studies under the constraints of different geological conditions of the reservoir and achieve a reasonably exact estimation of the effective reservoir. Based on this research, this paper optimizes regional effective reservoir evaluation and the adjustment of productive locations, combined with practical exploration and development in the Aonan oil field.
Pre-processing ambient noise cross-correlations with equalizing the covariance matrix eigenspectrum
NASA Astrophysics Data System (ADS)
Seydoux, Léonard; de Rosny, Julien; Shapiro, Nikolai M.
2017-09-01
Passive imaging techniques from ambient seismic noise require a nearly isotropic distribution of the noise sources in order to ensure reliable traveltime measurements between seismic stations. However, real ambient seismic noise often only partially fulfils this condition: it is generated in preferential areas (in the deep ocean or near continental shores), and some highly coherent pulse-like signals may be present in the data, such as those generated by earthquakes. Several pre-processing techniques have been developed to attenuate the directional and deterministic behaviour of real ambient noise. Most of them are applied to individual seismograms before the cross-correlation computation; the most widely used are spectral whitening and temporal smoothing of the individual seismic traces. We here propose an additional pre-processing step to be used together with the classical ones, based on the spatial analysis of the seismic wavefield. We compute the cross-spectra between all available station pairs in the spectral domain, leading to the data covariance matrix, and apply a one-bit normalization to the covariance-matrix eigenspectrum before extracting the cross-correlations in the time domain. The efficiency of the method is shown with several numerical tests. We apply the method to the data collected by the USArray when the M8.8 Maule earthquake occurred on 2010 February 27. The method shows a clear improvement over the classical equalization in attenuating the highly energetic and coherent waves incoming from the earthquake, and allows reliable traveltime measurements to be performed even in the presence of the earthquake.
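The proposed pre-processing can be sketched at a single frequency as follows; the tolerance used to separate the significant eigenvalues from numerical zeros is an assumption of this sketch, not a parameter from the study.

```python
import numpy as np

def equalize_covariance(spectra, tol=1e-10):
    """One-bit normalization of the covariance-matrix eigenspectrum.

    spectra : (n_windows, n_stations) complex Fourier coefficients of the
              wavefield at one frequency, one row per time window.
    The covariance matrix averaged over windows is eigen-decomposed and its
    significant eigenvalues are all set to unity, whitening dominant
    directional or pulse-like contributions (e.g. an earthquake) while
    keeping the eigenvectors' phase (traveltime) information.
    """
    n_win = spectra.shape[0]
    cov = spectra.conj().T @ spectra / n_win          # (n_sta, n_sta)
    w, v = np.linalg.eigh(cov)
    # equalized eigenspectrum: 1 on the signal subspace, 0 elsewhere
    w_eq = (w > tol * w.max()).astype(float)
    return (v * w_eq) @ v.conj().T
```

The off-diagonal entries of the equalized matrix then replace the raw cross-spectra before the inverse Fourier transform back to time-domain cross-correlations.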
Target Detection and Classification Using Seismic and PIR Sensors
2012-06-01
This paper presents a wavelet-based method for target detection and classification, built on time-series analysis via wavelet-based partitioning. The proposed method has been validated on field data sets and makes use of a wavelet-based feature-extraction method called Symbolic Dynamic Filtering (SDF) [12]-[14].
Schoenball, Martin; Kaven, Joern; Glen, Jonathan M. G.; Davatzes, Nicholas C.
2015-01-01
Increased levels of seismicity coinciding with injection of reservoir fluids have prompted interest in methods to distinguish induced from natural seismicity. Discrimination between induced and natural seismicity is especially difficult in areas that have high levels of natural seismicity, such as the geothermal fields at the Salton Sea and Coso, both in California. Both areas show swarm-like sequences that could be related to natural, deep fluid migration as part of the natural hydrothermal system. Such swarms often have spatio-temporal patterns that resemble fluid-induced seismicity, and might possibly share other characteristics. The Coso Geothermal Field and its surroundings form one of the most seismically active areas in California, with a large proportion of the activity occurring as seismic swarms. Here we comparatively analyze clustered seismicity in and around the currently produced reservoir for the pre-production and co-production periods. We perform a cluster analysis based on the inter-event distance in a space-time-energy domain to identify notable earthquake sequences. For each event j, the closest previous event i is identified and their relationship categorized. If this nearest-neighbor distance is below a threshold based on the local minimum of the bimodal distribution of nearest-neighbor distances, then event j is included in the cluster as a child of this parent event i; if it is above the threshold, event j begins a new cluster. This process identifies subsets of events whose nearest-neighbor distances and relative timing qualify as a cluster, as well as characterizing the parent-child relationships among events in the cluster.
We apply this method to three different catalogs: (1) a two-year microseismic survey of the Coso geothermal area acquired before exploration drilling in the area began; (2) the HYS_catalog_2013, which contains 52,000 double-difference relocated events and covers the years 1981 to 2013; and (3) a catalog of 57,000 events with absolute locations from the local network recorded between 2002 and 2007. Using this method we identify 10 clusters of more than 20 events each in the pre-production survey, and more than 200 distinct seismicity clusters, each containing at least 20 and up to more than 1000 earthquakes, in the more extensive catalogs. The cluster identification method yields a hierarchy of links between multiple generations of parent and offspring events. We analyze different topological parameters of this hierarchy to better characterize, and thus differentiate, natural swarms from induced clustered seismicity, and also to identify aftershock sequences of notable mainshocks. We find that the branching characteristic, given by the average number of child events per parent event, is significantly different for clusters beneath the produced field than for clusters around it.
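The nearest-neighbor linkage in a space-time-energy domain can be sketched as below; the metric, the b-value and the fractal dimension d_f are generic assumptions in the spirit of Zaliapin-style clustering, not the exact parameters of this study.

```python
import numpy as np

def nearest_neighbor_links(t, x, y, mag, b=1.0, d_f=1.6):
    """For each event j, find its closest previous event i.

    Uses the space-time-energy metric
        eta = dt * dr**d_f * 10**(-b * m_i),
    so large-magnitude parents attract children over larger space-time
    distances. Returns (parent, eta); event 0 has parent -1 and eta = NaN.
    Thresholding eta (e.g. at the minimum of its bimodal distribution)
    then separates clustered children from new cluster roots.
    """
    n = len(t)
    parent = np.full(n, -1)
    eta = np.full(n, np.nan)
    for j in range(1, n):
        dt = t[j] - t[:j]                        # inter-event times
        dr = np.hypot(x[j] - x[:j], y[j] - y[:j])
        dr = np.maximum(dr, 1e-3)                # avoid zero distance
        e = dt * dr**d_f * 10.0 ** (-b * mag[:j])
        i = int(np.argmin(e))
        parent[j], eta[j] = i, e[i]
    return parent, eta
```

Following the parent links of the sub-threshold events reconstructs the multi-generation hierarchy whose branching statistics are analyzed in the abstract.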
Fast 3D elastic micro-seismic source location using new GPU features
NASA Astrophysics Data System (ADS)
Xue, Qingfeng; Wang, Yibo; Chang, Xu
2016-12-01
In this paper, we describe new GPU features and their applications in passive seismic (micro-seismic) location. Locating micro-seismic events is quite important in seismic exploration, especially when searching for unconventional oil and gas resources. Unlike traditional ray-based methods, wave-equation methods, such as the one used in this paper, have a remarkable advantage in adapting to low signal-to-noise conditions and do not require manual data selection. However, their conspicuous computational cost has kept such methods from wide industrial use. To make the method practical, we implement imaging-like wave-equation micro-seismic location in 3D elastic media and use GPUs to accelerate our algorithm. We also introduce some new GPU features into the implementation to solve the data-transfer and GPU-utilization problems. Numerical and field-data experiments show that our method can achieve a more than 30% performance improvement in the GPU implementation just by using these new features.
NASA Astrophysics Data System (ADS)
Sil, Arjun; Longmailai, Thaihamdau
2017-09-01
The lateral displacement of a Reinforced Concrete (RC) frame building during an earthquake has an important impact on structural stability and integrity. Seismic analysis and design of RC buildings, however, need particular care because of the structures' complex behavior: their performance depends on many influencing parameters and other inherent uncertainties. A reliability approach takes into account these factors and design uncertainties, so that the safety level, or probability of failure, can be ascertained. The present study aims to assess the reliability of the seismic performance of a four-storey residential RC building located in Zone V, following the provisions of the Indian Standard IS: 1893-2002. The reliability assessment was performed by deriving, via regression, an explicit expression for the maximum lateral roof displacement as a failure function. A total of 319 four-storey RC buildings were analyzed by the linear static method using SAP2000, and the change in lateral roof displacement with variation of the parameters (column dimensions, beam dimensions, grade of concrete, floor height, and total weight of the structure) was observed. A generalized relation was established by regression that can estimate the expected lateral displacement from those parameters. A comparison between the displacements obtained from the analyses and from the derived equation shows that the proposed relation can be used directly to determine the expected maximum lateral displacement. The statistical computations were then used to obtain the probability of failure and the reliability.
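The two steps above, regressing an explicit failure function for roof displacement and then evaluating a probability of failure, can be sketched as follows; the linear model form, the independent-normal treatment of the parameters, and all numbers are assumptions of this sketch, not the study's actual regression.

```python
import numpy as np

def fit_displacement_model(params, disp):
    """Least-squares fit disp ~ beta0 + params @ beta (hypothetical explicit
    failure function; the study's regressors are column/beam dimensions,
    concrete grade, floor height and total weight)."""
    X = np.column_stack([np.ones(len(disp)), params])
    beta, *_ = np.linalg.lstsq(X, disp, rcond=None)
    return beta

def probability_of_failure(beta, means, sds, limit, n=100_000, seed=0):
    """Monte Carlo estimate of P(displacement > limit), treating the input
    parameters as independent normal variables (a simplifying assumption)."""
    rng = np.random.default_rng(seed)
    samples = rng.normal(means, sds, size=(n, len(means)))
    disp = beta[0] + samples @ beta[1:]
    return float(np.mean(disp > limit))
```

Reliability is then 1 minus the estimated probability of failure.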
NASA Astrophysics Data System (ADS)
Bulatova, Dr.
2012-04-01
Modern research in the Earth sciences is developing from descriptions of individual natural phenomena toward systematic, complex research in interdisciplinary areas. For numerical analysis of three-dimensional (3D) systems of this kind, the author proposes a Space-Time Technology (STT), based on a Ptolemaic geocentric system and consisting of two modules, each with its own coordinate system: (1) a 3D model of the Earth, whose coordinates are provided by databases of the Earth's events (here, seismic); and (2) a compact model of the relative motion of celestial bodies in space-time on Earth, known as the "Method of a Moving Source" (MDS), which was developed for 3D space (Bulatova, 1998-2000). Module (2) was developed as a continuation of the geocentric Ptolemaic system of the world, built on the astronomical parameters of heavenly bodies. Based on aggregated data from the space and Earth sciences, their systematization, and cooperative analysis, this is an attempt to establish a cause-effect relationship between the positions of celestial bodies (Moon, Sun) and the Earth's seismic events.
A Study on Seismic Hazard Evaluation at the Nagaoka CO2 Storage Site, Japan
NASA Astrophysics Data System (ADS)
Horikawa, S.
2015-12-01
RITE carried out the first Japanese pilot-scale CO2 sequestration project from July 2003 to January 2005 in Nagaoka City. Supercritical CO2 was injected into an onshore saline aquifer at a depth of 1,100 m; a total of 10,400 tonnes of CO2 was injected. The 'Mid Niigata Prefecture Earthquake in 2004' (Mw 6.6) occurred during the CO2 injection test and 'The Niigataken Chuetsu-oki Earthquake in 2007' (Mw 6.6) after its completion; Japan is one of the world's most earthquake-prone countries. This paper presents the results of a seismic response analysis and reports a seismic hazard evaluation of the reservoir and caprock. In advance of the dynamic response analysis, the earthquake motion recorded at the ground surface was treated with a horizontally layered model, and the input wave from the basement layer was set up by SHAKE (one-dimensional seismic response analysis). This wave was input into the analysis model, and the equation of motion was solved by direct integration using the Newmark beta method. For the seismic response analysis, the authors used the Multiple Yield Model (MYM; Iwata et al., 2013), which can handle complicated geological structures. The stiffness-deformation behavior of the ground was modeled by adding an unloading characteristic to the Duncan-Chang constitutive rule, taking confining-stress dependency into account, in a nonlinear cyclic model; the deformation characteristic depends on confining stress under cyclic loading and unloading and is combined with Mohr-Coulomb's law as the strength criterion. The maximum dynamic shear strain of the caprock was about 1.1E-04 after the end of an earthquake. The dynamic safety factor of the caprock was 1.925 at the beginning and fell by 0.05 points after the end of the earthquake; the dynamic safety factor of the reservoir fell from 1.29 to 1.20.
According to CO2 migration monitoring by seismic cross-hole tomography, the CO2 has remained in the reservoir through the two earthquakes up to the present, and no leakage has been detected. The seismic response simulation likewise shows that the stability of the ground is not impaired after the earthquakes.
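The direct time integration by the Newmark beta method can be sketched for a linear single-degree-of-freedom system; the cited analysis applies it with a nonlinear multi-yield constitutive model in a continuum, so this linear version is only a minimal illustration of the integration step.

```python
import numpy as np

def newmark_beta(m, c, k, accel_g, dt, beta=0.25, gamma=0.5):
    """Average-acceleration Newmark-beta integration of a linear SDOF system
    m*u'' + c*u' + k*u = -m*a_g(t), given base acceleration samples accel_g.
    Returns displacement, velocity and acceleration histories.
    """
    n = len(accel_g)
    u = np.zeros(n); v = np.zeros(n); a = np.zeros(n)
    a[0] = (-m * accel_g[0] - c * v[0] - k * u[0]) / m
    # effective stiffness for the implicit displacement update
    k_eff = k + gamma / (beta * dt) * c + m / (beta * dt**2)
    for i in range(n - 1):
        rhs = (-m * accel_g[i + 1]
               + m * (u[i] / (beta * dt**2) + v[i] / (beta * dt)
                      + (1.0 / (2.0 * beta) - 1.0) * a[i])
               + c * (gamma * u[i] / (beta * dt)
                      + (gamma / beta - 1.0) * v[i]
                      + dt * (gamma / (2.0 * beta) - 1.0) * a[i]))
        u[i + 1] = rhs / k_eff
        a[i + 1] = ((u[i + 1] - u[i] - dt * v[i]
                     - dt**2 * (0.5 - beta) * a[i]) / (beta * dt**2))
        v[i + 1] = v[i] + dt * ((1.0 - gamma) * a[i] + gamma * a[i + 1])
    return u, v, a
```

With beta = 1/4 and gamma = 1/2 (average acceleration), the scheme is unconditionally stable for linear systems, which is why it is a standard choice for dynamic response analysis.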
NASA Astrophysics Data System (ADS)
Durand, Virginie; Mangeney, Anne; Lebouteiller, Pauline; Hibert, Clément; Ovpf Team
2015-04-01
The seismic and photogrammetric networks of the Piton de la Fournaise volcano (La Réunion Island), maintained by the OVPF, are well suited to the study of seismic signals generated by rockfalls. In this work, we focus on the signals generated by rockfalls occurring in the Dolomieu crater. The aim of this study is to understand the link between rockfall and volcanic activity. One key question is whether the number and characteristics of rockfalls can provide a precursor to the occurrence of an eruption. Another aim of this work is to determine whether there is a link between rockfall activity and precipitation, temperature changes, and seismic activity. For this, we analyze the rockfall activity preceding the June 2014 eruption. To detect the events, we use a method based on the kurtosis function that picks the beginning of the signals. We then localize the events using the arrival times of the waves and a propagation model computed with the Fast Marching Method. Finally, we calculate the seismic energy generated by these rockfalls. We thus obtain a catalog of events that we can exploit to determine the characteristics and the temporal evolution of the rockfall activity in the Dolomieu crater. A power law is observed between the seismic energy and the duration of rockfalls, making it possible to calculate the rockfall volume from the ratio between seismic and potential energy. From previous studies on the Piton de la Fournaise volcano, we can infer that rockfall activity in the crater is correlated with eruptions: the rockfall activity seems to begin before the eruption time. We compare the spatio-temporal changes of the rockfall characteristics to the volcanic, seismic, and rain activity. We show in particular that the rockfall size seems to differ depending on whether the intrusion of magma reaches the surface, providing potential precursors to the occurrence of an eruption.
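A kurtosis-based onset picker of the kind mentioned above can be sketched as follows; the window length and the choice of the steepest kurtosis increase as the pick are assumptions of this sketch, not the OVPF detection parameters.

```python
import numpy as np

def kurtosis_onset(trace, window):
    """Pick a signal onset as the steepest rise of a sliding-window kurtosis.

    Background noise is roughly Gaussian (kurtosis ~ 3); when the impulsive
    onset of a rockfall signal enters the window, the sample distribution
    becomes heavy-tailed and the kurtosis jumps, so the maximum positive
    slope of the kurtosis series marks the onset.
    """
    n = len(trace)
    kurt = np.zeros(n)
    for i in range(window, n):
        w = trace[i - window:i]
        s = w.std()
        kurt[i] = ((w - w.mean()) ** 4).mean() / s**4 if s > 0 else 0.0
    dk = np.diff(kurt)
    return int(np.argmax(dk))  # sample index of the steepest kurtosis increase
```

Running such a picker on continuous records, then locating each pick with a traveltime model, is what builds the event catalog described above.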
Uncertainties in evaluation of hazard and seismic risk
NASA Astrophysics Data System (ADS)
Marmureanu, Gheorghe; Marmureanu, Alexandru; Ortanza Cioflan, Carmen; Manea, Elena-Florinela
2015-04-01
Two methods are commonly used for seismic hazard assessment: probabilistic (PSHA) and deterministic (DSHA) seismic hazard analysis. Selection of a ground motion for engineering design requires a clear understanding of seismic hazard and risk among stakeholders, seismologists and engineers. What is wrong with traditional PSHA or DSHA? PSHA as commonly used in engineering rests on four assumptions developed by Cornell in 1968: (1) a constant-in-time average occurrence rate of earthquakes; (2) a single point source; (3) independent variability of ground motion at a site; (4) Poisson (or "memory-less") behavior of earthquake occurrences. It is a probabilistic method, and "when causality dies, its place is taken by probability, a prestigious term meant to define our inability to predict the course of nature" (Niels Bohr). DSHA was used for the original design of Fukushima Daiichi, but the Japanese authorities moved to probabilistic assessment methods, and the probability of exceeding the design-basis acceleration was expected to be 10^-4 to 10^-6; it was exceeded, in violation of the principles of deterministic hazard analysis (ignoring historical events) (Klügel, J.U., EGU, 2014). PSHA was developed from mathematical statistics and is not based on earthquake science (invalid physical models - point source and Poisson distribution; invalid mathematics; misinterpretation of annual probability of exceedance or return period, etc.), and has become a pure numerical "creation" (Wang, PAGEOPH 168 (2011), 11-25). A key component of seismic hazard assessment, for both PSHA and DSHA, is the ground motion attenuation relationship, the so-called ground motion prediction equation (GMPE), which describes a relationship between a ground motion parameter (e.g., PGA, MMI), earthquake magnitude M, source-to-site distance R, and an uncertainty term. So far, no one takes into consideration the strongly nonlinear behavior of soils during strong earthquakes.
But how many cities, villages, and metropolitan areas in seismic regions are constructed on rock? Most of them are located on soil deposits. Soils are of basic types such as sand or gravel (termed coarse soils) and silt or clay (termed fine soils). The effect of nonlinearity is very large. For example, if we maintain the same spectral amplification factor (SAF = 5.8942) as for the relatively strong earthquake of May 3, 1990 (Mw = 6.4), then at the Bacău seismic station the peak acceleration for the Vrancea earthquake of May 30, 1990 (Mw = 6.9) would have to be a*max = 0.154 g, whereas the recorded value was only amax = 0.135 g (-14.16%). Likewise, for the Vrancea earthquake of August 30, 1986 (Mw = 7.1), the peak acceleration would have to be a*max = 0.107 g instead of the recorded value of 0.0736 g (-45.57%). There are data of this kind for more than 60 seismic stations. There is a strong nonlinear dependence of the SAF on earthquake magnitude at each site. The authors propose an alternative approach, called "real spectral amplification factors", instead of GMPEs for the whole extra-Carpathian area, where all cities and villages are located on soil deposits. Key words: probabilistic seismic hazard; uncertainties; nonlinear seismology; spectral amplification factors (SAF).
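The percentage differences quoted above follow from comparing the SAF-predicted and recorded peak accelerations. A minimal reproduction follows; the convention of normalizing by the recorded value is an assumption inferred from the quoted percentages, not stated in the abstract.

```python
def shortfall_pct(predicted_g, recorded_g):
    """Relative difference between recorded and SAF-predicted peak
    acceleration, as a percentage of the recorded value (assumed
    convention)."""
    return 100.0 * (recorded_g - predicted_g) / recorded_g

may_1990 = shortfall_pct(0.154, 0.135)    # Vrancea, May 30, 1990 (Mw 6.9)
aug_1986 = shortfall_pct(0.107, 0.0736)   # Vrancea, Aug 30, 1986 (Mw 7.1)
# Both come out strongly negative: the constant-SAF assumption
# over-predicts shaking, consistent with nonlinear soil behavior.
```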
A seismic fault recognition method based on ant colony optimization
NASA Astrophysics Data System (ADS)
Chen, Lei; Xiao, Chuangbai; Li, Xueliang; Wang, Zhenli; Huo, Shoudong
2018-05-01
Fault recognition is an important step in seismic interpretation, and although many methods exist for this task, none can recognize faults with sufficient accuracy. To address this problem, we propose a new fault recognition method based on ant colony optimization that can locate faults precisely and extract them from the seismic section. First, seismic horizons are extracted by a connected-component labeling algorithm; second, fault locations are determined from the horizontal endpoints of each horizon; third, the whole seismic section is divided into several rectangular blocks, and the top and bottom endpoints of each block are treated as the nest and food, respectively, for the ant colony optimization algorithm. In addition, the positive section is treated as a three-dimensional terrain by using the seismic amplitude as a height. The optimal route from nest to food computed by the ant colony in each block is then judged to be a fault. Finally, extensive comparative tests were performed on real seismic data, and the experimental results validate the effectiveness and advantages of the proposed method.
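The nest-to-food search can be illustrated with a toy ant colony finding a low-cost top-to-bottom path through a cost image, a stand-in for tracking a fault through an amplitude-derived terrain. This is a generic ACO sketch under our own parameter choices, not the authors' exact algorithm.

```python
import numpy as np

def aco_path(cost, n_ants=30, n_iter=40, alpha=1.0, beta=2.0, rho=0.3, seed=1):
    """Toy ant-colony search for a low-cost path from the top row ("nest")
    to the bottom row ("food") of a cost image."""
    rng = np.random.default_rng(seed)
    rows, cols = cost.shape
    tau = np.ones((rows, cols))              # pheromone field
    best_path, best_cost = None, np.inf
    for _ in range(n_iter):
        paths = []
        for _ in range(n_ants):
            c = int(rng.integers(cols))
            path, total = [c], cost[0, c]
            for r in range(1, rows):
                cand = [x for x in (c - 1, c, c + 1) if 0 <= x < cols]
                # transition weight: pheromone^alpha * (1/cost)^beta
                w = np.array([tau[r, x] ** alpha / (cost[r, x] + 1e-9) ** beta
                              for x in cand])
                c = cand[rng.choice(len(cand), p=w / w.sum())]
                path.append(c)
                total += cost[r, c]
            paths.append((path, total))
            if total < best_cost:
                best_path, best_cost = path, total
        tau *= 1.0 - rho                     # evaporation
        for path, total in paths:
            for r, c in enumerate(path):
                tau[r, c] += 1.0 / total     # deposit, stronger on cheaper paths
    return best_path, best_cost

# A cheap corridor along column 3 of an otherwise expensive "section"
section = np.full((12, 8), 5.0)
section[:, 3] = 0.5
fault_path, fault_cost = aco_path(section)
```

The colony concentrates pheromone on the cheap corridor, so the best route it reports follows the "fault" column.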
Probabilistic Seismic Hazard Assessment for Iraq Using Complete Earthquake Catalogue Files
NASA Astrophysics Data System (ADS)
Ameer, A. S.; Sharma, M. L.; Wason, H. R.; Alsinawi, S. A.
2005-05-01
Probabilistic seismic hazard analysis (PSHA) has been carried out for Iraq. The earthquake catalogue used in the present study covers the area between latitudes 29°-38.5° N and longitudes 39°-50° E and contains more than a thousand events for the period 1905-2000. The entire Iraq region has been divided into thirteen seismogenic sources based on their seismic characteristics, geological setting and tectonic framework. The completeness of the seismicity catalogue has been checked using the method proposed by Stepp (1972). The completeness analysis shows that the earthquake catalogue is not complete below Ms = 4.8 for all of Iraq and for seismic source zones S1, S4, S5, and S8, while completeness varies for the other seismic zones. A statistical treatment of the completeness of the data file was carried out in each magnitude class. Frequency-magnitude distributions (FMD) for the study area, including all seismic source zones, were established, and the minimum magnitude of complete reporting (Mc) was then estimated. For the whole of Iraq, Mc was estimated to be about Ms = 4.0, while S11 shows the lowest Mc, about Ms = 3.5, and the highest Mc, about Ms = 4.2, was observed for S4. The earthquake activity parameters (activity rate λ, b value, maximum regional magnitude mmax) as well as the mean return period (R) for events with magnitude m ≥ mmin, along with their probability of occurrence, have been determined for all thirteen seismic source zones of Iraq. The maximum regional magnitude mmax was estimated as 7.87 ± 0.86 for the whole of Iraq. The return period for magnitude 6.0 is largest for source zone S3, where it is estimated to be 705 years, while the smallest value, 9.9 years, is obtained for Iraq as a whole.
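The activity parameters above rest on a Gutenberg-Richter fit; a standard way to estimate the b value above a completeness magnitude Mc is Aki's maximum-likelihood formula, and the mean return period then follows from the fitted annual rate. A generic sketch (the synthetic catalogue is illustrative, not the Iraq data):

```python
import numpy as np

def aki_b_value(mags, mc, dm=0.0):
    """Maximum-likelihood b value (Aki, 1965) for magnitudes >= mc;
    dm is the magnitude binning width (0 for continuous magnitudes)."""
    m = np.asarray(mags)
    m = m[m >= mc]
    return np.log10(np.e) / (m.mean() - (mc - dm / 2.0))

def return_period(m, a, b, years):
    """Mean return period (years) of events with magnitude >= m, given a
    Gutenberg-Richter fit log10 N = a - b*m over a catalogue of `years`."""
    return years / 10.0 ** (a - b * m)

# Synthetic catalogue with a true b value of 1.0 above Mc = 4.0:
# above Mc, Gutenberg-Richter magnitudes are exponential with rate b*ln(10)
rng = np.random.default_rng(7)
mags = 4.0 + rng.exponential(1.0 / np.log(10.0), 20000)
b_est = aki_b_value(mags, mc=4.0)
```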
Scaled accelerographs for design of structures in Quetta, Baluchistan, Pakistan
NASA Astrophysics Data System (ADS)
Bhatti, Abdul Qadir
2016-12-01
Structural design for seismic excitation is usually based on the peak values of forces and deformations over the duration of the earthquake. Determining these peak values requires dynamic analysis, by either response history analysis (RHA), also called time history analysis, or response spectrum analysis (RSA), both of which depend upon ground motion severity. In the past, PGA was used to describe ground motion severity, because the seismic force on a rigid body is proportional to the ground acceleration. However, it has been pointed out that the single highest peak on an accelerogram is a very unreliable description of the accelerogram as a whole. In this study, we consider the 0.2-s and 1-s spectral accelerations. Seismic loading has been defined in terms of a design spectrum and time histories, leading to the two methods of dynamic analysis. The design spectrum for Quetta will be constructed incorporating the parameters of ASCE 7-05/IBC 2006/2009, which are used by modern codes and regulations worldwide such as IBC 2006/2009, ASCE 7-05, ATC-40, and FEMA-356. A suite of time histories representing the design earthquake will also be prepared; this will be a helpful tool for carrying out time history dynamic analysis of structures in Quetta.
Resource Assessment of Methane Hydrate in the Eastern Nankai Trough, Japan
NASA Astrophysics Data System (ADS)
Fujii, T.; Saeki, T.; Kobayashi, T.; Inamori, T.; Hayashi, M.; Takano, O.
2007-12-01
Resource assessment of methane hydrate (MH) in the eastern Nankai Trough was conducted through a probabilistic approach using 2D/3D seismic survey data and drilling data from the METI exploratory test wells 'Tokai-oki to Kumano-nada' [1, 2, 3]. We extracted several prospective 'MH concentrated zones' [4] characterized by high resistivity in well logs, strong seismic reflectors, high seismic velocity, and turbidite deposits delineated by sedimentary facies analysis. The amount of methane gas contained in the MH-bearing layers was calculated with the volumetric method for each zone. Each parameter, such as gross rock volume (GRV), net-to-gross ratio (N/G), MH pore saturation (Sh), porosity, cage occupancy, and volume ratio, was given as a probabilistic distribution for Monte Carlo simulation, considering the uncertainty of these values. The GRV for each hydrate-bearing zone was calculated from both the strong seismic amplitude anomaly and the velocity anomaly. Time-to-depth conversion was conducted using interval velocities derived from SVWD (Seismic Vision While Drilling). A risk factor was applied to the estimation of GRV in the 2D seismic area, considering the uncertainty of seismic interpretation. The N/G was determined based on the relationship between LWD (Logging While Drilling) resistivity and grain size in zones with existing wells; 3 ohm-m was used as the typical cut-off value to determine net intervals. A seismic facies map created with a sequence-stratigraphic approach [5] was also used to determine the N/G in zones without well control. Porosity was estimated using the density log, calibrated by core analysis. Sh was estimated by combining the density log and the NMR log (DMR method), calibrated against the gas volumes observed in onboard MH dissociation tests using the PTCS (Pressure Temperature Core Sampler) [6].
Sh in zones without well control was estimated using the relationship between seismic P-wave interval velocity and Sh from the NMR log at well locations. Cage occupancy was set at around 0.95 by reference to recent field observations. The total amount of methane gas in place contained in MH in the eastern Nankai Trough within the survey area was estimated to be 40 tcf as the Pmean value (10 tcf as P90, 82 tcf as P10). The total gas in place for the MH concentrated zones was estimated to be 20 tcf (half of the total) as the Pmean value. Sensitivity analysis indicated that N/G and Sh have higher sensitivity than the other parameters and are important for further detailed analysis. This study was carried out as part of the Research Consortium for Methane Hydrate Resource in Japan (MH21). [1] Takahashi et al. (2005): Proc. of 2005 OTC, 2-5 May, Houston, Texas, U.S.A. [2] Fujii et al. (2005): Proc. of 5th ICGH, Trondheim, Norway, 974-979. [3] Tsuji et al. (2007): AAPG Special Publication (in press). [4] Saeki et al. (2007): Abst. of 2007 Technical Meeting of the JAPT, June 5-7, 2007, Tokyo, p. 49. [5] Takano et al. (2007): Abst. of 2007 Technical Meeting of the JAPT, June 5-7, 2007, Tokyo, p. 34. [6] Fujii et al. (2007): AAPG Special Publication (in press).
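The volumetric Monte Carlo procedure described above multiplies probability distributions of the reservoir parameters and reads P90/P50/P10 off the resulting distribution of gas in place. A generic sketch with purely illustrative distributions (none of the numbers below are the project's actual inputs; the volume ratio of 164 is a typical hydrate-to-gas expansion factor at STP, assumed here):

```python
import numpy as np

# GIP = GRV * N/G * porosity * Sh * cage_occupancy * volume_ratio
rng = np.random.default_rng(42)
n = 100_000
grv = rng.triangular(1e9, 2e9, 4e9, n)   # gross rock volume, m^3 (assumed)
ntg = rng.uniform(0.3, 0.7, n)           # net-to-gross (assumed)
phi = rng.normal(0.40, 0.05, n)          # porosity (assumed)
sh = rng.uniform(0.4, 0.8, n)            # hydrate pore saturation (assumed)
cage = 0.95                              # cage occupancy (abstract's value)
vr = 164.0                               # gas expansion ratio at STP (assumed)

gip = grv * ntg * phi * sh * cage * vr   # m^3 of methane at STP

# P90 is the value exceeded with 90% probability, i.e. the 10th percentile
p90, p50, p10 = np.percentile(gip, [10, 50, 90])
```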
NASA Astrophysics Data System (ADS)
Larsen, C. F.; Bartholomaus, T. C.; O'Neel, S.; West, M. E.
2010-12-01
We observe ice motion, calving and seismicity simultaneously and at high resolution on an advancing tidewater glacier in Icy Bay, Alaska. Icy Bay's tidewater glaciers dominate regional glacier-generated seismicity in Alaska. Yahtse Glacier emanates from the St. Elias Range near the Bering-Bagley-Seward-Malaspina icefield system, the most extensive glacier cover outside the polar regions. Rapid rates of change and fast flow (>16 m/d near the terminus) at Yahtse Glacier provide a direct analog to the disintegrating outlet systems in Greenland. Our field experiment co-locates GPS receivers and seismometers on the surface of the glacier, with a wider network of bedrock seismometers surrounding the glacier. Time-lapse photogrammetry, fjord wave-height sensors, and optical survey methods monitor iceberg calving and ice velocity near the terminus. This suite of geophysical instrumentation enables us to characterize glacier motion and geometry changes while concurrently listening for seismic energy release. We are performing a close examination of calving as a seismic source and of the associated mechanisms of energy transfer to seismic waves. Detailed observations of ice motion (GPS and optical surveying), glacier geometry and iceberg calving (direct observations and time-lapse photogrammetry) have been made in concert with a passive seismic network. Combined, the observations form the basis of a rigorous analysis exploring the relationships between glacier-generated seismic events and motion, glacier-fjord interactions, calving and hydraulics. Our work is designed to demonstrate the applicability and utility of seismology for studying the impact of climate forcing on calving glaciers.
A Probabilistic Tsunami Hazard Assessment Methodology and Its Application to Crescent City, CA
NASA Astrophysics Data System (ADS)
Gonzalez, F. I.; Leveque, R. J.; Waagan, K.; Adams, L.; Lin, G.
2012-12-01
A PTHA methodology, based in large part on probabilistic seismic hazard assessment methods (e.g., Cornell, 1968; SSHAC, 1997; Geist and Parsons, 2005), was previously applied to Seaside, OR (González et al., 2009). This initial version of the method has been updated to include: a revised method to estimate tidal uncertainty; an improved method for generating stochastic realizations to estimate slip distribution uncertainty (Mai and Beroza, 2002; Blair et al., 2011); additional near-field sources in the Cascadia Subduction Zone, based on the work of Goldfinger et al. (2012); and far-field sources in Japan, based on information updated since the 11 March 2011 Tohoku tsunami (Japan Earthquake Research Committee, 2011). The GeoClaw tsunami model (Berger et al., 2011) is used to simulate generation, propagation and inundation. We will discuss this revised PTHA methodology and the results of its application to Crescent City, CA. Berger, M.J., D. L. George, R. J. LeVeque, and K. T. Mandli (2011): The GeoClaw software for depth-averaged flows with adaptive refinement, Adv. Water Res. 34, pp. 1195-1206. Blair, J.L., McCrory, P.A., Oppenheimer, D.H., and Waldhauser, F. (2011): A geo-referenced 3D model of the Juan de Fuca slab and associated seismicity: U.S. Geological Survey Data Series 633, v. 1.0, available at http://pubs.usgs.gov/ds/633/. Cornell, C. A. (1968): Engineering seismic risk analysis, Bull. Seismol. Soc. Am., 58, 1583-1606. Geist, E. L., and T. Parsons (2005): Probabilistic analysis of tsunami hazards, Nat. Hazards, 37 (3), 277-314. Goldfinger, C., Nelson, C.H., Morey, A.E., Johnson, J.E., Patton, J.R., Karabanov, E., Gutiérrez-Pastor, J., Eriksson, A.T., Gràcia, E., Dunhill, G., Enkin, R.J., Dallimore, A., and Vallier, T. (2012): Turbidite event history—Methods and implications for Holocene paleoseismicity of the Cascadia subduction zone: U.S. Geological Survey Professional Paper 1661-F, 170 p. (Available at http://pubs.usgs.gov/pp/pp1661f/).
González, F.I., E.L. Geist, B. Jaffe, U. Kânoglu, H. Mofjeld, C.E. Synolakis, V.V. Titov, D. Arcas, D. Bellomo, D. Carlton, T. Horning, J. Johnson, J. Newman, T. Parsons, R. Peters, C. Peterson, G. Priest, A. Venturato, J. Weber, F. Wong, and A. Yalciner (2009): Probabilistic tsunami hazard assessment at Seaside, Oregon, for near- and far-field seismic sources, J. Geophys. Res., 114, C11023, doi:10.1029/2008JC005132. Japan Earthquake Research Committee (2011): http://www.jishin.go.jp/main/p_hyoka02.htm. Mai, P. M., and G. C. Beroza (2002): A spatial random field model to characterize complexity in earthquake slip, J. Geophys. Res., 107(B11), 2308, doi:10.1029/2001JB000588. SSHAC (Senior Seismic Hazard Analysis Committee) (1997): Recommendations for probabilistic seismic hazard analysis: Guidance on uncertainty and use of experts, Main Report, Rep. NUREG/CR-6372, UCRL-ID-122160, Vol. 1, 256 pp., U.S. Nuclear Regulatory Commission.
EMERALD: Coping with the Explosion of Seismic Data
NASA Astrophysics Data System (ADS)
West, J. D.; Fouch, M. J.; Arrowsmith, R.
2009-12-01
The geosciences are currently generating an unparalleled quantity of new public broadband seismic data with the establishment of large-scale seismic arrays such as the EarthScope USArray, which are enabling new and transformative scientific discoveries of the structure and dynamics of the Earth’s interior. Much of this explosion of data is a direct result of the formation of the IRIS consortium, which has enabled an unprecedented level of open exchange of seismic instrumentation, data, and methods. The production of these massive volumes of data has generated new and serious data management challenges for the seismological community. A significant challenge is the maintenance and updating of seismic metadata, which includes information such as station location, sensor orientation, instrument response, and clock timing data. This key information changes at unknown intervals, and the changes are not generally communicated to data users who have already downloaded and processed data. Another basic challenge is the ability to handle massive seismic datasets when waveform file volumes exceed the fundamental limitations of a computer’s operating system. A third, long-standing challenge is the difficulty of exchanging seismic processing codes between researchers; each scientist typically develops his or her own unique directory structure and file naming convention, requiring that codes developed by another researcher be rewritten before they can be used. To address these challenges, we are developing EMERALD (Explore, Manage, Edit, Reduce, & Analyze Large Datasets). The overarching goal of the EMERALD project is to enable more efficient and effective use of seismic datasets ranging from just a few hundred to millions of waveforms with a complete database-driven system, leading to higher quality seismic datasets for scientific analysis and enabling faster, more efficient scientific research.
We will present a preliminary (beta) version of EMERALD, an integrated, extensible, standalone database server system based on the open-source PostgreSQL database engine. The system is designed for fast and easy processing of seismic datasets, and provides the necessary tools to manage very large datasets and all associated metadata. EMERALD provides methods for efficient preprocessing of seismic records; large record sets can be easily and quickly searched, reviewed, revised, reprocessed, and exported. EMERALD can retrieve and store station metadata and alert the user to metadata changes. The system provides many methods for visualizing data, analyzing dataset statistics, and tracking the processing history of individual datasets. EMERALD allows development and sharing of visualization and processing methods using any of 12 programming languages. EMERALD is designed to integrate existing software tools; the system provides wrapper functionality for existing widely-used programs such as GMT, SOD, and TauP. Users can interact with EMERALD via a web browser interface, or they can directly access their data from a variety of database-enabled external tools. Data can be imported and exported from the system in a variety of file formats, or can be directly requested and downloaded from the IRIS DMC from within EMERALD.
NASA Astrophysics Data System (ADS)
Le Goff, Boris
Seismic Hazard Analysis (PSHA), rather than the subjective methodologies that are currently used. This study focuses in particular on the definition of the seismic sources, through seismotectonic zoning, and on the determination of historical earthquake locations. An important step in probabilistic seismic hazard analysis is defining the seismic source model. Such a model expresses the association of the seismicity characteristics with the tectonically active geological structures evidenced by seismotectonic studies. Given that most faults in regions of low seismicity are not sufficiently well characterized, source models are generally defined as areal zones, delimited by finite boundary polygons, within which the seismicity and the geological features are deemed homogeneous (e.g., focal depth, seismicity rate). Besides the lack of data (the short period of instrumental seismicity), such a method raises several problems in regions of low seismic activity: 1) the resulting hazard maps are highly sensitive to the location of zone boundaries, which are set by expert judgment; 2) the zoning cannot represent variability or structural complexity in the seismic parameters; 3) the seismicity rate is distributed uniformly throughout the zone, so the location of the information used to compute it is lost. We investigate an alternative approach to seismotectonic zoning with three main objectives: 1) to obtain a reproducible method that 2) preserves the information on the sources and extent of the uncertainties, so that they can be propagated (through ground motion prediction equations on to the hazard maps), and that 3) redefines the seismic source concept to reflect our knowledge of the seismogenic structures and of clustering. To this end, Bayesian methods are favored.
First, a generative model with two zones, differentiated by two different surface activity rates, was developed, creating synthetic catalogs drawn from a Poisson distribution as the occurrence model, a truncated Gutenberg-Richter law as the magnitude-frequency relationship, and a uniform spatial distribution. Inference on this model allows us to assess the minimum number of data, nmin, required in an earthquake catalog to recover the activity rates of both zones and the limit between them with a given level of accuracy. In this Bayesian model, the earthquake locations are essential; consequently, these data have to be obtained as accurately as possible. The main difficulty is reducing the location uncertainty of historical earthquakes. We propose to use the method of Bakun and Wentworth (1997) to re-estimate the epicentral regions of these events. This method uses the intensity data points directly rather than the isoseismal lines drawn by experts. The significant advantage of using individual intensity observations directly is that the procedures are explicit and hence the results are reproducible. The method provides an estimate of the epicentral region with confidence levels appropriate to the number of intensity data points used. As an example, we applied this methodology to the 1909 Benavente event, because of its controversial location and the particular shape of its isoseismal lines. A new location of the 1909 Benavente event is presented in this study, and its epicentral region is expressed with confidence levels related to the number of intensity data points. This epicentral region is improved by the development of a new intensity-distance attenuation law appropriate for mainland Portugal. This law is the first for mainland Portugal developed as a function of magnitude (Mw) rather than the subjective epicentral intensity.
From the logarithmic regression of each event, we define the functional form of the attenuation law. We obtained the following attenuation law: I = -1.9438 ln(D) + 4.1 Mw - 9.5763, for 4.4 ≤ Mw ≤ 6.2. Using this attenuation law, we reached a magnitude estimate for the 1909 Benavente event that is in good agreement with the instrumental one. The epicentral region estimate was also improved, with tighter confidence-level contours and a minimum of rms[MI] closer to the epicenter estimate of Karnik (1969). Finally, this two-zone model will serve as a reference for comparison with other models that will incorporate other available data. Nevertheless, further improvements are needed to obtain a full seismotectonic zoning. We emphasize that such an approach is reproducible once the priors and data sets are chosen. Indeed, the objective is to incorporate expert opinion as priors and avoid relying on expert decisions; the products are then directly the result of the inference, when only one model is considered, or of a combination of models in the Bayesian sense.
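The quoted attenuation law is straightforward to evaluate and to invert for magnitude from a single intensity observation, which is the core operation in a Bakun-Wentworth style search. A direct transcription (D in km; the inversion helper is our addition, not from the paper):

```python
import numpy as np

def intensity(d_km, mw):
    """Intensity-distance attenuation law quoted in the abstract,
    stated valid for 4.4 <= Mw <= 6.2."""
    return -1.9438 * np.log(d_km) + 4.1 * mw - 9.5763

def mw_from_intensity(i_obs, d_km):
    """Invert the law for magnitude given one intensity observation
    at epicentral distance d_km (hypothetical helper)."""
    return (i_obs + 1.9438 * np.log(d_km) + 9.5763) / 4.1

# Round trip: an Mw 6.0 event observed at 50 km
i_50 = intensity(50.0, 6.0)
mw_back = mw_from_intensity(i_50, 50.0)
```

In a Bakun-Wentworth grid search, `mw_from_intensity` would be applied to every intensity data point for each trial epicenter, and the trial minimizing the spread of the individual magnitude estimates (rms[MI]) defines the epicentral region.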
Seismic evaluation of vulnerability for SAMA educational buildings in Tehran
DOE Office of Scientific and Technical Information (OSTI.GOV)
Amini, Omid Nassiri; Amiri, Javad Vaseghi
2008-07-08
Earthquakes are destructive phenomena that shake different parts of the earth every year and cause much destruction. Iran is one of the highly seismic, quake-prone parts of the world and suffers large economic losses and casualties each year; schools are among the most important places to protect during such crises. There was no special oversight of the design and construction of school buildings in Tehran until the late 70's, and as Tehran sits on faults, the instability of such buildings may cause irrecoverable economic losses and, especially, loss of life; preventing this is therefore an urgent need. For this purpose, some of the schools built during 67-78, mostly with steel braced-frame structures, were selected. First, by evaluating the selected samples, gathering information and conducting a visual survey, the prepared questionnaires were filled out. Using the ARIA and SABA (Venezuela) methods, a new modified combined method for qualitative evaluation was derived and applied. Then, for quantitative evaluation, using 3D computer models and nonlinear static analysis methods, a number of the buildings from the qualitative evaluation were re-evaluated, and finally the real behavior of the structures under earthquakes was studied with nonlinear dynamic analysis. The results of the qualitative and quantitative evaluations were compared, and a suitable pattern for the seismic evaluation of educational buildings is presented. The results can also serve as guidance for those in charge of retrofitting or, if necessary, rebuilding the schools.
NASA Astrophysics Data System (ADS)
Cowie, Leanne; Kusznir, Nick; Horn, Brian
2014-05-01
Integrated quantitative analysis using deep seismic reflection data and gravity inversion has been applied to the S Angolan and SE Brazilian margins to determine OCT structure, COB location and magmatic type. Knowledge of these margin parameters is of critical importance for understanding rifted continental margin formation processes and for evaluating petroleum systems in deep-water frontier oil and gas exploration. The OCT structure, COB location and magmatic type of the S Angolan and SE Brazilian rifted continental margins are much debated; exhumed and serpentinised mantle has been reported at these margins. Gravity anomaly inversion, incorporating a lithosphere thermal gravity anomaly correction, has been used to determine Moho depth, crustal basement thickness and continental lithosphere thinning. Residual depth anomaly (RDA) analysis has been used to investigate OCT bathymetric anomalies with respect to expected oceanic bathymetries, and subsidence analysis has been used to determine the distribution of continental lithosphere thinning. These techniques have been validated on profiles Lusigal 12 and ISE-01 on the Iberian margin. In addition, a joint inversion technique using deep seismic reflection and gravity anomaly data has been applied to the ION-GXT BS1-575 SE Brazil and ION-GXT CS1-2400 S Angola deep seismic reflection lines. The joint inversion method solves for a coincident seismic and gravity Moho in the time domain and calculates the lateral variations in crustal basement densities and velocities along the seismic profiles. Gravity inversion, RDA and subsidence analysis along the ION-GXT BS1-575 profile, which crosses the Sao Paulo Plateau and Florianopolis Ridge of the SE Brazilian margin, predict the COB to be located SE of the Florianopolis Ridge. Integrated quantitative analysis shows no evidence for exhumed mantle on this margin profile.
The joint inversion technique predicts oceanic crustal thicknesses of between 7 and 8 km, with normal oceanic basement seismic velocities and densities. Beneath the Sao Paulo Plateau and Florianopolis Ridge, the joint inversion predicts crustal basement thicknesses of 10-15 km, with high basement densities and seismic velocities under the Sao Paulo Plateau that are interpreted as indicating a significant magmatic component within the crustal basement. The Sao Paulo Plateau and Florianopolis Ridge are separated by a thin region of crustal basement beneath the salt, interpreted as a regional transtensional structure. Sediment-corrected RDAs and gravity-derived "synthetic" RDAs are of similar magnitude on oceanic crust, implying negligible mantle dynamic topography. Gravity inversion, RDA and subsidence analysis along the S Angolan ION-GXT CS1-2400 profile suggest that exhumed mantle, corresponding to a magma-poor margin, is absent. The thickness of the earliest oceanic crust, derived from gravity and deep seismic reflection data, is approximately 7 km, consistent with the global average oceanic crustal thickness. The joint inversion predicts a small difference between oceanic and continental crustal basement density and seismic velocity, with the change in basement density and velocity corresponding to the COB independently determined from RDA and subsidence analysis. The difference between the sediment-corrected RDA and that predicted from the gravity-inversion crustal thickness variation implies that this margin is experiencing approximately 500 m of anomalous uplift, attributed to mantle dynamic uplift.
Time Analysis of Building Dynamic Response Under Seismic Action. Part 1: Theoretical Propositions
NASA Astrophysics Data System (ADS)
Ufimtcev, E. M.
2017-11-01
The first part of the article presents the main provisions of an analytical approach, the time analysis method (TAM), developed for calculating the elastic dynamic response of rod structures treated as discrete dissipative systems (DDS) and based on the investigation of the characteristic matrix quadratic equation. The assumptions adopted in constructing the mathematical model of structural oscillations, as well as the features of calculating and recording seismic forces from earthquake accelerogram data, are given. A system of resolving equations is given to determine the nodal (kinematic and force) response parameters as well as the stress-strain state (SSS) parameters of the system's rods.
NASA Astrophysics Data System (ADS)
Nawaz, Muhammad Atif; Curtis, Andrew
2018-04-01
We introduce a new Bayesian inversion method that estimates the spatial distribution of geological facies from attributes of seismic data, showing how the usual probabilistic inverse problem can be solved within an optimization framework while still providing fully probabilistic results. Our mathematical model treats the seismic attributes as observed data, assumed to have been generated by the geological facies. The method infers the post-inversion (posterior) probability density of the facies, plus some other unknown model parameters, from the seismic attributes and geological prior information. Most previous research in this domain is based on the localized-likelihoods assumption, whereby the seismic attributes at a location are assumed to depend on the facies only at that location. Such an assumption is unrealistic because of imperfect seismic data acquisition and processing, and because of fundamental limitations of seismic imaging methods. In this paper we relax this assumption: we allow probabilistic dependence between the seismic attributes at a location and the facies in any neighbourhood of that location through a spatial filter. We term such likelihoods quasi-localized.
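A quasi-localized likelihood of this kind can be sketched in one dimension: the expected attribute at each location is a spatially filtered version of the facies' mean responses, rather than the purely local mean. Everything below (the Gaussian noise model, the filter, and the values) is an illustrative assumption, not the authors' model.

```python
import numpy as np

def quasi_localized_loglik(attr, facies, means, sigma, filt):
    """Gaussian log-likelihood of 1-D seismic attributes where the
    expected attribute is a filtered (blurred) version of the per-facies
    mean responses -- a sketch of a quasi-localized likelihood."""
    local_mean = means[facies]                       # localized prediction
    blurred = np.convolve(local_mean, filt, mode="same")
    resid = attr - blurred
    return -0.5 * np.sum(resid ** 2 / sigma ** 2
                         + np.log(2.0 * np.pi * sigma ** 2))

# Toy example: five locations, two facies, a 3-point smoothing filter
facies = np.array([0, 0, 1, 1, 0])
means = np.array([1.0, 3.0])       # mean attribute response per facies
filt = np.array([0.25, 0.5, 0.25])
attr = np.convolve(means[facies], filt, mode="same")  # noise-free attributes

ll_exact = quasi_localized_loglik(attr, facies, means, 0.5, filt)
ll_off = quasi_localized_loglik(attr + 1.0, facies, means, 0.5, filt)
```

A purely localized likelihood is the special case `filt = [1.0]`; widening the filter encodes the imaging blur that motivates the relaxation.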
Porosity Estimation by Artificial Neural Networks Inversion: Application to an Algerian South Field
NASA Astrophysics Data System (ADS)
Eladj, Said; Aliouane, Leila; Ouadfeul, Sid-Ali
2017-04-01
One of the geophysicist's main current challenges is the discovery and study of stratigraphic traps, a difficult task that requires very fine analysis of the seismic data. Seismic data inversion allows lithological and stratigraphic information to be obtained for reservoir characterization. However, when solving the inverse problem we encounter difficulties such as non-existence and non-uniqueness of the solution, as well as instability of the processing algorithm. Therefore, uncertainties in the data and the non-linearity of the relationship between the data and the parameters must be taken seriously. In this case, artificial intelligence techniques such as Artificial Neural Networks (ANN) are used to resolve this ambiguity by integrating different physical property data, which requires supervised learning methods. In this work, we invert the 3D seismic cube for acoustic impedance using the colored inversion method; then, introducing the acoustic impedance volume resulting from this first step as an input to model-based inversion allows the porosity volume to be calculated using a Multilayer Perceptron Artificial Neural Network. Application to an Algerian South hydrocarbon field clearly demonstrates the power of the proposed processing technique to predict porosity from seismic data; the results obtained can be used for reserve estimation, permeability prediction, recovery factor estimation and reservoir monitoring. Keywords: Artificial Neural Networks, inversion, non-uniqueness, nonlinear, 3D porosity volume, reservoir characterization.
NASA Technical Reports Server (NTRS)
Jarosch, H. S.
1982-01-01
A method based on the use of constrained spline fits is used to overcome the difficulties arising when body-wave data in the form of T-delta are reduced to the tau-p form in the presence of cusps. In comparison with unconstrained spline fits, the method proposed here tends to produce much smoother models which lie approximately in the middle of the bounds produced by the extremal method. The method is noniterative and, therefore, computationally efficient. The method is applied to the lunar seismic data, where at least one triplication is presumed to occur in the P-wave travel-time curve. It is shown, however, that because of an insufficient number of data points for events close to the antipode of the center of the lunar network, the present analysis is not accurate enough to resolve the problem of a possible lunar core.
NASA Astrophysics Data System (ADS)
Gu, C.; Li, J.; Toksoz, M. N.
2013-12-01
Induced seismicity occurs both in conventional oil/gas fields due to production and water injection and in unconventional oil/gas fields due to hydraulic fracturing. Source mechanisms of these induced earthquakes are of great importance for understanding their causes and the physics of the seismic processes in reservoirs. Previous research on the analysis of induced seismic events in conventional oil/gas fields assumed a double couple (DC) source mechanism. However, recent studies have shown a non-negligible percentage of a non-double-couple (non-DC) component of source moment tensor in hydraulic fracturing events (Šílený et al., 2009; Warpinski and Du, 2010; Song and Toksöz, 2011). In this study, we determine the full moment tensor of the induced seismicity data in a conventional oil/gas field and for hydrofrac events in an unconventional oil/gas field. Song and Toksöz (2011) developed a full waveform based complete moment tensor inversion method to investigate a non-DC source mechanism. We apply this approach to the induced seismicity data from a conventional gas field in Oman. In addition, this approach is also applied to hydrofrac microseismicity data monitored by downhole geophones in four wells in US. We compare the source mechanisms of induced seismicity in the two different types of gas fields and explain the differences in terms of physical processes.
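The DC versus non-DC distinction the abstract turns on can be illustrated with a standard ISO/DC/CLVD percentage decomposition of a moment tensor (a Jost & Herrmann-style split, not the authors' full-waveform inversion code); the example tensors are the textbook pure strike-slip and pure explosion cases:

```python
import numpy as np

def decompose_mt(M):
    """Split a symmetric moment tensor into ISO, DC and CLVD percentages."""
    iso = np.trace(M) / 3.0
    dev = np.linalg.eigvalsh(M - iso * np.eye(3))
    d = dev[np.argsort(np.abs(dev))]  # deviatoric eigenvalues, |d0|<=|d1|<=|d2|
    eps = 0.0 if d[2] == 0 else -d[0] / abs(d[2])
    denom = abs(iso) + abs(d[2])
    p_iso = 100.0 * iso / denom if denom > 0 else 0.0
    p_clvd = 2.0 * eps * (100.0 - abs(p_iso))
    p_dc = 100.0 - abs(p_iso) - abs(p_clvd)
    return p_iso, p_dc, p_clvd

# Pure strike-slip double couple: expect 100% DC
dc = np.array([[0.0, 1.0, 0.0],
               [1.0, 0.0, 0.0],
               [0.0, 0.0, 0.0]])
p_iso, p_dc, p_clvd = decompose_mt(dc)

# Pure explosion: expect 100% ISO
e_iso, e_dc, e_clvd = decompose_mt(np.eye(3))
```

A hydraulic-fracturing event with tensile opening would land between these extremes, with non-negligible ISO and CLVD percentages.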
Using Network Theory to Understand Seismic Noise in Dense Arrays
NASA Astrophysics Data System (ADS)
Riahi, N.; Gerstoft, P.
2015-12-01
Dense seismic arrays offer an opportunity to study anthropogenic seismic noise sources with unprecedented detail. Man-made sources are typically high-frequency and low-intensity and propagate as surface waves. As a result, attenuation restricts their measurable footprint to a small subset of sensors. Medium heterogeneities can further introduce wavefront perturbations that limit processing based on travel time. We demonstrate a non-parametric technique that can reliably identify very local events within the array as a function of frequency and time without using travel times. The approach estimates the non-zero support of the array covariance matrix and then uses network analysis tools to identify clusters of sensors that are sensing a common source. We verify the method on simulated data and then apply it to the Long Beach (CA) geophone array. The method exposes a helicopter traversing the array, oil production facilities with different characteristics, and the fact that noise sources near roads tend to be around 10-20 Hz.
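The clustering step can be sketched in a few lines: threshold the covariance matrix to get a sensor graph, then take its connected components. The covariance values below are invented, and this uses a plain graph search rather than whatever network-analysis tooling the authors used:

```python
def source_clusters(cov, threshold):
    """Group sensors sensing a common source: build a graph from
    significant covariance entries, return its connected components."""
    n = len(cov)
    adj = [[j for j in range(n) if j != i and abs(cov[i][j]) > threshold]
           for i in range(n)]
    seen, clusters = set(), []
    for start in range(n):
        if start in seen:
            continue
        stack, comp = [start], []
        while stack:
            v = stack.pop()
            if v in seen:
                continue
            seen.add(v)
            comp.append(v)
            stack.extend(adj[v])
        clusters.append(sorted(comp))
    return clusters

# Toy covariance: sensors 0-2 share one source, 3-4 another, 5 sees only noise
cov = [
    [1.0, 0.8, 0.7, 0.0, 0.0, 0.0],
    [0.8, 1.0, 0.9, 0.0, 0.0, 0.0],
    [0.7, 0.9, 1.0, 0.0, 0.0, 0.0],
    [0.0, 0.0, 0.0, 1.0, 0.6, 0.0],
    [0.0, 0.0, 0.0, 0.6, 1.0, 0.0],
    [0.0, 0.0, 0.0, 0.0, 0.0, 1.0],
]
clusters = source_clusters(cov, threshold=0.5)
```

In practice the covariance support itself must be estimated from noisy data, which is the statistically delicate part of the method; the graph step afterward is as simple as shown.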
Waldner, J.S.; Hall, D.W.; Uptegrove, J.; Sheridan, R.E.; Ashley, G.M.; Esker, D.
1999-01-01
Beach replenishment serves the dual purpose of maintaining a source of tourism and recreation while protecting life and property. For New Jersey, sources for beach sand supply are increasingly found offshore. To meet present and future needs, geologic and geophysical techniques can be used to improve the identification, volume estimation, and determination of suitability, thereby making the mining and managing of this resource more effective. Current research has improved both data collection and interpretation of seismic surveys and vibracore analysis for projects investigating sand ridges offshore of New Jersey. The New Jersey Geological Survey in cooperation with Rutgers University is evaluating the capabilities of digital seismic data (in addition to analog data) to analyze sand ridges. The printing density of analog systems limits the dynamic range to about 24 dB. Digital acquisition systems with dynamic ranges above 100 dB can permit enhanced seismic profiles by trace static correction, deconvolution, automatic gain scaling, horizontal stacking and digital filtering. Problems common to analog data, such as wave-motion effects of surface sources, water-bottom reverberation, and bubble-pulse-width can be addressed by processing. More than 160 line miles of digital high-resolution continuous profiling seismic data have been collected at sand ridges off Avalon, Beach Haven, and Barnegat Inlet. Digital multichannel data collection has recently been employed to map sand resources within the Port of New York/New Jersey expanded dredge-spoil site located 3 mi offshore of Sandy Hook, New Jersey. Multichannel data processing can reduce multiples, improve signal-to-noise calculations, enable source deconvolution, and generate sediment acoustic velocities and acoustic impedance analysis. Synthetic seismograms based on empirical relationships among grain size distribution, density, and velocity from vibracores are used to calculate proxy values for density and velocity. 
The seismograms are then correlated to the digital seismic profile to confirm reflected events. They are particularly useful where individual reflection events cannot be detected but a waveform generated by several thin lithologic units can be recognized. Progress in application of geologic and geophysical methods provides advantages in detailed sediment analysis and volumetric estimation of offshore sand ridges. New techniques for current and ongoing beach replenishment projects not only expand our knowledge of the geologic processes involved in sand ridge origin and development, but also improve our assessment of these valuable resources. These reconnaissance studies provide extensive data to the engineer regarding the suitability and quantity of sand and can optimize placement and analysis of vibracore samples.
NASA Astrophysics Data System (ADS)
Martinelli, Bruno
1990-07-01
The seismic activity of the Nevado del Ruiz volcano was monitored during August-September 1985 using a three-component portable seismograph station placed on the upper part of the volcano. The objective was to investigate the frequency content of the seismic signals and the possible sources of the volcanic tremor. The seismicity showed a wide spectrum of signals, especially at the beginning of September. Some relevant patterns from the collected records, which have been analyzed by spectrum analysis, are presented. For the purpose of analysis, the records have been divided into several categories such as long-period events, tremor, cyclic tremor episodes, and strong seismic activity on September 8, 1985. The origin of the seismic signals must be considered in relation to the dynamical and acoustical properties of fluids and the shape and dimensions of the volcano's conduits. The main results of the present experiment and analysis show that the sources of the seismic signals are within the volcanic edifice. The signal characteristics indicate that the sources lie in fluid-phase interactions rather than in brittle fracturing of solid components.
Receiver deghosting in the t-x domain based on super-Gaussianity
NASA Astrophysics Data System (ADS)
Lu, Wenkai; Xu, Ziqiang; Fang, Zhongyu; Wang, Ruiliang; Yan, Chengzhi
2017-01-01
Deghosting methods in the time-space (t-x) domain have attracted a lot of attention because of their flexibility for various source/receiver configurations. Based on the well-known observation that the seismic signal has a super-Gaussian distribution, we present a Super-Gaussianity based Receiver Deghosting (SRD) method in the t-x domain. In our method, we denote the upgoing wave and its ghost (downgoing wave) as a single seismic signal, and express the relationship between the upgoing wave and its ghost using two ghost parameters: the sea surface reflection coefficient and the time-shift between the upgoing wave and its ghost. For a single seismic signal, we estimate these two parameters by maximizing the super-Gaussianity of the deghosted output, which is achieved by a 2D grid search over an adaptively predefined discrete solution space. Since a large number of seismic signals are usually mixed together in a seismic trace, the proposed method divides the seismic trace into overlapping frames using a sliding time window with a step of one time sample, and treats each frame as a replacement for a single seismic signal. For a 2D seismic gather, we obtain two 2D maps of the ghost parameters. By assuming that these two parameters vary slowly in the t-x domain, we apply a 2D average filter to these maps to improve their reliability further. Finally, the deghosted outputs are merged to form the final deghosted result. To demonstrate the flexibility of the proposed method for arbitrary variable depths of the receivers, we apply it to several synthetic and field seismic datasets acquired by variable-depth streamers.
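A toy version of the core idea, with an invented spiky trace, ghost operator and parameter grids (much simpler than the SRD method's framing and 2D filtering): deghost under each candidate (reflection coefficient, time-shift) pair and keep the pair whose output has the highest kurtosis, i.e. is most super-Gaussian.

```python
def kurtosis(x):
    """Fourth standardized moment; high values mean a spiky (super-Gaussian) signal."""
    n = len(x)
    m = sum(x) / n
    var = sum((v - m) ** 2 for v in x) / n
    if var == 0:
        return 0.0
    return sum((v - m) ** 4 for v in x) / n / var ** 2

def deghost(trace, r, tau):
    """Invert the ghost model d[n] = u[n] - r*u[n - tau] recursively."""
    u = list(trace)
    for n in range(tau, len(u)):
        u[n] += r * u[n - tau]
    return u

def estimate_ghost(trace, r_grid, tau_grid):
    """Grid search: pick the (r, tau) whose deghosted output is most super-Gaussian."""
    return max(((r, tau) for r in r_grid for tau in tau_grid),
               key=lambda p: kurtosis(deghost(trace, p[0], p[1])))

# Synthetic spiky upgoing wave plus its sea-surface ghost
true_r, true_tau = 0.9, 5
u = [0.0] * 200
u[40], u[90], u[150] = 1.0, -0.8, 0.6
d = [u[n] - true_r * (u[n - true_tau] if n >= true_tau else 0.0)
     for n in range(len(u))]
r_est, tau_est = estimate_ghost(d, [0.7, 0.8, 0.9, 1.0], [3, 4, 5, 6, 7])
```

With the correct parameters the inverse operator cancels the ghost exactly, leaving only the three spikes; any mismatch leaves residual energy that lowers the kurtosis.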
Why is Probabilistic Seismic Hazard Analysis (PSHA) still used?
NASA Astrophysics Data System (ADS)
Mulargia, Francesco; Stark, Philip B.; Geller, Robert J.
2017-03-01
Even though it has never been validated by objective testing, Probabilistic Seismic Hazard Analysis (PSHA) has been widely used for almost 50 years by governments and industry in applications with lives and property hanging in the balance, such as deciding safety criteria for nuclear power plants, making official national hazard maps, developing building code requirements, and determining earthquake insurance rates. PSHA rests on assumptions now known to conflict with earthquake physics; many damaging earthquakes, including the 1988 Spitak, Armenia, event and the 2011 Tohoku, Japan, event, have occurred in regions rated relatively low-risk by PSHA hazard maps. No extant method, including PSHA, produces reliable estimates of seismic hazard. Earthquake hazard mitigation should be recognized to be inherently political, involving a tradeoff between uncertain costs and uncertain risks. Earthquake scientists, engineers, and risk managers can make important contributions to the hard problem of allocating limited resources wisely, but government officials and stakeholders must take responsibility for the risks of accidents due to natural events that exceed the adopted safety criteria.
Seismic attribute analysis for reservoir and fluid prediction, Malay Basin
DOE Office of Scientific and Technical Information (OSTI.GOV)
Mansor, M.N.; Rudolph, K.W.; Richards, F.B.
1994-07-01
The Malay Basin is characterized by excellent seismic data quality, but complex clastic reservoir architecture. With these characteristics, seismic attribute analysis is a very important tool in exploration and development geoscience and is routinely used for mapping fluids and reservoir, recognizing and risking traps, assessment, depth conversion, well placement, and field development planning. Attribute analysis can be successfully applied to both 2-D and 3-D data as demonstrated by comparisons of 2-D and 3-D amplitude maps of the same area. There are many different methods of extracting amplitude information from seismic data, including amplitude mapping, horizon slice, summed horizon slice, isochron slice, and horizon slice from AVO (amplitude versus offset) cube. Within the Malay Basin, horizon/isochron slice techniques have several advantages over simply extracting amplitudes from a picked horizon: they are much faster, permit examination of the amplitude structure of the entire cube, yield better results for weak/variable signatures, and aid summation of amplitudes. Summation in itself often yields improved results because it incorporates the signature from the entire reservoir interval, reducing any effects due to noise, mispicking, or waveform variations. Dip and azimuth attributes have been widely applied by industry for fault identification. In addition, these attributes can also be used to map signature variations associated with hydrocarbon contacts or stratigraphic changes, and this must be considered when using these attributes for structural interpretation.
NASA Astrophysics Data System (ADS)
Ramírez-Rojas, A.; Flores-Marquez, L. E.
2009-12-01
The short-time prediction of seismic phenomena is currently an important problem in the scientific community. In particular, the electromagnetic processes associated with seismic events have attracted great interest since the VAN method was implemented. The most important features of this methodology are the seismic electric signals (SES) observed prior to strong earthquakes. SES have been observed in electromagnetic series linked to EQs in Greece, Japan and Mexico. By means of the so-called natural time domain, introduced by Varotsos et al. (2001), signals of dichotomic nature observed in different systems, such as SES and ionic current fluctuations in membrane channels, can be characterized. In this work we analyze SES observed in geoelectric time series monitored in Guerrero, Mexico. Our analysis concerns two strong earthquakes that occurred on October 24, 1993 (M=6.6) and September 14, 1995 (M=7.3). The time series of the first displayed a seismic electric signal six days before the main shock, and in the second case the time series displayed dichotomous-like fluctuations some months before the EQ. We present the first results of the analysis in the natural time domain for the two cases, which seem to agree with the results reported by Varotsos. P. Varotsos, N. Sarlis, and E. Skordas, Practica of the Athens Academy 76, 388 (2001).
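The basic natural time construction can be sketched directly from its definition: the k-th of N events is assigned natural time chi_k = k/N, and the variance kappa_1 of chi weighted by normalized event energies is the diagnostic quantity. The equal-energy example below is a sanity check, not an analysis of the Guerrero data:

```python
def natural_time_kappa1(energies):
    """kappa_1 = <chi^2> - <chi>^2 with chi_k = k/N and weights
    p_k = Q_k / sum(Q), following the natural time formalism of
    Varotsos et al. (2001)."""
    N = len(energies)
    total = sum(energies)
    p = [q / total for q in energies]
    chi = [(k + 1) / N for k in range(N)]
    mean = sum(pk * ck for pk, ck in zip(p, chi))
    mean_sq = sum(pk * ck * ck for pk, ck in zip(p, chi))
    return mean_sq - mean ** 2

# Equal-energy events: kappa_1 = (N^2 - 1) / (12 N^2), tending to 1/12
N = 100
kappa1 = natural_time_kappa1([1.0] * N)
```

Departures of kappa_1 from the uniform-case value, computed over the actual event energies, are what the natural time analyses of SES sequences look for.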
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zucca, J J; Walter, W R; Rodgers, A J
2008-11-19
The last ten years have brought rapid growth in the development and use of three-dimensional (3D) seismic models of Earth structure at crustal, regional and global scales. In order to explore the potential for 3D seismic models to contribute to important societal applications, Lawrence Livermore National Laboratory (LLNL) hosted a 'Workshop on Multi-Resolution 3D Earth Models to Predict Key Observables in Seismic Monitoring and Related Fields' on June 6 and 7, 2007 in Berkeley, California. The workshop brought together academic, government and industry leaders in the research programs developing 3D seismic models and methods for the nuclear explosion monitoring and seismic ground motion hazard communities. The workshop was designed to assess the current state of work in 3D seismology and to discuss a path forward for determining if and how 3D Earth models and techniques can be used to achieve measurable increases in our capabilities for monitoring underground nuclear explosions and characterizing seismic ground motion hazards. This paper highlights some of the presentations, issues, and discussions at the workshop and proposes two specific paths by which to begin quantifying the potential contribution of progressively refined 3D seismic models in critical applied arenas. Seismic monitoring agencies are tasked with detection, location, and characterization of seismic activity in near real time. In the case of nuclear explosion monitoring or seismic hazard, decisions to further investigate a suspect event or to launch disaster relief efforts may rely heavily on real-time analysis and results. Because these are weighty decisions, monitoring agencies are regularly called upon to meticulously document and justify every aspect of their monitoring system. 
In order to meet this level of scrutiny and maintain operational robustness requirements, only mature technologies are considered for operational monitoring systems, and operational technology necessarily lags contemporary research. Current monitoring practice is to use relatively simple Earth models that generally afford analytical prediction of seismic observables (see Examples of Current Monitoring Practice below). Empirical relationships or corrections to predictions are often used to account for unmodeled phenomena, such as the generation of S-waves from explosions or the effect of 3-dimensional Earth structure on wave propagation. This approach produces fast and accurate predictions in areas where empirical observations are available. However, accuracy may diminish away from empirical data. Further, much of the physics is wrapped into an empirical relationship or correction, which limits the ability to fully understand the physical processes underlying the seismic observation. Every generation of seismology researchers works toward quantitative results, with leaders who are active at or near the forefront of what has been computationally possible. While recognizing that only a 3-dimensional model can capture the full physics of seismic wave generation and propagation in the Earth, computational seismology has, until recently, been limited to simplifying model parameterizations (e.g. 1D Earth models) that lead to efficient algorithms. What is different today is the fact that the largest and fastest machines are at last capable of evaluating the effects of generalized 3D Earth structure, at levels of detail that improve significantly over past efforts, with potentially wide application. Advances in numerical methods to compute travel times and complete seismograms for 3D models are enabling new ways to interpret available data. 
This includes algorithms such as the Fast Marching Method (Rawlinson and Sambridge, 2004) for travel time calculations and full waveform methods such as the spectral element method (SEM; Komatitsch et al., 2002, Tromp et al., 2005), higher order Galerkin methods (Kaser and Dumbser, 2006; Dumbser and Kaser, 2006) and advances in more traditional Cartesian finite difference methods (e.g. Pitarka, 1999; Nilsson et al., 2007). The ability to compute seismic observables using a 3D model is only half of the challenge; models must be developed that accurately represent true Earth structure. Indeed, advances in seismic imaging have followed improvements in 3D computing capability (e.g. Tromp et al., 2005; Rawlinson and Urvoy, 2006). Advances in seismic imaging methods have been fueled in part by theoretical developments and the introduction of novel approaches for combining different seismological observables, both of which can increase the sensitivity of observations to Earth structure. Examples of such developments are finite-frequency sensitivity kernels for body-wave tomography (e.g. Marquering et al., 1998; Montelli et al., 2004) and joint inversion of receiver functions and surface wave group velocities (e.g. Julia et al., 2000).
A seismic hazard uncertainty analysis for the New Madrid seismic zone
Cramer, C.H.
2001-01-01
A review of the scientific issues relevant to characterizing earthquake sources in the New Madrid seismic zone has led to the development of a logic tree of possible alternative parameters. A variability analysis, using Monte Carlo sampling of this consensus logic tree, is presented and discussed. The analysis shows that for 2%-exceedance-in-50-year hazard, the best-estimate seismic hazard map is similar to previously published seismic hazard maps for the area. For peak ground acceleration (PGA) and spectral acceleration at 0.2 and 1.0 s (0.2 and 1.0 s Sa), the coefficient of variation (COV) representing the knowledge-based uncertainty in seismic hazard can exceed 0.6 over the New Madrid seismic zone and diminishes to about 0.1 away from areas of seismic activity. Sensitivity analyses show that the largest contributor to PGA, 0.2 and 1.0 s Sa seismic hazard variability is the uncertainty in the location of future 1811-1812 New Madrid sized earthquakes. This is followed by the variability due to the choice of ground motion attenuation relation, the magnitude for the 1811-1812 New Madrid earthquakes, and the recurrence interval for M>6.5 events. Seismic hazard is not very sensitive to the variability in seismogenic width and length. Published by Elsevier Science B.V.
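Monte Carlo sampling of a logic tree can be sketched in miniature. The two branch sets, their weights, and the toy ground-motion proxy below are all invented for illustration; they stand in for the consensus logic tree and attenuation relations the study actually used.

```python
import random

random.seed(42)

# Hypothetical logic-tree branches: (value, weight) pairs
magnitude_branches = [(7.5, 0.2), (7.8, 0.5), (8.1, 0.3)]
location_shift_branches = [(-25.0, 0.25), (0.0, 0.5), (25.0, 0.25)]  # km

def pick(branches):
    """Draw one branch according to its weight."""
    r, acc = random.random(), 0.0
    for value, weight in branches:
        acc += weight
        if r <= acc:
            return value
    return branches[-1][0]

def hazard(mag, shift, site_km=50.0):
    """Toy ground-motion proxy: grows with magnitude, decays with distance."""
    dist = abs(site_km - shift)
    return 10 ** (0.3 * mag) / (dist + 10.0) ** 1.3

# Sample the tree many times and summarize hazard variability at one site
samples = [hazard(pick(magnitude_branches), pick(location_shift_branches))
           for _ in range(5000)]
mean = sum(samples) / len(samples)
var = sum((s - mean) ** 2 for s in samples) / len(samples)
cov = var ** 0.5 / mean  # coefficient of variation, as in the abstract
```

The COV of the sampled hazard values is exactly the kind of knowledge-based uncertainty measure the study maps across the seismic zone.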
AVO Analysis of a Shallow Gas Accumulation in the Marmara Sea
NASA Astrophysics Data System (ADS)
Er, M.; Dondurur, D.; Çifçi, G.
2012-04-01
In recent years, Amplitude versus Offset (AVO) analysis has been widely used in the detection and classification of gas anomalies from wide-offset seismic data. Bright spots, which are among the significant indicators of hydrocarbon accumulations, can also be identified successfully using AVO analysis. A bright spot anomaly was identified on the multi-channel seismic data collected by the R/V K. Piri Reis research vessel in the Marmara Sea in 2008. On prestack seismic data, the associated AVO anomalies are clearly identified on the supergathers. Near- and far-offset stack sections were plotted to show the amplitude changes at different offsets, and the bright amplitudes were observed on the far-offset stack. AVO analysis was applied to the observed bright spot anomaly following the standard data processing steps. The analysis includes the preparation of intercept, gradient and fluid factor sections of AVO attributes. The top and base boundaries of the gas-bearing sediment were delineated by the intercept-gradient crossplot method. 1D modelling was also performed to show AVO classes, and the models were compared with the analysis results. It is interpreted that the bright spot anomaly arises from a shallow gas accumulation. In addition, the gas saturation was also estimated from the P-wave velocity by the analysis. AVO analysis indicated Class 3 and Class 4 AVO anomalies on the bright spot anomaly.
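The intercept and gradient attributes come from a two-term Shuey fit, R(theta) = A + B*sin^2(theta), estimated trace by trace. The sketch below uses invented Class 3-style numbers, not the Marmara Sea data:

```python
import math

def fit_intercept_gradient(angles_deg, reflectivities):
    """Least-squares fit of the two-term Shuey approximation
    R(theta) = A + B * sin^2(theta) for intercept A and gradient B."""
    x = [math.sin(math.radians(a)) ** 2 for a in angles_deg]
    y = list(reflectivities)
    n = len(x)
    sx, sy = sum(x), sum(y)
    sxx = sum(v * v for v in x)
    sxy = sum(u * v for u, v in zip(x, y))
    B = (n * sxy - sx * sy) / (n * sxx - sx * sx)
    A = (sy - B * sx) / n
    return A, B

# Synthetic gas-sand response: strong negative intercept, amplitude
# becoming more negative with offset (negative gradient)
angles = [5, 10, 15, 20, 25, 30]
true_A, true_B = -0.15, -0.25
refl = [true_A + true_B * math.sin(math.radians(a)) ** 2 for a in angles]
A, B = fit_intercept_gradient(angles, refl)
# A < 0 and B < 0 plot in the Class 3 quadrant of the
# intercept-gradient crossplot
```

Crossplotting A against B for every sample is what separates the background trend from anomalous gas-related points in the method the abstract describes.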
NASA Astrophysics Data System (ADS)
Matsuzawa, H.; Yoshizawa, K.
2017-12-01
Recent high-density broad-band seismic networks allow us to construct improved 3-D upper mantle models with unprecedented horizontal resolution using surface waves. Surface wave dispersion measurements have primarily been based on the analysis of the fundamental mode. Higher-mode information can help enhance the vertical resolution of 3-D models, but higher-mode dispersion analysis is intrinsically difficult, since the wave packets of several modes overlap each other in an observed seismogram. In this study, we measure the phase dispersion of multi-mode surface waves with an array-based analysis. Our method is modeled on a one-dimensional frequency-wavenumber method originally developed by Nolet (1975, GRL), which can be applied to a set of broadband seismic records observed in a linear array along a great circle path. Through this analysis, we obtain a spectrogram in the c-T (phase speed - period) domain, which is characterized by mode-branch dispersion curves and relative spectral powers for each mode. Synthetic experiments indicate that we can separate the modal contributions using a long linear array with a typical array length of about 2000 to 4000 km. The method is applied to a large data set from USArray using nearly 400 seismic events in 2007 - 2014 with Mw 6.5 or greater. Our phase-speed maps for the fundamental-mode Love and Rayleigh waves and the first higher-mode Rayleigh waves match the earlier models well. The phase speed maps reflect typical large-scale features of regional seismic structure in North America, but smaller-scale variations are less well constrained in our model, since our measured phase speeds represent path-averaged features over a long path (a few thousand kilometers). Our multi-mode dispersion measurements can also be used for the extraction of mode-branch waveforms for the first few modes. This can be done by applying a narrow filter around the dispersion curves of a target mode in the c-T spectrogram. 
The mode-branch waveforms can then be reconstructed based on a linear Radon transform (e.g., Luo et al., 2015, GJI). Synthetic experiments suggest that we can successfully retrieve the mode-branch waveforms for several mode branches, which can be used in the secondary analysis for constraining local-scale heterogeneity with enhanced depth resolution.
Fairchild, Gillian M.; Lane, John W.; Voytek, Emily B.; LeBlanc, Denis R.
2013-01-01
This report presents a topographic map of the bedrock surface beneath western Cape Cod, Massachusetts, that was prepared for use in groundwater-flow models of the Sagamore lens of the Cape Cod aquifer. The bedrock surface of western Cape Cod had been characterized previously through seismic refraction surveys and borings drilled to bedrock. The borings were mostly on and near the Massachusetts Military Reservation (MMR). The bedrock surface was first mapped by Oldale (1969), and mapping was updated in 2006 by the Air Force Center for Environmental Excellence (AFCEE, 2006). This report updates the bedrock-surface map with new data points collected by using a passive seismic technique based on the horizontal-to-vertical spectral ratio (HVSR) of ambient seismic noise (Lane and others, 2008) and from borings drilled to bedrock since the 2006 map was prepared. The HVSR method is based on a relationship between the resonance frequency of ambient seismic noise as measured at land surface and the thickness of the unconsolidated sediments that overlie consolidated bedrock. The HVSR method was shown by Lane and others (2008) to be an effective method for determining sediment thickness on Cape Cod owing to the distinct difference in the acoustic impedance between the sediments and the underlying bedrock. The HVSR data for 164 sites were combined with data from 559 borings to bedrock in the study area to create a spatially distributed dataset that was manually contoured to prepare a topographic map of the bedrock surface. The interpreted bedrock surface generally slopes downward to the southeast as was shown on the earlier maps by Oldale (1969) and AFCEE (2006). The surface also has complex small-scale topography characteristic of a glacially eroded surface. More information about the methods used to prepare the map is given in the pamphlet that accompanies this plate.
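The HVSR depth estimate can be illustrated with the common quarter-wavelength approximation, h = Vs / (4 * f0). This is the generic relation, not the site-specific regression of Lane and others (2008), and the shear velocity and resonance frequency below are illustrative values:

```python
def sediment_thickness(f0_hz, vs_m_per_s=250.0):
    """Quarter-wavelength HVSR relation: the resonance frequency of the
    sediment column is f0 = Vs / (4 h), so thickness h = Vs / (4 f0)."""
    return vs_m_per_s / (4.0 * f0_hz)

# A 1.25 Hz HVSR resonance peak with an assumed 250 m/s average
# shear-wave velocity implies 50 m of sediment above bedrock
h = sediment_thickness(1.25)
```

In practice the Vs-f0 relationship is calibrated against nearby borings to bedrock, exactly as the report combines HVSR sites with the 559 borings.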
Phase-Shifted Based Numerical Method for Modeling Frequency-Dependent Effects on Seismic Reflections
NASA Astrophysics Data System (ADS)
Chen, Xuehua; Qi, Yingkai; He, Xilei; He, Zhenhua; Chen, Hui
2016-08-01
Significant velocity dispersion and attenuation have often been observed when seismic waves propagate in fluid-saturated porous rocks. Both the magnitude and the variation of the velocity dispersion and attenuation are frequency-dependent and closely related to the physical properties of the fluid-saturated porous rocks. To explore the effects of frequency-dependent dispersion and attenuation on the seismic responses, we present a numerical method for seismic data modeling based on the diffusive-viscous wave equation (DVWE), which introduces poroelastic theory and takes into account diffusive and viscous attenuation. We derive a phase-shift wave extrapolation algorithm in the frequency-wavenumber domain for implementing the DVWE-based simulation method that can handle simultaneous lateral variations in velocity, diffusive coefficient and viscosity. We then design a distributary-channels model in which a hydrocarbon-saturated sand reservoir is embedded in one of the channels, and calculate synthetic seismic data to comparatively illustrate the seismic frequency-dependent behaviors related to the hydrocarbon-saturated reservoir, employing the DVWE-based method and a conventional acoustic wave equation (AWE) based method, respectively. The synthetic seismic data delineate the intrinsic energy loss, phase delay, lower instantaneous dominant frequency and narrower bandwidth caused by frequency-dependent dispersion and attenuation when a seismic wave travels through the hydrocarbon-saturated reservoir. The numerical modeling method is expected to improve understanding of the features and mechanism of the seismic frequency-dependent effects resulting from hydrocarbon-saturated porous rocks.
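The elementary operation of any phase-shift extrapolation scheme can be shown at normal incidence, where the vertical wavenumber reduces to kz = omega / v and the step is just the Fourier shift theorem. This is the acoustic, lossless kernel only; the DVWE version would use a complex, frequency-dependent kz. The velocity, depth step and trace are invented:

```python
import numpy as np

def phase_shift_extrapolate(U_f, freqs, velocity, dz):
    """One phase-shift step in the frequency domain. At normal incidence
    in a 1D medium, kz = omega / v, so U(z + dz) = U(z) * exp(-i kz dz),
    i.e. a one-way time delay of dz / v."""
    kz = 2.0 * np.pi * freqs / velocity
    return U_f * np.exp(-1j * kz * dz)

# A spike at t = 0.1 s, extrapolated 100 m through a 2000 m/s medium,
# should arrive 0.05 s later (one-way time dz / v = 0.05 s).
dt, n = 0.001, 1024
trace = np.zeros(n)
trace[100] = 1.0  # spike at 0.1 s
U = np.fft.rfft(trace)
freqs = np.fft.rfftfreq(n, dt)
shifted = np.fft.irfft(phase_shift_extrapolate(U, freqs, 2000.0, 100.0), n)
peak_time = np.argmax(shifted) * dt
```

A full simulator applies this step depth slice by depth slice, updating the operator wherever velocity (and, for DVWE, the diffusive and viscous parameters) varies.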
NASA Astrophysics Data System (ADS)
Melkumyan, Mikayel G.
2011-03-01
It is obvious that the problem of precise assessment and/or analysis of seismic hazard (SHA) is quite a serious issue, and seismic risk reduction considerably depends on it. It is well known that there are two approaches in seismic hazard analysis, namely, deterministic (DSHA) and probabilistic (PSHA). The latter utilizes statistical estimates of earthquake parameters. However, they may not exist in a specific region, and using PSHA it is difficult to take into account local aspects, such as specific regional geology and site effects, with sufficient precision. For this reason, DSHA is preferable in many cases. After the destructive 1988 Spitak earthquake, the SHA of the territory of Armenia has been revised and increased. The distribution pattern for seismic risk in Armenia is given. Maximum seismic risk is concentrated in the region of the capital, the city of Yerevan, where 40% of the republic's population resides. We describe the method used for conducting seismic resistance assessment of the existing reinforced concrete (R/C) buildings. Using this assessment, as well as GIS technology, the coefficients characterizing the seismic risk of destruction were calculated for almost all buildings of Yerevan City. The results of the assessment are presented. It is concluded that, presently, there is a particularly pressing need for strengthening existing buildings. We then describe non-conventional approaches to upgrading the earthquake resistance of existing multistory R/C frame buildings by means of Additional Isolated Upper Floor (AIUF) and of existing stone and frame buildings by means of base isolation. In addition, innovative seismic isolation technologies were developed and implemented in Armenia for construction of new multistory multifunctional buildings. The advantages of these technologies are listed in the paper. 
It is worth noting that the aforementioned technologies were successfully applied for retrofitting an existing 100-year-old bank building in Irkutsk (Russia), for retrofit design of an existing 177-year-old municipality building in Iasi (Romania) and for construction of a new clinic building in Stepanakert (Nagorno Karabakh). Short descriptions of these projects are presented. Since 1994 the total number of base and roof isolated buildings constructed, retrofitted or under construction in Armenia, has reached 32. Statistics of seismically isolated buildings are given in the paper. The number of base isolated buildings per capita in Armenia is one of the highest in the world. In Armenia, for the first time in history, retrofitting of existing buildings by base isolation was carried out without interruption in the use of the buildings. The description of different base isolated buildings erected in Armenia, as well as the description of the method of retrofitting of existing buildings which is patented in Armenia (M. G. Melkumyan, patent of the Republic of Armenia No. 579), are also given in the paper.
Seismic Analysis Capability in NASTRAN
NASA Technical Reports Server (NTRS)
Butler, T. G.; Strang, R. F.
1984-01-01
Seismic analysis is a technique which pertains to loading described in terms of boundary accelerations. Earthquake shocks to buildings are the type of excitation which usually comes to mind when one hears the word seismic, but the technique also applies to a broad class of acceleration excitations applied at the base of a structure, such as vibration shaker testing or shocks to machinery foundations. Four different solution paths are available in NASTRAN for seismic analysis: Direct Seismic Frequency Response, Direct Seismic Transient Response, Modal Seismic Frequency Response, and Modal Seismic Transient Response. This capability, at present, is invoked not as separate rigid formats, but as pre-packaged ALTER packets to existing Rigid Formats 8, 9, 11, and 12. These ALTER packets are included with the delivery of the NASTRAN program and are stored on the computer as a library of callable utilities. The user calls one of these utilities and merges it into the Executive Control Section of the data deck; any of the four options is then invoked by setting parameter values in the Bulk Data.
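The "transient response to a base acceleration" loading form that this capability implements can be illustrated on a single-degree-of-freedom analogue. NASTRAN itself is DMAP/Fortran; the Python sketch below, with illustrative names and parameters, simply time-integrates m·u'' + c·u' + k·u = -m·a_g(t) (relative-motion formulation) with the Newmark average-acceleration scheme.

```python
def newmark_response(ag, dt, m=1.0, c=2.0, k=40.0, beta=0.25, gamma=0.5):
    """Relative-displacement history of a damped SDOF oscillator under a
    base-acceleration record ag (one sample per dt seconds), using the
    unconditionally stable Newmark average-acceleration scheme."""
    u, v = 0.0, 0.0
    a = (-m * ag[0] - c * v - k * u) / m
    keff = k + gamma * c / (beta * dt) + m / (beta * dt ** 2)
    hist = [u]
    for g in ag[1:]:
        p = -m * g  # effective seismic load from the base acceleration
        rhs = (p
               + m * (u / (beta * dt ** 2) + v / (beta * dt) + (1 / (2 * beta) - 1) * a)
               + c * (gamma * u / (beta * dt) + (gamma / beta - 1) * v
                      + dt * (gamma / (2 * beta) - 1) * a))
        u_new = rhs / keff
        a_new = (u_new - u) / (beta * dt ** 2) - v / (beta * dt) - (1 / (2 * beta) - 1) * a
        v = v + dt * ((1 - gamma) * a + gamma * a_new)
        u, a = u_new, a_new
        hist.append(u)
    return hist

# a constant base acceleration: the damped response settles to the
# static offset -m*ag/k
h = newmark_response([1.0] * 2001, dt=0.01)
```

The modal solution paths apply the same integration to each decoupled modal equation rather than to the physical coordinates.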
CALIBRATION OF SEISMIC ATTRIBUTES FOR RESERVOIR CHARACTERIZATION
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wayne D. Pennington; Horacio Acevedo; Aaron Green
2002-10-01
The project, "Calibration of Seismic Attributes for Reservoir Characterization," is now complete. Our original proposed scope of work included detailed analysis of seismic and other data from two to three hydrocarbon fields; we have analyzed data from four fields at this level of detail, two additional fields with less detail, and one other 2D seismic line used for experimentation. We also included time-lapse seismic data with ocean-bottom cable recordings in addition to the originally proposed static field data. A large number of publications and presentations have resulted from this work, including several that are in final stages of preparation or printing; one of these is a chapter on "Reservoir Geophysics" for the new Petroleum Engineering Handbook from the Society of Petroleum Engineers. Major results from this project include a new approach to evaluating seismic attributes in time-lapse monitoring studies, evaluation of pitfalls in the use of point-based measurements and facies classifications, novel applications of inversion results, improved methods of tying seismic data to the wellbore, and a comparison of methods used to detect pressure compartments. Some of the data sets used are in the public domain, allowing other investigators to test our techniques or to improve upon them using the same data. From the public-domain Stratton data set we have demonstrated that an apparent correlation between attributes derived along "phantom" horizons is an artifact of isopach changes; only if the interpreter understands that the apparent correlation arises from bed thickening or thinning can reliable interpretations of channel horizons and facies be made.
From the public-domain Boonsville data set we developed techniques to use conventional seismic attributes, including seismic facies generated under various neural network procedures, to subdivide regional facies determined from logs into productive and non-productive subfacies, and we developed a method involving cross-correlation of seismic waveforms to provide a reliable map of the various facies present in the area. The Wamsutter data set led to the use of unconventional attributes, including lateral incoherence and horizon-dependent impedance variations, to indicate regions of former sand bars and current high pressure, respectively, and to the evaluation of various upscaling routines. The Teal South data set provided a surprising set of results, leading us to develop a pressure-dependent velocity relationship and to conclude that nearby reservoirs are undergoing a pressure drop in response to the production of the main reservoir, implying that oil is being lost through their spill points, never to be produced. Additional results were found using the public-domain Waha and Woresham-Bayer data set, and some tests of technologies were made using 2D seismic lines from Michigan and the western Pacific Ocean.
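The waveform cross-correlation used above to map facies can be sketched as a maximum normalized cross-correlation over a small lag range. This is a generic illustration, not the authors' code; the zero-padding and lag range are arbitrary choices of this sketch.

```python
import math

def norm_xcorr_max(trace, ref, max_lag=5):
    """Maximum normalized cross-correlation between a trace segment and a
    reference waveform over lags in [-max_lag, max_lag].  Values near 1
    flag traces whose waveform shape matches the reference facies."""
    n, best = len(ref), -1.0
    for lag in range(-max_lag, max_lag + 1):
        seg = [trace[i + lag] if 0 <= i + lag < len(trace) else 0.0
               for i in range(n)]
        num = sum(a * b for a, b in zip(seg, ref))
        den = math.sqrt(sum(a * a for a in seg) * sum(b * b for b in ref))
        if den > 0.0:
            best = max(best, num / den)
    return best

# a trace containing the reference wavelet shifted by one sample scores ~1
score = norm_xcorr_max([0.0, 0.0, 1.0, 0.0, -1.0, 0.0, 0.0],
                       [0.0, 1.0, 0.0, -1.0, 0.0])
```

Evaluating this score for every trace against a few reference waveforms yields a map of waveform similarity, from which facies boundaries can be drawn.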
Evaluating the Use of Declustering for Induced Seismicity Hazard Assessment
NASA Astrophysics Data System (ADS)
Llenos, A. L.; Michael, A. J.
2016-12-01
The recent dramatic seismicity rate increase in the central and eastern US (CEUS) has motivated the development of seismic hazard assessments for induced seismicity (e.g., Petersen et al., 2016). Standard probabilistic seismic hazard assessment (PSHA) relies fundamentally on the assumption that seismicity is Poissonian (Cornell, BSSA, 1968); therefore, the earthquake catalogs used in PSHA are typically declustered (e.g., Petersen et al., 2014), even though this may remove earthquakes that could cause damage or concern (Petersen et al., 2015; 2016). In some induced earthquake sequences in the CEUS, standard declustering can remove up to 90% of the sequence, reducing the estimated seismicity rate by a factor of 10 compared to estimates from the complete catalog; in tectonic regions the reduction is often only about a factor of 2. We investigate how three declustering methods treat induced seismicity: the window-based Gardner-Knopoff (GK) algorithm, often used for PSHA (Gardner and Knopoff, BSSA, 1974); the link-based Reasenberg algorithm (Reasenberg, JGR, 1985); and a stochastic declustering method based on a space-time Epidemic-Type Aftershock Sequence model (Ogata, JASA, 1988; Zhuang et al., JASA, 2002). We apply these methods to three catalogs that likely contain some induced seismicity. For the Guy-Greenbrier, AR, earthquake swarm from 2010-2013, declustering reduces the seismicity rate by factors of 6-14, depending on the algorithm. In northern Oklahoma and southern Kansas from 2010-2015, the reduction varies from factors of 1.5-20. In the Salton Trough of southern California from 1975-2013, the rate is reduced by factors of 3-20. Stochastic declustering tends to remove the most events, followed by the GK method, while the Reasenberg method removes the fewest. Given that declustering and the choice of algorithm have such a large impact on the resulting seismicity rate estimates, we suggest that more accurate hazard assessments may be obtained using the complete catalog.
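The window-based declustering idea can be sketched in a few lines. This is a heavily reduced illustration of a Gardner-Knopoff-style algorithm: the real method uses tabulated magnitude-dependent space-time windows, whereas here the windows are toy functions supplied by the caller, and all names are this sketch's own.

```python
import math

def gk_decluster(events, space_win, time_win):
    """Window declustering sketch.  events: list of (t_days, x_km, y_km, mag).
    Working down from the largest magnitude, each retained event deletes
    smaller events falling inside its space-time aftershock window."""
    keep = [True] * len(events)
    mainshock = [False] * len(events)
    for i in sorted(range(len(events)), key=lambda i: -events[i][3]):
        if not keep[i]:
            continue
        mainshock[i] = True
        t0, x0, y0, m0 = events[i]
        for j, (t, x, y, m) in enumerate(events):
            if j == i or mainshock[j]:
                continue
            if 0.0 <= t - t0 <= time_win(m0) and \
                    math.hypot(x - x0, y - y0) <= space_win(m0):
                keep[j] = False
    return [e for e, k in zip(events, keep) if k]

# M5 mainshock at the origin, a nearby M3 one day later, and a distant M4
events = [(0.0, 0.0, 0.0, 5.0), (1.0, 1.0, 1.0, 3.0), (2.0, 500.0, 500.0, 4.0)]
declustered = gk_decluster(events,
                           space_win=lambda m: 10.0 + 5.0 * m,  # toy window, km
                           time_win=lambda m: 10.0 * m)         # toy window, days
```

In a swarm-like induced sequence, nearly every event falls inside some larger event's window, which is exactly how the rate reductions quoted above arise.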
NASA Astrophysics Data System (ADS)
Flores-Marquez, Leticia Elsa; Ramirez Rojaz, Alejandro; Telesca, Luciano
2015-04-01
Two statistical approaches are applied to two different types of data sets: the seismicity generated by subduction processes off the southern Pacific coast of Mexico between 2005 and 2012, and synthetic seismic data generated by a stick-slip experimental model. The statistical methods used are the visibility graph, to investigate the time dynamics of the series, and the scaled probability density function in the natural time domain, to investigate the critical order of the system. The comparison aims to show the similarities between the dynamical behaviors of the two types of data sets from the point of view of critical systems. The observed behaviors allow us to conclude that the experimental setup globally reproduces the behavior found when the same statistical approaches are used to analyze the seismicity of the subduction zone. The present study was supported by the Bilateral Project Italy-Mexico "Experimental stick-slip models of tectonic faults: innovative statistical approaches applied to synthetic seismic sequences," jointly funded by MAECI (Italy) and AMEXCID (Mexico) in the framework of the Bilateral Agreement for Scientific and Technological Cooperation PE 2014-2016.
Re-evaluation and updating of the seismic hazard of Lebanon
NASA Astrophysics Data System (ADS)
Huijer, Carla; Harajli, Mohamed; Sadek, Salah
2016-01-01
This paper presents the results of a study undertaken to evaluate the implications of the newly mapped offshore Mount Lebanon Thrust (MLT) fault system on the seismic hazard of Lebanon and the current seismic zoning and design parameters used by the local engineering community. This re-evaluation is critical, given that the MLT is located at close proximity to the major cities and economic centers of the country. The updated seismic hazard was assessed using probabilistic methods of analysis. The potential sources of seismic activities that affect Lebanon were integrated along with any/all newly established characteristics within an updated database which includes the newly mapped fault system. The earthquake recurrence relationships of these sources were developed from instrumental seismology data, historical records, and earlier studies undertaken to evaluate the seismic hazard of neighboring countries. Maps of peak ground acceleration contours, based on 10 % probability of exceedance in 50 years (as per Uniform Building Code (UBC) 1997), as well as 0.2 and 1 s peak spectral acceleration contours, based on 2 % probability of exceedance in 50 years (as per International Building Code (IBC) 2012), were also developed. Finally, spectral charts for the main coastal cities of Beirut, Tripoli, Jounieh, Byblos, Saida, and Tyre are provided for use by designers.
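The two probability-of-exceedance levels quoted above correspond to the familiar return periods under the Poisson assumption, P = 1 - exp(-t/T). A quick check (function name is illustrative):

```python
import math

def return_period(p_exceed, t_years):
    """Mean return period T implied by a Poisson probability of exceedance
    p_exceed over an exposure time t_years: p = 1 - exp(-t/T)."""
    return -t_years / math.log(1.0 - p_exceed)

ubc = return_period(0.10, 50)  # 10% in 50 years (UBC 1997): ~475-year motion
ibc = return_period(0.02, 50)  # 2% in 50 years (IBC 2012): ~2475-year motion
```

This is why the two sets of contour maps in the study represent markedly different ground-motion levels: the IBC maps target a roughly five-times-rarer event.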
NASA Astrophysics Data System (ADS)
Li, Xingxing
2014-05-01
Earthquake monitoring and early warning systems for hazard assessment and mitigation have traditionally been based on seismic instruments. However, for large seismic events it is difficult for traditional seismic instruments to produce accurate and reliable displacements, because of the saturation of broadband seismometers and the problematic integration of strong-motion data. Compared with traditional seismic instruments, GPS can measure arbitrarily large dynamic displacements without saturation, making it particularly valuable for large earthquakes and tsunamis. A GPS relative positioning approach is usually adopted to estimate seismic displacements, since centimeter-level accuracy can be achieved in real time by processing double-differenced carrier-phase observables. However, relative positioning requires a local reference station, which might itself be displaced during a large seismic event, producing misleading GPS analysis results. Meanwhile, the relative/network approach is time-consuming, making the simultaneous, real-time analysis of GPS data from hundreds or thousands of ground stations particularly difficult. In recent years, several single-receiver approaches for real-time GPS seismology, which overcome the reference-station problem of relative positioning, have been successfully developed and applied. One available method is real-time precise point positioning (PPP), which relies on precise satellite orbit and clock products. However, real-time PPP needs a long (re)convergence period, of about thirty minutes, to resolve integer phase ambiguities and achieve centimeter-level accuracy. In comparison with PPP, Colosimo et al. (2011) proposed a variometric approach that determines the change of position between two adjacent epochs; displacements are then obtained by a single integration of the delta positions.
This approach does not suffer from a convergence process, but the single integration from delta positions to displacements is accompanied by a drift due to potentially uncompensated errors. Li et al. (2013) presented a temporal point positioning (TPP) method to quickly capture coseismic displacements with a single GPS receiver in real time. The TPP approach overcomes the convergence problem of PPP and also avoids the integration and de-trending steps of the variometric approach. TPP is demonstrated to achieve displacement accuracy at the few-centimeter level even over twenty-minute intervals with real-time precise orbit and clock products. In this study, we first present and compare the observation models and processing strategies of the existing single-receiver methods for real-time GPS seismology. Furthermore, we propose several refinements to the variometric approach in order to eliminate the drift trend in the integrated coseismic displacements. The mathematical relationship between these methods is discussed in detail and their equivalence is proved. The impact of error components such as satellite ephemeris, ionospheric delay, tropospheric delay, and geometry change on the retrieved displacements is carefully analyzed. Finally, the performance of these single-receiver approaches for real-time GPS seismology is validated using 1 Hz GPS data collected during the Tohoku-Oki earthquake (Mw 9.0, March 11, 2011) in Japan. It is shown that coseismic displacements accurate to a few centimeters are achievable. Keywords: high-rate GPS; real-time GPS seismology; single receiver; PPP; variometric approach; temporal point positioning; error analysis; coseismic displacement; fault slip inversion.
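The drift problem of the variometric approach can be made concrete: epoch-to-epoch deltas carrying even a tiny constant bias integrate into a linear trend. A post-fit linear de-trend removes it, as in the minimal sketch below (names are illustrative; a naive whole-series de-trend like this would also distort any true static offset, which is why the refinements discussed above matter).

```python
def integrate_deltas(deltas, detrend=True):
    """Integrate epoch-to-epoch position changes (variometric approach) into
    a displacement series; optionally remove a best-fit linear drift."""
    disp = [0.0]
    for d in deltas:
        disp.append(disp[-1] + d)
    if not detrend:
        return disp
    n = len(disp)
    t = list(range(n))
    tm, dm = sum(t) / n, sum(disp) / n
    slope = (sum((ti - tm) * (di - dm) for ti, di in zip(t, disp))
             / sum((ti - tm) ** 2 for ti in t))
    return [di - dm - slope * (ti - tm) for ti, di in zip(t, disp)]

# a constant 1 mm/epoch bias integrates into a 12 cm drift over 120 epochs;
# the de-trended series is flat
drift = integrate_deltas([0.001] * 120, detrend=False)
flat = integrate_deltas([0.001] * 120, detrend=True)
```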
Seismic retrofit guidelines for Utah highway bridges.
DOT National Transportation Integrated Search
2009-05-01
Much of Utah's population dwells in a seismically active region, and many of the bridges connecting transportation lifelines predate the rigorous seismic design standards that have been developed in the past 10-20 years. Seismic retrofitting method...
Probabilistic Tsunami Hazard Analysis
NASA Astrophysics Data System (ADS)
Thio, H. K.; Ichinose, G. A.; Somerville, P. G.; Polet, J.
2006-12-01
The recent tsunami disaster caused by the 2004 Sumatra-Andaman earthquake has focused our attention on the hazard posed by large earthquakes that occur under water, in particular subduction zone earthquakes, and the tsunamis that they generate. Even though these kinds of events are rare, the very large loss of life and material destruction caused by this earthquake warrant a significant effort towards the mitigation of the tsunami hazard. For ground motion hazard, Probabilistic Seismic Hazard Analysis (PSHA) has become standard practice in the evaluation and mitigation of seismic hazard to populations, in particular with respect to structures, infrastructure, and lifelines. Its ability to condense the complexities and variability of seismic activity into a manageable set of parameters greatly facilitates the design of effective seismic-resistant buildings as well as the planning of infrastructure projects. Probabilistic Tsunami Hazard Analysis (PTHA) achieves the same goal for hazards posed by tsunamis. There are great advantages to implementing such a method to evaluate the total risk (seismic and tsunami) to coastal communities. The method that we have developed is based on traditional PSHA and is therefore completely consistent with standard seismic practice. Because of the strong dependence of tsunami wave heights on bathymetry, we use full tsunami waveform computations in lieu of the attenuation relations that are common in PSHA. By pre-computing and storing the tsunami waveforms at points along the coast generated for sets of subfaults that comprise larger earthquake faults, we can efficiently synthesize tsunami waveforms for any slip distribution on those faults by summing the individual subfault tsunami waveforms (weighted by their slip).
This efficiency makes it feasible to use Green's function summation in lieu of attenuation relations to provide very accurate estimates of tsunami height for probabilistic calculations, where one typically computes thousands of earthquake scenarios. We have carried out preliminary tsunami hazard calculations for different return periods for western North America and Hawaii, based on thousands of earthquake scenarios around the Pacific rim and along the coast of North America. We will present tsunami hazard maps for several return periods and also discuss how to use these results for probabilistic inundation and runup mapping. Our knowledge of certain types of tsunami sources is very limited (e.g., submarine landslides), but a probabilistic framework for tsunami hazard evaluation can include even such sources and their uncertainties and present the overall hazard in a meaningful and consistent way.
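The Green's function summation described above is linear superposition: the waveform at a coastal point for any slip distribution is the slip-weighted sum of precomputed unit-slip subfault waveforms. A minimal sketch, with made-up numbers:

```python
def tsunami_waveform(subfault_gfs, slips):
    """Slip-weighted sum of precomputed unit-slip subfault waveforms
    (Green's functions), all sampled on the same time axis."""
    nt = len(subfault_gfs[0])
    return [sum(s * gf[i] for s, gf in zip(slips, subfault_gfs))
            for i in range(nt)]

# two subfaults; unit-slip waveforms at one coastal point, computed offline
gf = [[0.0, 0.2, 0.5, 0.1],
      [0.0, 0.1, 0.3, 0.4]]
wave = tsunami_waveform(gf, slips=[2.0, 1.0])  # 2 m and 1 m of slip
```

Because each scenario costs only a weighted sum rather than a new wave-propagation run, thousands of scenarios become tractable for the probabilistic calculation.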
Seismic Response Analysis of an Unanchored Steel Tank under Horizontal Excitation
NASA Astrophysics Data System (ADS)
Rulin, Zhang; Xudong, Cheng; Youhai, Guan
2017-06-01
The seismic performance of liquid storage tanks affects the safety of people's lives and property. A 3-D finite element method (FEM) model of a storage tank is established, which considers the liquid-solid coupling effect. The displacement and stress distribution along the tank wall are then studied under the El Centro earthquake record. Results show that large-amplitude, long-period sloshing appears on the liquid surface. Elephant-foot deformation occurs near the tank bottom, and the maximum hoop and axial stresses appear at the elephant-foot position. The maximum axial compressive stress is very close to the allowable critical stress calculated from the design code, so local buckling failure may occur. The research can provide a reference for the seismic design of storage tanks.
NASA Astrophysics Data System (ADS)
Su, Ray Kai Leung; Lee, Chien-Liang
2013-06-01
This study presents a seismic fragility analysis and ultimate spectral displacement assessment of regular low-rise masonry-infilled (MI) reinforced concrete (RC) buildings using a coefficient-based method. The coefficient-based method does not require a complicated finite element analysis; instead, it is a simplified procedure for assessing the spectral acceleration and displacement of buildings subjected to earthquakes. A regression analysis was first performed to obtain the best-fitting equations for the inter-story drift ratio (IDR) and period shift factor of low-rise MI RC buildings in response to the peak ground acceleration of earthquakes, using published results from shaking table tests. Both spectral acceleration- and spectral displacement-based fragility curves under various damage states (in terms of IDR) were then constructed using the coefficient-based method. Finally, the spectral displacements of low-rise MI RC buildings at the ultimate (or near-collapse) state obtained from this paper and the literature were compared. The simulation results indicate that the fragility curves obtained from this study and previous work correspond well. Furthermore, most of the spectral displacements of low-rise MI RC buildings at the ultimate state from the literature fall within the bounded spectral displacements predicted by the coefficient-based method.
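Fragility curves of the kind constructed above are conventionally modeled as lognormal cumulative distributions in the demand parameter. The sketch below shows that standard form only; the medians, dispersions, and demand values are placeholders, not the paper's fitted coefficients.

```python
import math

def fragility(demand, median, beta):
    """Lognormal fragility curve: probability that a damage state is reached
    or exceeded at a given spectral demand, where median is the damage
    state's median capacity and beta its lognormal dispersion."""
    return 0.5 * (1.0 + math.erf(math.log(demand / median)
                                 / (beta * math.sqrt(2.0))))

# at the median capacity the exceedance probability is exactly 50%
p_half = fragility(0.02, 0.02, 0.5)
# doubling the demand pushes the probability well above 50%
p_more = fragility(0.04, 0.02, 0.5)
```

Evaluating such a curve for each damage state (slight, moderate, extensive, near-collapse) at the IDR-based demand gives the family of fragility curves the abstract describes.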
Elastic-Waveform Inversion with Compressive Sensing for Sparse Seismic Data
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lin, Youzuo; Huang, Lianjie
2015-01-28
Accurate velocity models of compressional- and shear-waves are essential for geothermal reservoir characterization and microseismic imaging. Elastic-waveform inversion of multi-component seismic data can provide high-resolution inversion results for subsurface geophysical properties. However, the method requires seismic data acquired using dense source and receiver arrays. In practice, seismic sources and/or geophones are often sparsely distributed on the surface and/or in a borehole, as in 3D vertical seismic profiling (VSP) surveys. We develop a novel elastic-waveform inversion method with compressive sensing for inversion of sparse seismic data. We employ an alternating-minimization algorithm to solve the optimization problem of our new waveform inversion method. We validate our new method using synthetic VSP data for a geophysical model built using geologic features found at the Raft River enhanced-geothermal-system (EGS) field. We apply our method to synthetic VSP data with a sparse source array and compare the results with those obtained with a dense source array. Our numerical results demonstrate that the velocity models produced with our new method using a sparse source array are almost as accurate as those obtained using a dense source array.
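The abstract does not spell out its alternating-minimization steps, but a common sparsity-promoting sub-step in compressive-sensing inversions is soft-thresholding, the proximal operator of the l1 norm. The sketch below is purely an illustration of that generic ingredient, not the authors' algorithm.

```python
import math

def soft_threshold(x, lam):
    """Shrink each coefficient toward zero by lam and zero out anything
    smaller in magnitude -- the l1 proximal step used inside ISTA-style
    sparse solvers."""
    return [math.copysign(max(abs(v) - lam, 0.0), v) for v in x]

# small coefficients are zeroed, large ones shrunk by lam
shrunk = soft_threshold([3.0, -0.5, 1.0, -2.5], lam=1.0)
```

In an alternating scheme, a step like this on a sparse representation of the model would alternate with a data-misfit minimization over the waveform residuals.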
New Geophysical Techniques for Offshore Exploration.
ERIC Educational Resources Information Center
Talwani, Manik
1983-01-01
New seismic techniques have been developed recently that borrow theory from academic institutions and technology from industry, allowing scientists to explore deeper into the earth with much greater precision than possible with older seismic methods. Several of these methods are discussed, including the seismic reflection common-depth-point…
Seismic detection of increased degassing before Kīlauea's 2008 summit explosion.
Johnson, Jessica H; Poland, Michael P
2013-01-01
The 2008 explosion that started a new eruption at the summit of Kīlauea Volcano, Hawai'i, was not preceded by a dramatic increase in earthquakes or inflation, but was associated with increases in SO2 emissions and seismic tremor. Here we perform shear wave splitting analysis on local earthquakes spanning the onset of the eruption. Shear wave splitting measures seismic anisotropy and is traditionally used to infer changes in crustal stress over time. We show that shear wave splitting may also vary due to changes in volcanic degassing. The orientation of fast shear waves at Kīlauea is usually controlled by structure, but in 2008 it changed with increased SO2 emissions preceding the start of the summit eruption. This interpretation of the changing anisotropy is supported by corresponding decreases in the Vp/Vs ratio. Our result demonstrates a novel method for detecting changes in gas flux using seismic observations and provides a new tool for monitoring under-instrumented volcanoes.
Revision of the Applicability of the NGA's in South America, Chile - Argentina.
NASA Astrophysics Data System (ADS)
Alvarado Lara, F., Sr.; Ledezma, C., Sr.
2015-12-01
In South America, larger-magnitude seismic events originate in the subduction zone between the Nazca and continental plates, as opposed to crustal events. Crustal seismic events are important in areas very close to active fault lines; however, seismic hazard analyses incorporate crustal events only up to a maximum distance from the site under study. In order to use crustal events as part of a seismic hazard analysis, it is necessary to use attenuation relationships that represent the seismic behavior of the site under study. Unfortunately, in South America the amount of compiled historical data on crustal events is not yet sufficient to generate a firm regional attenuation relationship. In the absence of attenuation relationships for crustal earthquakes in the region, the conventional approach is to use attenuation relationships from other regions that have a large amount of compiled data and seismic conditions similar to the site under study. This practice permits seismic hazard analysis work with a certain margin of accuracy. In South American engineering practice, next-generation attenuation relationships (NGA-W) are used, among other alternatives, to incorporate the effect of crustal events in a seismic hazard analysis. In 2014, NGA-W Version 2 (NGA-W2) was presented, with a database containing information from Taiwan, Turkey, Iran, the USA, Mexico, Japan, and Alaska. This paper examines whether it is acceptable to use NGA-W2 in seismic hazard analysis in South America. A comparison between response spectra prepared in accordance with NGA-W2 and actual response spectra of crustal events from Argentina is developed to support the examination. The seismic data were gathered from equipment installed in the cities of Santiago, Chile, and Mendoza, Argentina.
Spatial pattern recognition of seismic events in South West Colombia
NASA Astrophysics Data System (ADS)
Benítez, Hernán D.; Flórez, Juan F.; Duque, Diana P.; Benavides, Alberto; Lucía Baquero, Olga; Quintero, Jiber
2013-09-01
Recognition of seismogenic zones in geographical regions supports seismic hazard studies. This recognition is usually based on visual, qualitative, and subjective analysis of data. Spatial pattern recognition provides a well-founded means to obtain relevant information from large amounts of data. The purpose of this work is to identify and classify spatial patterns in instrumental data from the South-West Colombian seismic database. In this research, a clustering tendency analysis validates whether the seismic database possesses a clustering structure, and a non-supervised fuzzy clustering algorithm creates groups of seismic events. Given the sensitivity of fuzzy clustering algorithms to the initial positions of the centroids, we propose a methodology for initializing centroids that generates partitions stable with respect to that initialization. As a result of this work, a public software tool provides the user with the routines developed for the clustering methodology. The analysis of the seismogenic zones obtained reveals meaningful spatial patterns in South-West Colombia. The clustering analysis provides a quantitative location and dispersion of seismogenic zones that facilitates seismological interpretation of seismic activity in South-West Colombia.
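The non-supervised fuzzy clustering step can be sketched with a minimal fuzzy c-means on 2-D epicentre coordinates. The centroid seeding here (first and last events) is a deterministic placeholder; the paper's own initialization scheme, chosen for partition stability, is more elaborate, and all names are this sketch's own.

```python
import math

def fuzzy_cmeans(points, c=2, m=2.0, iters=40):
    """Minimal fuzzy c-means for 2-D points.  Returns the c centroids;
    m > 1 is the fuzzifier controlling membership softness."""
    cent = [list(points[i * (len(points) - 1) // (c - 1)]) for i in range(c)]
    for _ in range(iters):
        # membership update: u_ik = 1 / sum_j (d_ik / d_jk)^(2/(m-1))
        U = []
        for p in points:
            d = [max(math.dist(p, v), 1e-12) for v in cent]
            U.append([1.0 / sum((d[i] / d[j]) ** (2.0 / (m - 1.0))
                                for j in range(c)) for i in range(c)])
        # centroid update: weighted mean with weights u_ik^m
        for i in range(c):
            w = [u[i] ** m for u in U]
            tot = sum(w)
            cent[i] = [sum(wk * p[dim] for wk, p in zip(w, points)) / tot
                       for dim in range(2)]
    return cent

# two well-separated epicentre clusters recover two centroids
pts = [(0.0, 0.0), (0.1, 0.0), (0.0, 0.1), (0.1, 0.1),
       (5.0, 5.0), (5.1, 5.0), (5.0, 5.1), (5.1, 5.1)]
centroids = sorted(fuzzy_cmeans(pts), key=lambda v: v[0])
```

The converged centroids and the membership spread around them give exactly the quantitative location and dispersion of seismogenic zones that the abstract refers to.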
NASA Astrophysics Data System (ADS)
Craig, M. S.; Kundariya, N.; Hayashi, K.; Srinivas, A.; Burnham, M.; Oikawa, P.
2017-12-01
Near-surface geophysical surveys were conducted in the Sacramento-San Joaquin Delta for earthquake hazard assessment and to provide estimates of peat thickness for use in carbon models. Delta islands have experienced 3-8 meters of subsidence during the past century due to oxidation and compaction of peat. Projected sea level rise over the next century will contribute to an ongoing landward shift of the freshwater-saltwater interface and increase the risk of flooding due to levee failure or overtopping. Seismic shear wave velocity (VS) was measured in the upper 30 meters to determine Uniform Building Code (UBC)/National Earthquake Hazard Reduction Program (NEHRP) site class. Both seismic and ground penetrating radar (GPR) methods were employed to estimate peat thickness. Seismic surface wave surveys were conducted at eight sites on three islands, and GPR surveys were conducted at two of the sites. Combined with sites surveyed in 2015, the new work brings the total number of sites surveyed in the Delta to twenty. Soil boreholes were made at several locations using a hand auger, and peat thickness ranged from 2.1 to 5.5 meters. Seismic surveys were conducted using the multichannel analysis of surface waves (MASW) method and the microtremor array method (MAM). On Bouldin Island, VS of the surficial peat layer was 32 m/s at a site with pure peat and 63 m/s at a site where the peat had higher clay and silt content. Velocities at these sites reached a similar value, about 125 m/s, at a depth of 10 m. GPR surveys performed at two sites on Sherman Island using 100 MHz antennas indicated the base of the peat layer at a depth of about 4 meters, consistent with nearby auger holes. The results of this work include VS depth profiles and UBC/NEHRP site classifications. Seismic and GPR methods may be used in a complementary fashion to estimate peat thickness.
The seismic surface wave method is relatively robust and more effective than GPR in many areas with high clay content or where surface sediments have been disturbed by human activity. GPR does, however, provide significantly higher resolution and better depth control in areas with suitable recording conditions.
NASA Astrophysics Data System (ADS)
de Macedo, Isadora A. S.; da Silva, Carolina B.; de Figueiredo, J. J. S.; Omoboya, Bode
2017-01-01
Wavelet estimation, as well as seismic-to-well tie procedures, is at the core of every seismic interpretation workflow. In this paper we perform a comparative study of wavelet estimation methods for seismic-to-well tie. Two approaches to wavelet estimation are discussed: a deterministic estimation, based on both seismic and well log data, and a statistical estimation, based on predictive deconvolution and the classical assumptions of the convolutional model, which provides a minimum-phase wavelet. Our algorithms for both wavelet estimation methods introduce a semi-automatic approach to determining the optimum parameters of deterministic and statistical wavelet estimation and, further, estimate the optimum seismic wavelet by searching for the highest correlation coefficient between the recorded trace and the synthetic trace when the time-depth relationship is accurate. Tests with numerical data yield qualitative conclusions that are useful for seismic inversion and interpretation of field data, by comparing deterministic and statistical wavelet estimation in detail, especially for the field data example. The feasibility of this approach is verified on real seismic and well data from the Viking Graben field, North Sea, Norway. Our results also show the influence of washout zones in the well log data on the quality of the seismic-to-well tie.
He, W.; Anderson, R.N.
1998-08-25
A method is disclosed for inverting 3-D seismic reflection data obtained from seismic surveys to derive impedance models for a subsurface region, and for inversion of multiple 3-D seismic surveys (i.e., 4-D seismic surveys) of the same subsurface volume, separated in time to allow for dynamic fluid migration, such that small scale structure and regions of fluid and dynamic fluid flow within the subsurface volume being studied can be identified. The method allows for the mapping and quantification of available hydrocarbons within a reservoir and is thus useful for hydrocarbon prospecting and reservoir management. An iterative seismic inversion scheme constrained by actual well log data which uses a time/depth dependent seismic source function is employed to derive impedance models from 3-D and 4-D seismic datasets. The impedance values can be region grown to better isolate the low impedance hydrocarbon bearing regions. Impedance data derived from multiple 3-D seismic surveys of the same volume can be compared to identify regions of dynamic evolution and bypassed pay. Effective Oil Saturation or net oil thickness can also be derived from the impedance data and used for quantitative assessment of prospective drilling targets and reservoir management. 20 figs.
The Research of Multiple Attenuation Based on Feedback Iteration and Independent Component Analysis
NASA Astrophysics Data System (ADS)
Xu, X.; Tong, S.; Wang, L.
2017-12-01
Multiple suppression is a difficult problem in seismic data processing. Traditional multiple-attenuation technology is based on the principle of minimum output energy of the seismic signal; this criterion relies on second-order statistics and cannot attenuate multiples when the primaries and multiples are non-orthogonal. To solve this problem, we combine a wave-equation-based feedback iteration method with an improved independent component analysis (ICA) based on higher-order statistics. We first use the iterative feedback method to predict the free-surface multiples of each order. Then, in order to match the predicted multiples to the true multiples in amplitude and phase, we design an expanded pseudo-multichannel matching filter to obtain a more accurate matching result. Finally, we apply an improved FastICA algorithm, based on a maximum non-Gaussianity criterion for the output signal, to the matched multiples and obtain a better separation of the primaries and the multiples. The advantage of our method is that no a priori information is needed for the multiple prediction, and a better separation result can be achieved. The method has been applied to several synthetic datasets generated by finite-difference modeling and to the Sigsbee2B model multiple data; in these models the primaries and multiples are non-orthogonal. The experiments show that after three to four iterations we obtain accurate multiple predictions. Using our matching method and FastICA adaptive multiple subtraction, we can effectively preserve the primary energy in the seismic records while suppressing the free-surface multiples, especially those related to the middle and deep sections.
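The separation idea, maximizing the non-Gaussianity of the output, can be sketched with a small symmetric FastICA iteration on whitened data. This is a generic two-channel toy example, not the paper's expanded matching filter or seismic records; the mixing matrix and both sources are synthetic stand-ins.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 5000
# two independent non-Gaussian "signals" standing in for primaries and multiples
s1 = np.sign(rng.standard_normal(n)) * rng.standard_normal(n) ** 2   # super-Gaussian
s2 = rng.uniform(-1.0, 1.0, n)                                       # sub-Gaussian
S = np.vstack([s1, s2])
X = np.array([[1.0, 0.6], [0.4, 1.0]]) @ S                           # mixed "records"

# whitening: decorrelate the channels and normalize their variance
X = X - X.mean(axis=1, keepdims=True)
d, E = np.linalg.eigh(np.cov(X))
Z = (E / np.sqrt(d)) @ E.T @ X

# symmetric FastICA fixed-point iterations with a tanh non-linearity
W = rng.standard_normal((2, 2))
for _ in range(200):
    G = np.tanh(W @ Z)
    # fixed-point update: E[g(Wz) z^T] - diag(E[g'(Wz)]) W
    W_new = (G @ Z.T) / n - np.diag((1.0 - G ** 2).mean(axis=1)) @ W
    U_, _, Vt = np.linalg.svd(W_new)
    W = U_ @ Vt                      # symmetric decorrelation: (W W^T)^(-1/2) W
Y = W @ Z                            # estimated independent components
```

Up to sign, order, and scale, the rows of `Y` recover the original sources; in the multiple-subtraction setting the analogous outputs are the separated primary and multiple estimates.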
NASA Astrophysics Data System (ADS)
Hato, M.; Inamori, T.; Matsuoka, T.; Shimizu, S.
2003-04-01
The occurrence of methane hydrates in the Nankai Trough, located off the south-eastern coast of Japan, was confirmed by the exploratory test well drilled by Japan's Ministry of International Trade and Industry in 1999. The confirmation of methane hydrate had a major impact on Japan's future energy strategy, and scientific and technological interest arose from the coring and logging results at the well. Following these results, the Japan National Oil Corporation (JNOC) launched a national project, named MH21, to establish methane hydrate exploration technology and related production and development technologies. As part of the research effort to evaluate the total amount of methane hydrate, Amplitude versus Offset (AVO) analysis was applied to seismic data acquired in the Nankai Trough area. The main purpose of the AVO application is to evaluate the validity of delineating methane hydrate-bearing zones. Since free gas is generally thought to accompany methane hydrate just below the hydrate-bearing zones, AVO has the potential to reveal the presence of free gas. The free gas is thought to be located just below the base of the methane hydrate stability zone, which is characterized by Bottom Simulating Reflectors (BSRs) on the seismic section. In this sense, AVO technology, which was developed as a gas delineation tool, can be utilized for methane hydrate exploration. The result of the AVO analysis clearly shows gas-related anomalies below the BSRs. The AVO anomalies appear in a wide variety of forms; some of them might not correspond to free gas, while others may indicate it. We are now developing a methodology to clearly discriminate free-gas zones from non-gas zones by integrating various seismic methods such as seismic inversion and seismic attribute analysis.
NASA Astrophysics Data System (ADS)
Kunz-Plapp, T.; Khazai, B.; Daniell, J. E.
2012-04-01
This paper presents a new method for modeling health impacts caused by earthquake damage, which allows key social impacts on individual health and health-care systems to be integrated and implemented in quantitative systemic seismic vulnerability analysis. In current earthquake casualty estimation models, demand on health-care systems is estimated by quantifying the number of fatalities and severity of injuries based on empirical data correlating building damage with casualties. The expected number of injured people (sorted by priority of emergency treatment) is combined with the post-earthquake reduction in functionality of health-care facilities such as hospitals to estimate the impact on health-care systems. The aim here is to extend these models by developing a combined engineering and social science approach. Although social vulnerability is recognized as a key component of the consequences of disasters, social vulnerability as such is seldom linked to the formal, quantitative seismic loss estimates of injured people that directly drive demand on emergency health care services. Yet there is a consensus that the factors affecting the vulnerability and post-earthquake health of at-risk populations include demographic characteristics such as age, education, occupation and employment, and that these factors can further aggravate health impacts. Similarly, there are different social influences on the performance of health care systems after an earthquake, on the individual as well as the institutional level. To link the social impacts on health and health-care services to a systemic seismic vulnerability analysis, a conceptual model of the social impacts of earthquakes on health and the health care systems has been developed. We identified and tested appropriate social indicators for individual health impacts and for health care impacts based on a literature review, using available European statistical data.
The results will be used to develop a socio-physical model of systemic seismic vulnerability that enhances the further understanding of societal seismic risk by taking into account social vulnerability impacts for health and health-care system, shelter, and transportation.
NASA Astrophysics Data System (ADS)
Wang, L.; Toshioka, T.; Nakajima, T.; Narita, A.; Xue, Z.
2017-12-01
In recent years, more and more Carbon Capture and Storage (CCS) studies have focused on seismicity monitoring. For the safety management of geological CO2 storage at Tomakomai, Hokkaido, Japan, an Advanced Traffic Light System (ATLS) combining different seismic information (magnitudes, phases, distributions, etc.) is proposed for injection control. The primary task for ATLS is detecting seismic events in a long-term, continuously recorded time series. The time-varying Signal to Noise Ratio (SNR) of a long-term record and the uneven energy distribution of seismic event waveforms make automatic seismic detection difficult; in this work, an improved probabilistic autoregressive (AR) method for automatic seismic event detection is therefore applied. This algorithm, called sequentially discounting AR learning (SDAR), identifies effective seismic events in the time series through Change Point Detection (CPD) on the seismic record. In this method, an anomalous signal (a seismic event) is modeled as a change point in the time series (the seismic record): because of the event's occurrence, the statistical model of the signal changes in the neighborhood of the event point. SDAR thus aims to find the statistical irregularities of the record through CPD. SDAR has three advantages. 1. Noise tolerance: SDAR does not use waveform attributes (such as amplitude, energy, or polarization) for signal detection, so it is an appropriate technique for low-SNR data. 2. Real-time estimation: when new data appear in the record, the probability distribution models are automatically updated by SDAR for on-line processing. 3. Discounting: SDAR introduces a discounting parameter to decrease the influence of older statistics on future estimates, which makes it a robust algorithm for non-stationary signal processing. With these three advantages, the SDAR method can handle non-stationary, time-varying long-term series and achieve real-time monitoring. Finally, we apply SDAR to a synthetic model and to Tomakomai Ocean Bottom Cable (OBC) baseline data to demonstrate the feasibility and advantages of our method.
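The change-point idea behind SDAR can be sketched as follows: fit an AR model online with a discounting (forgetting) factor, and score each new sample by its negative log-likelihood under the current model, so that a statistical regime change produces a burst of high scores. This is a simplified illustration with synthetic data; the exact SDAR formulation, its parameter choices, and the Tomakomai records are not reproduced here.

```python
import numpy as np

def sdar_scores(x, k=2, r=0.02, eps=1e-6):
    """Discounted online AR(k) fit; returns per-sample anomaly scores
    (negative log-likelihood of each new sample under the current model)."""
    n = len(x)
    mu = x[0]                         # discounted mean
    C = np.zeros(k + 1)               # discounted autocovariances c_0..c_k
    C[0] = 1.0
    sigma2 = 1.0                      # discounted prediction-error variance
    scores = np.zeros(n)
    for t in range(k, n):
        mu = (1 - r) * mu + r * x[t]
        xc = x[t] - mu
        for j in range(k + 1):
            C[j] = (1 - r) * C[j] + r * xc * (x[t - j] - mu)
        # Yule-Walker solve for the current AR coefficients
        T = np.array([[C[abs(i - j)] for j in range(k)] for i in range(k)])
        a = np.linalg.solve(T + eps * np.eye(k), C[1:k + 1])
        pred = mu + a @ (x[t - k:t][::-1] - mu)
        err = x[t] - pred
        sigma2 = (1 - r) * sigma2 + r * err ** 2
        scores[t] = 0.5 * np.log(2 * np.pi * sigma2) + err ** 2 / (2 * sigma2)
    return scores

# synthetic record: stationary noise, then an abrupt regime change at sample 1000
rng = np.random.default_rng(0)
x = np.concatenate([rng.standard_normal(1000), rng.standard_normal(1000) + 8.0])
scores = sdar_scores(x)
```

Because the model adapts with rate `r`, the score spikes at the change point and then decays as the new regime is absorbed, which is exactly the behavior a traffic-light trigger thresholds on.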
Seismic response of a full-scale wind turbine tower using experimental and numerical modal analysis
NASA Astrophysics Data System (ADS)
Kandil, Kamel Sayed Ahmad; Saudi, Ghada N.; Eltaly, Boshra Aboul-Anen; El-khier, Mostafa Mahmoud Abo
2016-12-01
Wind turbine technology has developed tremendously over the past years. In Egypt, the Zafarana wind farm currently generates at a capacity of 517 MW, making it one of the largest onshore wind farms in the world. It is located in an active seismic zone along the west side of the Gulf of Suez. Accordingly, seismic risk assessment is required to study the structural integrity of the wind towers under expected seismic hazard events. In the context of the ongoing joint Egypt-US research project "Seismic Risk Assessment of Wind Turbine Towers in Zafarana wind Farm Egypt" (Project ID: 4588), this paper describes the investigation of the dynamic performance of an existing Nordex N43 wind turbine tower. Both the experimental and numerical work are described, explaining the methodology adopted to investigate the dynamic behavior of the tower under seismic load. Field dynamic testing of the full-scale tower was performed using ambient vibration techniques (AVT). Both frequency-domain and time-domain methods were utilized to identify the actual dynamic properties of the tower as built on site. The natural frequencies, their corresponding mode shapes, and the damping ratios of the tower were successfully identified using AVT. A vibration-based finite element model (FEM) was constructed using ANSYS V.12 software, and the numerical and experimental modal analysis results were compared for matching purposes. Using different simulation considerations, the initial FEM was updated until it matched the experimental results with good agreement. Using the final updated FEM, the response of the tower under the AQABA earthquake excitation was investigated; time history analysis was conducted to define the seismic response of the tower in terms of structural stresses and displacements. This work is considered one of the pioneering structural studies of wind turbine towers in Egypt. Identification of the actual dynamic properties of the existing tower was successfully performed based on AVT. Using advanced techniques in both the field testing and the numerical investigations produced a reliable FEM specific to the tested tower, which can be used in more advanced structural investigations to improve the design of such special structures.
NASA Astrophysics Data System (ADS)
Šumanovac, Franjo; Hegedűs, Endre; Orešković, Jasna; Kolar, Saša; Kovács, Attila C.; Dudjak, Darko; Kovács, István J.
2016-06-01
A passive seismic experiment was carried out at the SW contact of the Dinarides and the Pannonian basin to determine the crustal structure and velocity discontinuities. The aim of the experiment was to define the relationship between the Adriatic microplate and the Pannonian segment as a part of the European plate. Most of the temporary seismic stations were deployed in Croatia along the Alp07 profile, part of the active-source ALP 2002 project. The roughly 300-km-long profile stretches from the Istra peninsula to the Drava river, in a WSW-ESE direction. Teleseismic events recorded at 13 temporary seismic stations along the profile were analysed by the P-receiver function method. Two characteristic types of receiver functions (RF) were identified, belonging to the Dinaridic and Pannonian crusts as defined on the Alp07 profile, while the transitional zone shows both types. Three major crustal discontinuities can be identified for the Dinaridic type: the sedimentary basement, an intracrustal discontinuity and the Mohorovičić discontinuity, whereas the Pannonian type reveals only two discontinuities. The intracrustal discontinuity was not observed in the Pannonian type, pointing to a single-layered crust in the Pannonian basin. Two interpretation methods were applied, forward modelling of the receiver functions and the H-κ stacking method, and the results were compared with the active-source seismic data along the deep refraction profile Alp07. The receiver function modelling has given reliable Moho depths that are in accordance with the seismic refraction results at the end of the Alp07 profile, that is, in the area of Pannonian crust characterized by a simple crustal structure and low seismic velocities (Vp between 5.9 and 6.2 km/s). In the Dinarides and their peripheral parts, receiver function modelling regularly gives greater Moho depths, by up to 15 per cent, due to the more complex crustal structure. The Moho depths calculated by the H-κ stacking method vary within wide limits (±13 km), due to the band-limited data of the short-period stations. The results at five stations had to be rejected because of large deviations from all previous results, while at the other seven stations the Moho depths vary within ±15 per cent around the Moho discontinuity of the Alp07 profile.
NASA Astrophysics Data System (ADS)
Wang, Qian; Gao, Jinghuai
2018-02-01
As a powerful tool for hydrocarbon detection and reservoir characterization, the quality factor, Q, provides useful information in seismic data processing and interpretation. In this paper, we propose a novel method for Q estimation. The generalized seismic wavelet (GSW) function is introduced to fit the amplitude spectrum of seismic waveforms with two parameters: a fractional value and a reference frequency. We then derive an analytical relation between the GSW function and the Q factor of the medium. When a seismic wave propagates through a viscoelastic medium, the GSW function can be employed to fit the amplitude spectra of the source and attenuated wavelets; the fractional values and reference frequencies can then be evaluated numerically from the discrete Fourier spectrum. After calculating the peak frequency from the obtained fractional value and reference frequency, the relationship between the GSW function and the Q factor can be established by the conventional peak-frequency-shift method. Synthetic tests indicate that our method achieves higher accuracy and is more robust to random noise than existing methods. Furthermore, the proposed method is applicable to different types of source wavelets. A field data application also demonstrates the effectiveness of our method in estimating seismic attenuation and its potential in reservoir characterization.
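The attenuation model underlying such Q estimation is A2(f) = A1(f) exp(-pi f dt / Q), i.e. high frequencies decay faster, which shifts the spectral peak downward. The paper's GSW-based derivation is not reproduced here; instead, the classical spectral-ratio method built on the same model can be sketched (synthetic spectra; the Gaussian source spectrum is purely illustrative):

```python
import numpy as np

def estimate_q(f, amp1, amp2, dt):
    """Spectral-ratio Q estimate: ln(A2/A1) = -pi*f*dt/Q + const,
    so Q follows from the slope of the log spectral ratio vs frequency."""
    slope, _ = np.polyfit(f, np.log(amp2 / amp1), 1)
    return -np.pi * dt / slope

# synthetic check against the attenuation model itself
f = np.linspace(5.0, 60.0, 50)                 # frequency axis (Hz)
A1 = np.exp(-((f - 30.0) / 15.0) ** 2)         # hypothetical source spectrum
Q_true, dt = 80.0, 0.5                         # true Q and travel-time difference (s)
A2 = A1 * np.exp(-np.pi * f * dt / Q_true)     # attenuated spectrum
q_est = estimate_q(f, A1, A2, dt)
```

On noise-free spectra the fit recovers Q exactly; on real data the usable frequency band must be chosen where both spectra are above the noise floor, which is one motivation for the more robust fitting the paper proposes.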
Poor boy 3D seismic effort yields South Central Kentucky discovery
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hanratty, M.
1996-11-04
Clinton County, Ky., is on the eastern flank of the Cincinnati arch and the western edge of the Appalachian basin and the Pine Mountain overthrust. Clinton County has long been known for high-volume fractured carbonate wells. The discovery of these fractured reservoirs, unfortunately, has historically been serendipitous. The author currently uses 2D seismic and satellite imagery to design 3D high-resolution seismic shoots. This method has proven to be the most efficient and is the core of his program. The paper describes exploration methods, seismic acquisition, the well database, and seismic interpretation.
Studies Of Infrasonic Propagation Using Dense Seismic Networks
NASA Astrophysics Data System (ADS)
Hedlin, M. A.; deGroot-Hedlin, C. D.; Drob, D. P.
2011-12-01
Although there are approximately 100 infrasonic arrays worldwide, more than ever before, the station density is still insufficient to provide validation for detailed propagation modeling. Relatively large infrasonic signals can be observed on seismic channels due to coupling at the Earth's surface. Recent research, using data from the 70-km-spaced 400-station USArray and other seismic network deployments, has shown the value of dense seismic network data for filling in the gaps between infrasonic arrays. The dense sampling of the infrasonic wavefield has allowed us to observe complete travel-time branches of infrasound and address important research problems in infrasonic propagation. We present our analysis of infrasound created by a series of rocket motor detonations that occurred at the UTTR facility in Utah in 2007. These data were well recorded by the USArray seismometers. We use the precisely located blasts to assess the utility of G2S mesoscale models and methods for synthesizing infrasonic propagation. We model the travel times of the branches using a ray-based approach and the complete wavefield using an FDTD algorithm. Although both the ray and FDTD approaches predict the travel times to within several seconds, only about 40% of the signals are predicted using rays, largely due to penetration of sound into shadow zones. FDTD predicts some sound penetration into the shadow zones, but the observed shadow zones, as defined by the seismic data, have considerably narrower spatial extent than either method predicts, perhaps due to un-modeled small-scale structure in the atmosphere.
Revealing small-scale diffracting discontinuities by an optimization inversion algorithm
NASA Astrophysics Data System (ADS)
Yu, Caixia; Zhao, Jingtao; Wang, Yanfei
2017-02-01
Small-scale diffracting geologic discontinuities play a significant role in studying carbonate reservoirs. Their seismic responses are encoded in diffracted/scattered waves. However, compared with reflections, the energy of these valuable diffractions is generally one or even two orders of magnitude weaker, which means that the information carried by diffractions is strongly masked by reflections in seismic images. Detecting small-scale cavities and tiny faults in deep carbonate reservoirs, mainly below 6 km, poses an even bigger challenge, as the recorded diffraction signals are weak and have a low signal-to-noise ratio (SNR). After analyzing the mechanism of the Kirchhoff migration method, we find that the residual of prestack diffractions located in the neighborhood of the first Fresnel aperture remains in the image space. Therefore, a strategy for extracting diffractions in the image space is proposed, and a regularized L2-norm model with a smoothness constraint on the local slopes is suggested for predicting reflections. According to the focusing conditions of the residual diffractions in the image space, two approaches are provided for extracting diffractions. Diffraction extraction can be accomplished directly by subtracting the predicted reflections from the seismic imaging data if the residual diffractions are focused; otherwise, a diffraction velocity analysis is performed to refocus the residual diffractions. Two synthetic examples and one field application demonstrate the feasibility and efficiency of the two proposed methods in detecting small-scale geologic scatterers, tiny faults and cavities.
Machine Learning Method for Pattern Recognition in Volcano Seismic Spectra
NASA Astrophysics Data System (ADS)
Radic, V.; Unglert, K.; Jellinek, M.
2016-12-01
Variations in the spectral content of volcano seismicity related to changes in volcanic activity are commonly identified manually in spectrograms. However, long time series of monitoring data at volcano observatories require tools to facilitate automated and rapid processing. Techniques such as Self-Organizing Maps (SOM), Principal Component Analysis (PCA) and clustering methods can help to quickly and automatically identify important patterns related to impending eruptions. In this study we develop and evaluate an algorithm applied on a set of synthetic volcano seismic spectra as well as observed spectra from Kīlauea Volcano, Hawai`i. Our goal is to retrieve a set of known spectral patterns that are associated with dominant phases of volcanic tremor before, during, and after periods of volcanic unrest. The algorithm is based on training a SOM on the spectra and then identifying local maxima and minima on the SOM 'topography'. The topography is derived from the first two PCA modes so that the maxima represent the SOM patterns that carry most of the variance in the spectra. Patterns identified in this way reproduce the known set of spectra. Our results show that, regardless of the level of white noise in the spectra, the algorithm can accurately reproduce the characteristic spectral patterns and their occurrence in time. The ability to rapidly classify spectra of volcano seismic data without prior knowledge of the character of the seismicity at a given volcanic system holds great potential for real time or near-real time applications, and thus ultimately for eruption forecasting.
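The SOM training at the heart of the algorithm can be sketched with a minimal 1-D map. This is a toy illustration on synthetic spectra, not real volcano data; the PCA-based "topography" analysis of the trained map is omitted, and all parameters below (map size, learning-rate schedule) are illustrative choices.

```python
import numpy as np

def train_som(data, n_nodes=8, n_iter=3000, seed=0):
    """Minimal 1-D SOM: nodes on a line, Gaussian neighborhood,
    linearly decaying learning rate and neighborhood radius."""
    rng = np.random.default_rng(seed)
    W = data[rng.integers(0, len(data), n_nodes)].astype(float)
    for it in range(n_iter):
        x = data[rng.integers(len(data))]
        winner = np.argmin(((W - x) ** 2).sum(axis=1))      # best-matching unit
        lr = 0.5 * (1.0 - it / n_iter)
        sigma = max((n_nodes / 2.0) * (1.0 - it / n_iter), 0.5)
        h = np.exp(-((np.arange(n_nodes) - winner) ** 2) / (2.0 * sigma ** 2))
        W += lr * h[:, None] * (x - W)                       # pull nodes toward x
    return W

def bmu(W, x):
    """Index of the best-matching unit for spectrum x."""
    return int(np.argmin(((W - x) ** 2).sum(axis=1)))

# two synthetic "tremor spectra" patterns with different dominant peaks
bins = np.arange(64)
t1 = np.exp(-((bins - 10) ** 2) / 20.0)
t2 = np.exp(-((bins - 40) ** 2) / 20.0)
rng = np.random.default_rng(1)
data = np.vstack([t + 0.05 * rng.standard_normal(64)
                  for t in (t1, t2) for _ in range(200)])
W = train_som(data)
```

After training, distinct spectral patterns map to distinct nodes, so classifying a new spectrum reduces to a nearest-node lookup, which is what makes the approach fast enough for near-real-time use.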
Tsunami hazard assessment and monitoring for the Black Sea area
NASA Astrophysics Data System (ADS)
Partheniu, Raluca; Ionescu, Constantin; Constantin, Angela; Moldovan, Iren; Diaconescu, Mihail; Marmureanu, Alexandru; Radulian, Mircea; Toader, Victorin
2016-04-01
NIEP has recently advanced its research on tsunamis in the Black Sea. As part of the routine earthquake and tsunami monitoring activity, the first tsunami early-warning system in the Black Sea was implemented in 2013 and has been active since. In order to monitor the seismic activity of the Black Sea, NIEP uses a total of 114 real-time stations and 2 seismic arrays, 18 of the stations being located in the Dobrogea area, near the Romanian Black Sea shoreline. Moreover, there is a data exchange with the countries surrounding the Black Sea involving the acquisition of real-time data from 17 stations in Bulgaria, Turkey, Georgia and Ukraine. This improves the capability of the Romanian Seismic Network to monitor and more accurately locate earthquakes occurring in the Black Sea area. For tsunami monitoring and warning, 6 sea-level monitoring stations, 1 infrasound barometer, 3 offshore marine buoys and 7 GPS/GNSS stations are installed at different locations along and near the Romanian shoreline. In the framework of the ASTARTE project, several objectives regarding the seismic hazard and tsunami wave-height assessment for the Black Sea were accomplished. The seismic hazard estimation was based on statistical studies of the seismic sources and their characteristics, compiled using different seismic catalogues. Two probabilistic methods were used for the evaluation of the seismic hazard: the Cornell method, based on the Gutenberg-Richter distribution parameters, and the Gumbel method, based on extreme-value statistics. The results give the maximum possible magnitudes and their recurrence periods for each seismic source. Using the Tsunami Analysis Tool (TAT) software, a set of tsunami modelling scenarios was generated for the Shabla area, the seismic source that could most affect the Romanian shore. These simulations are organized in a database, in order to establish the maximum tsunami waves that could be generated and the minimum magnitudes that could trigger tsunamis in this area. Some particularities of the Shabla source are past observed magnitudes > 7 and a recurrence period of 175 years. Other important objectives of NIEP are to continue monitoring the seismic activity of the Black Sea, to extend the database of tsunami simulations for this area, to provide near-real-time fault-plane solution estimates for the warning system, and to add new seismic, GPS/GNSS and sea-level monitoring equipment to the existing network. Acknowledgements: This work was partially supported by the FP7 FP7-ENV2013 6.4-3 "Assessment, Strategy And Risk Reduction For Tsunamis in Europe" (ASTARTE) Project 603839/2013 and PNII, Capacity Module III ASTARTE RO Project 268/2014. This work was partially supported by the "Global Tsunami Informal Monitoring Service - 2" (GTIMS2) Project, JRC/IPR/2015/G.2/2006/NC 260286, Ref. Ares (2015)1440256 - 01.04.2015.
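The Gutenberg-Richter ingredients of the Cornell method can be sketched: a b-value estimated from a catalogue by maximum likelihood, and recurrence periods from the magnitude-frequency law log10 N(>=m) = a - b*m. This is a generic illustration with a synthetic catalogue; the a-value and magnitudes below are hypothetical and are not the Shabla source parameters.

```python
import numpy as np

def aki_b_value(mags, mc):
    """Maximum-likelihood b-value (Aki, 1965) for magnitudes >= completeness mc."""
    m = mags[mags >= mc]
    return np.log10(np.e) / (m.mean() - mc)

def recurrence_period(m, a, b):
    """Gutenberg-Richter: annual rate N(>=m) = 10**(a - b*m); period is 1/N years."""
    return 10.0 ** (b * m - a)

# synthetic catalogue: G-R implies exponential magnitudes above completeness mc
rng = np.random.default_rng(0)
mc, b_true = 3.0, 1.0
mags = mc + rng.exponential(np.log10(np.e) / b_true, 20000)
b_est = aki_b_value(mags, mc)
```

With an a-value fitted alongside b, `recurrence_period` gives the mean return time for any target magnitude, which is the quantity tabulated per source in such hazard studies.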
Revision of IRIS/IDA Seismic Station Metadata
NASA Astrophysics Data System (ADS)
Xu, W.; Davis, P.; Auerbach, D.; Klimczak, E.
2017-12-01
Trustworthy data quality assurance has always been one of the goals of seismic network operators and data management centers. This task is considerably complex and continually evolving due to the huge data volumes and the rapidly changing characteristics and complexities of seismic data. Published metadata usually reflect instrument response characteristics and their accuracies, including the zero-frequency sensitivity of both the seismometer and the data logger as well as other, frequency-dependent elements. In this work, we focus mainly on studying the variation of seismometer sensitivity with time for IRIS/IDA seismic recording systems, with the goal of improving the metadata accuracy for the history of the network. There are several ways to measure the accuracy of seismometer sensitivity for seismic stations in service. An effective practice developed recently is to collocate a reference seismometer nearby to verify the calibration of the in-situ sensors. For stations with a secondary broadband seismometer, IRIS's MUSTANG metric computation system introduced a transfer function metric reflecting the gain ratio of the two sensors in the microseism frequency band. In addition, a simulation approach based on M2 tidal measurements has been proposed and proven effective. In this work, we compare and analyze the results from the three different methods, and conclude that the collocated-sensor method is the most stable and reliable, with the smallest uncertainties throughout. For epochs without either a collocated sensor or a secondary seismometer, we rely on the results of the tide method. For IDA station data since 1992, we computed over 600 revised seismometer sensitivities covering all the IRIS/IDA network calibration epochs. Further revision procedures should help guarantee that the metadata of these stations accurately reflect the data.
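The gain-ratio idea behind the collocated-sensor and transfer-function checks can be sketched by comparing the spectral amplitudes of two simultaneous records within a common band. This is a simplified stand-in for MUSTANG's transfer-function metric, not its actual computation; the sampling rate, band edges, and synthetic records are all illustrative.

```python
import numpy as np

def relative_gain(ref, test, fs, band):
    """Median spectral amplitude ratio test/ref within a frequency band,
    as a simple estimate of the relative sensitivity of two collocated sensors."""
    f = np.fft.rfftfreq(len(ref), d=1.0 / fs)
    R = np.abs(np.fft.rfft(ref))
    T = np.abs(np.fft.rfft(test))
    sel = (f >= band[0]) & (f <= band[1]) & (R > 0)
    return float(np.median(T[sel] / R[sel]))

# synthetic collocated records: same ground motion, 10% gain error on sensor 2
rng = np.random.default_rng(0)
ground = rng.standard_normal(4096)
rec1 = ground
rec2 = 1.1 * ground + 0.01 * rng.standard_normal(4096)      # small instrument noise
gain = relative_gain(rec1, rec2, fs=20.0, band=(0.1, 0.3))  # microseism-like band
```

Restricting the ratio to the microseism band, where both sensors record the same strong common signal, keeps instrument self-noise from biasing the estimate; the median makes it robust to individual noisy frequency bins.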
Analysis of the Earthquake Impact towards water-based fire extinguishing system
NASA Astrophysics Data System (ADS)
Lee, J.; Hur, M.; Lee, K.
2015-09-01
Fire-extinguishing systems installed in buildings are now subject to separate performance requirements under earthquake loading: they must continue to function as extinguishing systems before a building collapses during an earthquake. In particular, automatic sprinkler systems must maintain the integrity of their piping even after a large earthquake. In this study, we experimentally assessed the earthquake impact on the piping of water-based fire-extinguishing systems installed in buildings. Test structures fitted with water-based extinguishing piping were built to successive levels of seismic construction and subjected to shaking-table experiments, and the earthquake response of the piping was measured. The acceleration and displacement of the structure induced by the table vibration were measured and compared with the response of the piping, from which the need for seismic reinforcement of the extinguishing water piping was analyzed. Four seismic design categories (SDC) were defined for building structures designed to the Korean seismic code (KBC 2009), according to importance group and earthquake intensity. Seismic analysis for a realistic earthquake showed that, for buildings in seismic design categories A and B, current fire-fighting facilities already provide adequate seismic performance. For seismic design categories C and D, a corresponding level of seismic retrofit design is required to preserve the extinguishing function of systems installed in buildings.
NASA Astrophysics Data System (ADS)
Okay, S.; Cifci, G.; Ozel, S.; Atgin, O.; Ozel, O.; Barin, B.; Er, M.; Dondurur, D.; Kucuk, M.; Gurcay, S.; Choul Kim, D.; Sung-Ho, B.
2012-04-01
Recently, the continental margins of the Black Sea have become important for their gas content. Apart from oil-company exploration, no scientific research had been carried out offshore of the Trabzon-Giresun area; this is the first survey performed there. A total of 1700 km of high-resolution multichannel seismic and chirp data were collected simultaneously onboard the R/V K. Piri Reis. The seismic data reveal BSRs, bright spots and acoustic masking, especially in the eastern part of the survey area. The survey area in the Eastern Black Sea includes the continental slope, apron and deep basin. Two mud volcanoes were discovered and named Busan and Izmir. The observed fold belt is believed to be the main driving force for the growth of the mud volcanoes. Faults have developed at the flanks of the diapiric uplift. Seismic attributes and AVO analysis were applied to 9 seismic sections containing probable gassy sediments and BSR zones. In the seismic attribute analysis, high-amplitude horizons with reverse polarity are observed in the instantaneous frequency, envelope and apparent polarity sections, together with low frequencies in the instantaneous frequency sections. These analyses verify the existence of gas accumulations in the sediments. AVO analysis, cross-section drawing and gradient analysis show a Class 1 AVO anomaly and indicate gas in the sediments. Keywords: BSR, Bright spot, Mud volcano, Seismic Attributes, AVO
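The instantaneous attributes mentioned above (envelope and instantaneous frequency) are conventionally derived from the analytic signal of a trace. A minimal single-trace sketch; the function name and the synthetic wavelet are illustrative assumptions, not data from the survey.

```python
import numpy as np
from scipy.signal import hilbert

def seismic_attributes(trace, dt):
    """Envelope and instantaneous frequency (Hz) from the analytic signal."""
    analytic = hilbert(trace)                    # trace + i * Hilbert(trace)
    envelope = np.abs(analytic)
    phase = np.unwrap(np.angle(analytic))
    inst_freq = np.gradient(phase, dt) / (2.0 * np.pi)
    return envelope, inst_freq

# Synthetic check: a 25 Hz monochromatic trace should show ~25 Hz
# instantaneous frequency everywhere.
dt = 0.002
t = np.arange(0, 1, dt)
trace = np.sin(2 * np.pi * 25 * t)
env, ifreq = seismic_attributes(trace, dt)
print(round(float(np.median(ifreq)), 1))  # → 25.0
```

Apparent polarity would then be read from the sign of the trace at envelope maxima; a bright spot with reversed polarity shows up as a strong envelope peak of opposite sign to the background reflectivity.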
MASW Seismic Method in Brebu Landslide Area, Romania
NASA Astrophysics Data System (ADS)
Mihai, Marinescu; Paul, Cristea; Cristian, Marunteanu; Matei, Mezincescu
2017-12-01
This paper is focused on assessing the possibility of enhancing the geotechnical information in perimeters with landslides, especially through applications of the Multichannel Analysis of Surface Waves (MASW) method. The technology enables the determination of the phase velocities of Rayleigh waves and, in turn, the evaluation of shear-wave velocities (Vs) as a function of depth. Finally, using longitudinal-wave velocities (Vp) derived from seismic refraction measurements, in-situ dynamic elastic properties in a shallow section can be obtained. The investigation was carried out in the Brebu landslide (3-5 m depth to bedrock), located on the southern flank of the Slanic Syncline (110 km north of Bucharest), and included a drilling program and geotechnical laboratory observations. The seismic refraction records (with seismic sources placed at the centre, ends and outside of the geophone spread) were acquired along two lines (23 m and 46 m long, respectively), approximately perpendicular to the downslope direction of the landslide and on different local morpho-structures. A Geode Geometrics seismograph was set for a 1 ms sampling rate and real-time pulse summation over five blows. Twenty-four vertical Geometrics SpaceTech geophones (14 Hz resonance frequency) were deployed at 1 m spacing. The seismic source was the impact of an 8 kg sledgehammer on a metal plate. Regarding seismic data processing, the distinctive feature is the more detailed analysis of the MASW records. The proposed procedure consists of splitting the spread into groups with fewer receivers, with several geophone intervals overlapping between groups. A 2D Fourier analysis, yielding the f-k (frequency-wavenumber) spectrum for each of these groups, ensures continuity of the information and, moreover, accuracy in picking the amplitude maxima of the f-k spectra.
Finally, by combining the VS values (calculated from 2D spectral analyses of Rayleigh waves) with the VP values (obtained from the seismic refraction records), plots of the depth evolution of mean geodynamic parameters were constructed. Differences in parameter values relevant to slope stability are revealed: the lowest values of VS and of both the shear and longitudinal elastic moduli occur in the landslide rock mass, in contrast with the stable ground, where the highest values of the same parameters are found. Intermediate values occur above the main sliding plane, in the zone classified as unstable.
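The f-k picking step described above can be sketched as a 2D FFT of a shot gather followed by an amplitude-maximum search along the row of the target frequency. This is a minimal sketch under idealized assumptions (a single non-dispersive plane wave, zero-padded spectra); the function name and synthetic gather are illustrative only.

```python
import numpy as np

def fk_peak_velocity(gather, dt, dx, f_target, nf=500, nk=1000):
    """Pick the phase velocity at f_target from the amplitude maximum of the
    zero-padded f-k spectrum of a gather shaped (n_time, n_receivers)."""
    spec = np.abs(np.fft.fft2(gather, s=(nf, nk)))
    freqs = np.fft.fftfreq(nf, d=dt)
    ks = np.fft.fftfreq(nk, d=dx)
    fi = int(np.argmin(np.abs(freqs - f_target)))
    row = spec[fi].copy()
    row[np.abs(ks) < 1e-12] = 0.0        # exclude the k = 0 bin
    ki = int(np.argmax(row))
    return freqs[fi] / abs(ks[ki])       # c = f / |k|

# Synthetic gather: a 14 Hz wave crossing 24 geophones (1 m spacing) at 400 m/s.
dt, dx, v, f = 0.001, 1.0, 400.0, 14.0
t = np.arange(0, 0.5, dt)
x = np.arange(24) * dx
gather = np.sin(2 * np.pi * f * (t[:, None] - x[None, :] / v))
print(round(fk_peak_velocity(gather, dt, dx, f)))  # → 400
```

For dispersive Rayleigh waves the same pick, repeated over frequency, traces out the dispersion curve that is subsequently inverted for Vs.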
Source-Type Identification Analysis Using Regional Seismic Moment Tensors
NASA Astrophysics Data System (ADS)
Chiang, A.; Dreger, D. S.; Ford, S. R.; Walter, W. R.
2012-12-01
Waveform inversion to determine the seismic moment tensor is a standard approach to determining the source mechanism of natural and manmade seismicity, and may be used to identify, or discriminate, different types of seismic sources. The successful applications of the regional moment tensor method at the Nevada Test Site (NTS) and to the 2006 and 2009 North Korean nuclear tests (Ford et al., 2009a, 2009b, 2010) show that the method is robust and capable of source-type discrimination at regional distances. The well-separated populations of explosions, earthquakes and collapses on a Hudson et al. (1989) source-type diagram enable source-type discrimination; however, the question remains whether the separation of events holds in other regions, where we have limited station coverage and knowledge of Earth structure. Ford et al. (2012) have shown that combining regional waveform data and P-wave first motions removes the CLVD-isotropic tradeoff and uniquely discriminates the 2009 North Korean test as an explosion. Therefore, including additional constraints from regional and teleseismic P-wave first motions enables source-type discrimination in regions with limited station coverage. We present moment tensor analysis of earthquakes and explosions (M6) from the Lop Nor and Semipalatinsk test sites for station paths crossing Kazakhstan and western China. We also present analyses of smaller events from industrial sites. In these sparse-coverage situations we combine regional long-period waveforms and high-frequency P-wave polarities from the same stations, as well as from teleseismic arrays, to constrain the source type. Discrimination capability with respect to velocity model and station coverage is examined, and additionally we investigate the velocity-model dependence of vanishing free-surface-traction effects on seismic moment tensor inversion of shallow sources and on recovery of the explosive scalar moment.
Our synthetic data tests indicate that biases in scalar seismic moment and discrimination for shallow sources are small and can be understood in a systematic manner. We are presently investigating the frequency dependence of vanishing traction for a very shallow (10 m depth) M2+ chemical explosion recorded at distances of several kilometers; preliminary results indicate that, in the typical frequency passband we employ, the bias does not affect our ability to retrieve the correct source mechanism, but it may affect the retrieval of the correct scalar seismic moment. Finally, we assess discrimination capability in a composite P-value statistical framework.
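At its core, linear moment tensor inversion is a least-squares fit of observed waveforms to a linear combination of Green's functions, one column per independent moment tensor element. A minimal sketch, not the authors' full regional method (no P-wave polarity constraints and no source-type decomposition); the function name and the random "Green's functions" are assumptions for the demo.

```python
import numpy as np

def invert_moment_tensor(G, d):
    """Linear least-squares moment tensor inversion of d = G m, where the
    columns of G are Green's-function waveforms for the 6 independent
    moment tensor elements and d is the stacked observed waveform vector."""
    m, *_ = np.linalg.lstsq(G, d, rcond=None)
    return m

# Synthetic check: recover a known isotropic (explosion-like) source vector.
rng = np.random.default_rng(4)
G = rng.standard_normal((600, 6))                   # stacked station waveforms
m_true = np.array([1.0, 1.0, 1.0, 0.0, 0.0, 0.0])   # Mxx=Myy=Mzz, no deviatoric part
d = G @ m_true + 0.01 * rng.standard_normal(600)    # observations with noise
m_est = invert_moment_tensor(G, d)
print(np.allclose(m_est, m_true, atol=0.01))  # → True
```

A recovered m can then be decomposed into isotropic and deviatoric parts and plotted on a source-type diagram for discrimination.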
NASA Astrophysics Data System (ADS)
Calò, M.; Parisi, L.
2014-10-01
The Sicily Channel is a portion of the Mediterranean Sea between Sicily (southern Italy) and Tunisia, representing part of the foreland of the Apennine-Maghrebian thrust belt. The seismicity of the region is commonly associated with normal faulting related to the rifting process and with the volcanic activity of the region. However, certain seismic patterns suggest the existence of some mechanism coexisting with the rifting process. In this work, we present the results of a statistical analysis of the instrumental seismicity and a reliable relocation of the events recorded in the last 30 yr in the Sicily Channel and western Sicily, using the Double Difference method and 3-D Vp and Vs tomographic models. Our procedure allows us to discern the seismic regime of the Sicily Channel from the Tyrrhenian one and to describe the main features of an active fault zone in the study area that cannot be related to the rifting process. We report that most of the events are highly clustered in the region between 12.5°-13.5°E and 35.5°-37°N, with hypocentral depths of 5-40 km, reaching 70 km in the southernmost sector. The alignment of the seismic clusters, the distribution of volcanic and geothermal regions and the location of some large events that occurred in the last century suggest the existence of a subvertical shear zone extending for at least 250 km and oriented approximately NNE-SSW. The spatial distribution of the seismic moment suggests that this transfer fault zone is seismically discontinuous, showing large seismic gaps in the proximity of Ferdinandea Island and the Graham and Nameless Banks.
McBride, J.H.; Stephenson, W.J.; Williams, R.A.; Odum, J.K.; Worley, D.M.; South, J.V.; Brinkerhoff, A.R.; Keach, R.W.; Okojie-Ayoro, A. O.
2010-01-01
Integrated vibroseis compressional and experimental hammer-source, shear-wave, seismic reflection profiles across the Provo segment of the Wasatch fault zone in Utah reveal near-surface and shallow bedrock structures caused by geologically recent deformation. Combining information from the seismic surveys, geologic mapping, terrain analysis, and previous seismic first-arrival modeling provides a well-constrained cross section of the upper ~500 m of the subsurface. Faults are mapped from the surface, through shallow, poorly consolidated deltaic sediments, and cutting through a rigid bedrock surface. The new seismic data are used to test hypotheses on changing fault orientation with depth, the number of subsidiary faults within the fault zone and the width of the fault zone, and the utility of integrating separate elastic methods to provide information on a complex structural zone. Although previous surface mapping has indicated only a few faults, the seismic section shows a wider and more complex deformation zone with both synthetic and antithetic normal faults. Our study demonstrates the usefulness of a combined shallow and deeper penetrating geophysical survey, integrated with detailed geologic mapping to constrain subsurface fault structure. Due to the complexity of the fault zone, accurate seismic velocity information is essential and was obtained from a first-break tomography model. The new constraints on fault geometry can be used to refine estimates of vertical versus lateral tectonic movements and to improve seismic hazard assessment along the Wasatch fault through an urban area. We suggest that earthquake-hazard assessments made without seismic reflection imaging may be biased by the previous mapping of too few faults. © 2010 Geological Society of America.
Contemporary Tectonics of China
1978-02-01
that it would be of value to the United States to understand seismicity in China because their methods used in predicting large intraplate seismic...ability to discriminate between natural events and nuclear explosions. General Method In order to circumvent the limitations placed on studies of...accurate relative locations. Fault planes may be determined with this method, thereby removing the ambiguity of the choice of fault plane from a fault plane
NASA Astrophysics Data System (ADS)
Volpe, M.; Selva, J.; Tonini, R.; Romano, F.; Lorito, S.; Brizuela, B.; Argyroudis, S.; Salzano, E.; Piatanesi, A.
2016-12-01
Seismic Probabilistic Tsunami Hazard Analysis (SPTHA) is a methodology to assess the exceedance probability of different thresholds of tsunami hazard intensity, at a specific site or region in a given time period, due to seismic sources. A large number of high-resolution inundation simulations is typically required to take into account the full variability of potential seismic sources and their slip distributions. Starting from regional SPTHA offshore results, the computational cost can be reduced by considering only a subset of `important' scenarios for the inundation calculations. Here we use a method based on an event tree for the treatment of seismic source aleatory variability; a cluster analysis of the offshore results to define the important sources; and an ensemble modeling approach for the treatment of epistemic uncertainty. We consider two target sites in the Mediterranean (Milazzo, Italy, and Thessaloniki, Greece) where coastal (non-nuclear) critical infrastructures (CIs) are located. After performing a regional SPTHA covering the whole Mediterranean, for each target site a few hundred representative scenarios are filtered out of all the potential seismic sources and the tsunami inundation is explicitly modeled, yielding a site-specific SPTHA with a complete characterization of the tsunami hazard in terms of flow depth and velocity time histories. Moreover, we also explore the variability of SPTHA at the target site accounting for coseismic deformation (i.e. uplift or subsidence) due to near-field sources located in very shallow water. The results are suitable for, and will be applied to, subsequent multi-hazard risk analysis for the CIs. These applications have been developed in the framework of the Italian Flagship Project RITMARE, the EC FP7 ASTARTE (Grant agreement 603839) and STREST (Grant agreement 603389) projects, and the INGV-DPC Agreement.
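The scenario-filtering idea (cluster the offshore results, keep one representative per cluster) can be sketched with a tiny k-means. This is an assumption-laden illustration, not the paper's actual cluster analysis; the function name, feature vectors and deterministic initialization are invented for the demo.

```python
import numpy as np

def representative_scenarios(features, k, n_iter=50):
    """Tiny k-means on scenario feature vectors (e.g. offshore tsunami-height
    profiles); returns the index of the scenario closest to each centroid."""
    init = np.linspace(0, len(features) - 1, k).astype(int)  # evenly spaced seeds
    centroids = features[init].astype(float)
    for _ in range(n_iter):
        d2 = ((features[:, None, :] - centroids[None, :, :]) ** 2).sum(-1)
        labels = np.argmin(d2, axis=1)
        for j in range(k):
            if np.any(labels == j):
                centroids[j] = features[labels == j].mean(axis=0)
    reps = [int(np.argmin(((features - c) ** 2).sum(-1))) for c in centroids]
    return sorted(set(reps))

# Synthetic check: two distinct families of offshore profiles → 2 representatives.
rng = np.random.default_rng(1)
small = rng.normal(0.5, 0.05, (20, 6))   # 20 scenarios with small offshore heights
large = rng.normal(3.0, 0.05, (20, 6))   # 20 scenarios with large offshore heights
feats = np.vstack([small, large])
reps = representative_scenarios(feats, k=2)
print(len(reps))  # → 2
```

Only the returned representative scenarios would then be run through the expensive high-resolution inundation simulations.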
NASA Astrophysics Data System (ADS)
Chan, J. H.; Richardson, I. S.; Strayer, L. M.; Catchings, R.; McEvilly, A.; Goldman, M.; Criley, C.; Sickler, R. R.
2017-12-01
The Hayward Fault Zone (HFZ) includes the Hayward fault (HF), as well as several named and unnamed subparallel, subsidiary faults to the east, among them the Quaternary-active Chabot Fault (CF), the Miller Creek Fault (MCF), and a heretofore unnamed fault, the Redwood Thrust Fault (RTF). With a ≥M6.0 recurrence interval of 130 y for the HF and the last major earthquake in 1868, the HFZ is a major seismic hazard in the San Francisco Bay Area, exacerbated by the many unknown and potentially active secondary faults of the HFZ. In 2016, researchers from California State University, East Bay, working in concert with the United States Geological Survey, conducted the East Bay Seismic Investigation (EBSI). We deployed 296 RefTek RT125 (Texan) seismographs along a 15-km-long linear seismic profile across the HF, extending from the bay in San Leandro to the hills in Castro Valley. Two-channel seismographs were deployed at 100 m intervals to record P- and S-waves, and additional single-channel seismographs were deployed at 20 m intervals where the seismic line crossed mapped faults. The active-source survey consisted of 16 buried explosive shots located at approximately 1-km intervals along the seismic line. We used the Multichannel Analysis of Surface Waves (MASW) method to develop 2-D shear-wave velocity models across the CF, MCF, and RTF. Preliminary MASW analyses show areas of anomalously low S-wave velocities, indicating zones of reduced shear modulus, coincident with these three mapped faults; additional velocity anomalies coincide with unmapped faults within the HFZ. Such compliant zones likely correspond to heavily fractured rock surrounding the faults, where the shear modulus is expected to be low compared with that of the undeformed host rock.
Surface-Wave Relocation of Remote Continental Earthquakes
NASA Astrophysics Data System (ADS)
Kintner, J. A.; Ammon, C. J.; Cleveland, M.
2017-12-01
Accurate hypocenter locations are essential for seismic event analysis. Single-event location estimation methods provide relatively imprecise results in remote regions with few nearby seismic stations. Previous work has demonstrated that improved relative epicentroid precision in oceanic environments is obtainable using surface-wave cross-correlation measurements. We use intermediate-period regional and teleseismic Rayleigh and Love waves to estimate relative epicentroid locations of moderately sized seismic events in regions around Iran. Variations in faulting geometry, depth, and intermediate-period dispersion make surface-wave-based event relocation challenging across this broad continental region. We compare and integrate surface-wave-based relative locations with InSAR centroid location estimates. However, mapping an earthquake-sequence mainshock to an InSAR fault-deformation-model centroid is not always a simple process, since the InSAR observations are sensitive to post-seismic deformation. We explore these ideas using earthquake sequences in western Iran. We also apply surface-wave relocation to smaller-magnitude earthquakes (3.5 < M < 5.0). Inclusion of smaller-magnitude seismic events in a relocation effort requires a shift in bandwidth to shorter periods, which increases the sensitivity of the relocations to surface-wave dispersion. Frequency-domain inter-event phase observations are used to understand the time-domain cross-correlation information and to choose the appropriate band for applications using shorter periods. Over short inter-event distances, the changing group velocity does not strongly degrade the relative locations. For small-magnitude seismic events in continental regions, surface-wave relocation does not appear simple enough to allow broad routine application, but using this method to analyze individual earthquake sequences can provide valuable insight into earthquake and faulting processes.
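The basic measurement underlying such relocations is a delay-time estimate from the cross-correlation peak of two events' surface waves at a common station. A minimal sketch with a synthetic Rayleigh-like wave packet; the function name and parameters are illustrative, not from the study.

```python
import numpy as np

def cc_delay(w1, w2, dt):
    """Relative delay (s) of w2 with respect to w1 from the peak of their
    full cross-correlation (positive means w2 arrives later)."""
    cc = np.correlate(w2, w1, mode="full")
    lag = int(np.argmax(cc)) - (len(w1) - 1)
    return lag * dt

# Synthetic check: the same dispersed wave packet, 1.5 s later at the station.
dt = 0.05
t = np.arange(0, 60, dt)
packet = np.exp(-((t - 20) / 4) ** 2) * np.sin(2 * np.pi * 0.08 * t)
later = np.roll(packet, int(1.5 / dt))
print(round(cc_delay(packet, later, dt), 3))  # → 1.5
```

Collected over many stations with good azimuthal coverage, such delays constrain the relative epicentroid vector between the two events; differing dispersion along the two paths is what makes the continental case harder than the oceanic one.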
NASA Astrophysics Data System (ADS)
Morales, L. E. A. P.; Aguirre, J.; Vazquez Rosas, R.; Suarez, G.; Contreras Ruiz-Esparza, M. G.; Farraz, I.
2014-12-01
Methods that use seismic noise or microtremors have become very useful tools worldwide due to their low cost, the relative simplicity of collecting data, the fact that they are non-invasive (there is no need to alter or even drill the study site), and their relatively simple analysis procedures. Nevertheless, the geological structures estimated by these methods are assumed to consist of parallel, isotropic and homogeneous layers; consequently, the precision of the estimated structure is lower than that of conventional seismic methods. In light of these facts, this study aimed at a new way of interpreting the results obtained from seismic noise methods. In this study, seven triangular SPAC (Aki, 1957) arrays were deployed in the city of Coatzacoalcos, Veracruz, varying in size from 10 to 100 meters. From the autocorrelation between the stations of each array, a Rayleigh-wave phase-velocity dispersion curve was calculated. This dispersion curve was used to obtain a parallel-layered S-wave velocity (VS) structure for the study site. Subsequently, the horizontal-to-vertical spectral ratio of microtremors, H/V (Nogoshi and Igarashi, 1971; Nakamura, 1989, 2000), was calculated for each vertex of the SPAC triangular arrays, and from the H/V spectrum the fundamental frequency was estimated at each vertex. Using the H/V spectral ratio curves, interpreted as a proxy for the Rayleigh-wave ellipticity curve, a series of VS structures was inverted for each vertex of the SPAC arrays. Lastly, the VS structures were used to build a 3D velocity model, with an exploration depth of approximately 100 meters and velocities ranging from 206 m/s to 920 m/s. The 3D model revealed a thinning of the low-velocity layers, which proved to be in good agreement with the variation of the fundamental frequencies observed at each vertex.
With this kind of analysis, a preliminary model can be obtained as a first approximation, so that more detailed studies can then be conducted toward a thorough geological characterization of a specific site. The continuous development of microtremor methods creates many areas of interest in the earthquake engineering field, which is one of the reasons these methods have gained presence all over the globe.
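The SPAC inversion step rests on the relation that, for an isotropic microtremor wavefield, the azimuthally averaged coherence at sensor spacing r follows J0(2·pi·f·r/c(f)), where c(f) is the Rayleigh-wave phase velocity. A minimal single-spacing sketch via grid search; the function name, spacing and velocity grid are assumptions for illustration.

```python
import numpy as np
from scipy.special import j0

def spac_phase_velocity(coherence, freqs, r, c_grid):
    """Grid-search the phase velocity c whose Bessel model J0(2*pi*f*r/c)
    best fits the azimuthally averaged SPAC coherence at sensor spacing r."""
    misfit = [np.sum((coherence - j0(2 * np.pi * freqs * r / c)) ** 2)
              for c in c_grid]
    return c_grid[int(np.argmin(misfit))]

# Synthetic check: coherences generated with c = 400 m/s at r = 30 m spacing.
r = 30.0
freqs = np.linspace(1.0, 10.0, 40)
coh = j0(2 * np.pi * freqs * r / 400.0)
c_grid = np.arange(200.0, 921.0, 1.0)    # search the 200-920 m/s range
print(spac_phase_velocity(coh, freqs, r, c_grid))  # → 400.0
```

Repeating the fit frequency by frequency (or jointly over the several array sizes) yields the dispersion curve that is then inverted for the layered VS structure.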
NASA Astrophysics Data System (ADS)
Firtana Elcomert, Karolin; Kocaoglu, Argun
2014-05-01
Sedimentary basins affect the propagation characteristics of seismic waves and cause significant ground-motion amplification during an earthquake. While the impedance contrast between the sedimentary layer and bedrock predominantly controls the resonance frequencies and their amplitudes (seismic amplification), surface waves generated within the basin make the waveforms more complex and longer in duration. When a dense network of weak- and/or strong-motion sensors is available, site effects, or more specifically sedimentary basin amplification, can be estimated directly from observations, provided that significant earthquakes occur during the period of study. Alternatively, site effects can be investigated through simulation of ground motion. The objective of this study is to investigate the 2-D site effect in the Izmit Basin, located in the eastern Marmara region of Turkey, using the currently available bedrock topography and shear-wave velocity data. The Izmit Basin was formed in the Plio-Quaternary period and is known to be a pull-apart basin controlled by the northern branch of the North Anatolian Fault Zone (Şengör et al. 2005). A thorough analysis of seismic hazard is important since the city of Izmit and its metropolitan area are located in this region. Using a spectral-element code, SPECFEM2D (Komatitsch et al. 1998), this work presents some preliminary results of 2-D seismic wave propagation simulations for the Izmit Basin. The spectral-element method allows accurate and efficient simulation of seismic wave propagation due to its advantages over other numerical modeling techniques in the representation of the wavefield and the computational mesh. The preliminary results of this study suggest that seismic wave propagation simulations give some insight into the site amplification phenomena in the Izmit Basin.
Comparison of seismograms recorded on top of the sedimentary layer with those recorded on the bedrock shows more complex waveforms with higher amplitudes on the seismograms recorded at the free surface. Furthermore, the modeling reveals that the observed seismograms include surface waves whose excitation is clearly related to the basin geometry.
Exploring seismicity using geomagnetic and gravity data - a case study for Bulgaria
NASA Astrophysics Data System (ADS)
Trifonova, P.; Simeonova, S.; Solakov, D.; Metodiev, M.
2012-04-01
Seismicity exploration certainly requires comprehensive analysis of the location, orientation and length distribution of fault and block systems with a variety of geophysical methods. In the present research, geomagnetic and gravity anomaly data are used to reveal buried structures in the Earth's upper layers. The interpretation of gravity and magnetic data is well known and often applied to delineate geological structures such as faults, flexures, thrusts, and the borders of dislocated blocks, which create significant rock density contrasts in horizontal planes. The study area covers the territory of Bulgaria, which is part of the active continental margin of the Eurasian plate. This region is a typical example of a high-seismic-risk area. The epicentral map shows that seismicity in the region is not uniformly distributed in space; the seismicity is therefore described in distinct geographical zones (seismic source zones). Each source zone is characterized by its specific tectonic, seismic, and geological particulars. From the analysis of the depth distribution it was recognized that the earthquakes in the region occur in the Earth's crust. Hypocenters are mainly located in the upper crust, and only a few events are related to the lower crust. The maximum depth reached is about 50 km in southwestern Bulgaria; elsewhere, the foci affect only the uppermost 30-35 km. The maximum density of seismicity involves the layer between 5 and 25 km. This fact establishes the capability of potential-field data to reveal crustal structures and to examine their parameters as possible seismic sources. The results showed that a number of geophysically interpreted structures coincide with dislocations observed at the surface and with epicenter clusters (well illustrated in northern Bulgaria), which confirms the reliability of the applied methodology.
The complicated situation in southern Bulgaria is demonstrated by the mosaic structure of the geomagnetic field, the complex configuration of the gravity anomalies and the spatial distribution of seismicity. The known earthquake source zones in this part of the territory of Bulgaria (such as Sofia, Kresna, Maritsa and Yambol) are well defined and confirmed by geophysical, geological and seismological data. Particularly noteworthy are the results in areas where no surface structures are present (e.g. the Central Rhodope and East Rhodope zones, where the 2006 Kurdzhali earthquake sequence occurred). In those cases, gravity and magnetic interpretation proved to be a suitable technique for determining the position and parameters of geological structures at depth.
Seismic performance of geosynthetic-soil retaining wall structures
NASA Astrophysics Data System (ADS)
Zarnani, Saman
Vertical inclusions of expanded polystyrene (EPS) placed behind rigid retaining walls were investigated as geofoam seismic buffers to reduce earthquake-induced loads. A numerical model was developed using the program FLAC, and the model was validated against 1-g shaking table test results for EPS geofoam seismic buffer models. Two constitutive models for the component materials were examined: elastic-perfectly plastic with the Mohr-Coulomb (M-C) failure criterion, and a non-linear hysteresis damping model with the equivalent linear method (ELM) approach. It was judged that the M-C model was sufficiently accurate for practical purposes. The mechanical property of interest for attenuating dynamic loads using a seismic buffer was the buffer stiffness, defined as K = E/t (E = buffer elastic modulus, t = buffer thickness). For the range of parameters investigated in this study, K ≤ 50 MN/m3 was observed to be the practical range for the optimal design of these systems. Parametric numerical analyses were performed to generate design charts that can be used for the preliminary design of these systems. A new high-capacity shaking table facility was constructed at RMC that can be used to study the seismic performance of earth structures. Reduced-scale models of geosynthetic reinforced soil (GRS) walls were built on this shaking table and then subjected to simulated earthquake loading conditions. In some shaking table tests, the combined use of EPS geofoam and horizontal geosynthetic reinforcement layers was investigated. Numerical models were developed using the program FLAC together with the ELM and M-C constitutive models. Physical and numerical results were compared against values predicted using analysis methods found in the journal literature and in current North American design guidelines.
The comparison shows that current Mononobe-Okabe (M-O) based analysis methods could not consistently provide satisfactory predictions of the measured reinforcement connection load distributions at all elevations under both static and dynamic loading conditions. The results from GRS model wall tests with combined EPS geofoam and geosynthetic reinforcement layers show that the inclusion of an EPS geofoam layer behind the GRS wall face can reduce the earth loads acting on the wall facing to values well below those recorded for conventional GRS wall model configurations.
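The buffer-stiffness criterion K = E/t is a simple ratio; a tiny numeric check of the K ≤ 50 MN/m3 screening (the example modulus and thickness are invented, not values from the thesis):

```python
def buffer_stiffness(E_pa, t_m):
    """Seismic-buffer stiffness K = E / t, returned in MN/m^3
    (E in Pa, t in m)."""
    return E_pa / t_m / 1e6

# Hypothetical EPS geofoam: E = 5 MPa, buffer thickness 0.15 m.
K = buffer_stiffness(5e6, 0.15)
print(round(K, 1), K <= 50)  # → 33.3 True
```

A candidate buffer passing the check would still be verified against the design charts before use.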
NASA Astrophysics Data System (ADS)
Bachura, Martin; Fischer, Tomas
2014-05-01
Seismic waves are attenuated by a number of factors, including geometrical spreading, scattering on heterogeneities and intrinsic loss due to the anelasticity of the medium. The contribution of the latter two processes can be derived from the tail of the seismogram, the coda (strictly speaking, the S-wave coda), as these factors influence its shape and amplitudes. Numerous methods have been developed for estimating attenuation properties from the decay rate of coda amplitudes. Most of them work with the S-wave coda, some are designed for the P-wave coda (only at teleseismic distances) or for the whole waveforms. We used methods to estimate 1/Qc (the attenuation of coda waves), methods to separate scattering and intrinsic loss (1/Qsc and 1/Qi), and methods to estimate the attenuation of direct P and S waves (1/Qp and 1/Qs). In this study, we analyzed the S-wave coda of local earthquake data recorded in the West Bohemia/Vogtland area. This region is well known for the repeated occurrence of earthquake swarms. We worked with data from the 2011 earthquake swarm, which started in late August and lasted with decreasing intensity for another 4 months. During the first week of the swarm, thousands of events were detected, with a maximum magnitude of ML = 3.6. A large amount of high-quality data (including continuous datasets and catalogues with an abundance of well-located events) is available thanks to the WEBNET seismic network (13 permanent and 9 temporary stations) monitoring seismic activity in the area. Results of the single-scattering model show seismic attenuation decreasing with frequency, which is in agreement with observations worldwide. We also found a decrease of attenuation with increasing hypocentral distance and increasing lapse time, which we interpret as a decrease of attenuation with depth (coda waves at later lapse times are generated at greater depths, in our case in the upper lithosphere, where attenuation is small).
We also noticed a decrease of the frequency dependence of 1/Qc with depth, with 1/Qc appearing frequency-independent in the depth range of the upper lithosphere. Lateral changes of 1/Qc were also observed: it decreases in the southwest direction from the Novy Kostel focal zone, where the attenuation is highest. Results from more advanced methods that allow separation of scattering and intrinsic loss show that intrinsic loss is the dominant factor attenuating seismic waves in the region. Determination of the attenuation due to scattering appears ambiguous because of the small hypocentral distances available for the analysis, over which the effects of scattering in the 1-24 Hz frequency range are not significant.
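The single-scattering model behind the 1/Qc estimates reduces to a linear fit: with a coda envelope A(t) ∝ t^(-1) exp(-pi·f·t/Qc), the quantity ln(A·t) decays linearly in lapse time with slope -pi·f/Qc. A minimal sketch on a noise-free synthetic envelope; the function name and numbers are illustrative, not from the WEBNET data.

```python
import numpy as np

def coda_qc(t, amp, f):
    """Qc from the single-scattering coda model
    A(t) = S * t**-1 * exp(-pi*f*t/Qc): fit ln(A*t) vs t, slope = -pi*f/Qc."""
    slope, _ = np.polyfit(t, np.log(amp * t), 1)
    return -np.pi * f / slope

# Synthetic check: coda envelope generated with Qc = 250 at f = 6 Hz.
f, qc_true = 6.0, 250.0
t = np.linspace(10.0, 40.0, 100)         # lapse times in the coda window (s)
amp = 3.0 / t * np.exp(-np.pi * f * t / qc_true)
print(round(coda_qc(t, amp, f)))  # → 250
```

On real data the envelope is first smoothed (e.g. RMS in moving windows) and the fit is repeated per frequency band, giving the frequency and lapse-time dependence discussed above.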
NASA Astrophysics Data System (ADS)
Cauchie, Léna; Lengliné, Olivier; Schmittbuhl, Jean
2017-04-01
Abundant seismicity is generally observed during the exploitation of geothermal reservoirs, especially during phases of hydraulic stimulation. At the Enhanced Geothermal System of Soultz-sous-Forêts in France, the induced seismicity has been thoroughly studied over the years of exploitation, and the mechanism at its origin has been related to both the fluid-pressure increase during stimulation and aseismic creeping movements. The fluid-induced seismic events often exhibit a high degree of similarity, and the mechanism at the origin of these repeated events is thought to be associated with a slow slip process in which asperities on the rupture zone fail several times. In order to improve our knowledge of the mechanisms associated with such events and of the damaged zones involved during the hydraulic stimulations, we investigate the behaviour of the multiplets and their persistent nature, if it prevails, over several water injection intervals. For this purpose, we analysed large datasets recorded by a downhole seismic network during several water injection periods (1993, 2000, …). For each stimulation interval, thousands of events were recorded at depth. We detected the events using a continuous kurtosis-based migration method and classified them into families of comparable waveforms using an approach based on cross-correlation analysis. We obtain precise relative locations of the multiplets using differential arrival times obtained through cross-correlation of similar waveforms. Finally, the properties of the similar fluid-induced seismic events (magnitude, spectral content) are derived and examined over the several hydraulic tests. We expect these steps to lead to a better understanding of the repetitive nature of these events, and the investigation of their persistence should outline the heterogeneities of the structures (temperature anomalies, regional stress perturbations, fluid-flow channelling) regularly involved during the different stimulations.
NASA Astrophysics Data System (ADS)
Ku, C. S.; You, S. H.; Kuo, Y. T.; Huang, B. S.; Wu, Y. M.; Chen, Y. G.; Taylor, F. W.
2015-12-01
A MW 8.1 earthquake occurred on 1 April 2007 in the western Solomon Islands. This event induced a damaging tsunami that hit Gizo Island, where the capital city of the Western Province of the Solomon Islands is located. Several buildings in the city were destroyed and several people lost their lives. At the time of the earthquake, however, no near-source seismic instruments were installed in the region, so seismic evaluation of the aftershock sequence and any possible earthquake early warning or tsunami warning were unavailable. To obtain more detailed information about seismic activity in this region, we have operated 9 seismic stations (each with a Trillium 120PA broadband seismometer and a Q330S 24-bit digitizer) around the rupture zone of the 2007 earthquake since September 2009. Within the past decade, it has been demonstrated both theoretically and experimentally that the Green's function, or impulse response, between two seismic stations can be retrieved from the cross-correlation of ambient noise. In this study, observations from the 6 stations with the most complete records during the period October 2011 to December 2012 were selected for cross-correlation analysis of ambient seismic noise. Group velocities at periods of 2-20 s were extracted for 15 station pairs using the multiple filter technique (MFT). The results yield robust group velocities at higher frequencies than in most previous studies (which typically use periods of 20-60 s) and open new opportunities to study the shallow crustal structure of the western Solomon Islands.
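The ambient-noise principle invoked here, that stacked cross-correlations of diffuse noise converge toward the inter-station impulse response, can be sketched with synthetic records. The 30-sample delay, record lengths, and number of stacked segments below are illustrative assumptions, not values from the study.

```python
import numpy as np

def noise_cross_correlation(u1, u2, max_lag):
    """Full cross-correlation of two records, trimmed to +/- max_lag samples
    (index max_lag corresponds to zero lag)."""
    full = np.correlate(u1, u2, mode="full")
    mid = len(u1) - 1
    return full[mid - max_lag:mid + max_lag + 1]

def stack_daily_correlations(day_pairs, max_lag):
    """Average daily cross-correlations: the coherent (impulse-response)
    part stacks linearly while incoherent noise averages out."""
    acc = np.zeros(2 * max_lag + 1)
    for u1, u2 in day_pairs:
        acc += noise_cross_correlation(u1, u2, max_lag)
    return acc / len(day_pairs)

# synthetic test: the same noise field reaches station 2 delayed by 30 samples
rng = np.random.default_rng(1)
days = []
for _ in range(20):
    src = rng.standard_normal(2000)
    days.append((src, np.roll(src, 30)))   # station 2 record lags by 30 samples
ccf = stack_daily_correlations(days, max_lag=100)
lag = int(np.argmax(np.abs(ccf))) - 100    # negative lag: station 2 lags station 1
```

In practice the daily records are also spectrally whitened and one-bit normalized before correlation, and group velocities are then measured from the stacked correlation with a multiple filter analysis; those steps are omitted here.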
NASA Astrophysics Data System (ADS)
Katsumata, Kei
2017-06-01
An earthquake catalog created by the International Seismological Center (ISC) was analyzed, including 3898 earthquakes located in and around Japan between January 1964 and June 2012, shallower than 60 km and with body wave magnitude 5.0 or larger. Clustered events such as earthquake swarms and aftershocks were removed from the ISC catalog using a stochastic declustering method based on the Epidemic-Type Aftershock Sequence (ETAS) model. A detailed analysis of the earthquake catalog using a simple scanning technique (ZMAP) shows that long-term seismic quiescence lasting more than 9 years was recognized ten times along the subduction zone in and around Japan. Three of these seismic quiescences were followed by three great earthquakes: the 1994 Hokkaido-toho-oki earthquake ( M w 8.3), the 2003 Tokachi-oki earthquake ( M w 8.3), and the 2011 Tohoku earthquake ( M w 9.0). The remaining seven seismic quiescences were followed by no earthquake with seismic moment M 0 ≥ 3.0 × 10^21 Nm ( M w 8.25), and these are candidate false alarms. The 2006 Kurile Islands earthquake ( M w 8.3) was not preceded by significant seismic quiescence, a case of surprise occurrence. As a result, when limited to earthquakes with seismic moment M 0 ≥ 3.0 × 10^21 Nm, four earthquakes occurred between 1976 and 2012 in and around Japan, and three of them were preceded by long-term seismic quiescence lasting more than 9 years.
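Quiescence searches of the ZMAP type rest on a standard deviate that compares the mean seismicity rate in a candidate window against the background rate. A minimal sketch of that z-statistic follows, with illustrative Poisson rates rather than values from the ISC catalog.

```python
import numpy as np

def z_value(rates_background, rates_window):
    """ZMAP-style standard deviate: positive z when the candidate window
    is quieter than the background (rates are event counts per time bin)."""
    r1, r2 = np.mean(rates_background), np.mean(rates_window)
    v1 = np.var(rates_background, ddof=1) / len(rates_background)
    v2 = np.var(rates_window, ddof=1) / len(rates_window)
    return (r1 - r2) / np.sqrt(v1 + v2)

# toy example: a background of ~10 events/bin drops to ~4 events/bin
rng = np.random.default_rng(2)
background = rng.poisson(10, size=80)
quiet = rng.poisson(4, size=20)
z = z_value(background, quiet)   # large positive z flags quiescence
```

ZMAP evaluates this statistic on a space-time grid of overlapping windows; declustering first (as with ETAS above) matters because aftershock bursts would otherwise dominate the rate estimates.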
Micro-seismicity within the Coso Geothermal field, California, from 1996-2012
Kaven, Joern; Hickman, Stephen H.; Weber, Lisa C.
2017-01-01
We extend our previous catalog of seismicity within the Coso Geothermal field by adding over two and a half years of additional data to prior results. In total, we locate over 16 years of seismicity, spanning April 1996 to May 2012, using a refined velocity model applied to all events, and utilize differential travel times in relocations to improve the accuracy of event locations. The improved locations elucidate major structural features within the reservoir that we interpret to be faults that contribute to heat and fluid flow within the reservoir. Much of the relocated seismicity remains diffuse between these major structural features, suggesting that a large volume of accessible and distributed fracture porosity is maintained within the geothermal reservoir through ongoing brittle failure. We further track changes in b value and seismic moment release within the reservoir as a whole through time. We find that b values decrease significantly during 2009 and 2010, coincident with the occurrence of a greater number of moderate-magnitude earthquakes (3.0 ≤ ML < 4.5). Analysis of spatial variations in seismic moment release between years reveals that localized seismicity tends to spread from regions of high moment release into regions with previously low moment release, akin to aftershock sequences. These results indicate that the Coso reservoir comprises a network of fractures at a variety of spatial scales that evolves dynamically over time, with progressive changes in the characteristics of microseismicity and inferred fractures and faults that are only evident from a long period of seismic monitoring analyzed using self-consistent methods.
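Tracking b value through time, as above, is commonly done with the Aki/Utsu maximum-likelihood estimator applied to magnitudes at or above the completeness magnitude. A minimal sketch, where the completeness magnitude, binning correction, and synthetic catalog are illustrative assumptions rather than Coso values:

```python
import numpy as np

def b_value_mle(mags, mc, dm=0.1):
    """Aki/Utsu maximum-likelihood b value for magnitudes at/above the
    completeness magnitude mc, with the usual half-bin correction dm/2."""
    m = np.asarray(mags, dtype=float)
    m = m[m >= mc]
    return np.log10(np.e) / (m.mean() - (mc - dm / 2.0))

# synthetic Gutenberg-Richter catalog consistent with b = 1:
# magnitudes above mc are exponential with scale log10(e)/b
rng = np.random.default_rng(3)
mags = 1.0 + rng.exponential(scale=np.log10(np.e) / 1.0, size=50000)
b = b_value_mle(mags, mc=1.0, dm=0.0)    # recovers b close to 1
```

Windowed in time (e.g. per year of catalog), the same estimator yields the kind of b-value time series whose 2009-2010 decrease the abstract describes; uncertainty scales roughly as b/sqrt(N), so window size matters.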
Subspace Dimensionality: A Tool for Automated QC in Seismic Array Processing
NASA Astrophysics Data System (ADS)
Rowe, C. A.; Stead, R. J.; Begnaud, M. L.
2013-12-01
Because of the great resolving power of seismic arrays, the application of automated processing to array data is critically important in treaty verification work. A significant problem in array analysis is the inclusion of bad sensor channels in the beamforming process. We are testing an approach to automated, on-the-fly quality control (QC) to aid in the identification of poorly performing sensor channels prior to beamforming in routine event detection or location processing. The idea stems from methods used for large computer servers: when monitoring traffic at enormous numbers of nodes is impractical on a node-by-node basis, the dimensionality of the node traffic is instead monitored for anomalies that could represent malware, cyber-attacks or other problems. The technique relies upon the use of subspace dimensionality or principal components of the overall system traffic. The subspace technique is not new to seismology, but its most common application has been limited to comparing waveforms to an a priori collection of templates for detecting highly similar events in a swarm or seismic cluster. In the established template application, a detector functions in a manner analogous to waveform cross-correlation, applying a statistical test to assess the similarity of the incoming data stream to known templates for events of interest. In our approach, we seek not to detect matching signals; instead, we examine the signal subspace dimensionality in much the same way that the method addresses node traffic anomalies in large computer systems. Signal anomalies recorded on seismic arrays affect the dimensional structure of the array-wide time series.
We have shown previously that this observation is useful in identifying real seismic events, either by looking at the raw signal or derivatives thereof (entropy, kurtosis). Here we explore the effects of malfunctioning channels on the dimension of the data and its derivatives, and show how to leverage this effect to identify bad array elements through a jackknifing process that isolates the anomalous channels, so that an automated analysis system can discard them prior to FK analysis and beamforming on events of interest.
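The dimensionality-plus-jackknife idea can be illustrated with a small numerical sketch. This is not the authors' implementation: the participation-ratio dimensionality estimate, the channel count, and the noise levels are illustrative assumptions. The point is only that removing a malfunctioning channel changes the effective dimension of the array covariance more than removing a healthy one.

```python
import numpy as np

def effective_dimension(X):
    """Participation-ratio estimate of subspace dimensionality from the
    eigenvalues of the channel covariance matrix (X: channels x samples)."""
    lam = np.clip(np.linalg.eigvalsh(np.cov(X)), 0.0, None)
    return lam.sum() ** 2 / (lam ** 2).sum()

def jackknife_channels(X):
    """Recompute dimensionality with each channel left out; the channel
    whose removal changes it most is a candidate bad array element."""
    return np.array([effective_dimension(np.delete(X, i, axis=0))
                     for i in range(X.shape[0])])

# toy array: 7 channels share one coherent signal, but channel 3 records
# only incoherent noise (a malfunctioning element)
rng = np.random.default_rng(4)
common = rng.standard_normal(2000)
X = np.vstack([common + 0.1 * rng.standard_normal(2000) for _ in range(7)])
X[3] = rng.standard_normal(2000)
dims = jackknife_channels(X)
bad = int(np.argmin(dims))   # removing the bad channel collapses the dimension
```

On a healthy array the leave-one-out dimensions are nearly equal; the outlier in that vector is the flag an automated QC step could act on before FK analysis and beamforming.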
Spatial Temporal Analysis Of Mine-induced Seismicity
NASA Astrophysics Data System (ADS)
Fedotova, I. V.; Yunga, S. L.
The results of an analysis of the influence of mine-induced seismicity on the state of stress of a rock mass are presented. A spatial-temporal analysis of the influence of mass explosions on rock-mass deformation was carried out for the territory of the Yukspor mine field, a wing of the Joined Kirovsk mine of JSC "Apatite". The influence of mass explosions on the massif was estimated, first, from the parameters of the natural seismic regime and, second, from changes in seismic energy release. After long series of explosions, variations in the average number of seismic events were observed. It is shown that as the volume of rock involved in deformation increases, the energy released by seismic events and the characteristic preparation times of the events also vary. At the same time, the failure mechanism changes from shear-separation failure to failure within a quasi-solid heterogeneous massif (in oxidized zones and zones of activated faults). Analysis of a database of massif seismicity from 1993 to 1999 confirmed that the response of the massif to explosions is connected to the stress-deformation state of the massif and to the parameters of the mine workings. Analysis of the spatial-temporal distribution of hypocenters of seismic events made it possible to identify the migration of active failure regions after mass explosions. This research was supported by the Russian Foundation for Basic Research, projects 00-05-64758 and 01-05-65340.
NASA Astrophysics Data System (ADS)
Groby, Jean-Philippe; Wirgin, Armand
2008-02-01
We address the problem of the response to a seismic wave of an urban site consisting of Nb blocks overlying a soft layer underlain by a hard substratum. The results of a theoretical analysis, appealing to a space-frequency mode-matching (MM) technique, are compared to those obtained by a space-time finite-element (FE) technique. The two methods are shown to give rise to the same prediction of the seismic response for Nb = 1, 2 and 40 blocks. The mechanism of the interaction between blocks and the ground, as well as that of the mutual interaction between blocks, are studied. It is shown, in the first part of this paper, that the presence of a small number of blocks modifies the seismic disturbance in a manner which evokes qualitatively, but not quantitatively, what was observed during the 1985 Michoacan earthquake in Mexico City. Anomalous earthquake response at a much greater level, in terms of duration, peak and cumulative amplitude of motion, is shown, by a theoretical and numerical analysis in the second part of this paper, to be induced by the presence of a large (>=10) number of identical equi-spaced blocks that are present in certain districts of many cities.
Accuracy and sensitivity analysis on seismic anisotropy parameter estimation
NASA Astrophysics Data System (ADS)
Yan, Fuyong; Han, De-Hua
2018-04-01
There is significant uncertainty in measuring Thomsen's parameter δ in the laboratory, even when the dimensions and orientations of the rock samples are known, so even greater challenges are expected in estimating seismic anisotropy parameters from field seismic data. Based on Monte Carlo simulation of a vertical transversely isotropic layer-cake model, using a database of laboratory anisotropy measurements from the literature, we apply the commonly used quartic non-hyperbolic reflection moveout equation to estimate the seismic anisotropy parameters and test its accuracy and its sensitivity to source-receiver offset, vertical interval velocity error, and time-picking error. The testing results show that the methodology works perfectly for noise-free synthetic data with short spread lengths. However, the method is extremely sensitive to the time-picking error caused by mild random noise, and it requires the spread length to be greater than the depth of the reflection event. The uncertainties increase rapidly for deeper layers, and the estimated anisotropy parameters can be very unreliable for a layer with more than five overlying layers; it is even possible for an isotropic formation to be misinterpreted as strongly anisotropic. The sensitivity analysis should provide useful guidance on how to group the reflection events and build a suitable geological model for anisotropy parameter inversion.
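The quartic non-hyperbolic moveout equation referred to above is usually written in the Alkhalifah-Tsvankin form for a VTI medium. A sketch of the forward relation used in such Monte Carlo tests follows; the offsets, zero-offset time, NMO velocity, and eta value are illustrative, not from the paper's database.

```python
import numpy as np

def nonhyperbolic_moveout(x, t0, v_nmo, eta):
    """Alkhalifah-Tsvankin nonhyperbolic (quartic) moveout for a VTI layer:
    t^2 = t0^2 + x^2/v^2 - 2*eta*x^4 / (v^2 * (t0^2*v^2 + (1+2*eta)*x^2)).
    Reduces to hyperbolic moveout when eta = 0."""
    t2 = (t0 ** 2 + x ** 2 / v_nmo ** 2
          - 2.0 * eta * x ** 4
          / (v_nmo ** 2 * (t0 ** 2 * v_nmo ** 2 + (1.0 + 2.0 * eta) * x ** 2)))
    return np.sqrt(t2)

offsets = np.linspace(0.0, 4000.0, 5)                       # m
t_iso = nonhyperbolic_moveout(offsets, t0=1.0, v_nmo=2000.0, eta=0.0)
t_ani = nonhyperbolic_moveout(offsets, t0=1.0, v_nmo=2000.0, eta=0.1)
# positive eta reduces the far-offset traveltime relative to the hyperbola
```

Inversion then amounts to fitting (t0, v_nmo, eta) to picked traveltimes; because the eta term only matters at offsets comparable to the reflector depth, short spreads leave eta poorly constrained, which is the sensitivity the abstract reports.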
Regional Observation of Seismic Activity in Baekdu Mountain
NASA Astrophysics Data System (ADS)
Kim, Geunyoung; Che, Il-Young; Shin, Jin-Soo; Chi, Heon-Cheol
2015-04-01
Seismic unrest in the Baekdu Mountain area, on the border between North Korea and Northeast China, has drawn the attention of the geological research community in Northeast Asia owing to the mountain's historical and cultural importance. Seismic bulletins show that the level of seismic activity in the area is higher than that of Jilin Province in Northeast China. Local volcanic observations show signs of magmatic unrest in the period between 2002 and 2006. Regional seismic data have been used to analyze the seismic activity of the area, and the analysis allows this activity to be differentiated from other seismic phenomena in the region.
Comparison of Effective Medium Schemes For Seismic Velocities in Cracked Anisotropic Rock
NASA Astrophysics Data System (ADS)
Morshed, S.; Chesnokov, E.
2017-12-01
Understanding the elastic properties of reservoir rock is necessary for meaningful interpretation and analysis of seismic measurements. The elastic properties of a rock are controlled by microstructural properties such as mineralogical composition, pore and crack distribution, texture, and pore connectivity. However, the seismic scale is much larger than the microstructure scale, and understanding of macroscopic properties at the relevant seismic scale (e.g. borehole sonic data) comes from effective medium theory (EMT). Most effective medium theories fail at high crack density, where the interactions of the strain fields of the cracks cannot be ignored. We compare major EMT schemes from low to high crack density. While at low crack density all methods give similar results, at high crack density they differ significantly. We then focus on the generalized singular approximation (GSA) and effective field (EF) methods, as they allow crack concentrations beyond the dilute limit. Additionally, we use the grain contact (GC) method to examine the stiffness constants of the rock matrix. We prepare simple models of a multiphase medium containing low to high concentrations of isolated pores, considering both randomly oriented spherical pores and horizontally oriented ellipsoidal pores (aspect ratio = 0.1). For isolated spherical pores, all three methods show exactly the same or similar results. However, for horizontal ellipsoidal pores the inclusion interactions differ with direction, and the individual stiffness constants differ greatly from one method to another at different crack densities. The stiffness constants remain consistent in the GSA method, whereas some components become anomalous in the EF method at higher crack densities (>0.15). Finally, we applied the GSA method to interpret ultrasonic velocities of core samples, utilizing mineralogical composition from X-ray diffraction (XRD) data and laboratory-measured porosity.
Both compressional and shear wave velocities from the GSA method show a good fit with the laboratory-measured velocities.
NASA Astrophysics Data System (ADS)
Chen, Xin; Chen, Wenchao; Wang, Xiaokai; Wang, Wei
2017-10-01
Low-frequency oscillatory ground-roll is regarded as one of the main regular interference waves, which obscures primary reflections in land seismic data. Suppressing the ground-roll can reasonably improve the signal-to-noise ratio of seismic data. Conventional suppression methods, such as high-pass and various f-k filtering, usually cause waveform distortions and loss of body wave information because of their simple cut-off operation. In this study, a sparsity-optimized separation of body waves and ground-roll, which is based on morphological component analysis theory, is realized by constructing dictionaries using tunable Q-factor wavelet transforms with different Q-factors. Our separation model is grounded on the fact that the input seismic data are composed of low-oscillatory body waves and high-oscillatory ground-roll. Two different waveform dictionaries using a low Q-factor and a high Q-factor, respectively, are confirmed as able to sparsely represent each component based on their diverse morphologies. Thus, seismic data including body waves and ground-roll can be nonlinearly decomposed into low-oscillatory and high-oscillatory components. This is a new noise attenuation approach according to the oscillatory behaviour of the signal rather than the scale or frequency. We illustrate the method using both synthetic and field shot data. Compared with results from conventional high-pass and f-k filtering, the results of the proposed method prove this method to be effective and advantageous in preserving the waveform and bandwidth of reflections.
NASA Astrophysics Data System (ADS)
Hori, T.; Agata, R.; Ichimura, T.; Fujita, K.; Yamaguchi, T.; Takahashi, N.
2017-12-01
Although continuous, dense surface-deformation data can now be obtained on land and partly on the sea floor, these data are not fully utilized for monitoring and forecasting of crustal activity, such as spatio-temporal variation in slip velocity on the plate interface (including earthquakes), seismic wave propagation, and crustal deformation. To construct a system for monitoring and forecasting, it is necessary to develop a physics-based data analysis system that includes (1) a structural model with the 3D geometry of the plate interface and material properties such as elasticity and viscosity, (2) calculation codes for crustal deformation and seismic wave propagation using (1), and (3) inverse-analysis or data-assimilation codes for both structure and fault slip using (1) and (2). To accomplish this, it is at least necessary to develop highly reliable large-scale simulation codes to calculate crustal deformation and seismic wave propagation in a 3D heterogeneous structure. An unstructured finite-element non-linear seismic wave simulation code has been developed; it achieved physics-based urban earthquake simulation with 1.08 T degrees of freedom and 6.6 K time steps. A high-fidelity FEM simulation code with a mesh generator has also been developed to calculate crustal deformation in and around Japan, with complicated surface topography and subducting-plate geometry, at 1 km mesh resolution. This crustal-deformation code has since been improved and achieved 2.05 T degrees of freedom with 45 m resolution on the plate interface. Such high-resolution analysis enables computation of changes in the stress acting on the plate interface. Further, for inverse analyses, a waveform inversion code for modeling 3D crustal structure has been developed, and the high-fidelity FEM code has been extended with an adjoint method for estimating fault slip and asthenosphere viscosity. Hence, we have large-scale simulation and analysis tools for monitoring.
We are developing methods for forecasting the slip-velocity variation on the plate interface. Although the prototype assumes an elastic half-space model, we are extending it to a 3D heterogeneous structure with the high-fidelity FE model. Furthermore, the large-scale simulation codes for monitoring are being implemented on GPU clusters, and the analysis tools are being extended with other functions, such as the examination of model errors.
Application of seismic-refraction techniques to hydrologic studies
Haeni, F.P.
1986-01-01
During the past 30 years, seismic-refraction methods have been used extensively in petroleum, mineral, and engineering investigations, and to some extent for hydrologic applications. Recent advances in equipment, sound sources, and computer interpretation techniques make seismic refraction a highly effective and economical means of obtaining subsurface data in hydrologic studies. Aquifers that can be defined by one or more high seismic-velocity surfaces, such as (1) alluvial or glacial deposits in consolidated rock valleys, (2) limestone or sandstone underlain by metamorphic or igneous rock, or (3) saturated unconsolidated deposits overlain by unsaturated unconsolidated deposits, are ideally suited for applying seismic-refraction methods. These methods allow the economical collection of subsurface data, provide the basis for more efficient collection of data by test drilling or aquifer tests, and result in improved hydrologic studies. This manual briefly reviews the basics of seismic-refraction theory and principles. It emphasizes the use of this technique in hydrologic investigations and describes the planning, equipment, field procedures, and interpretation techniques needed for this type of study. Examples of the use of seismic-refraction techniques in a wide variety of hydrologic studies are presented.
Application of seismic-refraction techniques to hydrologic studies
Haeni, F.P.
1988-01-01
During the past 30 years, seismic-refraction methods have been used extensively in petroleum, mineral, and engineering investigations and to some extent for hydrologic applications. Recent advances in equipment, sound sources, and computer interpretation techniques make seismic refraction a highly effective and economical means of obtaining subsurface data in hydrologic studies. Aquifers that can be defined by one or more high-seismic-velocity surfaces, such as (1) alluvial or glacial deposits in consolidated rock valleys, (2) limestone or sandstone underlain by metamorphic or igneous rock, or (3) saturated unconsolidated deposits overlain by unsaturated unconsolidated deposits, are ideally suited for seismic-refraction methods. These methods allow economical collection of subsurface data, provide the basis for more efficient collection of data by test drilling or aquifer tests, and result in improved hydrologic studies. This manual briefly reviews the basics of seismic-refraction theory and principles. It emphasizes the use of these techniques in hydrologic investigations and describes the planning, equipment, field procedures, and interpretation techniques needed for this type of study. Furthermore, examples of the use of seismic-refraction techniques in a wide variety of hydrologic studies are presented.
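The basic two-layer refraction relations behind such interpretations can be sketched directly: the depth to a high-velocity interface follows from either the crossover distance, where refracted arrivals overtake direct arrivals, or the intercept time of the refracted travel-time branch. The velocities and crossover distance below are illustrative values, not from the manual.

```python
import math

def depth_from_crossover(x_cross, v1, v2):
    """Two-layer refraction: depth to the interface from the crossover
    distance, z = (x_cross / 2) * sqrt((v2 - v1) / (v2 + v1))."""
    return 0.5 * x_cross * math.sqrt((v2 - v1) / (v2 + v1))

def depth_from_intercept(t_i, v1, v2):
    """Same depth from the intercept time of the refracted line,
    z = t_i * v1 * v2 / (2 * sqrt(v2^2 - v1^2))."""
    return t_i * v1 * v2 / (2.0 * math.sqrt(v2 ** 2 - v1 ** 2))

# illustrative water-table-like contrast: 500 m/s over 1500 m/s,
# with the refracted branch overtaking the direct wave at 40 m offset
z = depth_from_crossover(40.0, 500.0, 1500.0)   # about 14.1 m
```

Both formulas assume a planar, horizontal interface with v2 > v1; dipping-layer and multilayer cases use the same intercept-time logic applied layer by layer.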
NASA Astrophysics Data System (ADS)
Nakahara, H.
2003-12-01
The 2003 Miyagi-Oki earthquake (M 7.0) took place on May 26, 2003 in the subducting Pacific plate beneath northeastern Japan. The focal depth is around 70 km, and the focal mechanism is reverse faulting on a steeply west-dipping fault plane. Fortunately there were no fatalities; however, the earthquake caused more than 100 injuries, about 2000 collapsed houses, and other damage. About 50 km to the south of this focal area, an interplate earthquake of M 7.5, the Miyagi-Ken-Oki earthquake, is expected to occur in the near future, so the relation between this event and the expected Miyagi-Ken-Oki earthquake attracts public attention. Seismic-energy distribution on earthquake fault planes estimated by envelope inversion analyses can contribute to a better understanding of the earthquake source process. For moderate to large earthquakes, seismic energy at frequencies higher than 1 Hz is sometimes much larger than the level expected from the omega-squared model with source parameters estimated by lower-frequency analyses. Therefore, accurate estimation of seismic energy at such high frequencies is important for estimating dynamic source parameters such as the seismic energy or the apparent stress. In this study, we carry out an envelope inversion analysis based on the method of Nakahara et al. (1998) and clarify the spatial distribution of high-frequency seismic energy radiation on the fault plane of this earthquake. For the inversion we use the three-component sum of mean-squared velocity seismograms multiplied by the density of the earth medium, referred to here as envelopes. Four frequency bands of 1-2, 2-4, 4-8, and 8-16 Hz are adopted, and we use envelopes in the time window from the onset of S waves to a lapse time of 51.2 s.
Green's functions of envelopes, representing the energy propagation process through a scattering medium, are calculated based on radiative transfer theory and are characterized by parameters of scattering attenuation and intrinsic absorption; we use the values obtained for northeastern Japan (Sakurai, 1995). We assume the fault plane as follows: strike = 193°, dip = 69°, rake = 87°, length = 30 km, width = 25 km, with reference to a low-frequency waveform inversion analysis (e.g. Yagi, 2003). We divide this fault plane into 25 subfaults, each a 5 km x 5 km square. Rupture velocity is assumed to be constant, and seismic energy is radiated from a point source as soon as the rupture front passes the center of each subfault. The time function of energy radiation is assumed to be a box-car function. The amount of seismic energy from all the subfaults and the site amplification factors for all the stations are estimated by the envelope inversion method, while the rupture velocity and the duration of the box-car function are estimated by a grid search. Theoretical envelopes calculated with the best-fit parameters generally fit the observed ones. The rupture velocity and duration time were estimated as 3.8 km/s and 1.6 s, respectively. The high-frequency seismic energy was found to be radiated mainly from two spots on the fault plane: the first around the initial rupture point and the second in the northern part of the fault plane. These two spots correspond to the two observed peaks on the envelopes. The amount of seismic energy increases with increasing frequency in the 1-16 Hz band, which contradicts the expectation from the omega-squared model; stronger radiation of higher-frequency seismic energy is thus a prominent characteristic of this earthquake. Acknowledgements: We used strong-motion seismograms recorded by the K-NET and KiK-net of NIED, Japan.
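The envelope definition used in this analysis, a three-component sum of mean-squared velocity seismograms scaled by density and evaluated in octave bands, can be sketched as follows. The FFT-mask band-pass, the smoothing window, and the synthetic burst are illustrative simplifications, not the study's processing.

```python
import numpy as np

def bandpass(x, fs, f1, f2):
    """Zero-phase band-pass via FFT masking (a simple stand-in for the
    band-pass filtering used in envelope analysis)."""
    spec = np.fft.rfft(x)
    freq = np.fft.rfftfreq(len(x), 1.0 / fs)
    spec[(freq < f1) | (freq > f2)] = 0.0
    return np.fft.irfft(spec, n=len(x))

def ms_envelope(vel_3c, fs, fband, rho=2700.0, smooth_win=1.0):
    """Three-component sum of squared band-passed velocity, scaled by
    density (an energy-density proxy) and smoothed by a moving average
    of smooth_win seconds."""
    energy = sum(bandpass(c, fs, *fband) ** 2 for c in vel_3c) * rho
    n = max(1, int(smooth_win * fs))
    return np.convolve(energy, np.ones(n) / n, mode="same")

# toy record: a 3 Hz burst centred at t = 10 s shows up in the 2-4 Hz band
fs = 100.0
t = np.arange(0.0, 20.0, 1.0 / fs)
burst = np.sin(2 * np.pi * 3.0 * t) * np.exp(-0.5 * (t - 10.0) ** 2)
vel_3c = [burst, 0.5 * burst, 0.2 * burst]
env = ms_envelope(vel_3c, fs, (2.0, 4.0))
peak_time = t[np.argmax(env)]     # envelope peaks near the burst centre
```

Repeating this over the 1-2, 2-4, 4-8, and 8-16 Hz bands yields the band-wise envelopes that the inversion fits with radiative-transfer Green's functions.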
NASA Astrophysics Data System (ADS)
Niri, Mohammad Emami; Lumley, David E.
2017-10-01
Integration of 3D and time-lapse 4D seismic data into reservoir modelling and history matching processes poses a significant challenge due to the frequent mismatch between the initial reservoir model, the true reservoir geology, and the pre-production (baseline) seismic data. A fundamental step of a reservoir characterisation and performance study is the preconditioning of the initial reservoir model to equally honour both the geological knowledge and seismic data. In this paper we analyse the issues that have a significant impact on the (mis)match of the initial reservoir model with well logs and inverted 3D seismic data. These issues include the constraining methods for reservoir lithofacies modelling, the sensitivity of the results to the presence of realistic resolution and noise in the seismic data, the geostatistical modelling parameters, and the uncertainties associated with quantitative incorporation of inverted seismic data in reservoir lithofacies modelling. We demonstrate that in a geostatistical lithofacies simulation process, seismic constraining methods based on seismic litho-probability curves and seismic litho-probability cubes yield the best match to the reference model, even when realistic resolution and noise is included in the dataset. In addition, our analyses show that quantitative incorporation of inverted 3D seismic data in static reservoir modelling carries a range of uncertainties and should be cautiously applied in order to minimise the risk of misinterpretation. These uncertainties are due to the limited vertical resolution of the seismic data compared to the scale of the geological heterogeneities, the fundamental instability of the inverse problem, and the non-unique elastic properties of different lithofacies types.
NASA Astrophysics Data System (ADS)
Wong-Ortega, V.; Castro, R. R.; Gonzalez-Huizar, H.; Velasco, A. A.
2013-05-01
We analyze possible variations of seismicity in northern Baja California due to the passage of seismic waves from the 2011 M 9.0 Tohoku-Oki, Japan, earthquake. The northwestern area of Baja California is characterized by a mountain range composed of crystalline rocks; these Peninsular Ranges of Baja California exhibit high microseismic activity and moderate-size earthquakes. In the eastern region of Baja California, shearing between the Pacific and North American plates takes place, and the Imperial and Cerro Prieto faults generate most of the seismicity. The seismicity in these regions is monitored by the seismic network RESNOM, operated by the Centro de Investigación Científica y de Educación Superior de Ensenada (CICESE), which consists of 13 three-component seismic stations. We use the seismic catalog of RESNOM to search for changes in local seismicity rates after the passage of surface waves generated by the Tohoku-Oki earthquake. When we compare one month of seismicity before and after the M 9.0 earthquake, the preliminary analysis shows an absence of triggered seismicity in the northern Peninsular Ranges and an increase of seismicity south of the Mexicali valley, where the Imperial fault jumps southwest and the Cerro Prieto fault continues.
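Before/after rate comparisons of this kind are often quantified with the Matthews-Reasenberg beta statistic, which measures how far the observed post-event count departs from the count expected under a uniform rate. A minimal sketch; the event counts are illustrative, not RESNOM values.

```python
import math

def beta_statistic(n_after, n_total, frac_after):
    """Matthews-Reasenberg beta statistic. frac_after is the fraction of
    the combined window that lies after the candidate triggering event;
    under a uniform rate, n_after is binomial(n_total, frac_after)."""
    expected = n_total * frac_after
    var = n_total * frac_after * (1.0 - frac_after)
    return (n_after - expected) / math.sqrt(var)

# one month before vs one month after: 45 of 75 events occur afterwards
beta = beta_statistic(n_after=45, n_total=75, frac_after=0.5)
# a common rule of thumb treats |beta| > 2 as a significant rate change
```

Computed per subregion (e.g. Peninsular Ranges vs south of the Mexicali valley), the statistic distinguishes areas with no triggering from those with a genuine rate increase.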
Seismic imaging: From classical to adjoint tomography
NASA Astrophysics Data System (ADS)
Liu, Q.; Gu, Y. J.
2012-09-01
Seismic tomography has been a vital tool in probing the Earth's internal structure and enhancing our knowledge of dynamical processes in the Earth's crust and mantle. While various tomographic techniques differ in data types utilized (e.g., body vs. surface waves), data sensitivity (ray vs. finite-frequency approximations), and choices of model parameterization and regularization, most global mantle tomographic models agree well at long wavelengths, owing to the presence and typical dimensions of cold subducted oceanic lithospheres and hot, ascending mantle plumes (e.g., in central Pacific and Africa). Structures at relatively small length scales remain controversial, though, as will be discussed in this paper, they are becoming increasingly resolvable with the fast expanding global and regional seismic networks and improved forward modeling and inversion techniques. This review paper aims to provide an overview of classical tomography methods, key debates pertaining to the resolution of mantle tomographic models, as well as to highlight recent theoretical and computational advances in forward-modeling methods that spearheaded the developments in accurate computation of sensitivity kernels and adjoint tomography. The first part of the paper is devoted to traditional traveltime and waveform tomography. While these approaches established a firm foundation for global and regional seismic tomography, data coverage and the use of approximate sensitivity kernels remained as key limiting factors in the resolution of the targeted structures. In comparison to classical tomography, adjoint tomography takes advantage of full 3D numerical simulations in forward modeling and, in many ways, revolutionizes the seismic imaging of heterogeneous structures with strong velocity contrasts. For this reason, this review provides details of the implementation, resolution and potential challenges of adjoint tomography. 
Further discussions of techniques that are presently popular in seismic array analysis, such as noise correlation functions, receiver functions, inverse scattering imaging, and the adaptation of adjoint tomography to these different datasets highlight the promising future of seismic tomography.
NASA Astrophysics Data System (ADS)
Denli, H.; Huang, L.
2008-12-01
Quantitative monitoring of reservoir property changes is essential for safe geologic carbon sequestration. Time-lapse seismic surveys have the potential to effectively monitor fluid migration in the reservoir that causes geophysical property changes such as density and P- and S-wave velocities. We introduce a novel method for quantitative estimation of seismic velocity changes using time-lapse seismic data. The method employs elastic sensitivity wavefields, which are the derivatives of the elastic wavefield with respect to the density and the P- and S-wave velocities of a target region. We derive the elastic sensitivity equations by analytical differentiation of the elastic-wave equations with respect to the seismic-wave velocities. The sensitivity equations are coupled with the wave equations in such a way that elastic waves arriving in a target reservoir act as a secondary source for the sensitivity fields. We use a staggered-grid finite-difference scheme with perfectly matched layer absorbing boundary conditions to simultaneously solve the elastic-wave equations and the elastic sensitivity equations. From the elastic-wave sensitivities, a linear relationship between relative seismic velocity changes in the reservoir and time-lapse seismic data at receiver locations can be derived, which leads to an over-determined system of equations. We solve this system of equations using a least-squares method for each receiver to obtain P- and S-wave velocity changes. We validate the method using both surface and VSP synthetic time-lapse seismic data for a multi-layered model and the elastic Marmousi model. We then apply it to time-lapse field VSP data acquired at the Aneth oil field in Utah, where a total of 10,500 tons of CO2 was injected into the oil reservoir between the two VSP surveys for enhanced oil recovery. The synthetic and field data studies show that the new method can quantitatively estimate changes in seismic velocities within a reservoir due to CO2 injection/migration.
New approach to detect seismic surface waves in 1Hz-sampled GPS time series
Houlié, N.; Occhipinti, G.; Blanchard, T.; Shapiro, N.; Lognonné, P.; Murakami, M.
2011-01-01
Recently, co-seismic source characterization based on GPS measurements has been achieved in both the near- and far-field with remarkable results. However, the accuracy of ground displacement measurements inferred from GPS phase residuals still depends on the distribution of satellites in the sky. Here we test a method, based on double-difference (DD) computations of line-of-sight (LOS) residuals, that allows detection of 3D co-seismic ground shaking. The DD method is quasi-analytically free of most of the intrinsic errors affecting GPS measurements. The seismic waves presented in this study produced DD amplitudes 4 to 7 times stronger than the background noise. The method is benchmarked using the GEONET GPS stations that recorded the Hokkaido Earthquake (2003 September 25th, Mw = 8.3). PMID:22355563
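A minimal sketch of why double differencing isolates the seismic signal, with invented LOS residuals and a single shared error term; the station/satellite geometry here is purely hypothetical:

```python
import numpy as np

# Invented LOS residuals (metres) for two stations (A, B) and two
# satellites: a seismic surface wave at station A plus an error term
# common to all observations (e.g. clock/orbit drift).
t = np.linspace(0.0, 60.0, 61)                  # one minute of 1 Hz samples
wave = 0.02 * np.sin(2 * np.pi * 0.05 * t)      # seismic signal at A only
common = 0.5 * t                                # shared drift (cancels)

los = {("A", 1): wave + common, ("A", 2): common,
       ("B", 1): common,        ("B", 2): common}

# Difference across stations, then across satellites: shared errors
# cancel and only the co-seismic ground motion at A survives.
sd1 = los[("A", 1)] - los[("B", 1)]
sd2 = los[("A", 2)] - los[("B", 2)]
dd = sd1 - sd2
print(np.allclose(dd, wave))   # -> True
```

In real processing the error budget is richer (atmosphere, multipath, geometry-dependent terms), but the cancellation logic of the double difference is the same.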
Radtke, Robert P; Stokes, Robert H; Glowka, David A
2014-12-02
A method for operating an impulsive type seismic energy source in a firing sequence having at least two actuations for each seismic impulse to be generated by the source. The actuations have a time delay between them related to a selected energy frequency peak of the source output. One example of the method is used for generating seismic signals in a wellbore and includes discharging electric current through a spark gap disposed in the wellbore in at least one firing sequence. The sequence includes at least two actuations of the spark gap separated by an amount of time selected to cause acoustic energy resulting from the actuations to have peak amplitude at a selected frequency.
NASA Astrophysics Data System (ADS)
Bouchaala, F.; Ali, M. Y.; Matsushima, J.
2016-06-01
In this study, a relationship between the seismic wavelength and the scale of heterogeneity in the propagating medium has been examined. The relationship estimates the size of heterogeneity that significantly affects wave propagation at a specific frequency, and enables a decrease in the computation time of wave-scattering estimation. The relationship was applied in analyzing synthetic and Vertical Seismic Profiling (VSP) data obtained from an onshore oilfield in the Emirate of Abu Dhabi, United Arab Emirates. Prior to estimating the attenuation, a robust processing workflow was applied to both the synthetic and recorded data to increase the signal-to-noise ratio (SNR). Two conventional methods, the spectral-ratio and centroid-frequency-shift methods, were applied to estimate the attenuation from the extracted seismic waveforms, in addition to a new method based on seismic interferometry. The attenuation profiles derived from the three approaches showed similar variations; however, the interferometry method yielded greater depth resolution, with some differences in attenuation magnitude. Furthermore, the attenuation profiles revealed a significant contribution of scattering to seismic-wave attenuation. The results obtained from the seismic interferometry method showed that the estimated scattering attenuation ranges from 0 to 0.1, while the estimated intrinsic attenuation can reach 0.2. The subsurface of the studied zones is known to be highly porous and permeable, which suggests that the mechanism of the intrinsic attenuation is probably the interaction between pore fluids and solids.
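Of the attenuation estimators mentioned, the spectral-ratio method lends itself to a compact sketch: the log spectral ratio between two receivers decays linearly with frequency at a rate set by Q. All numbers below are assumed, not taken from the VSP data:

```python
import numpy as np

# Spectral-ratio attenuation estimate between two receiver depths:
# ln[A2(f)/A1(f)] = const - pi * f * dt / Q, so Q follows from the
# slope of the log spectral ratio versus frequency.
f = np.linspace(5.0, 80.0, 76)        # frequency band (Hz), assumed
Q_true, dt = 40.0, 0.1                # assumed quality factor, travel time (s)
ratio = 2.0 * np.exp(-np.pi * f * dt / Q_true)   # 2.0: geometrical factor

slope, _ = np.polyfit(f, np.log(ratio), 1)
Q_est = -np.pi * dt / slope
print(round(Q_est, 1))   # -> 40.0
```

The frequency-independent geometrical factor only shifts the intercept, which is why the slope alone determines Q.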
NASA Astrophysics Data System (ADS)
Budach, Ingmar; Moeck, Inga; Lüschen, Ewald; Wolfgramm, Markus
2018-03-01
The structural evolution of faults in foreland basins is linked to a complex basin history ranging from extension to contraction and inversion tectonics. Faults in the Upper Jurassic of the German Molasse Basin, a Cenozoic Alpine foreland basin, play a significant role in geothermal exploration and are therefore imaged, interpreted and studied using 3D seismic reflection data. Beyond this applied aspect, the analysis of these seismic data helps to better understand the temporal evolution of faults and the respective stress fields. In 2009, a 27 km2 3D seismic reflection survey was conducted around the Unterhaching Gt 2 well, south of Munich. The main focus of this study is an in-depth analysis of a prominent v-shaped fault block structure located at the center of the 3D seismic survey. Two methods were used to study the periodic activity and relative ages of the detected faults: (1) horizon flattening and (2) analysis of incremental fault throws. Slip and dilation tendency analyses were then conducted to determine the stresses resolved on the faults in the current stress field. Two kinematic models could explain the structural evolution: one assumes a left-lateral strike-slip fault in a transpressional regime resulting in a positive flower structure; the other invokes crossing conjugate normal faults within a transtensional regime. The interpreted successive fault formation favors the latter model. The episodic fault activity may enhance fault-zone permeability and hence reservoir productivity, implying that the analysis of periodically active faults represents an important part of successfully targeting geothermal wells.
Using Seismic Signals to Forecast Volcanic Processes
NASA Astrophysics Data System (ADS)
Salvage, R.; Neuberg, J. W.
2012-04-01
Understanding the seismic signals generated during volcanic unrest can help scientists more accurately forecast the behaviour of active volcanoes, since these signals are intrinsically linked to rock failure at depth (Voight, 1988). In particular, low-frequency, long-period signals (LP events) have been related to the movement of fluid and the brittle failure of magma at depth due to high strain rates (Hammer and Neuberg, 2009), which fundamentally relates to surface processes. However, there is currently no physical quantitative model for determining the likelihood of an eruption following precursory seismic signals, or the timing or type of eruption that will ensue (Benson et al., 2010). Since the beginning of its current eruptive phase, accelerating LP swarms (< 10 events per hour) have been a common feature at Soufriere Hills volcano, Montserrat, prior to surface expressions such as dome collapse or eruptions (Miller et al., 1998). The dynamical behaviour of such swarms can be related to accelerated magma ascent rates, since the seismicity is thought to be a consequence of magma deformation as it rises to the surface. In particular, acceleration rates can be used in conjunction with the inverse material failure law, a linear relationship against time (Voight, 1988), to accurately predict the timing of volcanic eruptions. To date, this has only been investigated for retrospective events (Hammer and Neuberg, 2009). The identification of LP swarms on Montserrat and analysis of their dynamical characteristics allow a better understanding of the nature of the seismic signals themselves, as well as of their relationship to surface processes such as magma extrusion rates. Acceleration and deceleration rates of seismic swarms provide insights into the plumbing system of the volcano at depth.
The application of the material failure law to multiple LP swarms of data allows a critical evaluation of the accuracy of the method which further refines current understanding of the relationship between seismic signals and volcanic eruptions. It is hoped that such analysis will assist the development of real time forecasting models.
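The inverse material failure law referred to above predicts that the inverse event rate decreases linearly with time, reaching zero at the failure time. A sketch with a synthetic accelerating swarm; the rates and the failure time are invented:

```python
import numpy as np

# Synthetic accelerating LP event rates (events/hour) approaching
# failure at an assumed time t_f (Voight's law with exponent alpha = 2).
t_f = 100.0                          # assumed failure time (hours)
t = np.arange(0.0, 90.0, 5.0)        # observation times before failure
rate = 1.0 / (0.02 * (t_f - t))      # hyperbolic acceleration

# Inverse-rate method: 1/rate is linear in time and hits zero at t_f,
# so a straight-line fit extrapolates to the forecast failure time.
inv = 1.0 / rate
slope, intercept = np.polyfit(t, inv, 1)
t_forecast = -intercept / slope      # zero crossing of the fitted line
print(round(t_forecast, 1))          # -> 100.0
```

On real swarm data the inverse rate is noisy, which is one reason the abstract stresses critically evaluating the method's accuracy across multiple swarms.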
NASA Astrophysics Data System (ADS)
Schiltz, Kelsey Kristine
Steam-assisted gravity drainage (SAGD) is an in situ heavy oil recovery method involving the injection of steam in horizontal wells. Time-lapse seismic analysis over a SAGD project in the Athabasca oil sands deposit of Alberta reveals that the SAGD steam chamber has not developed uniformly. Core data confirm the presence of low permeability shale bodies within the reservoir. These shales can act as barriers and baffles to steam and limit production by prohibiting steam from accessing the full extent of the reservoir. Seismic data can be used to identify these shale breaks prior to siting new SAGD well pairs in order to optimize field development. To identify shale breaks in the study area, three types of seismic inversion and a probabilistic neural network prediction were performed. The predictive value of each result was evaluated by comparing the position of interpreted shales with the boundaries of the steam chamber determined through time-lapse analysis. The P-impedance result from post-stack inversion did not contain enough detail to be able to predict the vertical boundaries of the steam chamber but did show some predictive value in a spatial sense. P-impedance from pre-stack inversion exhibited some meaningful correlations with the steam chamber but was misleading in many crucial areas, particularly the lower reservoir. Density estimated through the application of a probabilistic neural network (PNN) trained using both PP and PS attributes identified shales most accurately. The interpreted shales from this result exhibit a strong relationship with the boundaries of the steam chamber, leading to the conclusion that the PNN method can be used to make predictions about steam chamber growth. In this study, reservoir characterization incorporating multicomponent seismic data demonstrated a high predictive value and could be useful in evaluating future well placement.
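The study's probabilistic neural network is not specified in detail in the abstract; a closely related kernel-regression formulation (in the spirit of a PNN/GRNN) can be sketched as follows, with invented attribute-to-density data:

```python
import numpy as np

def grnn_predict(X_train, y_train, X_new, sigma=0.5):
    """Gaussian-kernel regression in the spirit of a PNN/GRNN: every
    training sample casts a distance-weighted vote for its target value."""
    d2 = ((X_new[:, None, :] - X_train[None, :, :]) ** 2).sum(-1)
    w = np.exp(-d2 / (2.0 * sigma ** 2))
    return (w @ y_train) / w.sum(axis=1)

# Toy seismic attributes (standing in for PP and PS amplitudes) with an
# assumed linear link to density (g/cc); the relationship is invented.
rng = np.random.default_rng(1)
X = rng.uniform(0.0, 1.0, size=(2000, 2))
y = 2.0 + 0.5 * X[:, 0] - 0.3 * X[:, 1]
pred = grnn_predict(X, y, np.array([[0.5, 0.5]]))
print(pred)   # near 2.0 + 0.25 - 0.15 = 2.10
```

The kernel width sigma plays the role of the PNN smoothing parameter: small values memorize the training attributes, large values average broadly across them.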
Integrated geological-geophysical models of unstable slopes in seismogenic areas in NW and SE Europe
NASA Astrophysics Data System (ADS)
Mreyen, Anne-Sophie; Micu, Mihai; Onaca, Alexandru; Demoulin, Alain; Havenith, Hans-Balder
2017-04-01
We will present a series of new integrated 3D models of landslide sites that were investigated in distinctive seismotectonic and climatic contexts: (1) along the Hockai Fault Zone in Belgium, where the 1692 Verviers Earthquake (M 6 - 6.5) is the most prominent earthquake to have occurred on that fault zone, and (2) in the seismic region of Vrancea, Romania, where four earthquakes with Mw > 7.4 have been recorded during the last two centuries. Both sites present deep-seated failures located in more or less seismically active areas. In such areas, slope stability analyses have to take into account the possible seismic contributions to ground failure. Our investigation methods had to be adapted to capture the deep structure as well as the physico-mechanical characteristics that influence the dynamic behaviour of the landslide body. Field surveys included electrical resistivity tomography profiles, seismic refraction profiles (analysed in terms of both seismic P-wave tomography and surface waves), and ambient noise measurements to determine soil resonance frequencies through H/V analysis, complemented by geological and geomorphic mapping. The H/V method, in particular, is increasingly used for landslide investigations and other sites marked by topographic relief, in addition to its more classical applications on flat sites. The results of the data interpretation were compiled in 3D geological-geophysical models supported by high-resolution remote sensing data of the ground surface. Data and results were not only analysed in parallel or successively; to ensure full integration of all inputs and outputs, data fusion and geostatistical techniques were applied to establish closer links between them. Inside the 3D models, material boundaries were defined in terms of surfaces and volumes. These models were used as inputs for 2D dynamic numerical simulations performed with the UDEC (Itasca) software.
For some sites, a full back-analysis was carried out to assess the possibility of seismic triggering of the landslides.
Response in thermal neutrons intensity on the activation of seismic processes
NASA Astrophysics Data System (ADS)
Antonova, Valentina; Chubenko, Alexandr; Kryukov, Sergey; Lutsenko, Vadim
2017-04-01
Results of a study of thermal and high-energy neutron intensity during periods of heightened seismic activity are presented. The installations are located close to a crustal fault at the high-altitude cosmic-ray station (3340 m above sea level, 20 km from Almaty) in the mountains of the Northern Tien-Shan. In the absence of seismic activity, high correlation and similar responses to changes in space and geophysical conditions are observed between the data of the thermal neutron detectors and the data of the standard neutron monitor, which records the intensity of high-energy particles. These results confirm the genetic connection of thermal neutrons at the Earth's surface with high-energy neutrons of galactic origin and suggest the same sources of disturbances of their flux. However, observation and analysis of the experimental data during periods of seismic activation showed frequent breakdown of the correlation between the intensities of thermal and high-energy neutrons, and an absence of similarity between their variations during these periods. We suppose that the cause of this phenomenon is an additional thermal neutron flux of lithospheric origin that appears under these conditions. A method for separating thermal-neutron intensity variations of lithospheric origin from variations generated in the atmosphere is proposed. We used this method to analyse variations of thermal neutron intensity during earthquakes (with intensity ≥ 3b) in the vicinity of Almaty in 2006-2015. An increase in the lithospheric thermal neutron flux during seismic activation was observed for 60% of events; however, before the earthquake such an increase is observed for only 25-30% of events. The amplitude of the additional thermal neutron flux from the Earth's crust is shown to be 5-7% of the background level.
A new scheme for velocity analysis and imaging of diffractions
NASA Astrophysics Data System (ADS)
Lin, Peng; Peng, Suping; Zhao, Jingtao; Cui, Xiaoqin; Du, Wenfeng
2018-06-01
Seismic diffractions are the responses of small-scale inhomogeneities or discontinuous geological features, which play a vital role in the exploitation and development of oil and gas reservoirs. However, diffractions are generally ignored and treated as interference noise in conventional data processing. In this paper, a new scheme for velocity analysis and imaging of seismic diffractions is proposed. The scheme comprises two steps. First, the plane-wave destruction method is used to separate diffractions from specular reflections in the prestack domain. Second, in order to accurately estimate the migration velocity of the diffractions, time-domain dip-angle gathers are derived from a Kirchhoff-based angle-domain prestack time migration of the separated diffractions. Diffraction events appear flat in the dip-angle gathers when imaged above the diffraction point with the correct migration velocity. The selected migration velocity then yields the desired prestack imaging of diffractions. Synthetic and field examples are used to test the validity of the new scheme. The diffraction imaging results indicate that the proposed scheme for velocity analysis and imaging of diffractions can provide more detailed information about small-scale geologic features for seismic interpretation.
Moment Tensor Analysis of Shallow Sources
NASA Astrophysics Data System (ADS)
Chiang, A.; Dreger, D. S.; Ford, S. R.; Walter, W. R.; Yoo, S. H.
2015-12-01
A potential issue for moment tensor inversion of shallow seismic sources is that some moment tensor components have vanishing amplitudes at the free surface, which can bias the moment tensor solution. The effects of the free surface on the stability of the moment tensor method become important as we continue to investigate and improve the capabilities of regional full moment tensor inversion for source-type identification and discrimination. Understanding these free-surface effects is important for discriminating shallow explosive sources for nuclear monitoring purposes; it may also matter in natural systems with shallow seismicity, such as volcanoes and geothermal systems. In this study, we apply the moment-tensor-based discrimination method to the HUMMING ALBATROSS quarry blasts. These shallow chemical explosions, at approximately 10 m depth and recorded at up to several kilometers distance, represent a rather severe source-station geometry in terms of vanishing-traction issues. We show that the method is capable of recovering a predominantly explosive source mechanism, and that the combined waveform and first-motion method enables the unique discrimination of these events. Recovering the correct yield using seismic moment estimates from moment tensor inversion remains challenging, but we can begin to put error bounds on our moment estimates using the NSS technique.
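A common first step in this kind of source-type discrimination is splitting the moment tensor into isotropic and deviatoric parts; a large isotropic fraction points to an explosion-like source. The tensor below is invented for illustration and does not come from the HUMMING ALBATROSS data:

```python
import numpy as np

# Invented moment tensor (N*m) dominated by its volumetric part.
M = 1e13 * np.array([[2.0, 0.1, 0.0],
                     [0.1, 1.9, 0.2],
                     [0.0, 0.2, 2.1]])

iso = np.trace(M) / 3.0            # isotropic (explosive) component
M_dev = M - iso * np.eye(3)        # deviatoric remainder, trace-free

# Fraction of the Frobenius norm carried by the isotropic part:
# values near 1 indicate a predominantly explosive mechanism.
frac_iso = abs(iso) * np.sqrt(3.0) / np.linalg.norm(M)
print(round(frac_iso, 2))          # close to 1 for this tensor
```

Full source-type analysis uses more refined decompositions and plots (e.g. on a source-type diagram), but the isotropic/deviatoric split above is the basic building block.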
NASA Astrophysics Data System (ADS)
Bonnet, José; Fradet, Thibault; Traversa, Paola; Tuleau-Malot, Christine; Reynaud-Bouret, Patricia; Laloe, Thomas; Manchuel, Kevin
2014-05-01
In metropolitan France, deformation rates are slow, implying low to moderate seismic activity. Earthquakes observed during the instrumental period (since 1962), and the associated catalogs, therefore cannot be representative of the seismic cycle of the French metropolitan territory. In such a context, seismic hazard studies must consider historical seismic data in order to extend the observation period and better represent the seismogenic behavior of geological structures. The French macroseismic database SisFrance is jointly developed by EDF (Electricité de France), BRGM (Bureau de Recherche Géologique et Minière) and IRSN (Institut de Radioprotection et Sureté Nucléaire). It contains more than 6,000 events inventoried between 217 BC and 2007 and more than 100,000 macroseismic observations, and is the reference macroseismic database for metropolitan France. The aim of this study is to determine, over the whole catalog, the completeness periods for epicentral intensity (Iepc) classes ≥ IV. Two methods have been used: 1) the method of Albarello et al. [2001], adapted to best suit the French catalog, and 2) a mathematical method based on change-point estimation, proposed by Muggeo et al. [2003], adapted to the analysis of seismic datasets. After a brief theoretical description, both methods are tested and validated using synthetic catalogs before being applied to the French catalog. The results show that the completeness periods estimated using these two methods are coherent with each other for events with Iepc ≥ IV (1876 using the Albarello et al. [2001] method and 1872 using the Muggeo et al. [2003] method) and events with Iepc ≥ V (1852 using the Albarello et al. [2001] method and 1855 using the Muggeo et al. [2003] method).
Larger differences in the estimated completeness periods appear for events with Iepc ≥ VI (around 30 years difference) and events with Iepc ≥ VII (around 50 years difference). These could be explained (1) by the differences in the way each method approaches the data: the Muggeo et al. [2003] method estimates all change points within the data series, whereas the method of Albarello et al. [2001] focuses on the last one; and (2) by the more limited number of data for these epicentral intensity classes (2056 events with Iepc ≥ IV and 1252 events with Iepc ≥ V vs. 486 events with Iepc ≥ VI and 199 events with Iepc ≥ VII). Results obtained for epicentral intensity classes greater than VIII are considered unreliable due to the small number of existing data (around 30 events). The completeness periods determined in this study are discussed in the light of their historical context, and in particular of the evolution of the information available from historical archives since the 17th century.
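A change-point approach along the lines of Muggeo et al. [2003] can be illustrated on a synthetic catalogue: annual counts are under-reported before an assumed completeness year and roughly stationary afterwards. This is a simple single change-point least-squares sketch, not the authors' exact method:

```python
import numpy as np

# Synthetic yearly event counts: under-reported before the (assumed)
# completeness year 1850, roughly stationary afterwards.
years = np.arange(1700, 2001)
true_rate = np.where(years < 1850, 0.5, 4.0)     # reported events/year
rng = np.random.default_rng(2)
counts = rng.poisson(true_rate)

def change_point(y):
    """Single change point minimising total within-segment variance."""
    best, best_cost = None, np.inf
    for k in range(10, len(y) - 10):
        cost = y[:k].var() * k + y[k:].var() * (len(y) - k)
        if cost < best_cost:
            best, best_cost = k, cost
    return best

k = change_point(counts)
print(years[k])   # near the assumed completeness year 1850
```

The detected year marks where the reporting rate jumps, i.e. the start of the completeness period for that intensity class.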
DOE Office of Scientific and Technical Information (OSTI.GOV)
Coleman, Justin
2015-02-01
Seismic isolation (SI) has the potential to drastically reduce the seismic response of structures, systems, or components (SSCs) and therefore the risk associated with large seismic events (a large seismic event could be defined as the design basis earthquake (DBE) and/or the beyond design basis earthquake (BDBE), depending on the site location). This corresponds to a potential increase in nuclear safety by minimizing the structural response and thus minimizing the risk of material release during large seismic events, which have uncertainty associated with their magnitude and frequency. The national consensus standard, American Society of Civil Engineers (ASCE) Standard 4, Seismic Analysis of Safety Related Nuclear Structures, recently incorporated language and commentary for seismically isolating a large light water reactor or similar large nuclear structure. Some potential benefits of SI are: 1) substantially decoupling the SSC from the earthquake hazard, thus decreasing the risk of material release during large earthquakes, 2) cost savings for the facility and/or equipment, and 3) applicability to both nuclear (current and next generation) and high-hazard non-nuclear facilities. Issue: To date, no one has evaluated how the benefit of seismic risk reduction reduces the cost to construct a nuclear facility. Objective: Use seismic probabilistic risk assessment (SPRA) to evaluate the reduction in seismic risk and estimate the potential cost savings of seismic isolation of a generic nuclear facility. This project would leverage ongoing Idaho National Laboratory (INL) activities that are developing advanced SPRA methods using Nonlinear Soil-Structure Interaction (NLSSI) analysis. Technical Approach: The proposed study is intended to obtain an estimate of the reduction in seismic risk and construction cost that might be achieved by seismically isolating a nuclear facility.
The nuclear facility is a representative pressurized water reactor nuclear power plant (NPP) structure. The study will consider a representative NPP reinforced concrete reactor building and a representative plant safety system, and will leverage existing research and development (R&D) activities at INL. Figure 1 (Project activities) shows the proposed study steps, with the steps in blue representing activities already funded at INL and the steps in purple the activities that would be funded under this proposal. The following results will be documented: 1) a comparison of seismic risk for the non-seismically isolated (non-SI) and seismically isolated (SI) NPP, and 2) an estimate of construction cost savings when implementing SI at the site of the generic NPP.
NASA Astrophysics Data System (ADS)
Yan, Peng; Li, Zhiwei; Li, Fei; Yang, Yuande; Hao, Weifeng; Bao, Feng
2018-03-01
We report a successful application of the horizontal-to-vertical spectral ratio (H/V) method, generally used to investigate subsurface velocity structures of the shallow crust, to estimate Antarctic ice sheet thickness for the first time. Using three-component, five-day-long seismic ambient noise records gathered from more than 60 temporary seismic stations located on the Antarctic ice sheet, the ice thickness measured at each station is of comparable accuracy to the Bedmap2 database. Preliminary analysis revealed that 60 out of 65 seismic stations on the ice sheet yielded clear peak frequencies (f0) related to the ice sheet thickness in the H/V spectrum. Thus, assuming that an isotropic ice layer lies atop a high-velocity bedrock half-space, the ice sheet thickness can be calculated by a simple approximation formula. About half of the calculated ice sheet thicknesses were consistent with the Bedmap2 ice thickness values. To further improve the reliability of the ice thickness measurements, two types of models were built to fit the observed H/V spectrum through non-linear inversion. The two model types represent the isotropic structures of single- and two-layer ice sheets, the latter depicting the non-uniform, layered characteristics of the ice sheet widely distributed in Antarctica. The inversion results suggest that the ice thicknesses derived from the two-layer ice models are in good agreement with the Bedmap2 ice thickness database, with differences within 300 m at almost all stations. Our results support previous findings that the Antarctic ice sheet is stratified. Extensive data processing indicates that the time length of seismic ambient noise records can be shortened to two hours for reliable ice sheet thickness estimation using the H/V method. This study extends the application fields of the H/V method and provides an effective and independent way to measure ice sheet thickness in Antarctica.
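The "simple approximation formula" for a soft layer over a stiff half-space is the quarter-wavelength resonance, f0 = Vs / (4h), so an observed H/V peak frequency gives the layer thickness directly. A sketch with illustrative values not taken from the study:

```python
# Quarter-wavelength resonance of an ice layer over bedrock:
# f0 = Vs / (4 * h)  =>  h = Vs / (4 * f0).
# Both numbers below are assumed for illustration.
v_s_ice = 1950.0           # assumed shear-wave velocity of ice (m/s)
f0 = 0.45                  # hypothetical H/V peak frequency (Hz)

thickness = v_s_ice / (4.0 * f0)
print(round(thickness))    # -> 1083 (metres)
```

Lower peak frequencies map to thicker ice, which is why kilometre-scale Antarctic ice produces H/V peaks well below 1 Hz for plausible ice velocities.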
Parameter estimation in Probabilistic Seismic Hazard Analysis: current problems and some solutions
NASA Astrophysics Data System (ADS)
Vermeulen, Petrus
2017-04-01
A typical Probabilistic Seismic Hazard Analysis (PSHA) comprises identification of seismic source zones, determination of hazard parameters for these zones, selection of an appropriate ground motion prediction equation (GMPE), and integration over probabilities according to the Cornell-McGuire procedure. Determination of hazard parameters often does not receive the attention it deserves, and problems therein are therefore often overlooked. Here, many of these problems are identified, and some of them addressed. The parameters that need to be identified are those associated with the frequency-magnitude law, those associated with the earthquake recurrence law in time, and the parameters controlling the GMPE. This study is concerned with the frequency-magnitude law and the temporal distribution of earthquakes, not with GMPEs. The Gutenberg-Richter law is usually adopted for the frequency-magnitude law, and a Poisson process for earthquake recurrence in time. Accordingly, the parameters that need to be determined are the slope parameter of the Gutenberg-Richter frequency-magnitude law, i.e. the b-value; the maximum magnitude mmax at which the Gutenberg-Richter law applies; and the mean recurrence frequency, λ, of earthquakes. If, instead of the Cornell-McGuire procedure, the "Parametric-Historic procedure" is used, these parameters do not have to be known before the PSHA computations; they are estimated directly during the computation. The resulting relation for the frequency of ground motion parameters has a functional form analogous to the frequency-magnitude law, described by a parameter γ (analogous to the b-value of the Gutenberg-Richter law) and the maximum possible ground motion amax (analogous to mmax). Originally, the approach could be applied only to simple GMPEs; recently, however, the method was extended to incorporate more complex forms of GMPEs.
With regard to the parameter mmax, there are numerous methods of estimation, none of which is accepted as the standard one, and much controversy surrounds this parameter. In practice, when estimating the above-mentioned parameters from a seismic catalogue, the magnitude mmin above which the catalogue is complete becomes important. Thus, the parameter mmin must also be estimated in practice. Several methods are discussed in the literature, and no specific method is preferred. Methods usually aim at identifying the point where a frequency-magnitude plot starts to deviate from linearity due to data loss. Parameter estimation is clearly a rich field which deserves much attention and, possibly, standardization of methods. These methods should be sound and efficient, and a query into which methods are to be used - and, for that matter, which ones are not - is in order.
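For the b-value specifically, the standard maximum-likelihood estimator of Aki (1965) can be sketched on a synthetic complete catalogue; all values below are invented:

```python
import numpy as np

def b_value_mle(mags, m_min):
    """Aki (1965) maximum-likelihood b-value for a catalogue complete
    above m_min: b = log10(e) / (mean(m) - m_min)."""
    m = mags[mags >= m_min]
    return np.log10(np.e) / (m.mean() - m_min)

# Gutenberg-Richter magnitudes above m_min = 2.0 with a true b = 1.0:
# the excess m - m_min is exponential with rate b * ln(10).
rng = np.random.default_rng(3)
mags = 2.0 + rng.exponential(1.0 / np.log(10), size=50_000)

b_est = b_value_mle(mags, 2.0)
print(round(b_est, 2))   # close to the true value 1.0
```

In practice the estimator is applied only above the completeness magnitude mmin discussed above, since under-reporting of small events biases the mean magnitude and hence the b-value.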
NASA Astrophysics Data System (ADS)
Mavrouli, Olga; Rana, Sohel; van Westen, Cees; Zhang, Jianqiang
2017-04-01
After the devastating 2015 Gorkha earthquake in Nepal, reconstruction activities were delayed considerably for many political, organizational and technical reasons. Due to the widespread occurrence of co-seismic landslides, and the expectation that these may be aggravated or re-activated in future years during intense monsoon periods, there is a need to evaluate for thousands of sites whether they are suited for reconstruction. This evaluation should take multi-hazards, such as rockfall, landslides, debris flow, and flash floods, into account. The application of indirect knowledge-based, data-driven or physically-based approaches is not suitable for several reasons. Physically-based models generally require a large number of parameters for which data are not available. Data-driven, statistical methods depend on historical information, which is less useful after the occurrence of a major event such as an earthquake; besides, they would lead to unacceptable levels of generalization, as the analysis is based on rather general causal-factor maps. The same holds for indirect knowledge-driven methods. What is required instead is location-specific hazard analysis using a simple method that can be applied by many people at the local level. In this research, a direct scientific method was developed through which local technical staff can easily and quickly assess post-earthquake multi-hazards following a decision-tree approach, using an app on a smartphone or tablet. The method assumes that a central organization, such as the Department of Soil Conservation and Watershed Management, generates spatial information beforehand that is used in the direct assessment at a given location. Pre-earthquake, co-seismic and post-seismic landslide inventories are generated through the interpretation of multi-temporal Google Earth images, using anaglyph methods.
Spatial data, such as Digital Elevation Models, land cover maps, and geological maps, are used in a GIS to generate Terrain Units in a semi-automated manner, which are further edited using stereo-image interpretation. Source areas for rockfall and debris flows are outlined from the factor maps and the historical inventory, and regional-scale empirical runout models are used to define the areas that might be affected. These data are then used in the field in an application that guides the user through the decision tree by asking a number of questions, which can be answered using the existing data and direct field observations. The method was applied in a part of Rasuwa district, which was seriously affected by co-seismic and post-seismic mass movements, leading to the evacuation of a number of villages and the temporary closure of a number of hydropower construction projects.
Seismic Hazard Assessment at the Esfarayen-Bojnurd Railway, North-East of Iran
NASA Astrophysics Data System (ADS)
Haerifard, S.; Jarahi, H.; Pourkermani, M.; Almasian, M.
2018-01-01
The objective of this study is to evaluate the seismic hazard along the Esfarayen-Bojnurd railway using the probabilistic seismic hazard assessment (PSHA) method. The assessment was carried out on a recent data set accounting for both historical seismicity and updated instrumental seismicity. A homogeneous earthquake catalogue was compiled and a seismic source model was proposed. Attenuation equations recently recommended by experts, developed from earthquake data obtained in tectonic environments similar to those in and around the study area, were weighted and used for the hazard assessment within a logic-tree framework. Using a grid of 1.2 × 1.2 km covering the study area, ground acceleration was calculated for every node. Hazard maps of peak ground acceleration at bedrock conditions were produced for return periods of 74, 475 and 2475 years.
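Under the Poisson recurrence assumption standard in PSHA, the return periods quoted above map to exceedance probabilities over a given exposure time; the 475- and 2475-year values correspond to roughly 10% and 2% in 50 years:

```python
import math

# Poisson link between return period T and the probability of at least
# one exceedance during an exposure time t: P = 1 - exp(-t / T).
def exceedance_prob(T_years, t_years):
    return 1.0 - math.exp(-t_years / T_years)

# The study's hazard-map return periods, for a 50-year exposure:
for T in (74, 475, 2475):
    print(T, round(exceedance_prob(T, 50.0), 3))
```

This is why 475- and 2475-year maps are the conventional "10% in 50 years" and "2% in 50 years" design levels.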
Earthquake Hazard Analysis Methods: A Review
NASA Astrophysics Data System (ADS)
Sari, A. M.; Fakhrurrozi, A.
2018-02-01
Earthquakes are among the natural disasters with the most significant impact in terms of risk and damage. Countries such as China, Japan, and Indonesia sit on actively moving continental plates and experience earthquakes more frequently than other countries. Several methods of earthquake hazard analysis have been applied, for example analyzing seismic zones and earthquake hazard micro-zonation, using the Neo-Deterministic Seismic Hazard Analysis (N-DSHA) method, and using remote sensing. In application, it is necessary to review the effectiveness of each technique in advance. Considering time efficiency and data accuracy, remote sensing is used as a reference to assess earthquake hazard accurately and quickly, as it requires only limited time and supports sound decision-making shortly after a disaster. Exposed areas and potentially vulnerable areas can be easily analyzed using remote sensing. Technological developments in remote sensing, such as GeoEye-1, provide added value and excellence in its use as a method for assessing earthquake risk and damage. Furthermore, this technique is expected to be considered in designing policies for disaster management and can reduce the risk of natural disasters such as earthquakes in Indonesia.
Centrality in earthquake multiplex networks
NASA Astrophysics Data System (ADS)
Lotfi, Nastaran; Darooneh, Amir Hossein; Rodrigues, Francisco A.
2018-06-01
Seismic time series have been mapped as complex networks, in which a geographical region is divided into square cells that represent the nodes and connections are defined according to the sequence of earthquakes. In this paper, we map a seismic time series to a temporal network, described by a multiplex network, and characterize the evolution of the network structure in terms of the eigenvector centrality measure. We generalize previous works that considered the single-layer representation of earthquake networks. Our results suggest that the multiplex representation captures earthquake activity better than methods based on single-layer networks. We also verify that the regions with the highest seismological activity in Iran and California can be identified from the network centrality analysis. The temporal modeling of seismic data provided here may open new possibilities for a better comprehension of the physics of earthquakes.
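As an illustration of the centrality measure this abstract relies on, the sketch below computes eigenvector centrality for a small single-layer earthquake network by power iteration. The 4-node adjacency matrix is a made-up toy, not data from the paper; a multiplex analysis would apply the same idea per layer and track the centralities over time.

```python
# Minimal sketch: eigenvector centrality via power iteration (pure stdlib).
# The adjacency matrix below is an illustrative assumption, not real data.

def eigenvector_centrality(adj, iterations=200):
    """Power iteration on a symmetric, non-negative adjacency matrix."""
    n = len(adj)
    v = [1.0 / n] * n
    for _ in range(iterations):
        w = [sum(adj[i][j] * v[j] for j in range(n)) for i in range(n)]
        norm = max(abs(x) for x in w) or 1.0
        v = [x / norm for x in w]
    s = sum(v)
    return [x / s for x in v]  # normalize so the scores sum to 1

# Cells are linked when consecutive earthquakes occur in them:
adj = [
    [0, 1, 1, 0],
    [1, 0, 1, 1],
    [1, 1, 0, 0],
    [0, 1, 0, 0],
]
centrality = eigenvector_centrality(adj)
# node 1 has the most connections, hence the highest centrality
most_central = max(range(4), key=lambda i: centrality[i])
```

In a multiplex setting, high-centrality cells flagged this way would correspond to the most seismically active regions the abstract mentions for Iran and California.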
Fault slip rates in the modern New Madrid seismic zone
Mueller; Champion; Guccione; Kelson
1999-11-05
Structural and geomorphic analysis of late Holocene sediments in the Lake County region of the New Madrid seismic zone indicates that they are deformed by fault-related folding above the blind Reelfoot thrust fault. The widths of narrow kink bands exposed in trenches were used to model the Reelfoot scarp as a forelimb on a fault-bend fold; this, coupled with the age of folded sediment, yields a slip rate on the blind thrust of 6.1 +/- 0.7 mm/year for the past 2300 +/- 100 years. An alternative method used structural relief across the scarp and the estimated dip of the underlying blind thrust to calculate a slip rate of 4.8 +/- 0.2 mm/year. Geometric relations suggest that the right lateral slip rate on the New Madrid seismic zone is 1.8 to 2.0 mm/year.
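The second method described above, converting structural relief across the scarp into dip slip on the underlying blind thrust, reduces to simple trigonometry. The sketch below uses illustrative placeholder numbers, not the paper's measured relief, dip, or age.

```python
import math

# Hedged sketch: dip-slip rate from scarp relief, thrust dip, and deposit age.
# relief_m, dip_deg, and age_yr below are illustrative assumptions only.

def dip_slip_rate(relief_m, dip_deg, age_yr):
    """Return the dip-slip rate in mm/yr: slip = relief / sin(dip)."""
    slip_m = relief_m / math.sin(math.radians(dip_deg))
    return slip_m / age_yr * 1000.0  # m/yr -> mm/yr

# 5.5 m of relief on a 30-degree thrust over 2300 yr gives ~4.8 mm/yr
rate = dip_slip_rate(relief_m=5.5, dip_deg=30.0, age_yr=2300.0)
```

Note how sensitive the result is to the assumed dip: halving sin(dip) doubles the inferred slip for the same relief, which is why the two methods in the abstract bracket the rate rather than pin it down.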
Tempo-spatial analysis of Fennoscandian intraplate seismicity
NASA Astrophysics Data System (ADS)
Roberts, Roland; Lund, Björn
2017-04-01
Coupled spatial-temporal patterns of the occurrence of earthquakes in Fennoscandia are analysed using non-parametric methods. The occurrence of larger events is unambiguously and very strongly temporally clustered, with major implications for the assessment of seismic hazard in areas such as Fennoscandia. In addition, there is a clear pattern of geographical migration of activity. Data from the Swedish National Seismic Network and a collated international catalogue are analysed. Results show consistent patterns on different spatial and temporal scales. We are currently investigating these patterns in order to assess the statistical significance of the tempo-spatial patterns, and to what extent they may be consistent with stress transfer mechanisms such as Coulomb stress and pore fluid migration. Indications are that some further mechanism is necessary to explain the data, perhaps related to post-glacial uplift, which reaches up to 1 cm/year.
Fractal and chaotic laws on seismic dissipated energy in an energy system of engineering structures
NASA Astrophysics Data System (ADS)
Cui, Yu-Hong; Nie, Yong-An; Yan, Zong-Da; Wu, Guo-You
1998-09-01
This paper discusses fractal and chaotic laws of engineering structures, that is, the intrinsic properties and laws of the dynamic systems formed by the seismic dissipated energy intensity E_d and the intensity of the seismic dissipated energy moment I_e. Based on the chaotic and fractal character of the E_d and I_e dynamic systems, three kinds of approximate dynamic models are rebuilt: an index autoregressive model, a threshold autoregressive model and a local-approximation autoregressive model. The intrinsic laws, essence and systematic error of the evolutionary behavior of I_e are explained, and its short-term predictability and long-term probabilistic behavior are analyzed. These results may be valuable for earthquake-resistance theory and analysis methods in practical engineering structures.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Utama, Muhammad Reza July, E-mail: muhammad.reza@bmkg.go.id; Indonesian Meteorological, Climatological and Geophysical Agency; Nugraha, Andri Dian
Precise hypocenter locations were determined using the double-difference method around the subduction zone in the Moluccas area, eastern Indonesia. The initial hypocenter locations come from the MCGA data catalogue of 1,945 earthquake events. The double-difference algorithm assumes that if the distance between two earthquake hypocenters is very small compared to the distance from the earthquakes to the recording station, the ray paths of the two events can be considered nearly identical. The results show that initial earthquakes assigned a fixed depth of 10 km were relocated and can be interpreted more reliably in terms of seismicity and geological setting. The relocation of intra-slab earthquakes beneath the Banda Arc is also clearly observed down to a depth of about 400 km. The precise relocated hypocenters will provide invaluable seismicity information for other seismological and tectonic studies, especially for seismic hazard analysis in this region.
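The double-difference principle stated above can be sketched numerically: for two nearby events recorded at a distant station, the differential travel time depends mainly on the inter-event offset. The constant velocity, coordinates, and observed time below are illustrative assumptions, not values from the study.

```python
import math

# Toy double-difference residual in a constant-velocity medium.
V = 6.0  # km/s, assumed uniform velocity (illustrative)

def travel_time(event, station):
    """Straight-ray travel time in a constant-velocity medium."""
    return math.dist(event, station) / V

station = (100.0, 0.0)   # station far from both sources (km)
event_i = (0.0, 0.0)
event_j = (0.5, 0.0)     # only 0.5 km from event_i

obs_dt = 0.08            # hypothetical observed t_i - t_j (s)
calc_dt = travel_time(event_i, station) - travel_time(event_j, station)
# Double-difference residual: the quantity minimized when relocating pairs.
residual = obs_dt - calc_dt
```

Because both rays traverse nearly the same path, the residual is insensitive to velocity-model errors along the common path, which is why relocation of event pairs sharpens the seismicity picture.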
NASA Astrophysics Data System (ADS)
Seong-hwa, Y.; Wee, S.; Kim, J.
2016-12-01
Observed ground motions are composed of three main factors: seismic source, seismic wave attenuation and site amplification. Among these, site amplification is an important factor and should be considered to estimate soil-structure dynamic interaction more reliably. Although various estimation methods have been suggested, this study used the method of Castro et al. (1997) for estimating site amplification. This method has recently been extended to background noise, coda waves and S waves. This study applied the Castro et al. (1997) method to three different types of seismic waves: S-wave energy, background noise, and coda waves. More than 200 ground motions (accelerations) from the East Japan earthquake (March 11th, 2011) series were analysed at seismic stations on Jeju Island (JJU, SGP, HALB, SSP and GOS; Fig. 1), Korea. The results showed that most of the seismic stations gave similar results among the three types of seismic energy. Each station showed its own site amplification characteristics in low, high and specific resonance frequency ranges. Comparing this study with other studies can provide much information about the dynamic amplification characteristics of domestic sites and about site classification.
Seismic Data Analysis through Multi-Class Classification
NASA Astrophysics Data System (ADS)
Anderson, P.; Kappedal, R. D.; Magana-Zook, S. A.
2017-12-01
In this research, we conducted twenty experiments of varying time and frequency bands on 5000 seismic signals with the intent of finding a method to classify signals as either an explosion or an earthquake in an automated fashion. We used a multi-class approach by clustering the data through various techniques. Dimensional reduction was examined through the use of wavelet transforms with the coiflet mother wavelet and various coefficients to explore possible computation-time vs. accuracy trade-offs. Three and four classes were generated from the clustering techniques and examined, with the three-class approach producing the most accurate and realistic results.
NASA Astrophysics Data System (ADS)
Zuccarello, Luciano; Paratore, Mario; La Rocca, Mario; Ferrari, Ferruccio; Messina, Alfio Alex; Galluzzo, Danilo; Contrafatto, Danilo; Rapisarda, Salvatore
2015-04-01
Continuous monitoring of seismic activity is a fundamental task for detecting the most common signals possibly related to volcanic activity, such as volcano-tectonic earthquakes, long-period events, and volcanic tremor. A reliable prediction of the ray path propagated back from the recording site to the source is strongly limited by poor knowledge of the local shallow velocity structure. In volcanic environments, the shallowest few hundred meters of rock are usually characterized by strongly variable mechanical properties, so the propagation of seismic signals through these shallow layers is strongly affected by lateral heterogeneity, attenuation, scattering, and interaction with the free surface. Driven by these motivations, between May and October 2014 we deployed a seismic array in the area called "Pozzo Pitarrone", where two seismic stations of the local monitoring network are installed, one at the surface and one in a borehole at a depth of about 130 meters. The Pitarrone borehole is located in the middle of the northeastern flank, along one of the main intrusion zones of Etna volcano, the so-called NE rift. With the 3D array we recorded seismic signals coming from the summit craters, and also from the nearby seismogenic Pernicana Fault. We used the array data to analyse the dispersion characteristics of ambient noise vibrations and derived one-dimensional (1D) shallow shear-velocity profiles through the inversion of dispersion curves measured by spatial autocorrelation (SPAC) methods. We observed a one-dimensional variation of shear velocity between 430 m/s and 700 m/s down to the investigation depth of about 130 m. An abrupt velocity variation was found at a depth of about 60 m, probably corresponding to the transition between two different layers. Our preliminary results suggest a good correlation between the derived velocity model and the stratigraphic section at Etna.
The analysis of the entire data set will improve our knowledge about the (i) structure of the top layer and its relationship with geology, (ii) analysis of the signal to noise ratio (SNR) of volcanic signals as a function of frequency, (iii) study of seismic ray-path deformation caused by the interaction of the seismic waves with the free surface, (iv) evaluation of the attenuation of the seismic signals correlated with the volcanic activity. Moreover the knowledge of a shallow velocity model could improve the study of the source mechanism of low frequency events (VLP, LP and volcanic tremor), and give a new contribution to the seismic monitoring of Etna volcano through the detection and location of seismic sources by using 3D array techniques.
Analysis of the Seismicity Preceding Large Earthquakes
NASA Astrophysics Data System (ADS)
Stallone, A.; Marzocchi, W.
2016-12-01
The most common earthquake forecasting models assume that the magnitude of the next earthquake is independent of the past. This feature is probably one of the most severe limitations on the capability to forecast large earthquakes. In this work, we investigate this specific aspect empirically, exploring whether spatial-temporal variations in seismicity encode some information on the magnitude of future earthquakes. For this purpose, and to verify the universality of the findings, we consider seismic catalogs covering quite different space-time-magnitude windows, such as the Alto Tiberina Near Fault Observatory (TABOO) catalogue and the California and Japanese seismic catalogs. Our method is inspired by the statistical methodology proposed by Zaliapin (2013) to distinguish triggered and background earthquakes, using nearest-neighbor clustering analysis in a two-dimensional plane defined by rescaled time and space. In particular, we generalize the nearest-neighbor metric to a k-nearest-neighbors clustering analysis that allows us to consider the overall space-time-magnitude distribution of the k earthquakes (k foreshocks) that anticipate one target event (the mainshock); we then analyze the statistical properties of the clusters identified in this rescaled space. In essence, the main goal of this study is to verify whether different classes of mainshock magnitude are characterized by distinctive k-foreshock distributions. The final step is to show how the findings of this work may (or may not) improve the skill of existing earthquake forecasting models.
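The rescaled space-time metric the abstract builds on (after Zaliapin, 2013) can be sketched for a toy catalogue. The b-value, fractal dimension, and the three-event catalogue below are illustrative assumptions; a real analysis would use the full TABOO, California, or Japanese catalogs.

```python
# Hedged sketch of the nearest-neighbor metric eta = dt * r^d * 10^(-b*m),
# used to pair each event with its most likely parent. Constants b and d
# and the toy catalogue are assumptions for illustration only.

def eta(t_child, t_parent, r_km, m_parent, b=1.0, d=1.6):
    """Rescaled space-time distance from a candidate parent to a child."""
    dt = t_child - t_parent          # days
    if dt <= 0:
        return float("inf")          # parents must precede children
    return dt * (r_km ** d) * 10.0 ** (-b * m_parent)

# toy catalogue: (origin time in days, 1D position in km, magnitude)
catalog = [(0.0, 0.0, 5.0), (1.0, 2.0, 2.0), (50.0, 40.0, 3.0)]

def nearest_parent(k):
    """Index of the earlier event minimizing eta for event k."""
    t_k, x_k, _ = catalog[k]
    scores = [(eta(t_k, t_i, abs(x_k - x_i), m_i), i)
              for i, (t_i, x_i, m_i) in enumerate(catalog) if i != k]
    return min(scores)[1]

# event 1 occurs close in space and time to the large event 0,
# so event 0 is its nearest neighbor: a likely triggered pair
parent_of_1 = nearest_parent(1)
```

The k-nearest-neighbors generalization described in the abstract would keep the k smallest eta values per target event instead of only the minimum.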
NASA Astrophysics Data System (ADS)
Schumacher, Florian; Friederich, Wolfgang
Due to increasing computational resources, the development of new numerically demanding methods and software for imaging Earth's interior remains of high interest in Earth sciences. Here, we give a description, from a user's and programmer's perspective, of the highly modular, flexible and extendable software package ASKI (Analysis of Sensitivity and Kernel Inversion), recently developed for iterative scattering-integral-based seismic full waveform inversion. In ASKI, the three fundamental steps of solving the seismic forward problem, computing waveform sensitivity kernels and deriving a model update are solved by independent software programs that interact via file output/input only. Furthermore, the spatial discretizations of the model space used for solving the seismic forward problem and for deriving model updates are kept completely independent. For this reason, ASKI does not contain a specific forward solver but instead provides a general interface to established community wave propagation codes. Moreover, the third fundamental step of deriving a model update can be repeated at relatively low cost, applying different kinds of model regularization or re-selecting/re-weighting the inverted dataset, without the need to re-solve the forward problem or re-compute the kernels. Additionally, ASKI offers the user sensitivity and resolution analysis tools based on the full sensitivity matrix and allows customized workflows to be composed in a consistent computational environment. ASKI is written in modern Fortran and Python, is well documented and is freely available under the terms of the GNU General Public License (http://www.rub.de/aski).
NASA Astrophysics Data System (ADS)
Milojević, Slavka; Stojanovic, Vojislav
2017-04-01
Due to the continuous development of seismic acquisition and processing methods, increasing the signal-to-noise ratio is always a current target. The correct application of the latest software solutions improves processing results and justifies their development. Correct computation and application of static corrections is one of the most important tasks in pre-processing, and this phase is of great importance for further processing steps. Static corrections are applied to seismic data to compensate for the effects of irregular topography, differences between the elevations of source and receiver points relative to the datum, the near-surface low-velocity layer (weathering correction), or any other factors that influence the spatial and temporal position of seismic traces. The refraction statics method is the most common method for computing static corrections. It is successful both in resolving long-period statics problems and in determining statics differences caused by abrupt lateral velocity changes in the near-surface layer. XtremeGeo Flatirons™ is a program whose main purpose is the computation of static corrections through the refraction statics method; it allows the application of the following procedures: picking of first arrivals, geometry checking, multiple methods for statics analysis and modelling, refractor anisotropy analysis, and tomography (Eikonal tomography). The exploration area is located on the southern edge of the Pannonian Plain, in flat terrain with altitudes of 50 to 195 meters. The largest part of the exploration area covers Deliblato Sands, where the geological structure of the terrain and large differences in altitude significantly affect the calculation of static corrections. 
The XtremeGeo Flatirons™ software provides powerful visualization and statistical-analysis tools, which contribute to a significantly more accurate assessment of near-surface geometry and therefore more accurately computed static corrections.
GISMO: A MATLAB toolbox for seismic research, monitoring, & education
NASA Astrophysics Data System (ADS)
Thompson, G.; Reyes, C. G.; Kempler, L. A.
2017-12-01
GISMO is an open-source MATLAB toolbox which provides an object-oriented framework to build workflows and applications that read, process, visualize and write seismic waveform, catalog and instrument response data. GISMO can retrieve data from a variety of sources (e.g. FDSN web services, Earthworm/Winston servers) and data formats (SAC, Seisan, etc.). It can handle waveform data that crosses file boundaries. All this alleviates one of the most time-consuming parts of code development for scientists. GISMO simplifies seismic data analysis by providing a common interface for your data, regardless of its source. Several common plots are built into GISMO, such as record section plots, spectrograms, depth-time sections, event count per unit time, energy release per unit time, etc. Other visualizations include map views and cross-sections of hypocentral data. Several common processing methods are also included, such as an extensive set of tools for correlation analysis. Support is being added to interface GISMO with ObsPy. GISMO encourages community development of an integrated set of codes and accompanying documentation, eliminating the need for seismologists to "reinvent the wheel". By sharing code, the consistency and repeatability of results can be enhanced. GISMO is hosted on GitHub with documentation both within the source code and in the project wiki. GISMO has been used at the University of South Florida and University of Alaska Fairbanks in graduate-level courses including Seismic Data Analysis, Time Series Analysis and Computational Seismology. GISMO has also been tailored to interface with the common seismic monitoring software and data formats used by volcano observatories in the US and elsewhere. As an example, toolbox training was delivered to researchers at INETER (Nicaragua). Applications built on GISMO include IceWeb (e.g. web-based spectrograms), which has been used by Alaska Volcano Observatory since 1998 and became the prototype for the USGS Pensive system.
Mikesell, T. Dylan; Malcolm, Alison E.; Yang, Di; Haney, Matthew M.
2015-01-01
Time-shift estimation between arrivals in two seismic traces before and after a velocity perturbation is a crucial step in many seismic methods. The accuracy of the estimated velocity perturbation location and amplitude depends on this time shift. Windowed cross correlation and trace stretching are two techniques commonly used to estimate local time shifts in seismic signals. In the work presented here, we implement Dynamic Time Warping (DTW) to estimate the warping function – a vector of local time shifts that globally minimizes the misfit between two seismic traces. We illustrate the differences among the three methods using acoustic numerical experiments. We show that DTW is comparable to or better than the other two methods when the velocity perturbation is homogeneous and the signal-to-noise ratio is high. When the signal-to-noise ratio is low, we find that DTW and windowed cross correlation are more accurate than the stretching method. Finally, we show that the DTW algorithm has better time resolution when identifying small differences in the seismic traces for a model with an isolated velocity perturbation. These results impact current methods that utilize not only time shifts between (multiply) scattered waves, but also amplitude and decoherence measurements. DTW is a new tool that may find new applications in seismology and other geophysical methods (e.g., as a waveform inversion misfit function).
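The core DTW idea described above, finding the alignment that globally minimizes the cumulative misfit between two traces, fits in a few lines of dynamic programming. This is a generic textbook sketch with toy lists standing in for seismic traces, not the authors' implementation.

```python
# Minimal pure-Python Dynamic Time Warping sketch: cumulative cost of the
# optimal warping path between two sequences (L1 local misfit assumed).

def dtw_distance(a, b):
    """Cumulative cost of the optimal warping path between a and b."""
    n, m = len(a), len(b)
    INF = float("inf")
    cost = [[INF] * (m + 1) for _ in range(n + 1)]
    cost[0][0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            d = abs(a[i - 1] - b[j - 1])
            cost[i][j] = d + min(cost[i - 1][j],      # stretch a
                                 cost[i][j - 1],      # stretch b
                                 cost[i - 1][j - 1])  # one-to-one match
    return cost[n][m]

trace = [0.0, 1.0, 2.0, 1.0, 0.0]
shifted = [0.0, 0.0, 1.0, 2.0, 1.0]   # same pulse, delayed one sample

# DTW absorbs the time shift; a plain sample-wise L1 misfit does not.
dtw = dtw_distance(trace, shifted)
l1 = sum(abs(x - y) for x, y in zip(trace, shifted))
```

Backtracking through the cost table (not shown) yields the warping function itself, i.e. the vector of local time shifts the abstract refers to.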
NASA Astrophysics Data System (ADS)
Jurado, Maria Jose; Teixido, Teresa; Martin, Elena; Segarra, Miguel; Segura, Carlos
2013-04-01
In the framework of research to develop efficient strategies for investigating rock properties and fluids ahead of tunnel excavations, the seismic interferometry method was applied to analyze data acquired in boreholes instrumented with geophone strings. The results obtained confirmed that seismic interferometry provides improved resolution of petrophysical properties for identifying heterogeneities and geological structures ahead of the excavation. These features are beyond the resolution of other conventional geophysical methods but can cause severe problems in the excavation of tunnels. Geophone strings were used to record different types of seismic noise generated at the tunnel head during excavation with a tunnelling machine and also during the placement of the rings covering the tunnel excavation. In this study we show how tunnel construction activities were characterized as a source of seismic signal and used as the seismic source for generating a 3D reflection seismic survey. The data were recorded in a vertical water-filled borehole with a borehole seismic string at a distance of 60 m from the tunnel trace. A reference pilot signal was obtained from seismograms acquired close to the tunnel face in order to obtain the best signal-to-noise ratio for the interferometry processing (Poletto et al., 2010). The seismic interferometry method (Claerbout, 1968) was successfully applied to image the subsurface geological structure using the seismic wavefield generated by tunnelling (tunnelling machine and construction activities) recorded with geophone strings. The technique was applied by simulating virtual shot records, one per receiver in the borehole, from the transmitted seismic events, and processing the data as a reflection seismic survey. The pseudo-reflective wavefield was obtained by cross-correlation of the transmitted wave data. 
We applied the relationship between the transmission response and the reflection response for a 1D multilayer structure, and then a 3D approach (Wapenaar et al., 2004). As a result of this seismic interferometry experiment, a 3D reflectivity model (frequency and resolution ranges) was obtained. We also proved that the seismic interferometry approach can be applied in asynchronous seismic auscultation. The reflections detected in the virtual seismic sections are in agreement with the geological features encountered during the excavation of the tunnel and also with the petrophysical properties and parameters measured in previous geophysical borehole logging. References: Claerbout, J.F., 1968. Synthesis of a layered medium from its acoustic transmission response. Geophysics, 33, 264-269. Poletto, F., Corubolo, P. and Comeli, P., 2010. Drill-bit seismic interferometry with and without pilot signals. Geophysical Prospecting, 58, 257-265. Wapenaar, K., Thorbecke, J. and Draganov, D., 2004. Relations between reflection and transmission responses of three-dimensional inhomogeneous media. Geophysical Journal International, 156, 179-194.
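The cross-correlation step at the heart of the interferometry workflow described above can be illustrated with a toy example: correlating two receivers' recordings of the same impulsive source peaks at their differential travel time, which is what builds the virtual shot gather. The impulse traces and integer sample lags below are illustrative assumptions.

```python
# Hedged sketch: cross-correlation of two recordings of a common source.
# The peak lag recovers the travel-time difference between the receivers.

def crosscorrelate(x, y):
    """Return (lag, value) pairs of the full cross-correlation of x and y."""
    n = len(x)
    out = []
    for lag in range(-n + 1, n):
        s = sum(x[i] * y[i - lag] for i in range(n) if 0 <= i - lag < n)
        out.append((lag, s))
    return out

# Same impulse recorded at receiver A (sample 3) and receiver B (sample 7):
rec_a = [0.0] * 10
rec_a[3] = 1.0
rec_b = [0.0] * 10
rec_b[7] = 1.0

cc = crosscorrelate(rec_a, rec_b)
best_lag, peak = max(cc, key=lambda p: p[1])
# under this sign convention the peak falls at lag -4: the arrival at B
# is 4 samples later than at A
```

Stacking such correlations over many noise sources (here, the tunnelling machine) is what turns transmitted noise into the pseudo-reflection response.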
DSOD Procedures for Seismic Hazard Analysis
NASA Astrophysics Data System (ADS)
Howard, J. K.; Fraser, W. A.
2005-12-01
DSOD, which has jurisdiction over more than 1200 dams in California, routinely evaluates their dynamic stability using seismic shaking input ranging from simple pseudostatic coefficients to spectrally matched earthquake time histories. Our seismic hazard assessments assume maximum earthquake scenarios of nearest active and conditionally active seismic sources. Multiple earthquake scenarios may be evaluated depending on sensitivity of the design analysis (e.g., to certain spectral amplitudes, duration of shaking). Active sources are defined as those with evidence of movement within the last 35,000 years. Conditionally active sources are those with reasonable expectation of activity, which are treated as active until demonstrated otherwise. The Division's Geology Branch develops seismic hazard estimates using spectral attenuation formulas applicable to California. The formulas were selected, in part, to achieve a site response model similar to the 2000 IBC's for rock, soft rock, and stiff soil sites. The level of dynamic loading used in the stability analysis (50th, 67th, or 84th percentile ground shaking estimates) is determined using a matrix that considers consequence of dam failure and fault slip rate. We account for near-source directivity amplification along such faults by adjusting target response spectra and developing appropriate design earthquakes for analysis of structures sensitive to long-period motion. Based on in-house studies, the orientation of the dam analysis section relative to the fault-normal direction is considered for strike-slip earthquakes, but directivity amplification is assumed in any orientation for dip-slip earthquakes. We do not have probabilistic standards, but we evaluate the probability of our ground shaking estimates using hazard curves constructed from the USGS Interactive De-Aggregation website. Typically, return periods for our design loads exceed 1000 years. Excessive return periods may warrant a lower design load. 
Minimum shaking levels are provided for sites far from active faulting. Our procedures and standards are presented at the DSOD website http://damsafety.water.ca.gov/. We review our methods and tools periodically under the guidance of our Consulting Board for Earthquake Analysis (and expect to make changes pending NGA completion), mindful that frequent procedural changes can interrupt design evaluations.
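The return periods quoted above relate to exceedance probabilities through the standard Poisson assumption, p = 1 - exp(-t/T) for exposure time t and return period T. The sketch below is a generic illustration of that relationship, not DSOD's actual procedure.

```python
import math

# Generic Poisson-model conversions between return period and the
# probability of exceedance during an exposure window.

def exceedance_probability(exposure_yr, return_period_yr):
    """P(at least one exceedance in exposure_yr) = 1 - exp(-t/T)."""
    return 1.0 - math.exp(-exposure_yr / return_period_yr)

def return_period(exposure_yr, probability):
    """Return period T implied by probability p over exposure_yr years."""
    return -exposure_yr / math.log(1.0 - probability)

# the common "10% in 50 years" design level corresponds to ~475 years
t475 = return_period(50.0, 0.10)
# a 1000-year load has roughly a 5% chance of exceedance in 50 years
p1000 = exceedance_probability(50.0, 1000.0)
```

This is why design loads whose return periods exceed 1000 years, as noted above, correspond to fairly small exceedance probabilities over a typical facility lifetime.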
Earthquake supersite project in the Messina Straits area (EQUAMES)
NASA Astrophysics Data System (ADS)
Mattia, Mario; Chiarabba, Claudio; Dell'Acqua, Fabio; Faccenna, Claudio; Lanari, Riccardo; Matteuzzi, Francesco; Neri, Giancarlo; Patanè, Domenico; Polonia, Alina; Prati, Claudio; Tinti, Stefano; Zerbini, Susanna
2015-04-01
A new permanent supersite is going to be proposed to the GEO GSNL (Geohazard Supersites and National Laboratories) for the Messina Straits area (Italy). The justification for this new supersite lies in its geological and geophysical features and in its exposure to strong earthquakes, including in the recent past (1908). The Messina Supersite infrastructure (EQUAMES: EarthQUAkes in the MEssina Straits) will host, and contribute to the collection of, large amounts of data basic to the analysis of seismic hazard and risk in this high-seismic-risk area, including risk from earthquake-related processes such as submarine mass failures and tsunamis. In EQUAMES, data of different types will coexist with models and methods useful for their analysis and interpretation, and with first-level analysis products that can be of interest for different kinds of users. EQUAMES will help all interested scientific and non-scientific parties to find and use data and will increase inter-institutional cooperation by addressing the following main topics in the Messina Straits area: • investigation of the geological and physical processes leading to earthquake preparation and generation; • analysis of seismic shaking at ground level (expected and observed); • combination of seismic hazard with vulnerability and exposure data for risk estimates; • analysis of tsunami generation, propagation and coastal inundation deriving from earthquake occurrence, including through landslides due to instability of subaerial and submarine slopes; • overall risk associated with earthquake activity in the Supersite area, including the different types of cascade effects. Many Italian and international institutions have shown a concrete interest in this project, in which a large variety of geophysical and geological in-situ data will be collected and in which INGV has the leading role with its large infrastructure of seismic, GPS and geochemical permanent stations. 
The groups supporting EQUAMES bring together different areas of expertise, which will allow state-of-the-art analysis and interpretation of the data to be acquired. Finally, the availability of SAR data from different satellites (ERS, Cosmo-SkyMed, Sentinel) can be the key to important improvements in the knowledge of the geodynamics of this area of the Mediterranean Sea.
Seismic data restoration with a fast L1 norm trust region method
NASA Astrophysics Data System (ADS)
Cao, Jingjie; Wang, Yanfei
2014-08-01
Seismic data restoration is a major strategy for providing a reliable wavefield when field data do not satisfy the Shannon sampling theorem. Recovery by sparsity-promoting inversion often yields sparse solutions of seismic data in a transformed domain; however, most methods for sparsity-promoting inversion are line-search methods, which are efficient but inclined to find local solutions. Using a trust-region method, which can provide globally convergent solutions, is a good way to overcome this shortcoming. A trust-region method for sparse inversion has been proposed previously, but its efficiency needs to be improved to suit large-scale computation. In this paper, a new L1-norm trust-region model is proposed for seismic data restoration, and a robust gradient projection method is used to solve the sub-problem. Numerical results on synthetic and field data demonstrate that the proposed trust-region method achieves excellent computation speed and is a viable alternative for large-scale computation.
Exploring the Geological Structure of the Continental Crust.
ERIC Educational Resources Information Center
Oliver, Jack
1983-01-01
Discusses exploration and mapping of the continental basement using the seismic reflection profiling technique as well as drilling methods. Also discusses computer analysis of gravity and magnetic fields. Points out the need for data that can be correlated to surface information. (JM)
Payne, Suzette J.; Coppersmith, Kevin J.; Coppersmith, Ryan; ...
2017-08-23
A key decision for nuclear facilities is evaluating the need for an update of an existing seismic hazard analysis in light of new data and information that has become available since the time that the analysis was completed. We introduce the newly developed risk-informed Seismic Hazard Periodic Review Methodology (referred to as the SHPRM) and present how a Senior Seismic Hazard Analysis Committee (SSHAC) Level 1 probabilistic seismic hazard analysis (PSHA) was performed in an implementation of this new methodology. The SHPRM offers a defensible and documented approach that considers both the changes in seismic hazard and the engineering-based risk information of an existing nuclear facility to assess the need for an update of an existing PSHA. The SHPRM has seven evaluation criteria that are employed at specific analysis, decision, and comparison points and are applied to the seismic design categories established for nuclear facilities in the United States. The SHPRM is implemented using a SSHAC Level 1 study performed for the Idaho National Laboratory, USA. The implementation focuses on the first six of the seven evaluation criteria of the SHPRM, all of which are provided by the SSHAC Level 1 PSHA. Finally, to illustrate outcomes of the SHPRM that do not lead to the need for an update and those that do, example implementations of the SHPRM are performed for nuclear facilities that have target performance goals expressed as a mean annual frequency of unacceptable performance of 1x10^-4, 4x10^-5 and 1x10^-5.
Convolutional neural network for earthquake detection and location
Perol, Thibaut; Gharbi, Michaël; Denolle, Marine
2018-01-01
The recent evolution of induced seismicity in the central United States calls for exhaustive catalogs to improve seismic hazard assessment. Over the last decades, the volume of seismic data has increased exponentially, creating a need for efficient algorithms to reliably detect and locate earthquakes. Today's most elaborate methods scan through the plethora of continuous seismic records, searching for repeating seismic signals. We leverage recent advances in artificial intelligence and present ConvNetQuake, a highly scalable convolutional neural network for earthquake detection and location from a single waveform. We apply our technique to study the induced seismicity in Oklahoma, USA. We detect more than 17 times more earthquakes than previously cataloged by the Oklahoma Geological Survey. Our algorithm is orders of magnitude faster than established methods. PMID:29487899
NASA Astrophysics Data System (ADS)
Peterson, C. D.; Behl, R. J.; Nicholson, C.; Lisiecki, L. E.; Sorlien, C. C.
2009-12-01
High-resolution seismic reflection records and well logs from the Santa Barbara Channel suggest that large parts of the Pleistocene succession record climate variability on orbital to sub-orbital scales with remarkable sensitivity, much like the well-studied sediments of the last glacial cycle (ODP Site 893). Spectral analysis of seismic reflection data and gamma ray logs from stratigraphically similar Pleistocene sections finds similar cyclic character and shifts through the section. This correlation suggests that acoustic impedance and physical properties of sediment are linked by basin-scale, likely climatically driven, oscillations in lithologic composition and fabric during deposition, and that seismic profiling can provide a method for remote identification and correlation of orbital- and sub-orbital-scale sedimentary cyclicity. Where it crops out along the northern shelf of the central Santa Barbara Channel, the early to middle Pleistocene succession (~1.8-1.2 Ma) is a bathyal hemipelagic mudstone with remarkably rhythmic planar bedding, finely laminated fabric, and well-preserved foraminifera, none of which have been significantly altered or obscured by post-depositional diagenesis or tectonic deformation. Unlike the coarser, turbiditic successions in the central Ventura and Los Angeles basins, this sequence has the potential to record Quaternary global climate change at high resolution. Seismic reflection data (towed chirp) collected on the R/V Melville 2008 cruise (MV08) penetrate tens of meters below the seafloor into a ~1 km-long sequence of south-dipping seismic reflectors. Sampling parallel to the seafloor permits acquisition of consistent signal amplitude for similar reflectors without spreading loss. Based on established age ranges for this section, sedimentation rates may range from 0.4 to 1.4 meters/kyr, suggesting that the most powerful cycles are orbital- to sub-orbital-scale.
Discrete sets of cycles with high power show an abrupt shift to shorter wavelengths midway through the section. Deep in the section, the strongest cycles indicated by spectral analysis are 50 and 16 meters thick, whereas up section, the strongest cycles are 20 and 12 meters thick. Nearby industry boreholes that penetrate a stratigraphically similar, 1500-meter-thick mudstone section provide logs of natural gamma ray intensity at a finer sample interval (15 cm), allowing resolution and analysis of even higher frequency lithologic cycles. The strongest cycle deep in the section is 100 meters thick, and up section, the strongest cycle is 12 meters thick. This abrupt decrease in dominant cycle thickness midway through both the seismic and gamma ray records perhaps indicates a basin-wide shift in sedimentation. With improved chronostratigraphy based on Sr-isotope ratios and biostratigraphy, and comparison with paleoclimate proxy data, we will test whether seismically resolved lithologic oscillations can be reliably interpreted as representing climatically driven Milankovitch cycles. This method may be used to evaluate the age and paleoceanographic potential of sedimentary strata before a coring vessel is deployed.
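The conversion from cycle thickness to temporal period implied in this abstract is simple arithmetic and can be sketched directly. The rates and thicknesses below are the ranges quoted in the text; the Milankovitch band labels in the comments are standard values, not claims from the study.

```python
# Convert stratigraphic cycle thickness to a temporal period given a
# sedimentation rate, as implied by the abstract's rate range (0.4-1.4 m/kyr)
# and cycle thicknesses (12-100 m).

def cycle_period_kyr(thickness_m, sed_rate_m_per_kyr):
    """Period (kyr) of a lithologic cycle of the given thickness."""
    return thickness_m / sed_rate_m_per_kyr

# A 20 m cycle deposited at 0.5 m/kyr spans 40 kyr (near the 41 kyr obliquity
# band); at 1.0 m/kyr the same cycle is a 20 kyr, precession-scale signal.
for rate in (0.4, 1.0, 1.4):
    print(rate, cycle_period_kyr(20.0, rate))
```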
NASA Astrophysics Data System (ADS)
Li, X.; Gao, M.
2017-12-01
The magnitude of an earthquake is one of its basic parameters and is a measure of its scale. It plays a significant role in seismology and earthquake engineering research, particularly in the calculations of the seismic rate and b value in earthquake prediction and seismic hazard analysis. However, several magnitude types currently used in seismology research, such as local magnitude (ML), surface-wave magnitude (MS), and body-wave magnitude (MB), share a common limitation: the magnitude saturation phenomenon. Fortunately, the problem of magnitude saturation was solved by a formula for calculating the moment magnitude (MW) from the seismic moment, which describes the seismic source strength, and the moment magnitude is now very commonly used in seismology research. In China, however, the earthquake scale is primarily based on local and surface-wave magnitudes. In the present work, we studied the empirical relationships of moment magnitude (MW) with local magnitude (ML) and surface-wave magnitude (MS) in the Chinese mainland. The China Earthquake Networks Center (CENC) ML catalog, China Seismograph Network (CSN) MS catalog, ANSS Comprehensive Earthquake Catalog (ComCat), and Global Centroid Moment Tensor (GCMT) catalog are adopted to regress the relationships using the orthogonal regression method. The obtained relationships are as follows: MW = 0.64 + 0.87MS; MW = 1.16 + 0.75ML. Therefore, in China, if the moment magnitude of an earthquake is not reported by any agency in the world, the equations above can be used to convert ML and MS to MW. These relationships are important because they will allow the China earthquake catalogs to be used more effectively for seismic hazard analysis, earthquake prediction, and other seismology research. We also computed the relationships between logMo and ML and between logMo and MS (where Mo is the seismic moment) by linear regression using the Global Centroid Moment Tensor catalog. The obtained relationships are as follows: logMo = 18.21 + 1.05ML; logMo = 17.04 + 1.32MS. These formulas can be used by seismologists to convert the ML/MS of Chinese mainland events into their seismic moments.
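The empirical conversions quoted in this abstract can be wrapped as a small utility. This is a minimal sketch using the coefficients exactly as given in the text; the unit convention for Mo (dyne-cm, following GCMT practice) is an assumption here, not stated in the abstract.

```python
# Empirical magnitude conversions regressed for Chinese mainland earthquakes,
# with coefficients taken directly from the abstract. The seismic moment unit
# (dyne-cm) is an assumption, not stated in the text.

def mw_from_ms(ms):
    """Moment magnitude from surface-wave magnitude: MW = 0.64 + 0.87 MS."""
    return 0.64 + 0.87 * ms

def mw_from_ml(ml):
    """Moment magnitude from local magnitude: MW = 1.16 + 0.75 ML."""
    return 1.16 + 0.75 * ml

def log_mo_from_ml(ml):
    """log10 seismic moment from local magnitude: logMo = 18.21 + 1.05 ML."""
    return 18.21 + 1.05 * ml

def log_mo_from_ms(ms):
    """log10 seismic moment from surface-wave magnitude: logMo = 17.04 + 1.32 MS."""
    return 17.04 + 1.32 * ms

# Example: an MS 6.0 event converts to MW 5.86 and log10(Mo) = 24.96.
print(mw_from_ms(6.0), log_mo_from_ms(6.0))
```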
NASA Astrophysics Data System (ADS)
Tada, T.; Cho, I.; Shinozaki, Y.
2005-12-01
We have invented a Two-Radius (TR) circular array method of microtremor exploration, an algorithm that enables the estimation of phase velocities of Love waves by analyzing horizontal-component records of microtremors obtained with an array of seismic sensors placed around circumferences of two different radii. The data recording may be done either simultaneously around the two circles or in two separate sessions with sensors distributed around each circle. Both Rayleigh and Love waves are present in the horizontal components of microtremors, but in the data processing of our TR method, all information on the Rayleigh waves ends up cancelled out, and information on the Love waves alone is left to be analyzed. Also, unlike the popularly used frequency-wavenumber spectral (F-K) method, our TR method does not resolve individual plane-wave components arriving from different directions and analyze their "vector" phase velocities, but instead directly evaluates their "scalar" phase velocities (phase velocities that contain no information on the arrival direction of waves) through a mathematical procedure that involves azimuthal averaging. The latter feature leads us to expect that, with our TR method, it is possible to conduct phase velocity analysis with smaller numbers of sensors, with higher stability, and up to longer-wavelength ranges than with the F-K method. With a view to investigating the capabilities and limitations of our TR method in practical implementation with real data, we have deployed circular seismic arrays of different sizes at a test site in Japan where the underground structure is well documented through geophysical exploration.
Ten seismic sensors were placed equidistantly around two circumferences, five around each circle, with varying combinations of radii ranging from several meters to several tens of meters, and simultaneous records of microtremors around circles of two different radii were analyzed with our TR method to produce estimates of the phase velocities of Love waves. The estimates were then checked against "model" phase velocities derived from theoretical calculations. We also checked the estimated spectral ratios against the "model" spectral ratios, where by "spectral ratio" we mean an intermediary quantity that is calculated from observed records prior to the estimation of the phase velocity in the data analysis procedure of our TR method. In most cases, the estimated phase velocities coincided well with the model phase velocities within a wavelength range extending roughly from 3r to 6r (r: array radius). It was found that, outside the upper and lower resolution limits of the TR method, the discrepancies between the estimated and model phase velocities, as well as between the estimated and model spectral ratios, were accounted for satisfactorily by theoretical consideration of three factors: the presence of higher surface-wave modes, directional aliasing effects related to the finite number of sensors in the seismic array, and the presence of incoherent noise.
Interpretation of Data from Uphole Refraction Surveys
1980-06-01
Keywords: seismic refraction, seismic refraction method, seismic surveys, subsurface exploration. Although uphole refraction data are affected by the presence of subsurface cavities and large cavities are identifiable, the sensitivity of the method is marginal for practical use in cavity detection. Some cavities large enough to be of engineering significance (e.g., a tunnel of h-m diameter) may be practically undetectable by this method.
Earthquake Rate Models for Evolving Induced Seismicity Hazard in the Central and Eastern US
NASA Astrophysics Data System (ADS)
Llenos, A. L.; Ellsworth, W. L.; Michael, A. J.
2015-12-01
Injection-induced earthquake rates can vary rapidly in space and time, which presents significant challenges to traditional probabilistic seismic hazard assessment methodologies that are based on a time-independent model of mainshock occurrence. To help society cope with rapidly evolving seismicity, the USGS is developing one-year hazard models for areas of induced seismicity in the central and eastern US to forecast the shaking due to all earthquakes, including aftershocks, which are generally omitted from hazard assessments (Petersen et al., 2015). However, the spatial and temporal variability of the earthquake rates makes them difficult to forecast even on time scales as short as one year. An initial approach is to use the previous year's seismicity rate to forecast the next year's rate. However, in places such as northern Oklahoma the rates vary so rapidly over time that a simple linear extrapolation does not accurately forecast the future, even when the variability in the rates is modeled with simulations based on an Epidemic-Type Aftershock Sequence (ETAS) model (Ogata, JASA, 1988) to account for earthquake clustering. Instead of relying on a fixed time period for rate estimation, we explore another way to determine when the earthquake rate should be updated. This approach could also objectively identify new areas where the induced seismicity hazard model should be applied. We will estimate the background seismicity rate by optimizing a single set of ETAS aftershock triggering parameters across the most active induced seismicity zones (Oklahoma, Guy-Greenbrier, the Raton Basin, and the Azle-Dallas-Fort Worth area), with individual background rate parameters in each zone. The full seismicity rate, with uncertainties, can then be estimated using ETAS simulations, and changes in rate can be detected by applying change-point analysis in ETAS-transformed time with methods already developed for Poisson processes.
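The ETAS model cited in this abstract (Ogata, 1988) has a compact conditional-intensity form that can be sketched in a few lines. The parameter values below are illustrative placeholders, not the fitted values from this study.

```python
import math

# Minimal sketch of the ETAS (Epidemic-Type Aftershock Sequence) conditional
# intensity: background rate mu plus Omori-Utsu aftershock triggering from
# each prior event (t_i, m_i). All parameter values are illustrative only.

def etas_rate(t, events, mu=0.5, K=0.02, c=0.01, alpha=1.0, p=1.1, m0=3.0):
    """Earthquake rate (events/day) at time t (days) given a prior catalog."""
    rate = mu
    for t_i, m_i in events:
        if t_i < t:
            rate += K * math.exp(alpha * (m_i - m0)) / (t - t_i + c) ** p
    return rate

# A magnitude 5 event at t = 10 days raises the rate just afterwards, and the
# excess decays roughly as a power law in elapsed time.
catalog = [(10.0, 5.0)]
print(etas_rate(9.9, catalog), etas_rate(10.1, catalog), etas_rate(30.0, catalog))
```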
NASA Astrophysics Data System (ADS)
Jolivet, R.; Duputel, Z.; Simons, M.; Jiang, J.; Riel, B. V.; Moore, A. W.; Owen, S. E.
2017-12-01
Mapping subsurface fault slip during the different phases of the seismic cycle provides a probe of the mechanical properties and the state of stress along these faults. We focus on the northern Chile megathrust, where first-order estimates of interseismic fault locking suggest little to no overlap between regions slipping seismically and those that are dominantly aseismic. However, published distributions of slip, be they during seismic or aseismic phases, rely on unphysical regularization of the inverse problem, thereby cluttering attempts to quantify the degree of overlap between seismic and aseismic slip. Considering all the implications of aseismic slip for our understanding of the nucleation, propagation and arrest of seismic ruptures, it is of utmost importance to quantify our confidence in the current description of fault coupling. Here, we take advantage of 20 years of InSAR observations and more than a decade of GPS measurements to derive probabilistic maps of inter-seismic coupling, as well as co-seismic and post-seismic slip, along the northern Chile subduction megathrust. A wide InSAR velocity map is derived using a novel multi-pixel time series analysis method accounting for orbital errors, atmospheric noise and ground deformation. We use AlTar, a massively parallel Markov chain Monte Carlo algorithm exploiting the acceleration capabilities of Graphics Processing Units, to derive the probability density functions (PDF) of slip. In northern Chile, we find high probabilities for a complete release, by the 2014 Iquique earthquake, of the elastic strain accumulated since the 1877 earthquake, and for the presence of a large, independent, locked asperity left untapped by recent events north of the Mejillones peninsula. We evaluate the probability of overlap between the co-, inter- and post-seismic slip and consider the potential occurrence of slow, aseismic slip events along this portion of the subduction zone.
Evaluation of ground motion scaling methods for analysis of structural systems
O'Donnell, A. P.; Beltsar, O.A.; Kurama, Y.C.; Kalkan, E.; Taflanidis, A.A.
2011-01-01
Ground motion selection and scaling comprises undoubtedly the most important component of any seismic risk assessment study that involves time-history analysis. Ironically, it is also the single parameter with the least guidance provided in current building codes, resulting in the use of mostly subjective choices in design. The relevant research to date has focused primarily on single-degree-of-freedom systems, with only a few studies using multi-degree-of-freedom systems. Furthermore, the previous research is based solely on numerical simulations, with no experimental data available for the validation of the results. By contrast, the research effort described in this paper focuses on an experimental evaluation of selected ground motion scaling methods based on small-scale shake-table experiments of re-configurable linear-elastic and nonlinear multi-story building frame structure models. Ultimately, the experimental results will lead to the development of guidelines and procedures to achieve reliable demand estimates from nonlinear response history analysis in seismic design. In this paper, an overview of this research effort is discussed and preliminary results based on linear-elastic dynamic response are presented. © ASCE 2011.
NASA Astrophysics Data System (ADS)
Shakiba, Sima; Asghari, Omid; Khah, Nasser Keshavarz Faraj
2018-01-01
A combined geostatistical methodology based on Min/Max Autocorrelation Factor (MAF) analysis and the Analytical Hierarchy Process (AHP) is presented to generate a suitable Fault Detection Map (FDM) from seismic attributes. Five seismic attributes derived from a 2D time slice of data from a gas field located in southwest Iran are used: instantaneous amplitude, similarity, energy, frequency, and a Fault Enhancement Filter (FEF). MAF analysis is implemented to reduce the dimension of the input variables, and the AHP method is then applied to the three resulting de-correlated MAF factors as evidential layers. Three Decision Makers (DMs) construct pairwise comparison matrices (PCMs) to determine the weights of the selected evidential layers. Finally, the weights obtained by AHP are multiplied by the normalized values of each alternative (the MAF layers), and the resulting weighted layers are integrated to prepare the final FDM. The results show that the proposed algorithm generates a map more acceptable than each individual attribute and sharpens subsurface discontinuities while enhancing the continuity of detected faults.
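The AHP weighting step this abstract relies on reduces to extracting the principal eigenvector of a pairwise comparison matrix. A minimal sketch follows; the 3x3 matrix is a made-up, perfectly consistent example, not the decision makers' actual judgments.

```python
# Sketch of AHP layer weighting: the weights are the principal eigenvector of
# a positive reciprocal pairwise comparison matrix (PCM), here obtained by
# power iteration. The example PCM below is invented for illustration.

def ahp_weights(pcm, iters=100):
    """Principal eigenvector of a positive matrix via power iteration,
    normalized so the weights sum to 1."""
    n = len(pcm)
    w = [1.0 / n] * n
    for _ in range(iters):
        w = [sum(pcm[i][j] * w[j] for j in range(n)) for i in range(n)]
        s = sum(w)
        w = [x / s for x in w]
    return w

# Layer 1 judged twice as important as layer 2 and four times layer 3:
pcm = [[1.0, 2.0, 4.0],
       [0.5, 1.0, 2.0],
       [0.25, 0.5, 1.0]]
print(ahp_weights(pcm))  # [0.571..., 0.285..., 0.142...], i.e. 4/7, 2/7, 1/7
```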
GIS-based seismic shaking slope vulnerability map of Sicily (Central Mediterranean)
NASA Astrophysics Data System (ADS)
Nigro, Fabrizio; Arisco, Giuseppe; Perricone, Marcella; Renda, Pietro; Favara, Rocco
2010-05-01
Earthquakes often represent very dangerous natural events in terms of human life and economic losses, and their damage effects are amplified by the synchronous occurrence of seismically induced ground-shaking failures in wide regions around the seismogenic source. In fact, the shaking associated with big earthquakes triggers extensive landsliding, sometimes at distances of more than 100 km from the epicenter. The active tectonics and the geomorphic/morphodynamic pattern of the regions affected by earthquakes contribute to the instability tendency of slopes. Earthquake-induced ground-motion loading activates inertial forces within slopes that, combined with the intrinsic pre-existing static forces, reduce slope stability towards failure. Basically, under zero-shear-stress-reversal conditions, a catastrophic failure will take place if the earthquake-induced shear displacement exceeds the critical level at which the undrained shear strength falls to a value equal to the gravitational shear stress. However, seismic stability analyses carried out for various infinite slopes using the existing Newmark-like methods reveal that estimated permanent displacements smaller than the critical value should also be regarded as dangerous for post-earthquake slope safety, in terms of use for human activities. Earthquake-induced (often high-speed) landslides are among the most destructive phenomena related to slope failure during earthquakes. In fact, damage from earthquake-induced landslides (and other ground failures) sometimes exceeds the building/infrastructure damage directly related to ground shaking and fault breaking. For this reason, several earthquake-related slope-failure methods have been developed for the evaluation of the combined hazard represented by seismically induced landslides.
The analysis of engineering seismic risk related to slope instability is often achieved through the evaluation of the permanent displacement potentially induced by a seismic scenario. Such methodologies are founded on the consideration that the conditions of seismic stability and the post-seismic functionality of engineering structures are tightly related to the magnitude of the permanent deformations that an earthquake can induce. Among the existing simplified slope stability models, Newmark's model is often used to derive indications about slope instabilities due to earthquakes. In this way, we have evaluated the seismically induced landslide hazard in Sicily (Central Mediterranean) using a Newmark-like model. To determine the map distribution of seismic ground acceleration for an earthquake scenario, the attenuation law of Sabetta & Pugliese has been used, analyzing seismic recordings from Italy. For evaluating permanent displacements, the correlation of Ambraseys & Menu has been assumed. The seismic shaking slope vulnerability map of Sicily has been produced using a GIS application, also considering the maximum peak ground acceleration distribution (in terms of exceedance probability for a fixed time), slope acclivity, and the cohesion/angle of internal friction of outcropping rocks, allowing the zoning of slopes unstable under seismic forcing.
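The Newmark-type calculation underlying this mapping can be sketched as a rigid sliding-block integration. The synthetic accelerogram and the critical acceleration below are illustrative placeholders, not values from the Sicily study.

```python
import math

# Minimal sketch of a Newmark rigid sliding-block calculation: the block
# starts sliding while ground acceleration exceeds the critical acceleration
# a_c, and the excess velocity is integrated into permanent displacement.
# The accelerogram and a_c below are synthetic placeholders.

def newmark_displacement(acc, dt, a_c):
    """Permanent downslope displacement (m) for a ground acceleration
    time series acc (m/s^2) sampled at interval dt (s)."""
    v = 0.0  # relative sliding velocity (m/s)
    d = 0.0  # accumulated permanent displacement (m)
    for a in acc:
        rel = a - a_c if (a > a_c or v > 0.0) else 0.0
        v = max(v + rel * dt, 0.0)  # sliding cannot reverse uphill
        d += v * dt
    return d

# Synthetic 4 s, 2 Hz sinusoidal pulse with 0.3 g peak; a_c = 0.1 g.
g, dt = 9.81, 0.005
acc = [0.3 * g * math.sin(2 * math.pi * 2 * k * dt) for k in range(800)]
print(newmark_displacement(acc, dt, 0.1 * g))  # permanent slip in metres
```

As expected of the method, raising the critical acceleration reduces the computed displacement, and no displacement accumulates when a_c exceeds the peak ground acceleration.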
Rowe, Charlotte A.; Patton, Howard J.
2015-10-01
Here, we present analyses of the 2D seismic structure beneath Source Physics Experiments (SPE) geophone lines that extended radially at 100 m spacing from 100 to 2000 m from the source borehole. With seismic sources at only one end of the geophone lines, standard refraction profiling methods cannot resolve seismic velocity structures unambiguously. In previous work, we demonstrated overall agreement between body-wave refraction modeling and Rg dispersion curves for the least complex of the five lines. A more detailed inspection supports a 2D reinterpretation of the structure. We obtained Rg phase velocity measurements in both the time and frequency domains, then used iterative adjustment of the initial 1D body-wave model to predict Rg dispersion curves to fit the observed values. Our method applied to the most topographically severe of the geophone lines is supplemented with a 2D ray-tracing approach, whose application to P-wave arrivals supports the Rg analysis. In addition, midline sources will allow us to refine our characterization in future work.
Abadi, Shima H.; Tolstoy, Maya; Wilcock, William S. D.
2017-01-01
To mitigate possible impacts of seismic surveys on baleen whales, it is important to know as much as possible about the presence of whales within the vicinity of seismic operations. This study expands on previous work that analyzes single seismic streamer data to locate nearby calling baleen whales with a grid search method that utilizes the propagation angles and relative arrival times of received signals along the streamer. Three-dimensional seismic reflection surveys use multiple towed hydrophone arrays for imaging the structure beneath the seafloor, providing an opportunity to significantly improve the uncertainty associated with streamer-generated call locations. All seismic surveys utilizing airguns conduct visual marine mammal monitoring concurrent with the experiment, with powering-down of the seismic source if a marine mammal is observed within the exposure zone. This study utilizes data from power-down periods of a seismic experiment conducted with two 8-km-long seismic hydrophone arrays by the R/V Marcus G. Langseth near Alaska in summer 2011. Simulated and experimental data demonstrate that a single streamer can be utilized to resolve left-right ambiguity because the streamer is rarely perfectly straight in a field setting, but dual streamers provide significantly improved locations. Both methods represent a dramatic improvement over the existing Passive Acoustic Monitoring (PAM) system for detecting low-frequency baleen whale calls, with ~60 calls detected utilizing the seismic streamers, none of which were detected using the current R/V Langseth PAM system. Furthermore, this method has the potential to be utilized not only for improving mitigation processes, but also for studying baleen whale behavior within the vicinity of seismic operations. PMID:28199400
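The grid-search idea described in this abstract can be illustrated with a toy example: relative arrival times of a call along a straight hydrophone streamer are matched against predictions for candidate source positions. The geometry, sound speed and source location below are synthetic placeholders; a perfectly straight streamer leaves a left-right (mirror) ambiguity, so the grid is restricted to one side here.

```python
import math

# Toy grid-search localization from relative arrival times along a straight
# hydrophone streamer. All geometry and the source position are synthetic.

c = 1500.0                                            # sound speed in water, m/s
hydrophones = [(i * 100.0, 0.0) for i in range(10)]   # short straight streamer

def rel_arrivals(src):
    """Arrival times at each hydrophone, relative to the first one."""
    t = [math.hypot(src[0] - x, src[1] - y) / c for x, y in hydrophones]
    return [ti - t[0] for ti in t]

def misfit(src):
    """Sum of squared differences between observed and predicted times."""
    return sum((o - p) ** 2 for o, p in zip(obs, rel_arrivals(src)))

obs = rel_arrivals((500.0, 800.0))  # "observed" times from a synthetic call

best = min(((float(x), float(y))
            for x in range(0, 1001, 50)
            for y in range(50, 1001, 50)), key=misfit)
print(best)  # (500.0, 800.0): the synthetic source is recovered
```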
Detection capability of the IMS seismic network based on ambient seismic noise measurements
NASA Astrophysics Data System (ADS)
Gaebler, Peter J.; Ceranna, Lars
2016-04-01
All nuclear explosions (on the Earth's surface, underground, underwater or in the atmosphere) are banned by the Comprehensive Nuclear-Test-Ban Treaty (CTBT). As part of this treaty, a verification regime was put into place to detect, locate and characterize nuclear explosion testing at any time, by anyone and everywhere on the Earth. The International Monitoring System (IMS) plays a key role in the verification regime of the CTBT. Of the different monitoring techniques used in the IMS, the seismic waveform approach is the most effective technology for monitoring underground nuclear testing and for identifying and characterizing potential nuclear events. This study introduces a method of seismic threshold monitoring to assess an upper magnitude limit of a potential seismic event in a given geographical region. The method is based on ambient seismic background noise measurements at the individual IMS seismic stations as well as on global distance correction terms for body-wave magnitudes, which are calculated using the seismic reflectivity method. From our investigations we conclude that a global detection threshold of around mb 4.0 can be achieved using only stations from the primary seismic network; a clear latitudinal dependence of the detection threshold is observed between the northern and southern hemispheres. Including the stations of the auxiliary IMS seismic network results in a slight improvement of global detection capability. However, including wave arrivals from distances greater than 120 degrees, mainly PKP-wave arrivals, leads to a significant improvement in average global detection capability; in particular, it improves the detection threshold in the southern hemisphere. We further investigate the dependence of the detection capability on spatial (latitude and longitude) and temporal parameters, as well as on parameters such as source type and the percentage of operational IMS stations.
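The threshold-monitoring idea in this abstract can be sketched from the standard body-wave magnitude formula mb = log10(A/T) + Q(delta, h): each station's noise level bounds the smallest detectable mb, and a network threshold follows from requiring several detecting stations. The noise amplitudes, SNR and Q values below are invented placeholders, not IMS calibration data.

```python
import math

# Sketch of noise-based threshold monitoring. Station noise amplitudes (nm),
# the required signal-to-noise ratio, and the distance corrections Q are all
# invented placeholders for illustration.

def station_threshold(noise_nm, period_s, q_correction, snr=3.0):
    """Smallest mb detectable at one station: the signal amplitude must
    exceed the ambient noise by the given signal-to-noise ratio."""
    return math.log10(snr * noise_nm / period_s) + q_correction

def network_threshold(station_thresholds, n_required=3):
    """If n_required detecting stations are needed, the n-th smallest
    station threshold bounds the magnitude of any undetected event."""
    return sorted(station_thresholds)[n_required - 1]

# Three quiet and two noisy hypothetical stations at teleseismic distances.
thresholds = [station_threshold(a, 1.0, q)
              for a, q in [(0.5, 3.4), (0.8, 3.5), (1.0, 3.3), (5.0, 3.6), (8.0, 3.4)]]
print(network_threshold(thresholds))
```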
Natural time analysis of critical phenomena: The case of pre-fracture electromagnetic emissions
NASA Astrophysics Data System (ADS)
Potirakis, S. M.; Karadimitrakis, A.; Eftaxias, K.
2013-06-01
Criticality of complex systems reveals itself in various ways. One way to monitor a system at a critical state is to analyze its observable manifestations using the recently introduced method of natural time. Pre-fracture electromagnetic (EM) emissions, in agreement with laboratory experiments, have been consistently detected in the MHz band prior to significant earthquakes. It has been proposed that these emissions stem from the fracture of the heterogeneous materials surrounding the strong entities (asperities) distributed along the fault, preventing the relative slipping. It has also been proposed that the fracture of heterogeneous material could be described in analogy to critical phase transitions in statistical physics. In this work, natural time analysis is for the first time applied to the pre-fracture MHz EM signals, revealing their critical nature. Seismicity and pre-fracture EM emissions should be two sides of the same coin concerning the earthquake generation process. Therefore, we also examine the corresponding foreshock seismic activity as another manifestation of the same complex system at a critical state. We conclude that the foreshock seismicity data present criticality features as well.
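The core quantity in natural time analysis is easy to compute: the k-th of N events is mapped to "natural time" chi_k = k/N and weighted by its normalized energy p_k, and the variance kappa_1 = <chi^2> - <chi>^2 serves as the criticality indicator (approaching 0.070 at criticality in this framework). A minimal sketch follows; the event energies are arbitrary demo values, not data from the study.

```python
# Sketch of the natural time variance kappa_1 used in natural time analysis.
# Event energies below are arbitrary demo values.

def kappa1(energies):
    """Natural time variance kappa_1 for an ordered sequence of event energies."""
    n = len(energies)
    total = sum(energies)
    p = [e / total for e in energies]        # normalized energies p_k
    chi = [(k + 1) / n for k in range(n)]    # natural time chi_k = k/N
    mean = sum(pk * ck for pk, ck in zip(p, chi))
    mean_sq = sum(pk * ck * ck for pk, ck in zip(p, chi))
    return mean_sq - mean ** 2

# For equal energies kappa_1 equals the variance of the uniform grid,
# (N^2 - 1) / (12 N^2), tending to 1/12 ~ 0.0833 for large N.
print(kappa1([1.0] * 100))
```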
Optimization of Regional Geodynamic Models for Mantle Dynamics
NASA Astrophysics Data System (ADS)
Knepley, M.; Isaac, T.; Jadamec, M. A.
2016-12-01
The SubductionGenerator program is used to construct high resolution, 3D regional thermal structures for mantle convection simulations using a variety of data sources, including sea floor ages and geographically referenced 3D slab locations based on seismic observations. The initial bulk temperature field is constructed using a half-space cooling model or plate cooling model, and related smoothing functions based on a diffusion length-scale analysis. In this work, we seek to improve the 3D thermal model and test different model geometries and dynamically driven flow fields using constraints from observed seismic velocities and plate motions. Through a formal adjoint analysis, we construct the primal-dual version of the multi-objective PDE-constrained optimization problem for the plate motions and seismic misfit. We have efficient, scalable preconditioners for both the forward and adjoint problems based upon a block preconditioning strategy, and a simple gradient update is used to improve the control residual. The full optimal control problem is formulated on a nested hierarchy of grids, allowing a nonlinear multigrid method to accelerate the solution.
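The half-space cooling model named in this abstract for building the initial thermal structure has a closed form, T(z, t) = Ts + (Tm - Ts) erf(z / (2 sqrt(kappa t))). A minimal sketch follows; the surface/mantle temperatures and diffusivity are standard textbook values, not those used by SubductionGenerator.

```python
import math

# Half-space cooling model for oceanic lithosphere temperature as a function
# of depth and seafloor age. Parameter values are generic textbook choices.

def halfspace_temp(z_m, age_s, ts=273.0, tm=1623.0, kappa=1e-6):
    """Temperature (K) at depth z (m) beneath seafloor of the given age (s)."""
    return ts + (tm - ts) * math.erf(z_m / (2.0 * math.sqrt(kappa * age_s)))

myr = 3.15e13  # seconds per million years (approximate)

# The plate cools and thickens with age, so a fixed depth is hotter beneath
# younger seafloor than beneath older seafloor.
print(halfspace_temp(50e3, 10 * myr), halfspace_temp(50e3, 50 * myr))
```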