Sample records for seismic screening method

  1. LANL seismic screening method for existing buildings

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Dickson, S.L.; Feller, K.C.; Fritz de la Orta, G.O.

    1997-01-01

    The purpose of the Los Alamos National Laboratory (LANL) Seismic Screening Method is to provide a comprehensive, rational, and inexpensive method for evaluating the relative seismic integrity of a large building inventory using substantial life-safety as the minimum goal. The substantial life-safety goal is deemed to be satisfied if the extent of structural damage or nonstructural component damage does not pose a significant risk to human life. The screening is limited to Performance Category (PC) -0, -1, and -2 buildings and structures. Because of their higher performance objectives, PC-3 and PC-4 buildings automatically fail the LANL Seismic Screening Method and will be subject to a more detailed seismic analysis. The Laboratory has also designated that PC-0, PC-1, and PC-2 unreinforced masonry bearing wall and masonry infill shear wall buildings fail the LANL Seismic Screening Method because of their historically poor seismic performance or complex behavior. These building types are also recommended for a more detailed seismic analysis. The results of the LANL Seismic Screening Method are expressed in terms of separate scores for potential configuration or physical hazards (Phase One) and calculated capacity/demand ratios (Phase Two). This two-phase method allows the user to quickly identify buildings that have adequate seismic characteristics and structural capacity and screen them out from further evaluation. The resulting scores also provide a ranking of those buildings found to be inadequate. Thus, buildings not passing the screening can be rationally prioritized for further evaluation. For the purpose of complying with Executive Order 12941, the buildings failing the LANL Seismic Screening Method are deemed to have seismic deficiencies, and cost estimates for mitigation must be prepared. Mitigation techniques and cost-estimate guidelines are not included in the LANL Seismic Screening Method.
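
    The two-phase pass/fail logic described above lends itself to a compact decision rule. The following is a minimal, hypothetical sketch in Python; the field names, score scales, and cutoff values are invented for illustration and are not the actual LANL screening tables:

```python
def screen_building(performance_category, wall_type, hazard_score, cd_ratio,
                    hazard_cutoff=0.5, cd_cutoff=1.0):
    """Two-phase screen: Phase One scores configuration/physical hazards,
    Phase Two checks a calculated capacity/demand ratio. PC-3/PC-4 buildings
    and unreinforced-masonry types fail automatically, mirroring the LANL
    policy above. All cutoffs here are illustrative placeholders."""
    if performance_category >= 3:
        return False, "PC-3/PC-4: detailed seismic analysis required"
    if wall_type in ("unreinforced_masonry_bearing", "masonry_infill_shear"):
        return False, "building type fails screening: detailed analysis"
    if hazard_score >= hazard_cutoff and cd_ratio >= cd_cutoff:
        return True, "screened out: adequate seismic characteristics"
    # Failing buildings keep their scores so they can be ranked/prioritized.
    return False, "deficient: prioritize for further evaluation"
```

    Buildings that fail could then be sorted by their Phase One and Phase Two scores to produce the prioritized ranking the method calls for.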

  2. A proposal for seismic evaluation index of mid-rise existing RC buildings in Afghanistan

    NASA Astrophysics Data System (ADS)

    Naqi, Ahmad; Saito, Taiki

    2017-10-01

    Mid-rise RC buildings have been rising gradually in Kabul and across Afghanistan since 2001 due to rapid population growth. To protect the safety of residents, the Afghan Structure Code was issued in 2012, but buildings constructed before 2012 fail to conform to its requirements. In Japan, a new set of rules and laws for the seismic design of buildings was issued in 1981, and severe earthquake damage was observed in buildings designed before 1981. Hence, the Standard for Seismic Evaluation of RC Buildings published in 1977 has been widely used in Japan to evaluate the seismic capacity of existing buildings designed before 1981. A similar problem now exists in Afghanistan; therefore, this research examined the seismic capacity of six RC buildings built before 2012 in Kabul by applying the seismic screening procedure of the Japanese standard. Among the three screening procedures of differing rigor, the least detailed procedure, the first level of screening, was applied. The study found an average seismic index (Is-average = 0.21) for the target buildings. The results were then compared with those of the more accurate seismic evaluation procedures of the Capacity Spectrum Method (CSM) and Time History Analysis (THA). The CSM and THA results show poor seismic performance: the target buildings are unable to satisfy the safety design limit (1/100) on the maximum story drift. The target buildings were then improved by installing RC shear walls. The seismic indices of the retrofitted buildings were recalculated, and the maximum story drifts were analyzed by CSM and THA. Comparing the seismic indices with the CSM and THA results shows that buildings with a seismic index larger than 0.4 are able to satisfy the safety design limit. Finally, to screen existing buildings and minimize earthquake damage, a judgment seismic index (Is-judgment = 0.5) for the first level of screening is proposed.
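
    The proposed judgment index amounts to a simple cutoff on the first-level seismic index. A minimal sketch, assuming the per-story Is values have already been computed per the Japanese Standard (the hard part, which this sketch does not attempt):

```python
def first_level_screening(story_indices, judgment_index=0.5):
    """Apply the proposed first-level judgment index: the building is judged
    adequate only if the seismic index Is of its weakest story meets or
    exceeds the cutoff (Is-judgment = 0.5 in the proposal above).
    Returns (passes, governing_Is)."""
    worst = min(story_indices)
    return worst >= judgment_index, worst
```

    With the study's average index of 0.21, the target buildings fail this check; after retrofit with RC shear walls, stories with indices at or above 0.5 pass.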

  3. SeismoDome: Sonic and visual representation of earthquakes and seismic waves in the planetarium

    NASA Astrophysics Data System (ADS)

    Holtzman, B. K.; Candler, J.; Repetto, D.; Pratt, M. J.; Paté, A.; Turk, M.; Gualtieri, L.; Peter, D. B.; Trakinski, V.; Ebel, D. S. S.; Gossmann, J.; Lem, N.

    2017-12-01

    Since 2014, we have produced four "Seismodome" public programs in the Hayden Planetarium at the American Museum of Natural History in New York City. To teach the general public about the dynamics of the Earth, we use a range of seismic data (seismicity catalogs, surface and body wave fields, ambient noise, free oscillations) to generate movies and sounds conveying aspects of the physics of earthquakes and seismic waves. The narrative aims to stretch people's sense of time and scale, starting with 2 billion years of convection, then zooming in on seismicity over days to twenty years at different length scales, then hours of global seismic wave propagation, all compressed to minute-long movies. To optimize the experience in the planetarium, the 180-degree fisheye screen corresponds directly to the surface of the Earth, such that the audience is inside the planet. The program consists of three main elements: (1) Using sonified and animated seismicity catalogs, comparison of several years of earthquakes on different plate boundaries conveys the dramatic differences in their dynamics and the nature of great and "normal" earthquakes. (2) Animations of USArray data (based on "Ground Motion Visualizations" methods from IRIS but in 3D, with added sound) convey the basic observations of seismic wave fields, with which we raise questions about what they tell us about earthquake physics and the Earth's interior structure. (3) Movies of spectral element simulations of global seismic wave fields synchronized with sonified natural data push these questions further, especially when viewed from the interior of the planet. Other elements include (4) sounds of the global ambient noise field coupled to movies of mean ocean wave height (related to the noise source) and (5) three months of free oscillations / normal modes ringing after the Tohoku earthquake. We use and develop a wide range of sonification and animation methods, written mostly in python.
Flat-screen versions of these movies are available on the Seismic Sound Lab (LDEO) website. Here, we will present a subset of the methods and an overview of the aims of the program.

  4. A Technique to Determine the Self-Noise of Seismic Sensors for Performance Screening

    NASA Astrophysics Data System (ADS)

    Rademacher, H.; Hart, D.; Guralp, C.

    2012-04-01

    Seismic noise affects the performance of a seismic sensor and is thereby a limiting factor for the detection threshold of monitoring networks. Among the various sources of noise, the intrinsic self-noise of a seismic sensor is the most difficult to determine, because it is mostly masked by natural and anthropogenic ground noise and is also affected by the noise characteristics of the digitizer. Here we present a new technique to determine the self-noise of a seismic system (digitizer + sensors). It is based on a method introduced by Sleeman et al. (2005) to test the noise performance of digitizers. We infer the self-noise of a triplet of identical sensors by comparing coherent waveforms over a wide spectral band across the set-up. We will show first results from a proof-of-concept study done in a vault near Albuquerque, New Mexico. We will show how various methods of shielding the sensors affect the results of this technique. This method can also be used as a means of quality control during sensor production, because poorly performing sensors can easily be identified.
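
    The three-sensor technique can be sketched with standard cross-spectral estimates. Below is a simplified Python implementation of the Sleeman-style cross-spectral idea for one channel, assuming the three sensors are colocated, identically oriented, and have identical (here, unit) responses; real data would require instrument-response deconvolution first:

```python
import numpy as np
from scipy.signal import csd

def three_channel_self_noise(x1, x2, x3, fs, nperseg=1024):
    """Estimate the self-noise PSD of sensor 1 from three colocated sensors
    recording the same ground motion. The coherent (common) signal cancels in
    p11 - p13 * p21 / p23, leaving the incoherent self-noise of channel 1."""
    f, p11 = csd(x1, x1, fs=fs, nperseg=nperseg)
    _, p13 = csd(x1, x3, fs=fs, nperseg=nperseg)
    _, p21 = csd(x2, x1, fs=fs, nperseg=nperseg)
    _, p23 = csd(x2, x3, fs=fs, nperseg=nperseg)
    return f, np.real(p11 - p13 * p21 / p23)
```

    Applying the same formula with the indices rotated yields the self-noise of the other two channels, which is what makes the triplet configuration useful for production screening.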

  5. Sources of Error and the Statistical Formulation of Ms:mb Seismic Event Screening Analysis

    NASA Astrophysics Data System (ADS)

    Anderson, D. N.; Patton, H. J.; Taylor, S. R.; Bonner, J. L.; Selby, N. D.

    2014-03-01

    The Comprehensive Nuclear-Test-Ban Treaty (CTBT), a global ban on nuclear explosions, is currently in a ratification phase. Under the CTBT, an International Monitoring System (IMS) of seismic, hydroacoustic, infrasonic and radionuclide sensors is operational, and the data from the IMS is analysed by the International Data Centre (IDC). The IDC provides CTBT signatories basic seismic event parameters and a screening analysis indicating whether an event exhibits explosion characteristics (for example, shallow depth). An important component of the screening analysis is a statistical test of the null hypothesis H0: explosion characteristics using empirical measurements of seismic energy (magnitudes). The established magnitude used for event size is the body-wave magnitude (denoted mb) computed from the initial segment of a seismic waveform. IDC screening analysis is applied to events with mb greater than 3.5. The Rayleigh wave magnitude (denoted Ms) is a measure of later arriving surface wave energy. Magnitudes are measurements of seismic energy that include adjustments (physical correction model) for path and distance effects between event and station. Relative to mb, earthquakes generally have a larger Ms magnitude than explosions. This article proposes a hypothesis test (screening analysis) using Ms and mb that expressly accounts for physical correction model inadequacy in the standard error of the test statistic. With this hypothesis test formulation, the 2009 Democratic People's Republic of Korea announced nuclear weapon test fails to reject the null hypothesis H0: explosion characteristics.
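
    The structure of such a test can be illustrated as follows. This is a hedged sketch, not the IDC's calibrated criterion: the slope, intercept, and error terms are placeholders, but the article's key feature, inflating the standard error for physical-correction-model inadequacy, appears explicitly:

```python
from math import sqrt

def ms_mb_screen(ms, mb, slope=1.25, intercept=2.2,
                 sigma_model=0.2, sigma_ms=0.1, sigma_mb=0.1, z=1.645):
    """One-sided test of H0: explosion characteristics. The event is screened
    out (earthquake-like) only if Ms sits significantly above the explosion
    line slope*mb - intercept. All coefficients here are illustrative;
    sigma_model inflates the standard error for physical correction model
    inadequacy, as proposed in the article above."""
    t = ms - (slope * mb - intercept)
    se = sqrt(sigma_model**2 + sigma_ms**2 + (slope * sigma_mb)**2)
    return t / se > z   # True => reject H0, screen out as earthquake-like
```

    An event with an explosion-like Ms deficit produces a negative test statistic and fails to reject H0, which is the behavior the article reports for the 2009 DPRK test.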

  6. Wave Propagation, Scattering and Imaging Using Dual-domain One-way and One-return Propagators

    NASA Astrophysics Data System (ADS)

    Wu, R.-S.

    - Dual-domain one-way propagators implement wave propagation in heterogeneous media in mixed domains (space-wavenumber domains). One-way propagators neglect wave reverberations between heterogeneities but correctly handle the forward multiple-scattering including focusing/defocusing, diffraction, refraction and interference of waves. The algorithm shuttles between space-domain and wavenumber-domain using FFT, and the operations in the two domains are self-adaptive to the complexity of the media. The method makes the best use of the operations in each domain, resulting in efficient and accurate propagators. Due to recent progress, new versions of dual-domain methods overcame some limitations of the classical dual-domain methods (phase-screen or split-step Fourier methods) and can propagate large-angle waves quite accurately in media with strong velocity contrasts. These methods can deliver superior image quality (high resolution/high fidelity) for complex subsurface structures. One-way and one-return (De Wolf approximation) propagators can also be applied to wave-field modeling and simulations for some geophysical problems. In this article, a historical review and theoretical analysis of the Born, Rytov, and De Wolf approximations are given. A review of classical phase-screen or split-step Fourier methods is also given, followed by a summary and analysis of the new dual-domain propagators. The applications of the new propagators to seismic imaging and modeling are reviewed with several examples. For seismic imaging, the advantages and limitations of the traditional Kirchhoff migration and time-space domain finite-difference migration, when applied to 3-D complicated structures, are first analyzed. Then the special features and applications of the new dual-domain methods are presented. Three versions of GSP (generalized screen propagators), the hybrid pseudo-screen, the wide-angle Padé-screen, and the higher-order generalized screen propagators are discussed.
Recent progress also makes it possible to use the dual-domain propagators for modeling elastic reflections for complex structures and long-range propagations of crustal guided waves. Examples of 2-D and 3-D imaging and modeling using GSP methods are given.
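
    The classical split-step Fourier (phase-screen) step that the newer GSP operators generalize can be written compactly. A minimal scalar 2-D sketch, assuming a single angular frequency and modest lateral velocity perturbations; this is the baseline method reviewed above, not the wide-angle Padé or higher-order generalized screens:

```python
import numpy as np

def split_step_depth_extrapolate(u, dx, dz, omega, c_ref, c):
    """One split-step Fourier (phase-screen) depth step for a 2-D one-way
    wavefield u(x) at angular frequency omega: a phase shift in the
    wavenumber domain using the reference velocity c_ref, followed by a
    space-domain phase screen correcting for the lateral slowness
    perturbation 1/c(x) - 1/c_ref."""
    kx = 2 * np.pi * np.fft.fftfreq(u.size, d=dx)
    # Vertical wavenumber; evanescent components decay via the complex sqrt.
    kz = np.sqrt(((omega / c_ref) ** 2 - kx ** 2).astype(complex))
    u_shifted = np.fft.ifft(np.fft.fft(u) * np.exp(1j * kz * dz))
    screen = np.exp(1j * omega * (1.0 / c - 1.0 / c_ref) * dz)
    return u_shifted * screen
```

    Marching this step over successive depth levels builds the one-way downward-continued wavefield used in migration; the dual-domain shuttling (FFT per step) is exactly the space/wavenumber alternation the abstract describes.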

  7. Review on Rapid Seismic Vulnerability Assessment for Bulk of Buildings

    NASA Astrophysics Data System (ADS)

    Nanda, R. P.; Majhi, D. R.

    2013-09-01

    This paper provides a brief overview of rapid visual screening (RVS) procedures available in different countries with a comparison among all the methods. Seismic evaluation guidelines from the USA, Canada, Japan, New Zealand, India, Europe, Italy, and the UNDP, together with other methods, are reviewed from the perspective of their applicability to developing countries. The review shows clearly that some of the RVS procedures are unsuited for potential use in developing countries. It is expected that this comparative assessment of various evaluation schemes will help to identify the most essential components of such a procedure for use in India and other developing countries, one that is not only robust and reliable but also easy to use with available resources. It appears that Federal Emergency Management Agency (FEMA) 154 and New Zealand Draft Code approaches can be suitably combined to develop a transparent, reasonably rigorous and generalized procedure for seismic evaluation of buildings in developing countries.
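
    Most RVS schemes, FEMA 154 included, reduce to a basic structural score plus performance modifiers compared against a cutoff. A toy Python illustration; the numeric values are placeholders, not entries from the FEMA 154 handbook:

```python
def rvs_score(base_score, modifiers, cutoff=2.0):
    """FEMA 154-style rapid visual screening: a basic structural score for
    the building type is adjusted by score modifiers (vertical irregularity,
    soil type, pre-code design, etc.). Buildings whose final score falls
    below the cutoff are flagged for detailed seismic evaluation.
    Returns (final_score, flagged_for_detailed_evaluation)."""
    final = base_score + sum(modifiers)
    return final, final < cutoff
```

    A cutoff of 2.0 is the value commonly used with FEMA 154 score sheets; a combined FEMA 154 / New Zealand procedure of the kind proposed here would differ mainly in how the base scores and modifiers are calibrated for local building stock.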

  8. Threshold magnitudes for a multichannel correlation detector in background seismicity

    DOE PAGES

    Carmichael, Joshua D.; Hartse, Hans

    2016-04-01

    Colocated explosive sources often produce correlated seismic waveforms. Multichannel correlation detectors identify these signals by scanning template waveforms recorded from known reference events against "target" data to find similar waveforms. This screening problem is challenged at thresholds required to monitor smaller explosions, often because non-target signals falsely trigger such detectors. Therefore, it is generally unclear what thresholds will reliably identify a target explosion while screening non-target background seismicity. Here, we estimate threshold magnitudes for hypothetical explosions located at the North Korean nuclear test site over six months of 2010, by processing International Monitoring System (IMS) array data with a multichannel waveform correlation detector. Our method (1) accounts for low amplitude background seismicity that falsely triggers correlation detectors but is unidentifiable with conventional power beams, (2) adapts to diurnally variable noise levels and (3) uses source-receiver reciprocity concepts to estimate thresholds for explosions spatially separated from the template source. Furthermore, we find that underground explosions with body-wave magnitudes mb = 1.66 are detectable at the IMS array USRK with probability 0.99, when using template waveforms consisting only of P-waves, without false alarms. We conservatively find that these thresholds also increase by up to a magnitude unit for sources located 4 km or more from the Feb. 12, 2013 announced nuclear test.
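
    The core of a correlation detector is a sliding normalized cross-correlation against the template. A single-channel Python sketch (the multichannel detector described above would compute this statistic per array element and combine across channels before thresholding; the 0.7 threshold is illustrative):

```python
import numpy as np

def correlation_detector(template, data, threshold=0.7):
    """Slide the template over continuous data and return the sample indices
    whose zero-lag Pearson correlation coefficient exceeds the threshold."""
    m = template.size
    t = (template - template.mean()) / (template.std() * m)
    detections = []
    for i in range(data.size - m + 1):
        w = data[i:i + m]
        s = w.std()
        if s == 0:          # flat window: correlation undefined, skip
            continue
        cc = float(np.dot(t, (w - w.mean()) / s))
        if cc > threshold:
            detections.append(i)
    return detections
```

    Estimating a magnitude threshold, as in the study, then amounts to scaling the template amplitude down until detections at the chosen correlation threshold begin to fail against realistic background noise.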

  9. Preliminary seismic evaluation and ranking of bridges on and over the parkways in Western Kentucky.

    DOT National Transportation Integrated Search

    2008-06-01

    Five parkways in Western Kentucky are located in the region that is greatly influenced by the New Madrid and Wabash Valley Seismic Zones. This report executes a preliminary screening process, also known as the Seismic Rating System, for bridges on an...

  11. Travel-time source-specific station correction improves location accuracy

    NASA Astrophysics Data System (ADS)

    Giuntini, Alessandra; Materni, Valerio; Chiappini, Stefano; Carluccio, Roberto; Console, Rodolfo; Chiappini, Massimo

    2013-04-01

    Accurate earthquake locations are crucial for investigating seismogenic processes, as well as for applications like verifying compliance to the Comprehensive Test Ban Treaty (CTBT). Earthquake location accuracy is related to the degree of knowledge about the 3-D structure of seismic wave velocity in the Earth. It is well known that modeling errors of calculated travel times may have the effect of shifting the computed epicenters far from the real locations by a distance even larger than the size of the statistical error ellipses, regardless of the accuracy in picking seismic phase arrivals. The consequences of large mislocations of seismic events are particularly critical in the context of CTBT verification, where they may determine whether a possible On-Site Inspection (OSI) is triggered. In fact, the Treaty establishes that an OSI area cannot be larger than 1000 km2, and its largest linear dimension cannot exceed 50 km. Moreover, depth accuracy is crucial for the application of the depth event screening criterion. In the present study, we develop a method of source-specific travel-time corrections based on a set of well-located events recorded by dense national seismic networks in seismically active regions. The applications concern seismic sequences recorded in Japan, Iran and Italy. We show that mislocations of the order of 10-20 km affecting the epicenters, as well as larger mislocations in hypocentral depths, calculated from a global seismic network and using the standard IASPEI91 travel times, can be effectively removed by applying source-specific station corrections.
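
    The correction itself is conceptually simple once a set of well-located calibration events is available: average the travel-time residuals per station and subtract them from future observations. A deliberately minimal Python stand-in for the source-specific scheme developed in the study (station codes and values are invented):

```python
def station_corrections(residuals_by_station):
    """Given travel-time residuals (observed minus IASPEI91-predicted, in
    seconds) from well-located calibration events near a given source region,
    return the mean residual per station. Subtracting these source-specific
    corrections from future arrival times removes the systematic part of the
    3-D velocity-model error along each source-station path."""
    return {sta: sum(res) / len(res)
            for sta, res in residuals_by_station.items()}
```

    In practice the corrections are tabulated per source region, which is what makes them "source-specific" rather than global station terms.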

  12. Seismic verification of nuclear plant equipment anchorage: Volume 1, Development of anchorage guidelines: Final report

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Czarnecki, R M

    1987-05-01

    Guidelines have been developed to evaluate the seismic adequacy of the anchorage of various classes of electrical and mechanical equipment in nuclear power plants covered by NRC Unresolved Safety Issue A-46. The guidelines consist of screening tables that give the seismic anchorage capacity as a function of key equipment and anchorage fasteners, inspection checklists for field verification of anchorage adequacy, and provisions for outliers that can be used to further investigate anchorages that cannot be verified in the field. The screening tables are based on an analysis of the anchorage forces developed by common equipment types and on strength criteria to quantify the holding power of anchor bolts and welds. The strength criteria for expansion anchor bolts were developed by collecting and analyzing a large quantity of test data.

  13. Seismic verification of nuclear plant equipment anchorage: Volume 2, Anchorage inspection workbook: Final report

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Czarnecki, R M

    1987-05-01

    Guidelines have been developed to evaluate the seismic adequacy of the anchorage of various classes of electrical and mechanical equipment in nuclear power plants covered by NRC Unresolved Safety Issue A-46. The guidelines consist of screening tables that give the seismic anchorage capacity as a function of key equipment and anchorage fasteners, inspection checklists for field verification of anchorage adequacy, and provisions for outliers that can be used to further investigate anchorages that cannot be verified in the field. The screening tables are based on an analysis of the anchorage forces developed by common equipment types and on strength criteria to quantify the holding power of anchor bolts and welds. The strength criteria for expansion anchor bolts were developed by collecting and analyzing a large quantity of test data.

  14. Experimental Seismic Event-screening Criteria at the Prototype International Data Center

    NASA Astrophysics Data System (ADS)

    Fisk, M. D.; Jepsen, D.; Murphy, J. R.

    - Experimental seismic event-screening capabilities are described, based on the difference of body- and surface-wave magnitudes (denoted as Ms:mb) and event depth. These capabilities have been implemented and tested at the prototype International Data Center (PIDC), based on recommendations by the IDC Technical Experts on Event Screening in June 1998. Screening scores are presented that indicate numerically the degree to which an event meets, or does not meet, the Ms:mb and depth screening criteria. Seismic events are also categorized as onshore, offshore, or mixed, based on their 90% location error ellipses and an onshore/offshore grid with five-minute resolution, although this analysis is not used at this time to screen out events. Results are presented of applications to almost 42,000 events with mb >= 3.5 in the PIDC Standard Event Bulletin (SEB) and to 121 underground nuclear explosions (UNEs) at the U.S. Nevada Test Site (NTS), the Semipalatinsk and Novaya Zemlya test sites in the Former Soviet Union, the Lop Nor test site in China, and the Indian, Pakistan, and French Polynesian test sites. The screening criteria appear to be quite conservative. None of the known UNEs are screened out, while about 41 percent of the presumed earthquakes in the SEB with mb >= 3.5 are screened out. UNEs at the Lop Nor, Indian, and Pakistan test sites on 8 June 1996, 11 May 1998, and 28 May 1998, respectively, have among the lowest Ms:mb scores of all events in the SEB. To assess the validity of the depth screening results, comparisons are presented of SEB depth solutions to those in other bulletins that are presumed to be reliable and independent. Using over 1600 events, the comparisons indicate that the SEB depth confidence intervals are consistent with or shallower than over 99.8 percent of the corresponding depth estimates in the other bulletins.
Concluding remarks are provided regarding the performance of the experimental event-screening criteria, and plans for future improvements, based on recent recommendations by the IDC Technical Experts on Event Screening in May 1999.

  15. Screening guide for rapid assessment of liquefaction hazard at highway bridge sites

    DOT National Transportation Integrated Search

    1998-06-16

    As an aid to seismic hazard assessment, this report provides a "screening guide" for systematic evaluation of liquefaction hazard at bridge sites and a guide for prioritizing sites for further investigation or mitigation. The guide presents a systemat...

  16. Multi-method Near-surface Geophysical Surveys for Site Response and Earthquake Damage Assessments at School Sites in Washington, USA

    NASA Astrophysics Data System (ADS)

    Cakir, R.; Walsh, T. J.; Norman, D. K.

    2017-12-01

    We, the Washington Geological Survey (WGS), have been performing multi-method near-surface geophysical surveys to help assess potential earthquake damage at public schools in Washington. We have been conducting active and passive seismic surveys, estimating shear-wave velocity (Vs) profiles, and determining the NEHRP soil classifications based on Vs30m values at school sites in Washington. The survey methods we have used include 1D and 2D MASW and MAM, P- and S-wave refraction, horizontal-to-vertical spectral ratio (H/V), and 2ST-SPAC, to measure Vs and Vp at shallow (0-70 m) and greater depths at the sites. We have also run Ground Penetrating Radar (GPR) surveys at the sites to check for horizontal subsurface variations along and between the seismic survey lines and at the actual locations of the school buildings. The seismic survey results were then used to calculate Vs30m for determining the NEHRP soil classifications at school sites, and thus the soil amplification effects on ground motions. The resulting shear-wave velocity profiles can also be used for site response and liquefaction potential studies, as well as for efforts to improve the national Vs30m database, essential information for ShakeMap and ground-motion modeling efforts in Washington and the Pacific Northwest. To estimate casualties and nonstructural and structural losses caused by potential earthquakes in the region, we used these seismic site characterization results, together with structural engineering evaluations based on ASCE 41 or FEMA 154 (Rapid Visual Screening), as inputs to FEMA Hazus Advanced Engineering Building Module (AEBM) analysis. Example surveys will be presented for school sites in western and eastern Washington.
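
    The Vs30m value that drives the NEHRP site classification is the time-averaged shear-wave velocity over the top 30 m. A small Python sketch of that standard calculation from a layered Vs profile:

```python
def vs30(thicknesses_m, velocities_mps):
    """Time-averaged shear-wave velocity over the top 30 m of a layered
    profile: Vs30 = 30 / sum(h_i / Vs_i), truncating the profile at 30 m
    and extending the deepest layer if the profile is shallower."""
    depth, travel_time = 0.0, 0.0
    for h, v in zip(thicknesses_m, velocities_mps):
        h = min(h, 30.0 - depth)
        if h <= 0:
            break
        travel_time += h / v
        depth += h
    if depth < 30.0:
        travel_time += (30.0 - depth) / velocities_mps[-1]
    return 30.0 / travel_time
```

    For example, 10 m at 200 m/s over 20 m at 400 m/s gives 30 / (0.05 s + 0.05 s) = 300 m/s, which falls in NEHRP site class D under the usual 180-360 m/s banding.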

  17. Methods for external event screening quantification: Risk Methods Integration and Evaluation Program (RMIEP) methods development

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ravindra, M.K.; Banon, H.

    1992-07-01

    In this report, the scoping quantification procedures for external events in probabilistic risk assessments of nuclear power plants are described. External event analysis in a PRA has three important goals: (1) the analysis should be complete in that all events are considered; (2) by following some selected screening criteria, the more significant events are identified for detailed analysis; (3) the selected events are analyzed in depth by taking into account the unique features of the events: hazard, fragility of structures and equipment, external-event initiated accident sequences, etc. Based on the above goals, external event analysis may be considered as a three-stage process: Stage I: Identification and Initial Screening of External Events; Stage II: Bounding Analysis; Stage III: Detailed Risk Analysis. In the present report, first, a review of published PRAs is given to focus on the significance and treatment of external events in full-scope PRAs. Except for seismic, flooding, fire, and extreme wind events, the contributions of other external events to plant risk have been found to be negligible. Second, scoping methods for external events not covered in detail in the NRC's PRA Procedures Guide are provided. For this purpose, bounding analyses for transportation accidents, extreme winds and tornadoes, aircraft impacts, turbine missiles, and chemical release are described.

  18. Gas Hydrate Characterization from a 3D Seismic Dataset in the Eastern Deepwater Gulf of Mexico

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    McConnell, Dan

    The presence of a gas hydrate petroleum system and seismic attributes derived from 3D seismic data are used for the identification and characterization of gas hydrate deposits in the deepwater eastern Gulf of Mexico. In the central deepwater Gulf of Mexico (GoM), logging-while-drilling (LWD) data provided insight into the amplitude response of gas hydrate saturation in sands, which could be used to characterize complex gas hydrate deposits in other sandy deposits. In this study, a large 3D seismic data set from equivalent and distal Plio-Pleistocene sandy channel deposits in the deepwater eastern Gulf of Mexico is screened for direct hydrocarbon indicators for gas hydrate saturated sands.

  19. Studying Regional Wave Source Time Functions Using the Empirical Green's Function Method: Application to Central Asia

    NASA Astrophysics Data System (ADS)

    Xie, J.; Schaff, D. P.; Chen, Y.; Schult, F.

    2013-12-01

    Reliably estimated source time functions (STFs) from high-frequency regional waveforms, such as Lg, Pn and Pg, provide important input for seismic source studies, explosion detection and discrimination, and minimization of parameter trade-off in attenuation studies. We have searched for candidate pairs of larger and smaller earthquakes in and around China that share the same focal mechanism but differ significantly in magnitude, so that the empirical Green's function (EGF) method can be applied to study the STFs of the larger events. We conducted about a million deconvolutions using waveforms from 925 earthquakes, and screened the deconvolved traces to exclude those from event pairs that involved different mechanisms. Only 2,700 traces passed this screening and could be further analyzed using the EGF method. We have developed a series of codes for speeding up the final EGF analysis by implementing automation and graphical-user-interface procedures. The codes have been fully tested with a subset of screened data and we are currently applying them to all the screened data. We will present a large number of deconvolved STFs retrieved using various phases (Lg, Pn, Sn, Pg and coda), with information on any directivities, any possible dependence of pulse durations on the wave types, on scaling relations for the pulse durations and event sizes, and on the estimated source static stress drops.
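
    The deconvolution at the heart of the EGF method is typically stabilized with a water level. A minimal Python sketch, assuming the two records are aligned and colocated; the 1% water level is an illustrative choice, not the study's parameter:

```python
import numpy as np

def egf_deconvolve(main, egf, water_level=0.01):
    """Frequency-domain deconvolution of a small-event (empirical Green's
    function) record from a colocated larger event, recovering the relative
    source time function of the larger event. A water level floors the EGF
    spectral power to stabilize division at spectral notches."""
    n = len(main)
    M, E = np.fft.rfft(main, n), np.fft.rfft(egf, n)
    power = np.abs(E) ** 2
    floor = water_level * power.max()
    return np.fft.irfft(M * np.conj(E) / np.maximum(power, floor), n)
```

    Screening the deconvolved traces, as the study does, then amounts to rejecting pairs whose recovered pulses are acausal or non-positive, symptoms of differing focal mechanisms.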

  20. Delineation of voided and hydrocarbon-contaminated regions with RDEM and STI

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Whiteley, B.

    1997-10-01

    Undetected voids and cavernous regions at shallow depth are a significant geotechnical and environmental hazard if they are filled or act as conduits for pollutants, particularly for LNAPL and DNAPL contaminants. Such features are often difficult to locate with drilling and conventional geophysical methods including resistivity, electromagnetics, microgravity, seismic and ground penetrating radar when they occur in industrial or urban areas where electrical and vibrational interference can combine with subsurface complexity due to human action to severely degrade geophysical data quality. A new geophysical method called Radiowave Diffraction Electromagnetics (RDEM) has proved successful for rapid screening of difficult sites and for the delineation of buried sinkholes, cavities and hydrocarbon plumes. RDEM operates with a null coupled coil configuration at about 1.6 MHz and is relatively insensitive to electrical interference and surrounding metal objects. It responds to subsurface variations in both conductivity and dielectric constant. Voided and contaminated regions can be more fully detailed when RDEM is combined with Seismic Tomographic Imaging (STI) from follow-up boreholes. Case studies from sites in Australia and South East Asia demonstrate the application of RDEM and STI and the value of combining both methods.

  1. Gas hydrate characterization from a 3D seismic dataset in the deepwater eastern Gulf of Mexico

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    McConnell, Daniel; Haneberg, William C.

    Principal component analysis of spectral decomposition results, combined with amplitude and frequency seismic attributes derived from 3D seismic data, is used for the identification and characterization of gas hydrate deposits in the deepwater eastern Gulf of Mexico. In the central deepwater Gulf of Mexico (GoM), logging-while-drilling (LWD) data provided insight into the amplitude response of gas hydrate saturation in sands, which could be used to characterize complex gas hydrate deposits in other sandy deposits. In this study, a large 3D seismic data set from equivalent and distal Plio-Pleistocene sandy channel deposits in the deepwater eastern Gulf of Mexico is screened for direct hydrocarbon indicators of gas hydrate saturated sands.

  2. Seismic wavefield modeling based on time-domain symplectic and Fourier finite-difference method

    NASA Astrophysics Data System (ADS)

    Fang, Gang; Ba, Jing; Liu, Xin-xin; Zhu, Kun; Liu, Guo-Chang

    2017-06-01

    Seismic wavefield modeling is important for improving seismic data processing and interpretation. Calculations of wavefield propagation are sometimes unstable when forward modeling of seismic waves uses large time steps over long simulation times. Based on the Hamiltonian expression of the acoustic wave equation, we propose a structure-preserving method for seismic wavefield modeling by applying the symplectic finite-difference method on time grids and the Fourier finite-difference method on space grids to solve the acoustic wave equation. The proposed method, called the symplectic Fourier finite-difference (symplectic FFD) method, offers high computational accuracy and improved computational stability. Using the acoustic approximation, we extend the method to anisotropic media. We discuss the calculations in the symplectic FFD method for seismic wavefield modeling of isotropic and anisotropic media, and use the BP salt model and BP TTI model to test the proposed method. The numerical examples suggest that the proposed method can be used in seismic modeling of strongly variable velocities, offering high computational accuracy and low numerical dispersion. The symplectic FFD method overcomes the residual qSV wave in seismic modeling of anisotropic media and maintains the stability of the wavefield propagation for large time steps.
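    The symplectic time-stepping idea behind the method can be illustrated with a minimal one-dimensional sketch. Here plain central differences stand in for the paper's Fourier finite-difference spatial operator, and all grid parameters are illustrative; the sequential (symplectic Euler) ordering of the velocity and pressure updates is what keeps the discrete energy bounded over long runs:

```python
import numpy as np

def symplectic_step(p, v, c, dx, dt):
    """One symplectic-Euler update of the first-order 1D acoustic system
    dp/dt = -c^2 dv/dx, dv/dt = -dp/dx (density normalized out).
    Updating v first and then using the *new* v for p is what makes the
    step structure-preserving: the discrete energy stays bounded."""
    # velocity update from the current pressure gradient (central diff)
    v[1:-1] -= dt * (p[2:] - p[:-2]) / (2.0 * dx)
    # pressure update from the freshly updated velocity gradient
    p[1:-1] -= dt * c[1:-1] ** 2 * (v[2:] - v[:-2]) / (2.0 * dx)
    return p, v
```

    Because the update is structure-preserving, the wavefield amplitude stays bounded over long simulations even though this sketch is only first-order in time; the abstract's method gains further accuracy from higher-order symplectic composition and the spectral space operator.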

  3. Fast principal component analysis for stacking seismic data

    NASA Astrophysics Data System (ADS)

    Wu, Juan; Bai, Min

    2018-04-01

    Stacking seismic data plays an indispensable role in many steps of the seismic data processing and imaging workflow. Optimal stacking of seismic data can help mitigate seismic noise and enhance the principal components to a great extent. Traditional average-based seismic stacking methods cannot obtain optimal performance when the ambient noise is extremely strong. We propose a principal component analysis (PCA) algorithm for stacking seismic data that is insensitive to the noise level. Considering the computational bottleneck of the classic PCA algorithm in processing massive seismic data, we propose an efficient PCA algorithm that makes the method readily applicable for industrial applications. Two numerically designed examples and one real seismic dataset are used to demonstrate the performance of the presented method.
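    The core idea of PCA-based stacking can be sketched as follows: treat the aligned gather as a matrix and take the dominant singular component as the stack instead of the plain average. This is a minimal sketch using a full SVD; the abstract's contribution is a faster PCA variant, which this toy version does not implement:

```python
import numpy as np

def pca_stack(gather):
    """Stack aligned (e.g. NMO-corrected) traces via the first principal
    component of the gather instead of a plain average.

    gather : (n_traces, n_samples) array.
    Returns a stacked trace of length n_samples."""
    # remove per-trace means so the SVD captures waveform structure
    X = gather - gather.mean(axis=1, keepdims=True)
    # first right singular vector = dominant common waveform
    _, s, Vt = np.linalg.svd(X, full_matrices=False)
    stack = s[0] * Vt[0] / np.sqrt(gather.shape[0])
    # resolve the SVD sign ambiguity against the conventional mean stack
    if np.dot(stack, gather.mean(axis=0)) < 0:
        stack = -stack
    return stack
```

    Because the coherent signal is the same across traces while the noise is not, the first singular component concentrates the signal energy even when individual traces are noise-dominated.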

  4. Vesuvius: Earthquakes from 1600 up to the 1631 eruption

    NASA Astrophysics Data System (ADS)

    Guidoboni, Emanuela; Mariotti, Dante

    2011-03-01

    This study examines the seismicity of Vesuvius in the decades leading up to the great eruption of 16 December 1631. The period 1600-1631 is analyzed with the aim of identifying any long-term seismic precursors of the eruption. The historical research focused on contemporary Neapolitan memoirs and a large screening of diplomatic correspondence from the main Italian courts of the age (Florence, Mantua, Parma, Venice and the Vatican). Information was gathered on 18 earthquakes that were felt in Naples between 1601 and 1630. These data were combined with the sequence of 34 shocks that took place in November and December 1631 and preceded the beginning of the eruption. Of the 52 seismic events highlighted overall, all are absent from the parametric catalogues of Italian historical seismicity and 17 are unknown even in the scientific literature. The authors' view is that it makes little sense to talk of one single seismic precursor in this case, given the frequent seismic sequences and tremors noted by contemporaries from January 1616 onwards. The present state of knowledge suggests that seismic activity is a strong, early and persistent warning sign of an eruption of Vesuvius of the same type as that of December 1631.

  5. Methods for external event screening quantification: Risk Methods Integration and Evaluation Program (RMIEP) methods development

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ravindra, M.K.; Banon, H.

    1992-07-01

    In this report, the scoping quantification procedures for external events in probabilistic risk assessments (PRAs) of nuclear power plants are described. External event analysis in a PRA has three important goals: (1) the analysis should be complete in that all events are considered; (2) by following selected screening criteria, the more significant events are identified for detailed analysis; (3) the selected events are analyzed in depth by taking into account the unique features of the events: hazard, fragility of structures and equipment, external-event initiated accident sequences, etc. Based on these goals, external event analysis may be considered a three-stage process: Stage I, identification and initial screening of external events; Stage II, bounding analysis; Stage III, detailed risk analysis. In the present report, first, a review of published PRAs is given to focus on the significance and treatment of external events in full-scope PRAs. Except for seismic, flooding, fire, and extreme wind events, the contributions of other external events to plant risk have been found to be negligible. Second, scoping methods for external events not covered in detail in the NRC's PRA Procedures Guide are provided. For this purpose, bounding analyses for transportation accidents, extreme winds and tornadoes, aircraft impacts, turbine missiles, and chemical releases are described.

  6. Excavatability Assessment of Weathered Sedimentary Rock Mass Using Seismic Velocity Method

    NASA Astrophysics Data System (ADS)

    Bin Mohamad, Edy Tonnizam; Saad, Rosli; Noor, Muhazian Md; Isa, Mohamed Fauzi Bin Md.; Mazlan, Ain Naadia

    2010-12-01

    The seismic refraction method is one of the most popular methods for assessing surface excavation. The main objective of the seismic data acquisition is to delineate the subsurface into velocity profiles, as different velocities can be correlated with different materials. The physical principle used for the determination of excavatability is that seismic waves travel faster through denser material than through less consolidated material. In general, a lower velocity indicates material that is soft, and a higher velocity indicates material that is more difficult to excavate. However, a few researchers have noted that the seismic velocity method alone does not correlate well with the excavatability of the material. In this study, the seismic velocity method was used in Nusajaya, Johor to assess its accuracy against the excavatability of the weathered sedimentary rock mass. A direct ripping run, in which the actual ripping production was monitored, was carried out at a later stage and compared with the ripper manufacturer's recommendations. This paper presents the findings of the seismic velocity tests in the weathered sedimentary area. The reliability of using this method against actual rippability trials is also presented.

  7. Method of migrating seismic records

    DOEpatents

    Ober, Curtis C.; Romero, Louis A.; Ghiglia, Dennis C.

    2000-01-01

    The present invention provides a method of migrating seismic records that retains the information in the seismic records and allows migration with significant reductions in computing cost. The present invention comprises phase encoding seismic records and combining the encoded seismic records before migration. Phase encoding can minimize the effect of unwanted cross terms while still allowing significant reductions in the cost to migrate a number of seismic records.
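    The idea of encoding and combining shot records before migration can be sketched with one simple phase code: a random time shift per shot, which is a linear phase ramp in the frequency domain. This is an illustrative stand-in, not the patented encoding scheme; practical implementations may use more general random phase spectra:

```python
import numpy as np

def encode_and_combine(shots, rng, max_shift=32):
    """Encode each shot record with a random circular time shift (a
    linear-phase code in the frequency domain) and sum the encoded
    records into one composite record for a single migration pass.
    Cross terms between shots average out over different encodings.

    shots : list of (n_receivers, n_samples) arrays.
    Returns (combined_record, shifts)."""
    shifts = rng.integers(0, max_shift, size=len(shots))
    combined = np.zeros_like(shots[0], dtype=float)
    for shot, s in zip(shots, shifts):
        combined += np.roll(shot, s, axis=-1)  # circular shift = phase ramp
    return combined, shifts
```

    Migrating the single combined record then costs roughly as much as migrating one shot, which is the source of the cost reduction the abstract describes; the price is cross-term noise that must be suppressed by the choice of encoding.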

  8. Passive seismic imaging based on seismic interferometry: method and its application to image the structure around the 2013 Mw6.6 Lushan earthquake

    NASA Astrophysics Data System (ADS)

    Gu, N.; Zhang, H.

    2017-12-01

    Seismic imaging of fault zones generally involves seismic velocity tomography using first arrival times or full waveforms from earthquakes occurring around the fault zones. However, in most cases seismic velocity tomography gives only a smooth image of the fault zone structure. To obtain high-resolution structure of fault zones, seismic migration using active seismic data must be used, but it is generally too expensive to conduct active seismic surveys, even in 2D. Here we propose to apply a passive seismic imaging method based on seismic interferometry to image detailed fault zone structures. Seismic interferometry generally refers to the construction of new seismic records for virtual sources and receivers by cross correlating and stacking the seismic records on physical receivers from physical sources. In this study, we utilize seismic waveforms recorded on surface seismic stations for each earthquake to construct a zero-offset seismic record at each earthquake location, as if there were a virtual receiver at each earthquake location. We have applied this method to image the fault zone structure around the 2013 Mw6.6 Lushan earthquake. After the occurrence of the mainshock, a 29-station temporary array was installed to monitor aftershocks. In this study, we first select aftershocks along several vertical cross sections approximately normal to the fault strike. Then we create several zero-offset seismic reflection sections by seismic interferometry with seismic waveforms from aftershocks around each section. Finally we migrate these zero-offset sections to image the seismic structure around the fault zones. From these migration images, we can clearly identify strong reflectors, which correspond to the major reverse fault where the mainshock occurred. This application shows that it is possible to image detailed fault zone structures with passive seismic sources.
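    The cross-correlation step at the heart of seismic interferometry can be sketched as follows. Correlating two recordings of the same source leaves a pulse at the differential travel time, and stacking such correlations over many sources builds the virtual record; the spike waveforms here are purely illustrative:

```python
import numpy as np

def virtual_trace(rec_a, rec_b):
    """Cross-correlate two recordings of the same physical source.
    The peak lag approximates the differential travel time between the
    two receivers, i.e. one sample of the virtual-source response."""
    return np.correlate(rec_a, rec_b, mode="full")

def interferometric_stack(records_a, records_b):
    """Stack correlations over many physical sources to build the
    virtual record between the two receiver positions."""
    traces = [virtual_trace(a, b) for a, b in zip(records_a, records_b)]
    return np.sum(traces, axis=0)
```

    In the Lushan application the roles are reversed relative to this sketch (the virtual receiver sits at the earthquake location), but the correlate-and-stack operation is the same.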

  9. Estimating Local and Near-Regional Velocity and Attenuation Structure from Seismic Noise

    DTIC Science & Technology

    2008-09-30

    Mean-phase velocity-dispersion curves are calculated for the TUCAN seismic array in Costa Rica and Nicaragua from ambient seismic noise using two independent methods: noise cross correlation and beamforming. Measurements between stations of the TUCAN seismic array (Figure 4c) use a method similar to Harmon et al. (2007); variations from Harmon et al. (2007) include removing the ...

  10. Study on the application of ambient vibration tests to evaluate the effectiveness of seismic retrofitting

    NASA Astrophysics Data System (ADS)

    Liang, Li; Takaaki, Ohkubo; Guang-hui, Li

    2018-03-01

    In recent years, earthquakes have occurred frequently, and the seismic performance of existing school buildings has become particularly important. The main method for improving the seismic resistance of existing buildings is reinforcement; however, there are few effective methods to evaluate the effect of reinforcement. Ambient vibration measurement experiments were conducted before and after seismic retrofitting using a wireless measurement system, and the changes in vibration characteristics were compared. The changes in acceleration response spectra, natural periods and vibration modes indicate that the wireless vibration measurement system can be effectively applied to evaluate the effect of seismic retrofitting. The method can evaluate the effect of seismic retrofitting qualitatively, but quantitative evaluation remains difficult at this stage.

  11. Nuclear Explosion Monitoring Advances and Challenges

    NASA Astrophysics Data System (ADS)

    Baker, G. E.

    2015-12-01

    We address the state of the art in areas important to monitoring, current challenges, specific efforts that illustrate approaches addressing shortcomings in capabilities, and additional approaches that might be helpful. The exponential increase in the number of events that must be screened as magnitude thresholds decrease presents one of the greatest challenges. Ongoing efforts to exploit repeat seismic events using waveform correlation, subspace methods, and empirical matched field processing hold as much "game-changing" promise as anything being done, and further efforts to develop and apply such methods efficiently are critical. Greater accuracy of travel time, signal loss, and full waveform predictions is still needed to better locate and discriminate seismic events. Important developments include methods to model velocities using multiple types of data; to model attenuation with better separation of source, path, and site effects; and to model focusing and defocusing of surface waves. Current efforts to model higher-frequency full waveforms are likely to improve source characterization, while more effective estimation of attenuation from ambient noise holds promise for filling in gaps. Censoring in attenuation modeling is a critical problem to address. Quantifying the uncertainty of discriminants is key to their operational use. Efforts to do so for moment tensor (MT) inversion are particularly important, and fundamental progress on the statistics of MT distributions is the most important advance needed in the near term in this area. Source physics is seeing great progress through theoretical, experimental, and simulation studies. The biggest need is to accurately predict the effects of source conditions on seismic generation; uniqueness is the challenge here. Progress will depend on studies that probe what distinguishes mechanisms, rather than whether one of many possible mechanisms is consistent with some set of observations.

  12. Comparison of smoothing methods for the development of a smoothed seismicity model for Alaska and the implications for seismic hazard

    NASA Astrophysics Data System (ADS)

    Moschetti, M. P.; Mueller, C. S.; Boyd, O. S.; Petersen, M. D.

    2013-12-01

    In anticipation of the update of the Alaska seismic hazard maps (ASHMs) by the U.S. Geological Survey, we report progress on the comparison of smoothed seismicity models developed using fixed and adaptive smoothing algorithms, and investigate the sensitivity of seismic hazard to the models. While fault-based sources, such as those for great earthquakes in the Alaska-Aleutian subduction zone and for the ~10 shallow crustal faults within Alaska, dominate the seismic hazard estimates for locations near the sources, smoothed seismicity rates make important contributions to seismic hazard away from fault-based sources and where knowledge of recurrence and magnitude is not sufficient for use in hazard studies. Recent developments in adaptive smoothing methods and statistical tests for evaluating and comparing rate models prompt us to investigate the appropriateness of adaptive smoothing for the ASHMs. We develop smoothed seismicity models for Alaska using fixed and adaptive smoothing methods and compare the resulting models by calculating and evaluating the joint likelihood test. We use the earthquake catalog, and associated completeness levels, developed for the 2007 ASHM to produce fixed-bandwidth-smoothed models with smoothing distances varying from 10 to 100 km, as well as adaptively smoothed models. Adaptive smoothing follows the method of Helmstetter et al. and defines a unique smoothing distance for each earthquake epicenter from the distance to the nth nearest neighbor. The consequence of the adaptive smoothing methods is to reduce smoothing distances where seismicity rates are high, causing locally increased rate estimates, and to increase smoothing distances where seismicity is sparse. We follow guidance from previous studies to optimize the neighbor number (n-value) by comparing model likelihood values, which estimate the likelihood that the observed earthquake epicenters from the recent catalog are derived from the smoothed rate models. We compare likelihood values from all rate models to rank the smoothing methods. We find that adaptively smoothed seismicity models yield better likelihood values than the fixed smoothing models. Holding all other (source and ground motion) models constant, we calculate seismic hazard curves for all points across Alaska on a 0.1 degree grid, using the adaptively smoothed and fixed-bandwidth smoothed seismicity models separately. Because adaptively smoothed models concentrate seismicity near the earthquake epicenters where seismicity rates are high, the corresponding hazard values are locally higher but reduced with distance from observed seismicity, relative to the hazard from fixed-bandwidth models. We suggest that adaptively smoothed seismicity models be considered for implementation in the update to the ASHMs because of their improved likelihood estimates relative to fixed smoothing methods; however, concomitant increases in seismic hazard will cause significant changes in regions of high seismicity, such as near the subduction zone, northeast of Kotzebue, and along the NNE-trending zone of seismicity in the Alaskan interior.
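    The Helmstetter-style adaptive kernel described above can be sketched directly: each epicenter gets a Gaussian smoothing kernel whose bandwidth is its distance to the n-th nearest neighboring epicenter, so dense clusters are smoothed tightly and sparse regions broadly. Coordinates, kernel choice, and normalization here are illustrative simplifications of the published method:

```python
import numpy as np

def adaptive_bandwidths(epicenters, n_neighbor):
    """Per-event smoothing distance: the distance from each epicenter to
    its n-th nearest neighboring epicenter (Helmstetter-style).

    epicenters : (N, 2) array of projected x, y positions (e.g. km)."""
    d = np.linalg.norm(epicenters[:, None, :] - epicenters[None, :, :], axis=-1)
    d.sort(axis=1)                  # column 0 is the zero self-distance
    return d[:, n_neighbor]

def smoothed_rate(grid_xy, epicenters, bandwidths):
    """Relative seismicity rate on grid points: a sum of 2D Gaussian
    kernels, one per epicenter, each with its own adaptive bandwidth."""
    diff = grid_xy[:, None, :] - epicenters[None, :, :]
    r2 = np.sum(diff ** 2, axis=-1)
    k = np.exp(-r2 / (2.0 * bandwidths ** 2)) / (2.0 * np.pi * bandwidths ** 2)
    return k.sum(axis=1)
```

    Varying `n_neighbor` and comparing the resulting models with a likelihood test is the optimization loop the abstract describes.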

  13. Comparison of smoothing methods for the development of a smoothed seismicity model for Alaska and the implications for seismic hazard

    USGS Publications Warehouse

    Moschetti, Morgan P.; Mueller, Charles S.; Boyd, Oliver S.; Petersen, Mark D.

    2014-01-01

    In anticipation of the update of the Alaska seismic hazard maps (ASHMs) by the U.S. Geological Survey, we report progress on the comparison of smoothed seismicity models developed using fixed and adaptive smoothing algorithms, and investigate the sensitivity of seismic hazard to the models. While fault-based sources, such as those for great earthquakes in the Alaska-Aleutian subduction zone and for the ~10 shallow crustal faults within Alaska, dominate the seismic hazard estimates for locations near the sources, smoothed seismicity rates make important contributions to seismic hazard away from fault-based sources and where knowledge of recurrence and magnitude is not sufficient for use in hazard studies. Recent developments in adaptive smoothing methods and statistical tests for evaluating and comparing rate models prompt us to investigate the appropriateness of adaptive smoothing for the ASHMs. We develop smoothed seismicity models for Alaska using fixed and adaptive smoothing methods and compare the resulting models by calculating and evaluating the joint likelihood test. We use the earthquake catalog, and associated completeness levels, developed for the 2007 ASHM to produce fixed-bandwidth-smoothed models with smoothing distances varying from 10 to 100 km, as well as adaptively smoothed models. Adaptive smoothing follows the method of Helmstetter et al. and defines a unique smoothing distance for each earthquake epicenter from the distance to the nth nearest neighbor. The consequence of the adaptive smoothing methods is to reduce smoothing distances where seismicity rates are high, causing locally increased rate estimates, and to increase smoothing distances where seismicity is sparse. We follow guidance from previous studies to optimize the neighbor number (n-value) by comparing model likelihood values, which estimate the likelihood that the observed earthquake epicenters from the recent catalog are derived from the smoothed rate models. We compare likelihood values from all rate models to rank the smoothing methods. We find that adaptively smoothed seismicity models yield better likelihood values than the fixed smoothing models. Holding all other (source and ground motion) models constant, we calculate seismic hazard curves for all points across Alaska on a 0.1 degree grid, using the adaptively smoothed and fixed-bandwidth smoothed seismicity models separately. Because adaptively smoothed models concentrate seismicity near the earthquake epicenters where seismicity rates are high, the corresponding hazard values are locally higher but reduced with distance from observed seismicity, relative to the hazard from fixed-bandwidth models. We suggest that adaptively smoothed seismicity models be considered for implementation in the update to the ASHMs because of their improved likelihood estimates relative to fixed smoothing methods; however, concomitant increases in seismic hazard will cause significant changes in regions of high seismicity, such as near the subduction zone, northeast of Kotzebue, and along the NNE-trending zone of seismicity in the Alaskan interior.

  14. Source signature estimation from multimode surface waves via mode-separated virtual real source method

    NASA Astrophysics Data System (ADS)

    Gao, Lingli; Pan, Yudi

    2018-05-01

    The correct estimation of the seismic source signature is crucial in exploration geophysics. Based on seismic interferometry, the virtual real source (VRS) method provides a model-independent way to estimate the source signature. However, when encountering multimode surface waves, which are common in shallow seismic surveys, strong spurious events appear in the seismic interferometric results. These spurious events introduce errors into the virtual-source recordings and reduce the accuracy of the source signature estimated by the VRS method. In order to estimate a correct source signature from multimode surface waves, we propose a mode-separated VRS method. In this method, multimode surface waves are mode-separated before seismic interferometry. Virtual-source recordings are then obtained by applying seismic interferometry to each mode individually. Therefore, artefacts caused by cross-mode correlation are excluded from the virtual-source recordings and the estimated source signatures. A synthetic example showed that a correct source signature can be estimated with the proposed method, whereas strong spurious oscillations occur in the estimated source signature if mode separation is not applied first. We also applied the proposed method to a field example, which verified its validity and effectiveness in estimating the seismic source signature from shallow seismic shot gathers containing multimode surface waves.

  15. Seismic Methods

    EPA Pesticide Factsheets

    Seismic methods are the most commonly conducted geophysical surveys for engineering investigations. Seismic refraction provides engineers and geologists with the most basic of geologic data via simple procedures with common equipment.

  16. Seismic Reflection Methods

    EPA Pesticide Factsheets

    Seismic methods are the most commonly conducted geophysical surveys for engineering investigations. Seismic refraction provides engineers and geologists with the most basic of geologic data via simple procedures with common equipment.

  17. Multiple attenuation to reflection seismic data using Radon filter and Wave Equation Multiple Rejection (WEMR) method

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Erlangga, Mokhammad Puput

    Separation between signal and noise, whether incoherent or coherent, is important in seismic data processing. Even after processing, coherent noise can remain mixed with the primary signal; multiple reflections are one kind of coherent noise. In this research, we processed seismic data to attenuate multiple reflections in both synthetic and real seismic data from Mentawai. There are several methods to attenuate multiple reflections; one of them is the Radon filter method, which discriminates between primary and multiple reflections in the τ-p domain based on the moveout difference between them. However, in cases where the moveout difference is too small, the Radon filter is not sufficient to attenuate the multiple reflections, and it also produces artifacts in the gathers. In addition to the Radon filter, we use the Wave Equation Multiple Rejection (WEMR) method to attenuate long-period multiple reflections based on wave-equation inversion. From the inversion of the wave equation and the seismic wave amplitudes observed at the free surface, we obtain the water-bottom reflectivity, which is used to eliminate the multiple reflections. The WEMR method does not depend on the moveout difference to attenuate long-period multiple reflections and can therefore be applied to seismic data with small moveout differences, such as the Mentawai data. The small moveout difference in the Mentawai seismic data is caused by the limited far offset, which is only 705 meters. We compared the multiple-free stacked data after Radon filtering and after WEMR processing. The conclusion is that the WEMR method attenuates long-period multiple reflections more effectively than the Radon filter method on the real (Mentawai) seismic data.
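    The moveout discrimination underlying the Radon filter can be illustrated with a bare-bones linear tau-p (slant-stack) transform: events with different moveouts focus at different slownesses, after which unwanted regions of the tau-p panel can be muted and the data inverse-transformed. Production Radon demultiple typically uses a parabolic transform and least-squares inversion, which this nearest-sample sketch omits:

```python
import numpy as np

def slant_stack(gather, offsets, dt, slownesses):
    """Linear tau-p (slant-stack) transform: for each trial slowness p,
    sum the gather along t = tau + p * x. Events with different moveout
    focus at different p, which is the basis for Radon-domain muting of
    multiples. Nearest-sample shifts are used for simplicity.

    gather : (n_traces, n_samples); offsets in the same distance unit
    as 1/slowness; dt in seconds."""
    n_tr, n_t = gather.shape
    panel = np.zeros((len(slownesses), n_t))
    for ip, p in enumerate(slownesses):
        for ix, x in enumerate(offsets):
            shift = int(round(p * x / dt))   # moveout in samples
            if 0 <= shift < n_t:
                panel[ip, : n_t - shift] += gather[ix, shift:]
    return panel
```

    When the moveout difference between primaries and multiples shrinks (as with the 705 m far offset above), their foci in the panel merge and muting one without the other becomes impossible, which is exactly the failure mode that motivates WEMR.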

  18. A Numerical and Theoretical Study of Seismic Wave Diffraction in Complex Geologic Structure

    DTIC Science & Technology

    1989-04-14

    The goal of this study is to demonstrate and validate discrete numerical (finite element) methods essential for analyzing linear and nonlinear seismic effects in the surficial geologies relevant to several Air Force missions. The exact solution evaluated here indicates that edge-diffracted seismic wave fields calculated by discrete numerical methods probably exhibit significant ...

  19. Signal-to-noise ratio application to seismic marker analysis and fracture detection

    NASA Astrophysics Data System (ADS)

    Xu, Hui-Qun; Gui, Zhi-Xian

    2014-03-01

    Seismic data with high signal-to-noise ratios (SNRs) are useful in reservoir exploration. Obtaining high-SNR seismic data requires significant noise-attenuation effort in seismic data processing, which is costly in material, human, and financial resources. We introduce a method for improving the SNR of seismic data, in which the SNR is calculated using a frequency-domain method. Furthermore, we optimize and discuss the critical parameters and the calculation procedure. We applied the proposed method to real data and found that the SNR is high at the seismic marker and low in the fracture zone. Consequently, the method can be used to extract detailed information about fracture zones that are inferred by structural analysis but not observed in conventional seismic data.
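    A frequency-domain SNR estimate of the general kind the abstract describes can be sketched by comparing the power spectra of a signal window and a noise window; the windowing convention and dB scaling here are illustrative choices, not necessarily those of the authors:

```python
import numpy as np

def snr_freq(signal_win, noise_win, dt, band=None):
    """Frequency-domain SNR in dB: ratio of mean spectral power in a
    signal window to that in a noise window (e.g. a pre-event window),
    optionally restricted to a band (f_lo, f_hi) in Hz."""
    n = max(len(signal_win), len(noise_win))
    freqs = np.fft.rfftfreq(n, dt)
    ps = np.abs(np.fft.rfft(signal_win, n)) ** 2
    pn = np.abs(np.fft.rfft(noise_win, n)) ** 2
    if band is not None:
        sel = (freqs >= band[0]) & (freqs <= band[1])
        ps, pn = ps[sel], pn[sel]
    return 10.0 * np.log10(ps.mean() / pn.mean())
```

    Computing such a value trace by trace along a horizon yields an SNR section in which coherent markers appear as highs and disrupted fracture zones as lows, which is how the attribute is used above.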

  20. Seismic facies analysis based on self-organizing map and empirical mode decomposition

    NASA Astrophysics Data System (ADS)

    Du, Hao-kun; Cao, Jun-xing; Xue, Ya-juan; Wang, Xing-jian

    2015-01-01

    Seismic facies analysis plays an important role in seismic interpretation and reservoir model building by offering an effective way to identify changes in geofacies between wells. The selection of input seismic attributes and their time windows has a marked effect on the validity of the classification and requires iterative experimentation and prior knowledge. In general, clustering is sensitive to noise when the waveform serves as the input data, especially with a narrow window. To overcome this limitation, the empirical mode decomposition (EMD) method is introduced into waveform classification based on the self-organizing map (SOM). We first de-noise the seismic data using EMD and then cluster the data using a 1D grid SOM. The main advantages of this method are resolution enhancement and noise reduction. 3D seismic data from the western Sichuan basin, China, are used for validation. The application results show that seismic facies analysis can be improved and can better support interpretation. The strong noise tolerance makes the proposed method a better seismic facies analysis tool than the classical 1D grid SOM method, especially for waveform clustering with a narrow window.

  1. Rippability Assessment of Weathered Sedimentary Rock Mass using Seismic Refraction Methods

    NASA Astrophysics Data System (ADS)

    Ismail, M. A. M.; Kumar, N. S.; Abidin, M. H. Z.; Madun, A.

    2018-04-01

    Rippability, or ease of excavation, in sedimentary rocks is a significant aspect of the preliminary work of any civil engineering project. A rippability assessment was performed in this study to select an available ripping machine for excavating earth materials, using the seismic velocity chart provided by Caterpillar. The research area is located at the proposed construction site for the development of a water reservoir and related infrastructure in Kampus Pauh Putra, Universiti Malaysia Perlis. The research aimed to obtain the P-wave seismic velocity (Vp) using a seismic refraction method to produce a 2D tomography model. The 2D seismic model was used to delineate the layers in the velocity profile. The conventional geotechnical method of using a borehole was integrated with the seismic velocity method to provide an appropriate correlation. The correlated data can be used to categorize machinery for excavation activities based on the available systematic analysis procedure for predicting rock rippability. The seismic velocity profile obtained was used to interpret rock layers within the ranges labelled rippable, marginal, and non-rippable. Based on the seismic velocity method, the site can be classified as ranging from loose sandstone to moderately weathered rock. Laboratory test results show that the site's rock material ranges from low to high strength. The results suggest that Caterpillar's smallest ripper, the D8R, can successfully excavate the materials, based on the integrated results of the seismic velocity method and laboratory tests.
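    Mapping seismic velocities to rippability classes is, at its simplest, a thresholding exercise, as the sketch below shows. The velocity cut-offs used here are illustrative placeholders only; a real assessment reads the machine-specific and rock-type-specific Caterpillar charts referenced above:

```python
def classify_rippability(vp_m_s, rippable_max=1500.0, marginal_max=2000.0):
    """Toy excavatability class from P-wave velocity (m/s). The velocity
    cut-offs are illustrative placeholders only; real work reads the
    machine- and rock-type-specific Caterpillar rippability charts."""
    if vp_m_s <= rippable_max:
        return "rippable"
    if vp_m_s <= marginal_max:
        return "marginal"
    return "non-rippable"
```

    Applying such a classifier cell by cell to the 2D Vp tomogram produces the rippable/marginal/non-rippable zoning described in the abstract, which is then checked against borehole and ripping-trial data.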

  2. Accurately determining direction of arrival by seismic array based on compressive sensing

    NASA Astrophysics Data System (ADS)

    Hu, J.; Zhang, H.; Yu, H.

    2016-12-01

    Seismic array analysis method plays an important role in detecting weak signals and determining their locations and rupturing process. In these applications, reliably estimating direction of arrival (DOA) for the seismic wave is very important. DOA is generally determined by the conventional beamforming method (CBM) [Rost et al, 2000]. However, for a fixed seismic array generally the resolution of CBM is poor in the case of low-frequency seismic signals, and in the case of high frequency seismic signals the CBM may produce many local peaks, making it difficult to pick the one corresponding to true DOA. In this study, we develop a new seismic array method based on compressive sensing (CS) to determine the DOA with high resolution for both low- and high-frequency seismic signals. The new method takes advantage of the space sparsity of the incoming wavefronts. The CS method has been successfully used to determine spatial and temporal earthquake rupturing distributions with seismic array [Yao et al, 2011;Yao et al, 2013;Yin 2016]. In this method, we first form the problem of solving the DOA as a L1-norm minimization problem. The measurement matrix for CS is constructed by dividing the slowness-angle domain into many grid nodes, which needs to satisfy restricted isometry property (RIP) for optimized reconstruction of the image. The L1-norm minimization is solved by the interior point method. We first test the CS-based DOA array determination method on synthetic data constructed based on Shanghai seismic array. Compared to the CBM, synthetic test for data without noise shows that the new method can determine the true DOA with a super-high resolution. In the case of multiple sources, the new method can easily separate multiple DOAs. When data are contaminated by noise at various levels, the CS method is stable when the noise amplitude is lower than the signal amplitude. We also test the CS method for the Wenchuan earthquake. 
For different arrays with different apertures, we are able to obtain reliable DOAs with uncertainties lower than 10 degrees.
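The slowness-grid formulation described above can be sketched in a few lines. The sketch below is a minimal illustration, not the authors' implementation: the 8-sensor geometry and the grid spacing are invented, and the interior point solver is replaced by simple iterative soft thresholding (ISTA), which solves the same L1-regularized problem.

```python
import numpy as np

def steering_matrix(coords_km, slow_grid, omega):
    """A[i, k] = exp(-i*omega * s_k . r_i): plane-wave phase at sensor i
    for candidate slowness vector s_k on the search grid."""
    return np.exp(-1j * omega * (coords_km @ slow_grid.T))

def ista_doa(A, y, lam=0.05, n_iter=2000):
    """Sparse DOA spectrum via iterative soft thresholding (ISTA), a
    simple stand-in for the interior point L1 solver of the abstract."""
    x = np.zeros(A.shape[1], dtype=complex)
    L = np.linalg.norm(A, 2) ** 2              # Lipschitz constant of the gradient
    for _ in range(n_iter):
        z = x - A.conj().T @ (A @ x - y) / L   # gradient step on 0.5*||Ax - y||^2
        mag = np.abs(z)
        x = np.where(mag > lam / L, (1.0 - lam / (L * mag + 1e-30)) * z, 0.0)
    return np.abs(x)                           # sparse power over the slowness grid
```

The peak of the returned sparse spectrum identifies the grid node, i.e. the slowness vector and hence the DOA, of the incoming wavefront; multiple sources appear as multiple isolated peaks.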

  3. Mini-Sosie high-resolution seismic method aids hazards studies

    USGS Publications Warehouse

    Stephenson, W.J.; Odum, J.; Shedlock, K.M.; Pratt, T.L.; Williams, R.A.

    1992-01-01

The Mini-Sosie high-resolution seismic method has been effective in imaging shallow structural and stratigraphic features that aid seismic-hazard and neotectonic studies. The method is not an alternative to Vibroseis acquisition for large-scale studies. However, it has two major advantages over Vibroseis as it is used by the USGS in its seismic-hazards program. First, the sources are extremely portable and can be used in both rural and urban environments. Second, the shifting-and-summation process during acquisition improves the signal-to-noise ratio and cancels out seismic noise sources such as cars and pedestrians. -from Authors

  4. Precursory seismic quiescence along the Sumatra-Andaman subduction zone: past and present

    NASA Astrophysics Data System (ADS)

    Sukrungsri, Santawat; Pailoplee, Santi

    2017-03-01

In this study, seismic quiescence prior to hazardous earthquakes was analyzed along the Sumatra-Andaman subduction zone (SASZ). The seismicity data were screened statistically, with mainshock earthquakes of Mw ≥ 4.4 reported during 1980-2015 defining the completeness database. In order to examine the possibility of using the seismic quiescence stage as a marker of subsequent earthquakes, the seismicity data reported prior to the eight major earthquakes along the SASZ were analyzed for changes in their seismicity rate using the statistical Z test. Iterative tests revealed that parameters of N = 50 events and T = 2 years were optimal for detecting sudden rate changes such as quiescence and for mapping them spatially. The observed quiescence periods conformed to the subsequent major earthquake occurrences both spatially and temporally. Using the suitable conditions obtained from successive retrospective tests, the seismicity rate changes were then mapped from the most up-to-date seismicity data available. This revealed three areas along the SASZ that might generate a major earthquake in the future: (i) the Nicobar Islands (Z = 6.7), (ii) the western offshore side of Sumatra Island (Z = 7.1), and (iii) western Myanmar (Z = 6.7). A stochastic test using a number of synthetic randomized catalogues indicated that these anomalous Z values are unlikely to be due to chance or random fluctuations of earthquake occurrence. Thus, these three areas have a high possibility of generating a strong-to-major earthquake in the future.
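The Z test above compares mean seismicity rates between a background interval and a candidate quiescence window. The sketch below shows one common formulation of the rate-change Z statistic; the study's N = 50 event / T = 2 yr windowing is not reproduced, and the binned rates in the usage example are invented for illustration.

```python
import numpy as np

def z_value(rates_background, rates_window):
    """Standard normal deviate Z for a seismicity rate change:
    positive Z means the window rate dropped, i.e. candidate quiescence."""
    r1, r2 = np.mean(rates_background), np.mean(rates_window)
    v1 = np.var(rates_background, ddof=1) / len(rates_background)   # variance of the mean
    v2 = np.var(rates_window, ddof=1) / len(rates_window)
    return (r1 - r2) / np.sqrt(v1 + v2)
```

Mapped over a grid of nodes, values around Z ≈ 6-7, as reported above, flag strong quiescence anomalies.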

  5. Improved Simplified Methods for Effective Seismic Analysis and Design of Isolated and Damped Bridges in Western and Eastern North America

    NASA Astrophysics Data System (ADS)

    Koval, Viacheslav

The seismic design provisions of the CSA-S6 Canadian Highway Bridge Design Code and the AASHTO LRFD Seismic Bridge Design Specifications have been developed primarily based on historical earthquake events that have occurred along the west coast of North America. For the design of seismic isolation systems, these codes include simplified analysis and design methods. The appropriateness and range of application of these methods are investigated through extensive parametric nonlinear time history analyses in this thesis. It was found that existing design guidelines need adjustment to better capture the expected nonlinear response of isolated bridges. For isolated bridges located in eastern North America, new damping coefficients are proposed. The applicability limits of the code-based simplified methods have been redefined to ensure that the modified method will lead to conservative results and that a wider range of seismically isolated bridges can be covered by this method. The possibility of further improving current simplified code methods was also examined. By transforming the quantity of allocated energy into a displacement contribution, an idealized analytical solution is proposed as a new simplified design method. This method realistically reflects the effects of ground-motion and system design parameters, including the effects of a drifted oscillation center. The proposed method is therefore more appropriate than existing simplified methods and is applicable to isolation systems exhibiting a wider range of properties. A multi-level-hazard performance matrix has been adopted by different seismic provisions worldwide and will be incorporated into the new edition of the Canadian CSA-S6-14 Bridge Design code. However, the combined effect and optimal use of isolation and supplemental damping devices in bridges have not yet been fully exploited to achieve enhanced performance under different levels of seismic hazard.
A novel Dual-Level Seismic Protection (DLSP) concept is proposed and developed in this thesis, which permits optimum seismic performance to be achieved with combined isolation and supplemental damping devices in bridges. This concept is shown to represent an attractive design approach both for the upgrade of existing seismically deficient bridges and for the design of new isolated bridges.

  6. Modeling Poroelastic Wave Propagation in a Real 2-D Complex Geological Structure Obtained via Self-Organizing Maps

    NASA Astrophysics Data System (ADS)

    Itzá Balam, Reymundo; Iturrarán-Viveros, Ursula; Parra, Jorge O.

    2018-03-01

    Two main stages of seismic modeling are geological model building and numerical computation of seismic response for the model. The quality of the computed seismic response is partly related to the type of model that is built. Therefore, the model building approaches become as important as seismic forward numerical methods. For this purpose, three petrophysical facies (sands, shales and limestones) are extracted from reflection seismic data and some seismic attributes via the clustering method called Self-Organizing Maps (SOM), which, in this context, serves as a geological model building tool. This model with all its properties is the input to the Optimal Implicit Staggered Finite Difference (OISFD) algorithm to create synthetic seismograms for poroelastic, poroacoustic and elastic media. The results show a good agreement between observed and 2-D synthetic seismograms. This demonstrates that the SOM classification method enables us to extract facies from seismic data and allows us to integrate the lithology at the borehole scale with the 2-D seismic data.

  7. Redistribution Principle Approach for Evaluation of Seismic Active Earth Pressure Behind Retaining Wall

    NASA Astrophysics Data System (ADS)

    Maskar, A. D.; Madhekar, S. N.; Phatak, D. R.

    2017-11-01

The knowledge of seismic active earth pressure behind a rigid retaining wall is essential in the design of retaining walls in earthquake-prone regions. The commonly used Mononobe-Okabe (MO) method adopts a pseudo-static approach. Recently, several pseudo-dynamic methods have been used to evaluate the seismic earth pressure. However, the available pseudo-static and pseudo-dynamic methods do not incorporate the effect of wall movement on the earth pressure distribution. Dubrova (Interaction between soils and structures, Rechnoi Transport, Moscow, 1963) was the first to consider this effect, and to date the model has been used for cohesionless soil without considering seismicity. In this paper, Dubrova's model based on the redistribution principle is extended to include the seismic effect. It is then used to compute the distribution of seismic active earth pressure in a more realistic manner, by considering the effect of wall movement on the earth pressure, since it is a displacement-based method. The effects of a wide range of parameters, such as soil friction angle (ϕ), wall friction angle (δ), and horizontal and vertical seismic acceleration coefficients (kh and kv), on the seismic active earth pressure coefficient (Kae) have been studied. Results are presented comparing the pseudo-static and pseudo-dynamic methods, to highlight the realistic non-linearity of the seismic active earth pressure distribution. The present study yields the variation of Kae with kh in the same manner as the MO method and the study of Choudhury and Nimbalkar (Geotech Geol Eng 24(5):1103-1113, 2006). As ϕ increases, both the static and the seismic earth pressure reduce. For a constant ϕ, as kh increases from 0 to 0.3 the earth pressure increases, whereas as δ increases the active earth pressure decreases. The seismic active earth pressure coefficient (Kae) obtained from the present study is approximately the same as that obtained by previous researchers.
Though the seismic earth pressures obtained by the pseudo-dynamic approach and by the redistribution principle have different theoretical backgrounds, the final earth pressure distributions are approximately the same.
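For reference, the pseudo-static MO coefficient Kae against which such studies are compared can be computed directly from the standard Mononobe-Okabe expression. The sketch below is a generic textbook implementation, not the paper's redistribution-based model.

```python
import numpy as np

def mononobe_okabe_kae(phi, delta, kh, kv=0.0, beta=0.0, i=0.0):
    """Mononobe-Okabe seismic active earth pressure coefficient Kae.
    All angles in radians: phi = soil friction, delta = wall friction,
    beta = wall batter, i = backfill slope; kh, kv = seismic coefficients."""
    theta = np.arctan(kh / (1.0 - kv))        # seismic inertia angle
    num = np.cos(phi - theta - beta) ** 2
    den = (np.cos(theta) * np.cos(beta) ** 2 * np.cos(delta + beta + theta)
           * (1.0 + np.sqrt(np.sin(phi + delta) * np.sin(phi - theta - i)
                            / (np.cos(delta + beta + theta) * np.cos(i - beta)))) ** 2)
    return num / den
```

With kh = kv = 0 and δ = 0 it reduces to Rankine's active coefficient (1 − sin ϕ)/(1 + sin ϕ), and Kae grows with kh while decreasing with ϕ, matching the trends reported above.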

8. Acoustic-Seismic Mixed Feature Extraction Based on Wavelet Transform for Vehicle Classification in Wireless Sensor Networks.

    PubMed

    Zhang, Heng; Pan, Zhongming; Zhang, Wenna

    2018-06-07

An acoustic-seismic mixed feature extraction method based on the wavelet coefficient energy ratio (WCER) of the target signal is proposed in this study for classifying vehicle targets in wireless sensor networks. The signal was decomposed into a set of wavelet coefficients using the à trous algorithm, which is a concise method used to implement the wavelet transform of a discrete signal sequence. After the wavelet coefficients of the target acoustic and seismic signals were obtained, the energy ratio of each layer's coefficients was calculated as the feature vector of the target signals. Subsequently, the acoustic and seismic features were merged into an acoustic-seismic mixed feature to improve the target classification accuracy after the acoustic and seismic WCER features of the target signal were simplified using the hierarchical clustering method. We selected the support vector machine method for classification and utilized the data acquired from a real-world experiment to validate the proposed method. The calculated results show that the WCER feature extraction method can effectively extract the target features from target signals. Feature simplification can reduce the time consumption of feature extraction and classification, with no effect on the target classification accuracy. The use of acoustic-seismic mixed features effectively improved target classification accuracy by approximately 12% compared with either the acoustic or the seismic signal alone.
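The WCER feature described above (à trous decomposition followed by per-level energy ratios) can be sketched as follows; the smoothing kernel and number of levels are illustrative assumptions, not the paper's exact choices.

```python
import numpy as np

def atrous_swt(x, levels=4, h=None):
    """Undecimated ('a trous') wavelet decomposition: returns detail
    coefficients per level plus the final approximation."""
    if h is None:
        h = np.array([0.25, 0.5, 0.25])        # simple B-spline smoothing kernel
    approx, details = x.astype(float), []
    for j in range(levels):
        hj = np.zeros((len(h) - 1) * 2 ** j + 1)
        hj[:: 2 ** j] = h                      # insert 2^j - 1 zeros between taps
        smooth = np.convolve(approx, hj, mode="same")
        details.append(approx - smooth)        # detail = difference of scales
        approx = smooth
    return details, approx

def wcer_features(x, levels=4):
    """Wavelet coefficient energy ratio: per-level energy / total energy."""
    details, approx = atrous_swt(x, levels)
    energies = np.array([np.sum(d ** 2) for d in details] + [np.sum(approx ** 2)])
    return energies / energies.sum()
```

Because the ratios sum to one, frames of different overall energy map to comparable feature vectors; the acoustic and seismic ratio vectors would then be concatenated to form the mixed feature.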

  9. Stochastic seismic inversion based on an improved local gradual deformation method

    NASA Astrophysics Data System (ADS)

    Yang, Xiuwei; Zhu, Peimin

    2017-12-01

A new stochastic seismic inversion method based on the local gradual deformation method is proposed, which can incorporate seismic data, well data, geology and their spatial correlations into the inversion process. Geological information, such as sedimentary facies and structures, can provide significant a priori information to constrain an inversion and arrive at reasonable solutions. The local a priori conditional cumulative distributions at each node of the model to be inverted are first established by indicator cokriging, which integrates well data as hard data and geological information as soft data. Probability field simulation is used to simulate different realizations consistent with the spatial correlations and local conditional cumulative distributions. The corresponding probability field is generated by the fast Fourier transform moving average method. Then, optimization is performed to match the seismic data via an improved local gradual deformation method. Two improved strategies are proposed to suit seismic inversion. The first is to select and update local areas of poor fit between synthetic and real seismic data. The second is to divide each seismic trace into several parts and obtain the optimal parameters for each part individually. Applications to a synthetic example and a real case study demonstrate that our approach can effectively find fine-scale acoustic impedance models and provide uncertainty estimates.

  10. Local spatiotemporal time-frequency peak filtering method for seismic random noise reduction

    NASA Astrophysics Data System (ADS)

    Liu, Yanping; Dang, Bo; Li, Yue; Lin, Hongbo

    2014-12-01

To achieve a higher level of seismic random noise suppression, the Radon transform was adopted to implement spatiotemporal time-frequency peak filtering (TFPF) in our previous studies. Those studies performed TFPF in the full-aperture Radon domain, using both the linear and the parabolic Radon transforms. Although the superiority of this method over conventional TFPF has been demonstrated on synthetic seismic models and field seismic data, the method still has limitations. Both full-aperture linear and parabolic Radon transforms are applicable and effective in relatively simple situations (e.g., curved reflection events with regular geometry) but inapplicable in complicated situations such as reflection events with irregular shapes, or interlaced events with quite different slope or curvature parameters. Therefore, a localized approach to the Radon transform must be applied: the filter is better served by adapting the transform to the local character of the data variations. In this article, we propose adopting a local Radon transform, referred to as piecewise full-aperture Radon, to realize spatiotemporal TFPF, called local spatiotemporal TFPF. Through experiments on synthetic seismic models and field seismic data, this study demonstrates the advantage of our method in seismic random noise reduction and reflection event recovery for relatively complicated seismic data.

  11. Seismic aftershock monitoring for on-site inspection purposes. Experience from Integrated Field Exercise 2008.

    NASA Astrophysics Data System (ADS)

    Labak, P.; Arndt, R.; Villagran, M.

    2009-04-01

One of the sub-goals of the Integrated Field Exercise 2008 (IFE08) in Kazakhstan was testing the prototype elements of the Seismic Aftershock Monitoring System (SAMS) for on-site inspection purposes. The task of the SAMS is to collect facts that help clarify the nature of the triggering event. The SAMS therefore has to be capable of detecting and identifying events as small as magnitude -2 within an inspection area of up to 1000 km2. The field equipment of the SAMS comprised 30 mini-arrays and 10 three-component stations. Each mini-array consisted of a central three-component seismometer and 3 vertical seismometers at a distance of about 100 m from the central seismometer. The mini-arrays covered approximately 80% of the surrogate inspection area (IA) on the territory of the former Semipalatinsk test site. Most of the stations were installed during the first four days of field operations by the seismic sub-team, which consisted of 10 seismologists. The SAMS data center comprised 2 IBM Blade centers and 8 working places for data archiving, detection-list production and event analysis. A prototype of the SAMS software was tested. The average daily amount of collected raw data was 15-30 GB and increased as more stations entered operation. Routine manual data screening and data analyses were performed by 2-6 sub-team members, and automatic screening was used for selected time intervals. Screening was performed in the frequency domain using the Sonoview program and in the time domain using the Geotool and Hypolines programs. The screening results were merged into a master event list, which served as the basis for detailed analysis of unclear events and of events identified as potentially in the IA. Detailed analysis of events potentially in the IA was performed with the Hypoline and Geotool programs. In addition, the Hyposimplex and Hypocenter programs were used for event localization.
The results of the analysis were integrated in visual form using the Seistrain/Geosearch program. Data were fully screened for the period 5-13 September 2008, during which 360 teleseismic, regional and local events were identified. Results of the detection and analysis will be presented and consequences for further SAMS development will be discussed.

  12. Application of seismic interferometric migration for shallow seismic high precision data processing: A case study in the Shenhu area

    NASA Astrophysics Data System (ADS)

    Wei, Jia; Liu, Huaishan; Xing, Lei; Du, Dong

    2018-02-01

The stability of submarine geological structures has a crucial influence on the construction of offshore engineering projects and the exploitation of seabed resources, so marine geologists need a detailed understanding of common submarine geological hazards. Marine seismic exploration is among the most effective detection technologies, and current research therefore focuses on improving the resolution and precision of shallow stratum structure detection. In this article, the feasibility of shallow seismic structure imaging is assessed by building a complex model, and differences between the seismic interferometric imaging method and the traditional imaging method are discussed. The imaging effect of the model is better for shallow layers than for deep layers, because coherent noise produced by this method degrades the image at depth. The seismic interferometric method has certain advantages for structural imaging of shallow submarine strata, indicated by continuous horizontal events, high resolution, clear faults, and obvious structure boundaries. Application to the actual data from the Shenhu area fully illustrates the advantages of the method. Thus, this method has the potential to provide new insights for shallow submarine strata imaging in the area.

  13. Impacts of potential seismic landslides on lifeline corridors.

    DOT National Transportation Integrated Search

    2015-02-01

This report presents a fully probabilistic method for regional seismically induced landslide hazard analysis and mapping. The method considers the most current predictions for strong ground motions and seismic sources through use of the U.S.G.S. ...

  14. Fast 3D elastic micro-seismic source location using new GPU features

    NASA Astrophysics Data System (ADS)

    Xue, Qingfeng; Wang, Yibo; Chang, Xu

    2016-12-01

In this paper, we describe new GPU features and their applications in passive seismic (micro-seismic) location. Locating micro-seismic events is quite important in seismic exploration, especially when searching for unconventional oil and gas resources. Unlike traditional ray-based methods, the wave-equation method, such as the one used in this paper, has a remarkable advantage in adapting to low signal-to-noise-ratio conditions and does not require manual data selection. However, because of their conspicuous computational cost, such methods are not widely used in industry. To make the method practical, we implement imaging-like wave-equation micro-seismic location in 3D elastic media and use the GPU to accelerate our algorithm. We also introduce some new GPU features into the implementation to solve the data transfer and GPU utilization problems. Numerical and field data experiments show that our method achieves a more than 30% performance improvement in the GPU implementation just by using these new features.

  15. Could the IMS Infrasound Stations Support a Global Network of Small Aperture Seismic Arrays?

    NASA Astrophysics Data System (ADS)

Gibbons, Steven J.; Kværna, Tormod; Mykkeltveit, Svein

    2015-04-01

    The infrasound stations of the International Monitoring System are arrays consisting of up to 15 sites and with apertures of up to 3 km. The arrays are distributed remarkably uniformly over the globe and provide excellent coverage of South America, Africa, and Antarctica. This is to say that there are many infrasound arrays in regions many thousands of kilometers from the closest seismic array. Several infrasound arrays are in the immediate vicinity of existing 3-component seismic stations and these provide us with examples of how typical seismic signals look at these locations. We can make idealized estimates of the predicted performance of seismic arrays, consisting of seismometers at each site of the infrasound arrays, by duplicating the signals from the 3-C stations at all sites of the array. However, the true performance of seismic arrays at these sites will depend both upon Signal-to-Noise Ratios of seismic signals and the coherence of both signal and noise between sensors. These properties can only be determined experimentally. Recording seismic data of sufficient quality at many of these arrays may require borehole deployments since the microbarometers in the infrasound arrays are often situated in vaults placed in soft sediments. The geometries of all the current IMS infrasound arrays are examined and compared and we demonstrate that, from a purely geometrical perspective, essentially all the array configurations would provide seismic arrays with acceptable slowness resolution for both regional and teleseismic phase arrivals. Seismic arrays co-located with the infrasound arrays in many regions would likely enhance significantly the seismic monitoring capability in parts of the world where only 3-component stations are currently available. Co-locating seismic and infrasound sensors would facilitate the development of seismic arrays that share the infrastructure of the infrasound arrays, reducing the development and operational costs. 
Hosting countries might find such added capabilities valuable from a national perspective. In addition, the seismic recordings may also help to identify the sources of infrasound signals with consequences for improved event screening and evaluating models of infrasound propagation and atmospheric properties.
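The "purely geometrical" slowness resolution discussed above is conventionally assessed with the array response function; the sketch below uses an invented ring geometry rather than an actual IMS infrasound layout.

```python
import numpy as np

def array_response(coords_km, s_trials, s_true, freq_hz):
    """|R(s)|^2 of a plane wave with slowness s_true, evaluated over trial
    slownesses: main-lobe sharpness reflects the geometry's resolution."""
    w = 2 * np.pi * freq_hz
    phase = coords_km @ (s_trials - s_true).T      # (n_sensors, n_trials) delays
    r = np.exp(1j * w * phase).sum(axis=0) / len(coords_km)
    return np.abs(r) ** 2
```

A narrower main lobe (and lower sidelobes) over the slowness plane means the geometry can separate regional and teleseismic phase arrivals more cleanly.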

  16. A seismic fault recognition method based on ant colony optimization

    NASA Astrophysics Data System (ADS)

    Chen, Lei; Xiao, Chuangbai; Li, Xueliang; Wang, Zhenli; Huo, Shoudong

    2018-05-01

Fault recognition is an important step in seismic interpretation, and although there are many methods for this task, none can recognize faults accurately enough. To address this problem, we propose a new fault recognition method based on ant colony optimization which can locate faults precisely and extract them from the seismic section. Firstly, seismic horizons are extracted by the connected-component labeling algorithm; secondly, fault locations are decided according to the horizontal endpoints of each horizon; thirdly, the whole seismic section is divided into several rectangular blocks, and the top and bottom endpoints of each rectangular block are treated as the nest and food, respectively, for the ant colony optimization algorithm. In addition, the seismic section is treated as a three-dimensional terrain by using the seismic amplitude as height. The optimal route from nest to food computed by the ant colony in each block is then judged to be a fault. Finally, extensive comparative tests were performed on real seismic data. The validity and advantages of the proposed method were confirmed by the experimental results.

  17. Geophysical remote sensing of water reservoirs suitable for desalinization.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Aldridge, David Franklin; Bartel, Lewis Clark; Bonal, Nedra

    2009-12-01

In many parts of the United States, as well as other regions of the world, competing demands for fresh water or water suitable for desalination are outstripping sustainable supplies. In these areas, new water supplies are necessary to sustain economic development and agricultural uses, as well as to support expanding populations, particularly in the Southwestern United States. Increasing the supply of water will more than likely come through desalinization of water reservoirs that are not suitable for present use. Surface-deployed seismic and electromagnetic (EM) methods have the potential to address these critical issues within large volumes of an aquifer at a lower cost than drilling and sampling. However, for detailed analysis of the water quality, some sampling utilizing boreholes would be required, with geophysical methods being employed to extrapolate the sampled results to non-sampled regions of the aquifer. The research in this report addresses using seismic and EM methods in two complementary ways to aid in the identification of water reservoirs that are suitable for desalinization. The first method uses the seismic data to constrain the earth structure so that detailed EM modeling can estimate the pore-water conductivity, and hence the salinity. The second method utilizes the coupling of seismic and EM waves through the seismo-electric effect (conversion of seismic energy to electrical energy) and the electro-seismic effect (conversion of electrical energy to seismic energy) to estimate the salinity of the target aquifer. Analytic 1D solutions to coupled pressure and electric wave propagation demonstrate the types of waves one expects when using a seismic or electric source. A 2D seismo-electric/electro-seismic model is developed to demonstrate the coupled seismic and EM system. For finite-difference modeling, the seismic and EM wave propagation algorithms operate on different spatial and temporal scales.
We present a method to solve multiple, finite-difference physics problems that has application beyond the present use. A limited field experiment was conducted to assess the seismo-electric effect. Due to a variety of problems, the observation of the electric field due to a seismic source was not definitive.

  18. Variational Bayesian Inversion of Quasi-Localized Seismic Attributes for the Spatial Distribution of Geological Facies

    NASA Astrophysics Data System (ADS)

    Nawaz, Muhammad Atif; Curtis, Andrew

    2018-04-01

We introduce a new Bayesian inversion method that estimates the spatial distribution of geological facies from attributes of seismic data, showing how the usual probabilistic inverse problem can be solved within an optimization framework while still providing full probabilistic results. Our mathematical model treats the seismic attributes as observed data, which are assumed to have been generated by the geological facies. The method infers the post-inversion (posterior) probability density of the facies, plus some other unknown model parameters, from the seismic attributes and geological prior information. Most previous research in this domain is based on the localized-likelihoods assumption, whereby the seismic attributes at a location are assumed to depend on the facies only at that location. Such an assumption is unrealistic because of imperfect seismic data acquisition and processing, and fundamental limitations of seismic imaging methods. In this paper we relax this assumption: we allow probabilistic dependence between the seismic attributes at a location and the facies in any neighbourhood of that location through a spatial filter. We term such likelihoods quasi-localized.

  19. Real-time classification of signals from three-component seismic sensors using neural nets

    NASA Astrophysics Data System (ADS)

    Bowman, B. C.; Dowla, F.

    1992-05-01

Adaptive seismic data acquisition systems with capabilities of signal discrimination and event classification are important in treaty monitoring, proliferation monitoring, and earthquake early detection systems. Potential applications include monitoring underground chemical explosions, as well as other military, cultural, and natural activities where the characteristics of signals change rapidly and without warning. In these applications, the ability to detect and interpret events rapidly without falling behind the influx of data is critical. We developed a system for real-time data acquisition, analysis, learning, and classification of recorded events employing some of the latest technology in computer hardware, software, and artificial neural network methods. The system is able to train dynamically, and updates its knowledge based on new data. The software is modular and hardware-independent; i.e., the front-end instrumentation is transparent to the analysis system. The software is designed to take advantage of the multiprocessing environment of the Unix operating system: the Unix System V shared memory and static RAM protocols are used for data access, and the semaphore mechanism for interprocess communication. As the three-component sensor detects a seismic signal, it is displayed graphically on a color monitor using X11/Xlib graphics with interactive screening capabilities. For interesting events, the triaxial signal polarization is computed, a fast Fourier transform (FFT) algorithm is applied, and the normalized power spectrum is transmitted to a backpropagation neural network for event classification. The system is currently capable of handling three data channels at a sampling rate of 500 Hz, which covers the bandwidth of most seismic events. The system has been tested in a laboratory setting with artificial events generated in the vicinity of a three-component sensor.
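The classifier's front end (window a frame, FFT it, normalize the power spectrum) can be sketched as below; the frame length and Hann taper are assumptions, not details taken from the original system.

```python
import numpy as np

def spectrum_feature(frame):
    """Normalized power spectrum of one windowed frame, suitable as a
    neural-net input vector (bin k corresponds to k * fs / N hertz)."""
    w = frame * np.hanning(len(frame))      # taper to reduce spectral leakage
    p = np.abs(np.fft.rfft(w)) ** 2         # one-sided power spectrum
    return p / (p.sum() + 1e-30)            # normalize so features sum to 1
```

Normalizing removes overall amplitude, so the network sees only the spectral shape of the event.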

  20. Receiver deghosting in the t-x domain based on super-Gaussianity

    NASA Astrophysics Data System (ADS)

    Lu, Wenkai; Xu, Ziqiang; Fang, Zhongyu; Wang, Ruiliang; Yan, Chengzhi

    2017-01-01

Deghosting methods in the time-space (t-x) domain have attracted a lot of attention because of their flexibility for various source/receiver configurations. Based on the well-known fact that the seismic signal has a super-Gaussian distribution, we present a Super-Gaussianity-based Receiver Deghosting (SRD) method in the t-x domain. In our method, we denote the upgoing wave and its ghost (downgoing wave) as a single seismic signal, and express the relationship between the upgoing wave and its ghost using two ghost parameters: the sea-surface reflection coefficient and the time shift between the upgoing wave and its ghost. For a single seismic signal, we estimate these two parameters by maximizing the super-Gaussianity of the deghosted output, which is achieved by a 2D grid search over an adaptively predefined discrete solution space. Since a large number of seismic signals are usually mixed together in a seismic trace, the proposed method divides the trace into overlapping frames using a sliding time window with a step of one time sample, and considers each frame as a replacement for a single seismic signal. For a 2D seismic gather, we obtain two 2D maps of the ghost parameters. Assuming that these two parameters vary slowly in the t-x domain, we apply a 2D average filter to these maps to further improve their reliability. Finally, the deghosted outputs are merged to form the final deghosted result. To demonstrate the flexibility of the proposed method for arbitrary variable depths of the receivers, we apply it to several synthetic and field seismic datasets acquired with variable-depth streamers.
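The two-parameter ghost model and the super-Gaussianity-driven grid search can be sketched as follows. This is a simplified single-frame illustration: the stabilization constant, kurtosis as the super-Gaussianity measure, and the parameter grids are assumptions, and the sliding-window framing and 2D map filtering of the full method are omitted.

```python
import numpy as np

def deghost(d, r, tau, dt, eps=0.1):
    """Stabilized inverse of the ghost operator G(w) = 1 - r*exp(-i*w*tau)."""
    n = len(d)
    w = 2 * np.pi * np.fft.rfftfreq(n, dt)
    G = 1.0 - r * np.exp(-1j * w * tau)
    U = np.fft.rfft(d) * np.conj(G) / (np.abs(G) ** 2 + eps)
    return np.fft.irfft(U, n)

def kurtosis(x):
    """Fourth-moment measure of super-Gaussianity (spikier = larger)."""
    x = x - x.mean()
    return np.mean(x ** 4) / (np.mean(x ** 2) ** 2 + 1e-30)

def srd_grid_search(d, dt, r_grid, tau_grid):
    """Pick the (r, tau) pair whose deghosted output is most super-Gaussian."""
    best = max((kurtosis(deghost(d, r, t, dt)), r, t)
               for r in r_grid for t in tau_grid)
    return best[1], best[2]
```

Removing the ghost arrival collapses each doubled wavelet back into a single spike, which is exactly what the kurtosis criterion rewards.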

  1. Phase-Shifted Based Numerical Method for Modeling Frequency-Dependent Effects on Seismic Reflections

    NASA Astrophysics Data System (ADS)

    Chen, Xuehua; Qi, Yingkai; He, Xilei; He, Zhenhua; Chen, Hui

    2016-08-01

Significant velocity dispersion and attenuation have often been observed when seismic waves propagate in fluid-saturated porous rocks. Both the magnitude and the variation of the velocity dispersion and attenuation are frequency-dependent and related closely to the physical properties of the fluid-saturated porous rocks. To explore the effects of frequency-dependent dispersion and attenuation on the seismic response, we present a numerical method for seismic data modeling based on the diffusive-viscous wave equation (DVWE), which introduces poroelastic theory and takes into account diffusive and viscous attenuation. We derive a phase-shift wave extrapolation algorithm in the frequency-wavenumber domain for implementing the DVWE-based simulation method that can handle simultaneous lateral variations in velocity, diffusive coefficient and viscosity. We then design a distributary-channels model in which a hydrocarbon-saturated sand reservoir is embedded in one of the channels, and compute synthetic seismic data with both the DVWE-based method and a conventional acoustic wave equation (AWE) based method to illustrate, comparatively, the frequency-dependent seismic behavior related to the hydrocarbon-saturated reservoir. The synthetic seismic data delineate the intrinsic energy loss, phase delay, lower instantaneous dominant frequency and narrower bandwidth caused by frequency-dependent dispersion and attenuation when a seismic wave travels through the hydrocarbon-saturated reservoir. The numerical modeling method is expected to contribute to improving the understanding of the features and mechanisms of the seismic frequency-dependent effects resulting from hydrocarbon-saturated porous rocks.

  2. Evaluating the Use of Declustering for Induced Seismicity Hazard Assessment

    NASA Astrophysics Data System (ADS)

    Llenos, A. L.; Michael, A. J.

    2016-12-01

    The recent dramatic seismicity rate increase in the central and eastern US (CEUS) has motivated the development of seismic hazard assessments for induced seismicity (e.g., Petersen et al., 2016). Standard probabilistic seismic hazard assessment (PSHA) relies fundamentally on the assumption that seismicity is Poissonian (Cornell, BSSA, 1968); therefore, the earthquake catalogs used in PSHA are typically declustered (e.g., Petersen et al., 2014) even though this may remove earthquakes that may cause damage or concern (Petersen et al., 2015; 2016). In some induced earthquake sequences in the CEUS, standard declustering can remove up to 90% of the sequence, reducing the estimated seismicity rate by a factor of 10 compared to estimates from the complete catalog. In tectonic regions the reduction is often only about a factor of 2. We investigate how three declustering methods treat induced seismicity: the window-based Gardner-Knopoff (GK) algorithm, often used for PSHA (Gardner and Knopoff, BSSA, 1974); the link-based Reasenberg algorithm (Reasenberg, JGR, 1985); and a stochastic declustering method based on a space-time Epidemic-Type Aftershock Sequence model (Ogata, JASA, 1988; Zhuang et al., JASA, 2002). We apply these methods to three catalogs that likely contain some induced seismicity. For the Guy-Greenbrier, AR earthquake swarm from 2010-2013, declustering reduces the seismicity rate by factors of 6-14, depending on the algorithm. In northern Oklahoma and southern Kansas from 2010-2015, the reduction varies from factors of 1.5-20. In the Salton Trough of southern California from 1975-2013, the rate is reduced by factors of 3-20. Stochastic declustering tends to remove the most events, followed by the GK method, while the Reasenberg method removes the fewest.
Given that declustering and choice of algorithm have such a large impact on the resulting seismicity rate estimates, we suggest that more accurate hazard assessments may be found using the complete catalog.
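The window-based declustering idea can be illustrated with a toy version. The function below uses fixed time and distance windows for clarity; the actual Gardner-Knopoff algorithm uses magnitude-dependent windows, which are not reproduced here:

```python
def decluster(events, time_window, dist_window):
    """Toy window-based declustering in the spirit of Gardner-Knopoff:
    any event falling within a (fixed, illustrative) time and distance
    window of a larger, earlier retained event is flagged as dependent
    and removed. events: iterable of (time, x, y, magnitude)."""
    keep = []
    for t, x, y, m in sorted(events):
        dependent = False
        for t0, x0, y0, m0 in keep:
            if (m0 >= m and (t - t0) <= time_window
                    and ((x - x0) ** 2 + (y - y0) ** 2) ** 0.5 <= dist_window):
                dependent = True
                break
        if not dependent:
            keep.append((t, x, y, m))
    return keep
```

Even this toy version shows how the fraction of events removed depends entirely on the window choices, which is the sensitivity the abstract highlights.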

  3. Seismic retrofit guidelines for Utah highway bridges.

    DOT National Transportation Integrated Search

    2009-05-01

    Much of Utah's population dwells in a seismically active region, and many of the bridges connecting transportation lifelines predate the rigorous seismic design standards that have been developed in the past 10-20 years. Seismic retrofitting method...

  4. Elastic-Waveform Inversion with Compressive Sensing for Sparse Seismic Data

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lin, Youzuo; Huang, Lianjie

    2015-01-28

    Accurate velocity models of compressional- and shear-waves are essential for geothermal reservoir characterization and microseismic imaging. Elastic-waveform inversion of multi-component seismic data can provide high-resolution inversion results of subsurface geophysical properties. However, the method requires seismic data acquired using dense source and receiver arrays. In practice, seismic sources and/or geophones are often sparsely distributed on the surface and/or in a borehole, such as 3D vertical seismic profiling (VSP) surveys. We develop a novel elastic-waveform inversion method with compressive sensing for inversion of sparse seismic data. We employ an alternating-minimization algorithm to solve the optimization problem of our new waveform inversion method. We validate our new method using synthetic VSP data for a geophysical model built using geologic features found at the Raft River enhanced-geothermal-system (EGS) field. We apply our method to synthetic VSP data with a sparse source array and compare the results with those obtained with a dense source array. Our numerical results demonstrate that the velocity models produced with our new method using a sparse source array are almost as accurate as those obtained using a dense source array.

  5. New Geophysical Techniques for Offshore Exploration.

    ERIC Educational Resources Information Center

    Talwani, Manik

    1983-01-01

    New seismic techniques have been developed recently that borrow theory from academic institutions and technology from industry, allowing scientists to explore deeper into the earth with much greater precision than possible with older seismic methods. Several of these methods are discussed, including the seismic reflection common-depth-point…

  6. Attenuation and velocity dispersion in the exploration seismic frequency band

    NASA Astrophysics Data System (ADS)

    Sun, Langqiu

    In an anelastic medium, seismic waves are distorted by attenuation and velocity dispersion, which depend on the petrophysical properties of reservoir rocks. The effective attenuation and velocity dispersion are a combination of intrinsic attenuation and apparent attenuation due to scattering, the transmission response, and the data acquisition system. Velocity dispersion is usually neglected in seismic data processing, partly because of insufficient observations in the exploration seismic frequency band. This thesis investigates methods of measuring velocity dispersion in the exploration seismic frequency band and interprets the velocity dispersion data in terms of petrophysical properties. Broadband, uncorrelated vibrator data are suitable for measuring velocity dispersion in the exploration seismic frequency band, and a broad bandwidth optimizes the observability of velocity dispersion. Four methods of measuring velocity dispersion in uncorrelated vibrator VSP data are investigated: the sliding window crosscorrelation (SWCC) method, the instantaneous phase method, the spectral decomposition method, and the cross spectrum method. Among them, the SWCC method is new and has satisfactory robustness, accuracy, and efficiency. Using the SWCC method, velocity dispersion is measured in uncorrelated vibrator VSP data from three areas with different geological settings: the Mallik gas hydrate zone, the McArthur River uranium mines, and the Outokumpu crystalline rocks. The observed velocity dispersion is fitted to a straight line with respect to log frequency for a constant (frequency-independent) Q value. This provides an alternative method for calculating Q. A constant Q value does not, however, link directly to petrophysical properties, so a modeling study is carried out for the Mallik and McArthur River data to interpret the velocity dispersion observations in terms of petrophysical properties.
    The detailed multi-parameter petrophysical reservoir models are built according to the well logs, and the models' parameters are adjusted by fitting the synthetic data to the observed data. In this way, seismic attenuation and velocity dispersion provide new insight into petrophysical properties at the Mallik and McArthur River sites. Potentially, observations of attenuation and velocity dispersion in the exploration seismic frequency band can improve the deconvolution process for vibrator data, Q-compensation, near-surface analysis, and first break picking for seismic data.
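The straight-line fit of velocity against log-frequency for a constant Q mentioned above follows the standard constant-Q (Kolsky-type) dispersion relation, v(f) = v_ref (1 + ln(f/f_ref)/(pi Q)). A minimal sketch of recovering Q from such a fit:

```python
import numpy as np

def q_from_dispersion(freqs, velocities, f_ref):
    """Estimate a constant (frequency-independent) Q from measured phase
    velocities, assuming the constant-Q relation
        v(f) = v_ref * (1 + ln(f / f_ref) / (pi * Q)),
    i.e. velocity is linear in log-frequency, so the slope of a straight-
    line fit gives Q = v_ref / (pi * slope)."""
    slope, v_ref = np.polyfit(np.log(freqs / f_ref), velocities, 1)
    return v_ref / (np.pi * slope)
```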

  7. Development of a low cost method to estimate the seismic signature of a geothermal field from ambient seismic noise analysis

    NASA Astrophysics Data System (ADS)

    Tibuleac, I. M.; Iovenitti, J. L.; Pullammanappallil, S. K.; von Seggern, D. H.; Ibser, H.; Shaw, D.; McLachlan, H.

    2015-12-01

    A new, cost-effective and non-invasive exploration method using ambient seismic noise has been tested at Soda Lake, NV, with promising results. Seismic interferometry was used to extract Green's functions (P and surface waves) from 21 days of continuous ambient seismic noise. Aided by S-velocity models estimated from surface waves, an ambient-noise seismic reflection survey along a line (named Line 2) reproduced the results of the active survey, although with lower resolution, where the ambient seismic noise was not contaminated by strong cultural noise. Ambient-noise resolution was lower at depth (below 1000 m) compared to the active survey. Useful information could be recovered from ambient seismic noise, including dipping features and fault locations. Processing method tests were developed, with potential to improve the virtual reflection survey results. Through innovative signal processing techniques, periods not typically analyzed with high-frequency sensors were used in this study to obtain seismic velocity model information to a depth of 1.4 km. New seismic parameters such as lateral variations of the Green's function reflection component, waveform entropy, stochastic parameters (correlation length and Hurst number), and spectral frequency content extracted from active and passive surveys showed potential to indicate geothermal favorability through their correlation with high-temperature anomalies, and showed potential as fault indicators, thus reducing the uncertainty in fault identification. Geothermal favorability maps along ambient seismic Line 2 were generated considering temperature, lithology, and the seismic parameters investigated in this study, and were compared to the active Line 2 results. Pseudo-favorability maps were also generated using only the seismic parameters analyzed in this study.
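The interferometric core of this kind of workflow is the cross-correlation of simultaneous noise records from pairs of stations, whose stack over many windows approximates the inter-station Green's function. A minimal sketch (omitting the spectral whitening and temporal normalization a real workflow applies) is:

```python
import numpy as np

def noise_crosscorrelation(trace_a, trace_b, nlag):
    """Cross-correlate two simultaneous, equal-length ambient-noise
    records; stacking many such correlations approximates the
    inter-station Green's function (seismic interferometry). The output
    is the correlation restricted to lags -nlag..+nlag samples, with
    positive lags corresponding to energy travelling from A to B."""
    n = len(trace_a)
    cc = np.correlate(trace_b, trace_a, mode="full")  # length 2n - 1
    mid = n - 1                                       # zero-lag index
    return cc[mid - nlag: mid + nlag + 1]
```

For an impulse at B delayed by k samples relative to an impulse at A, the correlation peaks at lag +k, which is how travel times between stations are read off.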

  8. Toward Joint Hypothesis-Tests Seismic Event Screening Analysis: Ms|mb and Event Depth

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Anderson, Dale; Selby, Neil

    2012-08-14

    Well-established theory can be used to combine single-phenomenology hypothesis tests into a multi-phenomenology event screening hypothesis test (Fisher's and Tippett's tests). The commonly used standard error in the Ms:mb event screening hypothesis test is not fully consistent with its physical basis. An improved standard error agrees better with the physical basis, correctly partitions the error to include model error as a component of variance, and correctly reduces station noise variance through network averaging. For the 2009 DPRK test, the commonly used standard error 'rejects' H0 even with a better scaling slope ({beta} = 1, Selby et al.), whereas the improved standard error 'fails to reject' H0.
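Fisher's test, named above as one way to combine single-phenomenology tests, has a simple closed form: X2 = -2 sum(ln p_i) is chi-square with 2k degrees of freedom under H0. A sketch of the combined p-value, using the closed-form chi-square survival function for even degrees of freedom:

```python
import math

def fishers_method(p_values):
    """Combine k independent single-phenomenology p-values into one
    joint screening statistic via Fisher's method: X2 = -2 * sum(ln p)
    is chi-square with 2k degrees of freedom under H0. Returns the
    combined p-value, using the closed form for even dof:
        P(X > x) = exp(-x/2) * sum_{i=0}^{k-1} (x/2)^i / i!"""
    k = len(p_values)
    half = -sum(math.log(p) for p in p_values)  # X2 / 2
    term, total = 1.0, 0.0
    for i in range(k):
        total += term
        term *= half / (i + 1)
    return math.exp(-half) * total
```

With a single p-value the method reduces to that p-value itself, a quick sanity check.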

  9. Comparison between deterministic and statistical wavelet estimation methods through predictive deconvolution: Seismic to well tie example from the North Sea

    NASA Astrophysics Data System (ADS)

    de Macedo, Isadora A. S.; da Silva, Carolina B.; de Figueiredo, J. J. S.; Omoboya, Bode

    2017-01-01

    Wavelet estimation and seismic-to-well tie procedures are at the core of every seismic interpretation workflow. In this paper we perform a comparative study of wavelet estimation methods for seismic-to-well tie. Two approaches to wavelet estimation are discussed: a deterministic estimation, based on both seismic and well log data, and a statistical estimation, based on predictive deconvolution and the classical assumptions of the convolutional model, which provides a minimum-phase wavelet. Our algorithms for both wavelet estimation methods introduce a semi-automatic approach to determine the optimum parameters of the deterministic and statistical estimations and, further, to estimate the optimum seismic wavelet by searching for the highest correlation coefficient between the recorded trace and the synthetic trace when the time-depth relationship is accurate. Tests with numerical data yield qualitative conclusions, likely useful for seismic inversion and interpretation of field data, by comparing deterministic and statistical wavelet estimation in detail, especially for the field data example. The feasibility of this approach is verified on real seismic and well data from the Viking Graben field, North Sea, Norway. Our results also show the influence of washout zones in the well log data on the quality of the seismic-to-well tie.

  10. Method for inverting reflection trace data from 3-D and 4-D seismic surveys and identifying subsurface fluid and pathways in and among hydrocarbon reservoirs based on impedance models

    DOEpatents

    He, W.; Anderson, R.N.

    1998-08-25

    A method is disclosed for inverting 3-D seismic reflection data obtained from seismic surveys to derive impedance models for a subsurface region, and for inversion of multiple 3-D seismic surveys (i.e., 4-D seismic surveys) of the same subsurface volume, separated in time to allow for dynamic fluid migration, such that small scale structure and regions of fluid and dynamic fluid flow within the subsurface volume being studied can be identified. The method allows for the mapping and quantification of available hydrocarbons within a reservoir and is thus useful for hydrocarbon prospecting and reservoir management. An iterative seismic inversion scheme constrained by actual well log data which uses a time/depth dependent seismic source function is employed to derive impedance models from 3-D and 4-D seismic datasets. The impedance values can be region grown to better isolate the low impedance hydrocarbon bearing regions. Impedance data derived from multiple 3-D seismic surveys of the same volume can be compared to identify regions of dynamic evolution and bypassed pay. Effective Oil Saturation or net oil thickness can also be derived from the impedance data and used for quantitative assessment of prospective drilling targets and reservoir management. 20 figs.

  11. Method for inverting reflection trace data from 3-D and 4-D seismic surveys and identifying subsurface fluid and pathways in and among hydrocarbon reservoirs based on impedance models

    DOEpatents

    He, Wei; Anderson, Roger N.

    1998-01-01

    A method is disclosed for inverting 3-D seismic reflection data obtained from seismic surveys to derive impedance models for a subsurface region, and for inversion of multiple 3-D seismic surveys (i.e., 4-D seismic surveys) of the same subsurface volume, separated in time to allow for dynamic fluid migration, such that small scale structure and regions of fluid and dynamic fluid flow within the subsurface volume being studied can be identified. The method allows for the mapping and quantification of available hydrocarbons within a reservoir and is thus useful for hydrocarbon prospecting and reservoir management. An iterative seismic inversion scheme constrained by actual well log data which uses a time/depth dependent seismic source function is employed to derive impedance models from 3-D and 4-D seismic datasets. The impedance values can be region grown to better isolate the low impedance hydrocarbon bearing regions. Impedance data derived from multiple 3-D seismic surveys of the same volume can be compared to identify regions of dynamic evolution and bypassed pay. Effective Oil Saturation or net oil thickness can also be derived from the impedance data and used for quantitative assessment of prospective drilling targets and reservoir management.

  12. Seismic analysis for translational failure of landfills with retaining walls.

    PubMed

    Feng, Shi-Jin; Gao, Li-Ya

    2010-11-01

    In the seismic impact zone, seismic force can be a major triggering mechanism for translational failures of landfills. The scope of this paper is to develop a three-part wedge method for seismic analysis of translational failures of landfills with retaining walls. An approximate solution for the factor of safety can be calculated. Unlike previous conventional limit equilibrium methods, the new method is capable of revealing the effects of both the solid waste shear strength and the retaining wall on the translational failures of landfills during earthquakes. Parameter studies of the developed method show that the factor of safety decreases as the seismic coefficient increases, while it increases quickly with the minimum friction angle beneath the waste mass for various horizontal seismic coefficients. Increasing the minimum friction angle beneath the waste mass appears to be more effective than any other parameter for increasing the factor of safety under the considered conditions. Thus, selecting liner materials with a higher friction angle will considerably reduce the potential for translational failures of landfills during earthquakes. The factor of safety gradually increases with the height of the retaining wall for various horizontal seismic coefficients; a higher retaining wall is beneficial to the seismic stability of the landfill, and simply ignoring the retaining wall leads to serious underestimation of the factor of safety. An approximate solution for the yield acceleration coefficient of the landfill is also presented based on the developed method.
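The reported dependence of the factor of safety on the seismic coefficient and the interface friction angle can be illustrated with a one-wedge, purely frictional pseudo-static sketch (far simpler than the paper's three-part wedge method, but showing the same trends):

```python
import math

def pseudo_static_fos(beta_deg, phi_deg, kh):
    """Pseudo-static factor of safety for a rigid block on a planar
    interface (a one-wedge, cohesionless simplification). beta: slope
    angle of the sliding plane; phi: interface friction angle; kh:
    horizontal seismic coefficient. FS = resisting / driving shear."""
    b, p = math.radians(beta_deg), math.radians(phi_deg)
    normal = math.cos(b) - kh * math.sin(b)   # normal force per unit weight
    driving = math.sin(b) + kh * math.cos(b)  # driving shear per unit weight
    return normal * math.tan(p) / driving
```

At kh = 0 this reduces to the static result FS = tan(phi)/tan(beta), and FS falls as kh grows, consistent with the parameter study summarized above.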

  13. The sequentially discounting autoregressive (SDAR) method for on-line automatic seismic event detecting on long term observation

    NASA Astrophysics Data System (ADS)

    Wang, L.; Toshioka, T.; Nakajima, T.; Narita, A.; Xue, Z.

    2017-12-01

    In recent years, more and more carbon capture and storage (CCS) studies have focused on seismicity monitoring. For the safety management of geological CO2 storage at Tomakomai, Hokkaido, Japan, an Advanced Traffic Light System (ATLS) combining different seismic information (magnitudes, phases, distributions, etc.) is proposed for injection control. The primary task for ATLS is seismic event detection in a long-term, sustained time-series record. The time-varying signal-to-noise ratio (SNR) of a long-term record and the uneven energy distribution of seismic event waveforms make automatic seismic detection difficult, so in this work an improved probabilistic autoregressive (AR) method for automatic seismic event detection is applied. This algorithm, called sequentially discounting AR learning (SDAR), identifies effective seismic events in the time series through change-point detection (CPD) on the seismic record. In this framework, an anomalous signal (a seismic event) is treated as a change point in the time series: the statistical model of the signal in the neighborhood of the event changes because of the event's occurrence, so SDAR finds the statistical irregularities of the record through CPD. SDAR has three advantages. (1) Noise robustness: SDAR does not use waveform attributes (such as amplitude, energy, or polarization) for detection, making it an appropriate technique for low-SNR data. (2) Real-time estimation: as new data appear in the record, the probability distribution models are automatically updated by SDAR for on-line processing. (3) Discounting: SDAR introduces a discounting parameter that decreases the influence of past statistics on future data, making it a robust algorithm for non-stationary signal processing.
    With these three advantages, the SDAR method can handle non-stationary, time-varying long-term series and achieve real-time monitoring. Finally, we apply SDAR to a synthetic model and to Tomakomai ocean-bottom cable (OBC) baseline data to demonstrate the feasibility and advantages of our method.
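A heavily simplified, AR(1)-with-forgetting sketch of the SDAR scoring idea follows (not the authors' implementation; the full method scores by the sequentially-discounted log-loss of a higher-order AR model). The discounting factor r plays the role of the forgetting parameter described above:

```python
def sdar_scores(x, r=0.05):
    """Minimal change-point scoring sketch in the spirit of SDAR,
    reduced to an AR(1) model with exponentially forgotten (discounted)
    statistics. The score is the squared one-step prediction error
    normalized by the tracked error variance; a jump in the statistics
    of the series produces a spike in the score."""
    mu, a, var = x[0], 0.0, 1.0
    scores = [0.0]
    for t in range(1, len(x)):
        pred = mu + a * (x[t - 1] - mu)
        err = x[t] - pred
        scores.append(err * err / var)
        # discounted (forgetting-factor) updates of mean, variance, AR coef
        mu = (1 - r) * mu + r * x[t]
        var = (1 - r) * var + r * err * err
        denom = (x[t - 1] - mu) ** 2 + 1e-9
        a = (1 - r) * a + r * (x[t] - mu) * (x[t - 1] - mu) / denom
        a = max(-1.0, min(1.0, a))  # keep the toy model stable
    return scores
```

Because older samples are progressively forgotten, the model re-adapts after the change, which is the discounting property the abstract describes.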

  14. An improved peak frequency shift method for Q estimation based on generalized seismic wavelet function

    NASA Astrophysics Data System (ADS)

    Wang, Qian; Gao, Jinghuai

    2018-02-01

    As a powerful tool for hydrocarbon detection and reservoir characterization, the quality factor, Q, provides useful information in seismic data processing and interpretation. In this paper, we propose a novel method for Q estimation. The generalized seismic wavelet (GSW) function is introduced to fit the amplitude spectrum of seismic waveforms with two parameters: a fractional value and a reference frequency. We then derive an analytical relation between the GSW function and the Q factor of the medium. When a seismic wave propagates through a viscoelastic medium, the GSW function can be employed to fit the amplitude spectra of the source and attenuated wavelets, and the fractional values and reference frequencies can be evaluated numerically from the discrete Fourier spectrum. After calculating the peak frequency from the obtained fractional value and reference frequency, the relationship between the GSW function and the Q factor can be built via the conventional peak frequency shift method. Synthetic tests indicate that our method achieves higher accuracy and is more robust to random noise than existing methods. Furthermore, the proposed method is applicable to different types of source wavelets. A field data application also demonstrates the effectiveness of our method in estimating seismic attenuation and its potential for reservoir characterization.
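For context, the classic spectral-ratio approach to Q estimation (a simpler, different method than the paper's GSW-based peak-frequency technique) can be sketched as follows. For constant Q, ln(A2/A1) = -pi f t / Q + const, so Q follows from the slope of the log spectral ratio versus frequency:

```python
import numpy as np

def q_spectral_ratio(freqs, spec_source, spec_attenuated, travel_time):
    """Classic spectral-ratio Q estimate. Under a constant-Q attenuation
    model, ln(A_attenuated / A_source) = -pi * f * t / Q + const, so the
    slope of the log spectral ratio against frequency gives
    Q = -pi * t / slope."""
    ratio = np.log(spec_attenuated / spec_source)
    slope = np.polyfit(freqs, ratio, 1)[0]
    return -np.pi * travel_time / slope
```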

  15. Poor boy 3D seismic effort yields South Central Kentucky discovery

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hanratty, M.

    1996-11-04

    Clinton County, Ky., is on the eastern flank of the Cincinnati arch and the western edge of the Appalachian basin and the Pine Mountain overthrust. Clinton County has long been known for high volume fractured carbonate wells. The discovery of these fractured reservoirs, unfortunately, has historically been serendipitous. The author currently uses 2D seismic and satellite imagery to design 3D high resolution seismic shoots. This method has proven to be the most efficient and is the core of his program. The paper describes exploration methods, seismic acquisition, the well database, and seismic interpretation.

  16. Contemporary Tectonics of China

    DTIC Science & Technology

    1978-02-01

    that it would be of value to the United States to understand seismicity in China because their methods used in predicting large intraplate seismic...ability to discriminate between natural events and nuclear explosions. General Method In order to circumvent the limitations placed on studies of...accurate relative locations. Fault planes may be determined with this method, thereby removing the ambiguity of the choice of fault plane from a fault plane

  17. Automated seismic waveform location using Multichannel Coherency Migration (MCM)-I. Theory

    NASA Astrophysics Data System (ADS)

    Shi, Peidong; Angus, Doug; Rost, Sebastian; Nowacki, Andy; Yuan, Sanyi

    2018-03-01

    With the proliferation of dense seismic networks sampling the full seismic wavefield, recorded seismic data volumes are getting bigger and automated analysis tools to locate seismic events are essential. Here, we propose a novel Multichannel Coherency Migration (MCM) method to locate earthquakes in continuous seismic data and reveal the location and origin time of seismic events directly from recorded waveforms. By continuously calculating the coherency between waveforms from different receiver pairs, MCM greatly expands the available information which can be used for event location. MCM does not require phase picking or phase identification, which allows fully automated waveform analysis. By migrating the coherency between waveforms, MCM leads to improved source energy focusing. We have tested and compared MCM to other migration-based methods in noise-free and noisy synthetic data. The tests and analysis show that MCM is noise resistant and can achieve more accurate results compared with other migration-based methods. MCM is able to suppress strong interference from other seismic sources occurring at a similar time and location. It can be used with arbitrary 3D velocity models and is able to obtain reasonable location results with smooth but inaccurate velocity models. MCM exhibits excellent location performance and can be easily parallelized giving it large potential to be developed as a real-time location method for very large datasets.

  18. Seismic instantaneous frequency extraction based on the SST-MAW

    NASA Astrophysics Data System (ADS)

    Liu, Naihao; Gao, Jinghuai; Jiang, Xiudi; Zhang, Zhuosheng; Wang, Ping

    2018-06-01

    The instantaneous frequency (IF) extraction of seismic data has been widely applied in seismic exploration for decades, for example to detect seismic absorption and characterize depositional thicknesses. Based on complex-trace analysis, the Hilbert transform (HT) can extract the IF directly, but this traditional method is susceptible to noise. In this paper, a robust approach based on the synchrosqueezing transform (SST) is proposed to extract the IF from seismic data. In this process, a novel analytical wavelet, called the modified analytical wavelet (MAW) and derived from the three-parameter wavelet, is developed and chosen as the basic wavelet. After transforming the seismic signal into a sparse time-frequency domain via the SST with the MAW (SST-MAW), an adaptive threshold is introduced to improve the noise immunity and accuracy of the IF extraction in a noisy environment. Note that the SST-MAW reconstructs a complex trace to extract the seismic IF. To demonstrate the effectiveness of the proposed method, we apply the SST-MAW to synthetic data and field seismic data. Numerical experiments suggest that the proposed procedure yields higher resolution and better anti-noise performance than conventional IF extraction methods based on the HT and the continuous wavelet transform. Moreover, geological features (such as channels) are well characterized, which is insightful for further oil/gas reservoir identification.

  19. Annotated bibliography, seismicity of and near the island of Hawaii and seismic hazard analysis of the East Rift of Kilauea

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Klein, F.W.

    1994-03-28

    This bibliography is divided into the following four sections: Seismicity of Hawaii and Kilauea Volcano; Occurrence, locations and accelerations from large historical Hawaiian earthquakes; Seismic hazards of Hawaii; and Methods of seismic hazard analysis. It contains 62 references, most of which are accompanied by short abstracts.

  20. Application of seismic-refraction techniques to hydrologic studies

    USGS Publications Warehouse

    Haeni, F.P.

    1986-01-01

    During the past 30 years, seismic-refraction methods have been used extensively in petroleum, mineral, and engineering investigations, and to some extent for hydrologic applications. Recent advances in equipment, sound sources, and computer interpretation techniques make seismic refraction a highly effective and economical means of obtaining subsurface data in hydrologic studies. Aquifers that can be defined by one or more high seismic-velocity surfaces, such as (1) alluvial or glacial deposits in consolidated rock valleys, (2) limestone or sandstone underlain by metamorphic or igneous rock, or (3) saturated unconsolidated deposits overlain by unsaturated unconsolidated deposits, are ideally suited for applying seismic-refraction methods. These methods allow the economical collection of subsurface data, provide the basis for more efficient collection of data by test drilling or aquifer tests, and result in improved hydrologic studies. This manual briefly reviews the basics of seismic-refraction theory and principles. It emphasizes the use of this technique in hydrologic investigations and describes the planning, equipment, field procedures, and interpretation techniques needed for this type of study. Examples of the use of seismic-refraction techniques in a wide variety of hydrologic studies are presented.
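For the common two-layer case (e.g. unsaturated deposits over a saturated, higher-velocity aquifer), the standard intercept-time and crossover-distance formulas of refraction theory give the depth to a horizontal refractor; a sketch:

```python
import math

def refractor_depth(v1, v2, intercept_time):
    """Depth to a horizontal high-velocity refractor from a two-layer
    seismic-refraction survey via the intercept-time method:
        z = t_i * v1 * v2 / (2 * sqrt(v2^2 - v1^2)),
    where t_i is the intercept of the refracted-arrival line on the
    travel-time plot and v1 < v2."""
    return intercept_time * v1 * v2 / (2.0 * math.sqrt(v2 ** 2 - v1 ** 2))

def crossover_depth(v1, v2, x_cross):
    """Same depth from the crossover distance x_cross, where direct and
    refracted first arrivals coincide:
        z = (x_cross / 2) * sqrt((v2 - v1) / (v2 + v1))."""
    return 0.5 * x_cross * math.sqrt((v2 - v1) / (v2 + v1))
```

Both expressions describe the same geometry, so for consistent inputs they return the same depth.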

  1. Application of seismic-refraction techniques to hydrologic studies

    USGS Publications Warehouse

    Haeni, F.P.

    1988-01-01

    During the past 30 years, seismic-refraction methods have been used extensively in petroleum, mineral, and engineering investigations and to some extent for hydrologic applications. Recent advances in equipment, sound sources, and computer interpretation techniques make seismic refraction a highly effective and economical means of obtaining subsurface data in hydrologic studies. Aquifers that can be defined by one or more high-seismic-velocity surfaces, such as (1) alluvial or glacial deposits in consolidated rock valleys, (2) limestone or sandstone underlain by metamorphic or igneous rock, or (3) saturated unconsolidated deposits overlain by unsaturated unconsolidated deposits, are ideally suited for seismic-refraction methods. These methods allow economical collection of subsurface data, provide the basis for more efficient collection of data by test drilling or aquifer tests, and result in improved hydrologic studies. This manual briefly reviews the basics of seismic-refraction theory and principles. It emphasizes the use of these techniques in hydrologic investigations and describes the planning, equipment, field procedures, and interpretation techniques needed for this type of study. Furthermore, examples of the use of seismic-refraction techniques in a wide variety of hydrologic studies are presented.

  2. Initialising reservoir models for history matching using pre-production 3D seismic data: constraining methods and uncertainties

    NASA Astrophysics Data System (ADS)

    Niri, Mohammad Emami; Lumley, David E.

    2017-10-01

    Integration of 3D and time-lapse 4D seismic data into reservoir modelling and history matching processes poses a significant challenge due to the frequent mismatch between the initial reservoir model, the true reservoir geology, and the pre-production (baseline) seismic data. A fundamental step of a reservoir characterisation and performance study is the preconditioning of the initial reservoir model to equally honour both the geological knowledge and seismic data. In this paper we analyse the issues that have a significant impact on the (mis)match of the initial reservoir model with well logs and inverted 3D seismic data. These issues include the constraining methods for reservoir lithofacies modelling, the sensitivity of the results to the presence of realistic resolution and noise in the seismic data, the geostatistical modelling parameters, and the uncertainties associated with quantitative incorporation of inverted seismic data in reservoir lithofacies modelling. We demonstrate that in a geostatistical lithofacies simulation process, seismic constraining methods based on seismic litho-probability curves and seismic litho-probability cubes yield the best match to the reference model, even when realistic resolution and noise is included in the dataset. In addition, our analyses show that quantitative incorporation of inverted 3D seismic data in static reservoir modelling carries a range of uncertainties and should be cautiously applied in order to minimise the risk of misinterpretation. These uncertainties are due to the limited vertical resolution of the seismic data compared to the scale of the geological heterogeneities, the fundamental instability of the inverse problem, and the non-unique elastic properties of different lithofacies types.

  3. Quantitative Estimation of Seismic Velocity Changes Using Time-Lapse Seismic Data and Elastic-Wave Sensitivity Approach

    NASA Astrophysics Data System (ADS)

    Denli, H.; Huang, L.

    2008-12-01

    Quantitative monitoring of reservoir property changes is essential for safe geologic carbon sequestration. Time-lapse seismic surveys have the potential to effectively monitor fluid migration in the reservoir that causes geophysical property changes such as density and P- and S-wave velocities. We introduce a novel method for quantitative estimation of seismic velocity changes using time-lapse seismic data. The method employs elastic sensitivity wavefields, which are the derivatives of the elastic wavefield with respect to the density and the P- and S-wave velocities of a target region. We derive the elastic sensitivity equations from analytical differentiation of the elastic-wave equations with respect to the seismic-wave velocities. The sensitivity equations are coupled with the wave equations in such a way that elastic waves arriving in a target reservoir act as a secondary source for the sensitivity fields. We use a staggered-grid finite-difference scheme with perfectly-matched-layer absorbing boundary conditions to simultaneously solve the elastic-wave equations and the elastic sensitivity equations. Using the elastic-wave sensitivities, a linear relationship between relative seismic velocity changes in the reservoir and time-lapse seismic data at receiver locations can be derived, which leads to an over-determined system of equations. We solve this system of equations using a least-squares method for each receiver to obtain P- and S-wave velocity changes. We validate the method using both surface and VSP synthetic time-lapse seismic data for a multi-layered model and the elastic Marmousi model. We then apply it to time-lapse field VSP data acquired at the Aneth oil field in Utah, where a total of 10.5K tons of CO2 was injected into the oil reservoir between the two VSP surveys for enhanced oil recovery. The synthetic and field data studies show that our new method can quantitatively estimate changes in seismic velocities within a reservoir due to CO2 injection/migration.

  4. New approach to detect seismic surface waves in 1Hz-sampled GPS time series

    PubMed Central

    Houlié, N.; Occhipinti, G.; Blanchard, T.; Shapiro, N.; Lognonné, P.; Murakami, M.

    2011-01-01

    Recently, co-seismic source characterization based on GPS measurements has been accomplished in both the near- and far-field with remarkable results. However, the accuracy of ground displacement measurements inferred from GPS phase residuals still depends on the distribution of satellites in the sky. We test here a method, based on double-difference (DD) computations of the Line of Sight (LOS), that allows detection of 3D co-seismic ground shaking. The DD method is quasi-analytically free of most of the intrinsic errors affecting GPS measurements. The seismic waves presented in this study produced DD amplitudes 4 to 7 times stronger than the background noise. The method is benchmarked using the GEONET GPS stations that recorded the Hokkaido earthquake (2003 September 25, Mw = 8.3). PMID:22355563
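
    The reason the double difference is "quasi-analytically free" of most intrinsic errors is that differencing across two satellites and two stations cancels per-satellite and per-receiver clock biases exactly. A minimal synthetic sketch (all numbers invented, not real GEONET data):

```python
import numpy as np

# Minimal sketch of the double-difference (DD) idea: differencing LOS phase
# observations across two satellites and two stations cancels satellite and
# receiver clock errors, leaving only geometry / ground motion terms.
def double_difference(los):
    """los[station, satellite] -> scalar DD for a 2x2 observation set."""
    return (los[0, 0] - los[0, 1]) - (los[1, 0] - los[1, 1])

rng = np.random.default_rng(1)
geom = rng.normal(0.0, 1e-3, size=(2, 2))   # geometric LOS terms (m)
sat_clock = np.array([0.7, -0.3])           # per-satellite clock bias (m)
rec_clock = np.array([0.2, -0.5])           # per-receiver clock bias (m)

# Observed LOS = geometry + satellite clock + receiver clock
los = geom + sat_clock[None, :] + rec_clock[:, None]

dd_obs = double_difference(los)
dd_geom = double_difference(geom)
print(dd_obs, dd_geom)   # identical: the clock terms cancel
```

    The same cancellation holds for any error that is constant per satellite or per receiver, which is why the DD residual isolates the co-seismic signal.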

  5. Method for enhancing low frequency output of impulsive type seismic energy sources and its application to a seismic energy source for use while drilling

    DOEpatents

    Radtke, Robert P; Stokes, Robert H; Glowka, David A

    2014-12-02

    A method for operating an impulsive type seismic energy source in a firing sequence having at least two actuations for each seismic impulse to be generated by the source. The actuations have a time delay between them related to a selected energy frequency peak of the source output. One example of the method is used for generating seismic signals in a wellbore and includes discharging electric current through a spark gap disposed in the wellbore in at least one firing sequence. The sequence includes at least two actuations of the spark gap separated by an amount of time selected to cause acoustic energy resulting from the actuations to have peak amplitude at a selected frequency.
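
    The delay-selection principle in this patent abstract can be illustrated with a two-impulse spectrum: two unit impulses separated by a delay tau have amplitude spectrum |1 + exp(-i 2 pi f tau)| = 2|cos(pi f tau)|, which peaks at multiples of 1/tau. The sampling rate and target frequency below are invented for illustration:

```python
import numpy as np

# Sketch of the firing-sequence idea: two actuations separated by tau
# reinforce spectral energy at f = 1/tau (and its multiples).
fs = 10_000.0          # sampling rate (Hz), illustrative
f_target = 50.0        # desired energy peak (Hz)
tau = 1.0 / f_target   # inter-actuation delay (s)

n = 4096
sig = np.zeros(n)
sig[0] = 1.0                     # first actuation
sig[int(tau * fs)] = 1.0         # second actuation, tau seconds later

freqs = np.fft.rfftfreq(n, d=1.0 / fs)
spec = np.abs(np.fft.rfft(sig))

# Amplitude near the target frequency approaches the maximum (2.0) ...
i_target = np.argmin(np.abs(freqs - f_target))
# ... while a spectral null sits halfway between peaks, at f_target / 2
i_null = np.argmin(np.abs(freqs - f_target / 2))
print(spec[i_target], spec[i_null])
```

    Choosing the inter-actuation delay therefore steers where the source's low-frequency energy is concentrated, which is the abstract's core claim.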

  6. Waveform Retrieval and Phase Identification for Seismic Data from the CASS Experiment

    NASA Astrophysics Data System (ADS)

    Li, Zhiwei; You, Qingyu; Ni, Sidao; Hao, Tianyao; Wang, Hongti; Zhuang, Cantao

    2013-05-01

    The minimal damage to the deployment site and the high repeatability of the Controlled Accurate Seismic Source (CASS) show its potential for investigating seismic wave velocities in the Earth's crust. However, the difficulty of retrieving impulsive seismic waveforms from CASS data and identifying the seismic phases has substantially limited its wider application. For example, identification of the seismic phases and accurate measurement of travel times are essential for resolving the spatial distribution of seismic velocities in the crust. It remains a challenging task to estimate accurate travel times of different seismic phases from CASS data, which feature extended wave trains, unlike the waveforms from impulsive events such as earthquakes or explosive sources. In this study, we introduce a time-frequency analysis method to process the CASS data, retrieve the seismic waveforms, and identify the major seismic phases traveling through the crust. We adopt the Wigner-Ville Distribution (WVD) approach, which has been used in signal detection and parameter estimation for linear frequency modulation (LFM) signals and offers excellent time-frequency concentration. The Wigner-Hough transform (WHT) is applied to retrieve the impulsive waveforms from multi-component LFM signals, which comprise seismic phases with different arrival times. We processed the seismic data of the 40-ton CASS from the field experiment around the Xinfengjiang reservoir with the WVD and WHT methods. The results demonstrate that these methods are effective for waveform retrieval and phase identification, especially for high-frequency seismic phases such as PmP and SmS, which have strong amplitudes at large epicentral distances of 80-120 km. Further studies are still needed to improve the accuracy of travel-time estimation and thereby promote the applicability of the CASS for imaging seismic velocity structure.
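
    A bare-bones discrete Wigner-Ville distribution can clarify why the WVD concentrates LFM energy along the instantaneous frequency. This is a generic textbook sketch, not the authors' implementation; the O(n^2) loop and the pure-tone test signal are illustrative simplifications:

```python
import numpy as np

# Minimal discrete Wigner-Ville distribution (WVD) sketch.  For an analytic
# signal x, W[n, k] is the DFT over lag m of the instantaneous
# autocorrelation x[n+m] * conj(x[n-m]).
def wvd(x):
    n_samp = len(x)
    W = np.zeros((n_samp, n_samp))
    for n in range(n_samp):
        m_max = min(n, n_samp - 1 - n)        # lags fully inside the record
        r = np.zeros(n_samp, dtype=complex)
        for m in range(-m_max, m_max + 1):
            r[m % n_samp] = x[n + m] * np.conj(x[n - m])
        W[n] = np.fft.fft(r).real
    return W      # bin k maps to frequency f = k * fs / (2 * n_samp)

# Analytic pure tone at normalized frequency 0.1 (f0 = 0.1 * fs)
n_samp = 128
t = np.arange(n_samp)
x = np.exp(2j * np.pi * 0.1 * t)

W = wvd(x)
peak_bin = int(np.argmax(W[n_samp // 2]))
print(peak_bin)   # near 2 * 0.1 * n_samp = 25.6
```

    For a chirp (LFM) input the peak bin tracks the instantaneous frequency sample by sample, which is what the Wigner-Hough transform then integrates along candidate chirp lines.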

  7. The theory and method of variable frequency directional seismic wave under the complex geologic conditions

    NASA Astrophysics Data System (ADS)

    Jiang, T.; Yue, Y.

    2017-12-01

    It is well known that mono-frequency directional seismic wave technology can concentrate seismic waves into a beam. However, little work has been done on the method and effects of variable frequency directional seismic waves under complex geological conditions. We studied variable frequency directional wave theory in several respects. First, we studied the relation between the directional parameters and the direction of the main beam. Second, we analyzed the parameters that most strongly affect the width of the main beam, such as vibrator spacing, wavelet dominant frequency, and number of vibrators. In addition, we studied the characteristics of variable frequency directional seismic waves in typical velocity models. To examine the propagation characteristics of directional seismic waves, we designed appropriate parameters according to the character of the directional parameters, which enhances the energy in the main beam direction. Directional seismic waves were further examined from the viewpoint of the power spectrum. The results indicate that the energy intensity in the main beam direction increased 2 to 6 times for a multi-ore-body velocity model. This shows that variable frequency directional seismic technology provides an effective way to strengthen target signals under complex geological conditions. For a concave interface model, we introduced a complex directional seismic technique that supports multiple main beams to obtain high quality data. Finally, we applied the 9-element variable frequency directional seismic wave technology to process raw data acquired in an oil-shale exploration area. The results show that the depth of exploration increased 4 times with the directional seismic wave method.
    Based on the above analysis, we conclude that variable frequency directional seismic wave technology can improve target signals under different geologic conditions and increase exploration depth at little cost. Because hydraulic vibrators are inconvenient in areas with complicated surface conditions, we suggest the combination of a high frequency portable vibrator with the variable frequency directional seismic wave method as an alternative technology for increasing the depth of exploration or prospecting.
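
    The dependence of the main beam on vibrator spacing, frequency, and element count can be sketched with a generic delay-and-sum array factor for a uniform line of vibrators. The formula and all parameter values below are standard phased-array assumptions, not the authors' specific design:

```python
import numpy as np

# Hedged sketch of the directional-source principle: N vibrators fired with
# progressive time delays steer the main beam; spacing, frequency and
# vibrator count control the beam width.  Values are illustrative only.
def array_factor(theta_deg, n_vib=9, spacing=10.0, freq=30.0, c=2000.0,
                 steer_deg=30.0):
    """|sum of phased contributions| for a uniform line array of vibrators."""
    theta = np.radians(theta_deg)
    steer = np.radians(steer_deg)
    n = np.arange(n_vib)
    phase = 2 * np.pi * freq * spacing * n / c * (np.sin(theta) - np.sin(steer))
    return np.abs(np.exp(1j * phase).sum())

angles = np.linspace(-90, 90, 361)
af = np.array([array_factor(a) for a in angles])
best = angles[np.argmax(af)]
print(best)   # main beam points at the 30-degree steering angle
```

    Sweeping `freq` in such a model is one way to see how a variable-frequency schedule reshapes the beam: higher frequencies narrow the main lobe for the same 9-element geometry.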

  8. Estimation of seismic attenuation in carbonate rocks using three different methods: Application on VSP data from Abu Dhabi oilfield

    NASA Astrophysics Data System (ADS)

    Bouchaala, F.; Ali, M. Y.; Matsushima, J.

    2016-06-01

    In this study a relationship between the seismic wavelength and the scale of heterogeneity in the propagating medium has been examined. The relationship estimates the size of heterogeneity that significantly affects wave propagation at a specific frequency, and it reduces the computation time of wave-scattering estimation. The relationship was applied in analyzing synthetic and Vertical Seismic Profiling (VSP) data obtained from an onshore oilfield in the Emirate of Abu Dhabi, United Arab Emirates. Prior to estimating the attenuation, a robust processing workflow was applied to both the synthetic and recorded data to increase the Signal-to-Noise Ratio (SNR). Two conventional methods, the spectral ratio and centroid frequency shift methods, were applied to estimate the attenuation from the extracted seismic waveforms, in addition to a new method based on seismic interferometry. The attenuation profiles derived from the three approaches showed similar variation, although the interferometry method yielded greater depth resolution and differences in attenuation magnitude. Furthermore, the attenuation profiles revealed a significant contribution of scattering to seismic wave attenuation. The results obtained from the seismic interferometry method showed that the estimated scattering attenuation ranges from 0 to 0.1 and the estimated intrinsic attenuation can reach 0.2. The subsurface of the studied zones is known to be highly porous and permeable, which suggests that the mechanism of the intrinsic attenuation is probably the interaction between pore fluids and solids.
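
    Of the two conventional methods named above, the spectral ratio method is easy to sketch: for two receiver depths separated by travel time dt, attenuation gives ln(A2(f)/A1(f)) = -pi f dt / Q + const, so Q follows from the slope of a line fit. The Q, dt and noise values below are invented for illustration:

```python
import numpy as np

# Spectral-ratio Q estimation sketch: fit the log spectral ratio between a
# deep and a shallow receiver against frequency; the slope gives Q.
rng = np.random.default_rng(2)

Q_true, dt = 50.0, 0.1                  # illustrative values
f = np.linspace(10.0, 80.0, 50)         # usable frequency band (Hz)
A1 = np.exp(-np.pi * f * 0.02 / 40.0)   # spectrum at the shallow receiver
A2 = A1 * np.exp(-np.pi * f * dt / Q_true)                 # deeper receiver
log_ratio = np.log(A2 / A1) + rng.normal(0, 0.01, f.size)  # + noise

slope, _ = np.polyfit(f, log_ratio, 1)
Q_est = -np.pi * dt / slope
print(Q_est)   # close to Q_true = 50
```

    Note that this simple fit returns total (effective) attenuation; separating scattering from intrinsic attenuation, as the abstract describes, requires additional modeling.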

  9. Picking vs Waveform based detection and location methods for induced seismicity monitoring

    NASA Astrophysics Data System (ADS)

    Grigoli, Francesco; Boese, Maren; Scarabello, Luca; Diehl, Tobias; Weber, Bernd; Wiemer, Stefan; Clinton, John F.

    2017-04-01

    Microseismic monitoring is a common operation in various industrial activities related to geo-resources, such as oil and gas production, mining, and geothermal energy exploitation. In microseismic monitoring we generally deal with large datasets from dense monitoring networks that require robust automated analysis procedures. The seismic sequences being monitored are often characterized by numerous events with short inter-event times, which can even produce overlapping seismic signatures. In these situations, traditional approaches that identify seismic events using dense seismic networks based on detections, phase identification and event association can fail, leading to missed detections and/or reduced location resolution. In recent years, to improve the quality of automated catalogues, various waveform-based methods for the detection and location of microseismicity have been proposed. These methods exploit the coherence of the waveforms recorded at different stations and do not require any automated picking procedure. Although this family of methods has been applied to different induced seismicity datasets, an extensive comparison with sophisticated pick-based detection and location methods is still lacking. We aim here to perform a systematic performance comparison between the waveform-based method LOKI and the pick-based detection and location methods SCAUTOLOC and SCANLOC, both implemented within the SeisComP3 software package. SCANLOC is a new detection and location method specifically designed for seismic monitoring at the local scale. It is based on a cluster search algorithm that associates detections with one or more potential earthquake sources; although recent applications have proved promising, an extensive test with induced seismicity datasets has not yet been performed. SCAUTOLOC, on the other hand, is a more "conventional" method and is the basic tool for seismic event detection and location in SeisComP3.
    This approach was specifically designed for regional and teleseismic applications, so its performance with microseismic data might be limited. We analyze the performance of the three methodologies on a synthetic dataset with realistic noise conditions as well as on the first hour of continuous waveform data, including the Ml 3.5 St. Gallen earthquake, recorded by a microseismic network deployed in the area. We finally compare the results obtained with all three methods against a manually revised catalogue.

  10. Seismic Hazard Assessment at Esfarayen‒Bojnurd Railway, North‒East of Iran

    NASA Astrophysics Data System (ADS)

    Haerifard, S.; Jarahi, H.; Pourkermani, M.; Almasian, M.

    2018-01-01

    The objective of this study is to evaluate the seismic hazard along the Esfarayen-Bojnurd railway using the probabilistic seismic hazard assessment (PSHA) method. The assessment was carried out on a recent data set that takes into account both historic seismicity and updated instrumental seismicity. A homogeneous earthquake catalogue was compiled and a proposed seismic source model was presented. Attenuation equations recently recommended by experts, developed from earthquake data obtained in tectonic environments similar to those in and around the study area, were weighted and used to assess the seismic hazard within a logic tree framework. Considering a grid of 1.2 × 1.2 km covering the study area, the ground acceleration at every node was calculated. Hazard maps at bedrock conditions were produced for peak ground acceleration at return periods of 74, 475 and 2475 years.
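
    The core PSHA computation at each grid node can be sketched for a single source: combine a truncated Gutenberg-Richter recurrence with a ground-motion model to get an annual exceedance-rate curve. This is a textbook toy, not the paper's logic-tree model; the GMPE coefficients, rates and distances are all invented:

```python
import numpy as np
from math import erfc, sqrt

# Minimal single-source PSHA sketch: truncated Gutenberg-Richter recurrence
# + a hypothetical lognormal ground-motion model, evaluated at one site.
def gmpe_mean_ln_pga(m, r_km):
    """Hypothetical GMPE: ln(PGA in g) = -1.0 + 0.9*M - 1.3*ln(r + 10)."""
    return -1.0 + 0.9 * m - 1.3 * np.log(r_km + 10.0)

def annual_exceedance_rate(a_g, r_km=30.0, a_gr=3.0, b_gr=1.0,
                           m_min=5.0, m_max=7.5, sigma_ln=0.6):
    mags = np.linspace(m_min, m_max, 26)
    # incremental annual rates from truncated GR: log10 N(>=m) = a - b*m
    rate_ge = 10.0 ** (a_gr - b_gr * mags)
    inc = -np.diff(np.append(rate_ge, 0.0))
    lam = 0.0
    for m, nu in zip(mags, inc):
        z = (np.log(a_g) - gmpe_mean_ln_pga(m, r_km)) / sigma_ln
        lam += nu * 0.5 * erfc(z / sqrt(2.0))   # P(PGA > a | m, r)
    return lam

# rate of exceeding 0.1 g vs 0.3 g: lower motions are exceeded more often
print(annual_exceedance_rate(0.1), annual_exceedance_rate(0.3))
```

    A full study repeats this over all sources and grid nodes, weights alternative attenuation equations in a logic tree, and inverts the rate curve for the accelerations at the 74-, 475- and 2475-year return periods.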

  11. Site Amplification Characteristics of the Several Seismic Stations at Jeju Island, in Korea, using S-wave Energy, Background Noise, and Coda waves from the East Japan earthquake (Mar. 11th, 2011) Series.

    NASA Astrophysics Data System (ADS)

    Seong-hwa, Y.; Wee, S.; Kim, J.

    2016-12-01

    Observed ground motions are composed of three main factors: the seismic source, seismic wave attenuation and site amplification. Site amplification is an important factor that should be considered to estimate soil-structure dynamic interaction more reliably. Although various estimation methods have been suggested, this study used the method of Castro et al. (1997) for estimating site amplification. This method has recently been extended to background noise, coda waves and S waves. We applied the Castro et al. (1997) method to three different types of seismic waves: S-wave energy, background noise, and coda waves. We analysed more than 200 ground motions (accelerations) from the East Japan earthquake (March 11, 2011) series recorded at seismic stations on Jeju Island (JJU, SGP, HALB, SSP and GOS; Fig. 1), Korea. The results showed that most of the seismic stations gave similar results among the three types of seismic energy. Each station showed its own site amplification characteristics in the low, high and specific resonance frequency ranges. Comparison of this study with others can provide much information about the dynamic amplification characteristics of domestic sites and their classification.

  12. A comparison of methods to estimate seismic phase delays--Numerical examples for coda wave interferometry

    USGS Publications Warehouse

    Mikesell, T. Dylan; Malcolm, Alison E.; Yang, Di; Haney, Matthew M.

    2015-01-01

    Time-shift estimation between arrivals in two seismic traces before and after a velocity perturbation is a crucial step in many seismic methods. The accuracy of the estimated velocity perturbation location and amplitude depends on this time shift. Windowed cross correlation and trace stretching are two techniques commonly used to estimate local time shifts in seismic signals. In the work presented here, we implement Dynamic Time Warping (DTW) to estimate the warping function – a vector of local time shifts that globally minimizes the misfit between two seismic traces. We illustrate the differences among the three methods using acoustic numerical experiments. We show that DTW is comparable to or better than the other two methods when the velocity perturbation is homogeneous and the signal-to-noise ratio is high. When the signal-to-noise ratio is low, we find that DTW and windowed cross correlation are more accurate than the stretching method. Finally, we show that the DTW algorithm has better time resolution when identifying small differences in the seismic traces for a model with an isolated velocity perturbation. These results impact current methods that utilize not only time shifts between (multiply) scattered waves, but also amplitude and decoherence measurements. DTW is a new tool that may find new applications in seismology and other geophysical methods (e.g., as a waveform inversion misfit function).
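
    A self-contained DTW sketch in the spirit of the paper: dynamic programming finds the warping path that globally minimizes the misfit, and local time shifts are read off the path. The Gaussian test pulses and the 3-sample delay are invented for illustration:

```python
import numpy as np

# Dynamic Time Warping: fill the cumulative-cost table, then backtrack the
# optimal alignment path between traces a and b.
def dtw_path(a, b):
    n, m = len(a), len(b)
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = (a[i - 1] - b[j - 1]) ** 2
            D[i, j] = cost + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
    path, i, j = [], n, m
    while i > 0 and j > 0:
        path.append((i - 1, j - 1))
        step = np.argmin([D[i - 1, j - 1], D[i - 1, j], D[i, j - 1]])
        if step == 0:
            i, j = i - 1, j - 1
        elif step == 1:
            i -= 1
        else:
            j -= 1
    return path[::-1]

# Trace b is trace a delayed by 3 samples: the path recovers that shift
t = np.arange(60)
a = np.exp(-0.5 * ((t - 25) / 4.0) ** 2)
b = np.exp(-0.5 * ((t - 28) / 4.0) ** 2)
path = dtw_path(a, b)
shifts = [j - i for i, j in path if a[i] > 0.5]   # shifts where energy is high
print(int(np.median(shifts)))   # 3-sample delay recovered
```

    The local shift j - i along the path is the warping function the paper uses; unlike windowed cross correlation, no window length must be chosen.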

  13. Quantitative estimation of time-variable earthquake hazard by using fuzzy set theory

    NASA Astrophysics Data System (ADS)

    Deyi, Feng; Ichikawa, M.

    1989-11-01

    In this paper, various methods of fuzzy set theory, also called fuzzy mathematics, have been applied to the quantitative estimation of time-variable earthquake hazard. The results obtained consist of the following. (1) Quantitative estimation of the earthquake hazard on the basis of seismicity data. By using methods of fuzzy mathematics, seismicity patterns before large earthquakes can be studied more clearly and more quantitatively; highly active periods in a given region and quiet periods of seismic activity before large earthquakes can be recognized; similarities in the temporal variation of seismic activity and seismic gaps can be examined; and the time-variable earthquake hazard can be assessed directly on the basis of a series of statistical indices of seismicity. Two methods of fuzzy clustering analysis, the method of fuzzy similarity, and the direct method of fuzzy pattern recognition have been studied in particular. One method of fuzzy clustering analysis is based on fuzzy netting, and the other is based on the fuzzy equivalence relation. (2) Quantitative estimation of the earthquake hazard on the basis of observational data for different precursors. The direct method of fuzzy pattern recognition has been applied to research on earthquake precursors of different kinds. On the basis of the temporal and spatial characteristics of recognized precursors, earthquake hazards on different time scales can be estimated. This paper mainly deals with medium- to short-term precursors observed in Japan and China.
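
    As a flavor of the fuzzy-clustering family the paper draws on (not the authors' specific fuzzy-netting or fuzzy-equivalence algorithms), standard fuzzy c-means can separate "quiet" from "active" periods in a toy seismicity-index series while keeping graded memberships:

```python
import numpy as np

# Illustrative fuzzy c-means on a 1D toy "seismicity index": each sample
# gets a membership in every cluster (summing to 1), not a hard label.
def fuzzy_cmeans(x, c=2, m=2.0, iters=50):
    rng = np.random.default_rng(3)
    u = rng.random((c, x.size))
    u /= u.sum(axis=0)                       # memberships sum to 1 per sample
    for _ in range(iters):
        um = u ** m
        centers = (um @ x) / um.sum(axis=1)  # weighted cluster centers
        d = np.abs(x[None, :] - centers[:, None]) + 1e-12
        u = 1.0 / d ** (2.0 / (m - 1.0))     # standard FCM membership update
        u /= u.sum(axis=0)
    return centers, u

# toy index: quiet background near 1, active episodes near 5
x = np.array([1.0, 1.2, 0.9, 5.1, 4.8, 1.1, 5.3, 0.8, 5.0, 1.0])
centers, u = fuzzy_cmeans(x)
print(np.sort(centers))   # near [1, 5]
```

    The graded memberships are what make the approach attractive for hazard work: a period can be, say, 0.7 "active" rather than forced into a crisp class.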

  14. Applying the seismic interferometry method to vertical seismic profile data using tunnel excavation noise as source

    NASA Astrophysics Data System (ADS)

    Jurado, Maria Jose; Teixido, Teresa; Martin, Elena; Segarra, Miguel; Segura, Carlos

    2013-04-01

    In the frame of research conducted to develop efficient strategies for investigating rock properties and fluids ahead of tunnel excavations, the seismic interferometry method was applied to analyze data acquired in boreholes instrumented with geophone strings. The results confirmed that seismic interferometry provides improved resolution of petrophysical properties for identifying heterogeneities and geological structures ahead of the excavation. These features are beyond the resolution of other conventional geophysical methods but can cause severe problems in the excavation of tunnels. Geophone strings were used to record different types of seismic noise generated at the tunnel head during excavation with a tunnelling machine and also during placement of the rings lining the tunnel excavation. In this study we show how tunnel construction activities were characterized as a source of seismic signal and used as the source signal for generating a 3D reflection seismic survey. The data were recorded in a vertical, water-filled borehole with a borehole seismic string at a distance of 60 m from the tunnel trace. A reference pilot signal was obtained from seismograms acquired close to the tunnel face in order to obtain the best signal-to-noise ratio for the interferometry processing (Poletto et al., 2010). The seismic interferometry method (Claerbout, 1968) was successfully applied to image the subsurface geological structure using the seismic wavefield generated by tunnelling (the tunnelling machine and construction activities) recorded with geophone strings. The technique was applied by simulating virtual shot records, one per receiver in the borehole, from the transmitted seismic events, and processing the data as a reflection seismic survey. The pseudo-reflection wavefield was obtained by cross-correlation of the transmitted wave data.
    We applied the relationship between the transmission response and the reflection response for a 1D multilayer structure, and then the 3D approach (Wapenaar et al., 2004). As a result of this seismic interferometry experiment, a 3D reflectivity model (with its frequency and resolution ranges) was obtained. We also showed that the seismic interferometry approach can be applied in asynchronous seismic auscultation. The reflections detected in the virtual seismic sections agree with the geological features encountered during excavation of the tunnel and with the petrophysical properties and parameters measured in previous geophysical borehole logging. References: Claerbout, J.F., 1968. Synthesis of a layered medium from its acoustic transmission response. Geophysics, 33, 264-269. Poletto, F., Corubolo, P. and Comelli, P., 2010. Drill-bit seismic interferometry with and without pilot signals. Geophysical Prospecting, 58, 257-265. Wapenaar, K., Thorbecke, J. and Draganov, D., 2004. Relations between reflection and transmission responses of three-dimensional inhomogeneous media. Geophysical Journal International, 156, 179-194.
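
    The cross-correlation step at the heart of this interferometry workflow can be sketched in one dimension: correlating the noise recorded at two geophones turns one into a virtual source, with the correlation peak at the inter-receiver traveltime. The sampling rate, delays and noise below are synthetic stand-ins for the tunnelling noise:

```python
import numpy as np

# Bare-bones interferometry sketch: cross-correlating two noise recordings
# recovers the traveltime between the receivers (1D transmission case).
rng = np.random.default_rng(4)
fs = 500.0                          # sampling rate (Hz), illustrative
noise = rng.standard_normal(4000)   # stand-in for tunnelling noise

d1, d2 = 40, 55                     # arrival delays at the two receivers (samples)
rec1 = np.roll(noise, d1)
rec2 = np.roll(noise, d2)

xcorr = np.correlate(rec2, rec1, mode="full")
lag = int(np.argmax(xcorr)) - (len(rec1) - 1)
print(lag / fs)   # 15 samples / fs = 0.03 s inter-receiver traveltime
```

    Repeating this for every receiver pair yields the "virtual shot records" mentioned above, which are then processed like an ordinary reflection survey.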

  15. Seismic data restoration with a fast L1 norm trust region method

    NASA Astrophysics Data System (ADS)

    Cao, Jingjie; Wang, Yanfei

    2014-08-01

    Seismic data restoration is a major strategy for providing a reliable wavefield when field data do not satisfy the Shannon sampling theorem. Recovery by sparsity-promoting inversion seeks sparse solutions of the seismic data in a transformed domain; however, most methods for sparsity-promoting inversion are line-search methods, which are efficient but inclined to find local solutions. Using a trust region method, which provides globally convergent solutions, is a good way to overcome this shortcoming. A trust region method for sparse inversion has been proposed previously, but its efficiency must be improved to be suitable for large-scale computation. In this paper, a new L1 norm trust region model is proposed for seismic data restoration, and a robust gradient projection method is used to solve the sub-problem. Numerical results on synthetic and field data demonstrate that the proposed trust region method achieves excellent computational speed and is a viable alternative for large-scale computation.
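
    To make "recovery by sparsity-promoting inversion" concrete, here is a hedged stand-in that solves the same kind of L1-regularized problem with FISTA (accelerated soft thresholding), a standard baseline rather than the paper's trust-region algorithm; the sensing matrix and spike series are synthetic:

```python
import numpy as np

# Solve  min 0.5 * ||A x - b||^2 + lam * ||x||_1  with FISTA, as a simple
# stand-in for the paper's L1-norm trust-region solver.
def fista(A, b, lam=1e-3, iters=3000):
    L = np.linalg.norm(A, 2) ** 2          # Lipschitz constant of the misfit gradient
    x = np.zeros(A.shape[1])
    y, t = x.copy(), 1.0
    for _ in range(iters):
        z = y - A.T @ (A @ y - b) / L      # gradient step on the smooth part
        x_new = np.sign(z) * np.maximum(np.abs(z) - lam / L, 0.0)  # soft threshold
        t_new = (1.0 + np.sqrt(1.0 + 4.0 * t * t)) / 2.0
        y = x_new + (t - 1.0) / t_new * (x_new - x)                # momentum
        x, t = x_new, t_new
    return x

# Synthetic "restoration": recover a 5-spike series from 64 random projections
rng = np.random.default_rng(5)
n = 128
x_true = np.zeros(n)
x_true[rng.choice(n, 5, replace=False)] = [1.5, -2.0, 1.0, -1.2, 0.8]
A = rng.standard_normal((64, n)) / 8.0     # under-determined system
b = A @ x_true

x_rec = fista(A, b)
print(np.linalg.norm(x_rec - x_true))      # small: sparse series recovered
```

    FISTA is a line-search-free first-order method; the paper's point is that a trust-region formulation can reach such sparse solutions with stronger global convergence guarantees.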

  16. Convolutional neural network for earthquake detection and location

    PubMed Central

    Perol, Thibaut; Gharbi, Michaël; Denolle, Marine

    2018-01-01

    The recent evolution of induced seismicity in the Central United States calls for exhaustive catalogs to improve seismic hazard assessment. Over the last decades, the volume of seismic data has increased exponentially, creating a need for efficient algorithms to reliably detect and locate earthquakes. Today's most elaborate methods scan through the plethora of continuous seismic records, searching for repeating seismic signals. We leverage recent advances in artificial intelligence and present ConvNetQuake, a highly scalable convolutional neural network for earthquake detection and location from a single waveform. We apply our technique to study the induced seismicity in Oklahoma, USA. We detect more than 17 times more earthquakes than previously cataloged by the Oklahoma Geological Survey. Our algorithm is orders of magnitude faster than established methods. PMID:29487899

  17. Interpretation of Data from Uphole Refraction Surveys

    DTIC Science & Technology

    1980-06-01

    Keywords: seismic refraction; seismic refraction method; seismic surveys; subsurface exploration. Abstract (fragment): ...by the presence of subsurface cavities, and large cavities are identifiable, the sensitivity of the method is marginal for practical use in cavity detection. Some cavities large enough to be of engineering significance (e.g., a tunnel of 4-m diameter) may be practically undetectable by this method.

  18. Career in Feet-on Seismology

    NASA Astrophysics Data System (ADS)

    Van der Lee, S.

    2011-12-01

    My career award was for imaging the upper mantle beneath North America. The proposed research was timely because of Earthscope and novel because of the proposed simultaneous inversion of different types of seismic data, as well as the inclusion of mineral physics data on the effects of volatiles on seismic properties of the mantle. This research has been challenging and fun and is still ongoing. The educational component of my career award consists of feet-on and eyes-open learning of seismology through an educational kiosk and field trips to actual seismic stations. The kiosk and field station have both grown over the years, as has the audience. I started with the field station indoors, so it doubled as the kiosk along with a palmtop terminal. Groups of minority elementary school children would look at the mysterious hardware of the "field" station and then jump up and down so they could awe at the peaks they created in the graph on the palmtop screen. This has evolved into a three-screen kiosk, of which one screen is a touch screen, along with a demonstration seismometer. The field station is now in a goat shed near the epicenter of an actual 2010 earthquake in Illinois, soon to be replaced by a TA station of Earthscope. The audience has grown to entire grades of middle-school children, and activities have evolved from jumping to team experimentation and the derivation of amplitude-distance relationships following a collaborative curriculum. Addressing the questions in the session description: 1) Education is more fun and effective when one can work in a team with an enthusiastic educator. 2) My education activities are strongly related to my field of expertise but only loosely related to the research carried out with the career award. It appears that it is not the research outcomes that interest students, but rather the simplification and accessibility of the research process.
    3) The education component of the career award has made me a better and more diversified teacher, and I regularly involve graduate students in these education activities.

  19. Estimating the location of baleen whale calls using dual streamers to support mitigation procedures in seismic reflection surveys.

    PubMed

    Abadi, Shima H; Tolstoy, Maya; Wilcock, William S D

    2017-01-01

    In order to mitigate possible impacts of seismic surveys on baleen whales, it is important to know as much as possible about the presence of whales within the vicinity of seismic operations. This study expands on previous work that analyzes single seismic streamer data to locate nearby calling baleen whales with a grid search method that utilizes the propagation angles and relative arrival times of received signals along the streamer. Three-dimensional seismic reflection surveys use multiple towed hydrophone arrays for imaging the structure beneath the seafloor, providing an opportunity to significantly reduce the uncertainty associated with streamer-generated call locations. All seismic surveys utilizing airguns conduct visual marine mammal monitoring concurrent with the experiment, with the seismic source powered down if a marine mammal is observed within the exposure zone. This study utilizes data from power-down periods of a seismic experiment conducted with two 8-km long seismic hydrophone arrays by the R/V Marcus G. Langseth near Alaska in summer 2011. Simulated and experimental data demonstrate that a single streamer can be used to resolve the left-right ambiguity, because the streamer is rarely perfectly straight in a field setting, but dual streamers provide significantly improved locations. Both methods represent a dramatic improvement over the existing Passive Acoustic Monitoring (PAM) system for detecting low frequency baleen whale calls, with ~60 calls detected using the seismic streamers, none of which were detected by the current R/V Langseth PAM system. Furthermore, this method has the potential to be utilized not only for improving mitigation processes, but also for studying baleen whale behavior within the vicinity of seismic operations.
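
    A toy version of the grid-search locator described above: pick the grid node whose predicted relative arrival times along the streamer best match the observed ones. A straight streamer, constant sound speed and the source position are all synthetic assumptions (a straight streamer leaves the left-right ambiguity, so the search here covers only one side):

```python
import numpy as np

# Grid-search localization from relative arrival times along a streamer
# (straight-ray, constant sound speed; synthetic geometry and noise).
c = 1500.0                                   # sound speed (m/s)
hydro = np.column_stack([np.linspace(0, 8000, 50), np.zeros(50)])  # streamer

def rel_times(src):
    t = np.linalg.norm(hydro - src, axis=1) / c
    return t - t[0]                          # relative to the first group

true_src = np.array([3000.0, 2500.0])        # whale call position (m)
obs = rel_times(true_src) + np.random.default_rng(6).normal(0, 1e-3, 50)

xs = np.arange(0, 8001, 100.0)
ys = np.arange(100, 5001, 100.0)             # search one side of the streamer
best, best_misfit = None, np.inf
for x in xs:
    for y in ys:
        m = np.sum((rel_times(np.array([x, y])) - obs) ** 2)
        if m < best_misfit:
            best, best_misfit = (x, y), m
print(best)   # near (3000, 2500)
```

    With two streamers, the second array breaks the mirror symmetry, which is why dual streamers give the significantly improved locations reported in the abstract.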

  1. Detection capability of the IMS seismic network based on ambient seismic noise measurements

    NASA Astrophysics Data System (ADS)

    Gaebler, Peter J.; Ceranna, Lars

    2016-04-01

    All nuclear explosions - on the Earth's surface, underground, underwater or in the atmosphere - are banned by the Comprehensive Nuclear-Test-Ban Treaty (CTBT). As part of this treaty, a verification regime was put in place to detect, locate and characterize nuclear explosion testing at any time, by anyone, anywhere on Earth. The International Monitoring System (IMS) plays a key role in the verification regime of the CTBT. Of the different monitoring techniques used in the IMS, the seismic waveform approach is the most effective technology for monitoring underground nuclear testing and for identifying and characterizing potential nuclear events. This study introduces a method of seismic threshold monitoring to assess an upper magnitude limit of a potential seismic event in a given geographical region. The method is based on ambient seismic background noise measurements at the individual IMS seismic stations as well as on global distance correction terms for body wave magnitudes, which are calculated using the seismic reflectivity method. From our investigations we conclude that a global detection threshold of around mb 4.0 can be achieved using only stations from the primary seismic network; a clear latitudinal dependence of the detection threshold can be observed between the northern and southern hemispheres. Including the seismic stations of the auxiliary IMS network results in a slight improvement of the global detection capability. However, including wave arrivals from distances greater than 120 degrees, mainly PKP-wave arrivals, leads to a significant improvement in the average global detection capability, in particular an improved detection threshold in the southern hemisphere. We further investigate the dependence of the detection capability on spatial (latitude and longitude) and temporal parameters, as well as on source type and the percentage of operational IMS stations.
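
    The threshold-monitoring logic can be sketched schematically: each station's ambient noise sets the smallest body-wave magnitude it could see (via a distance correction Q), and the network threshold at a grid point is the k-th best station threshold, since a reliable detection needs several stations. All numbers below are invented, not IMS values:

```python
import numpy as np

# Schematic threshold monitoring: per-station detection thresholds from
# ambient noise, combined into a network threshold at one grid point.
def station_threshold(noise_amp_nm, q_distance_corr, snr=2.0):
    # mb = log10(A/T) + Q(delta, h); assume T = 1 s and require SNR x noise
    return np.log10(snr * noise_amp_nm) + q_distance_corr

noise = np.array([0.5, 1.0, 2.0, 0.3, 5.0])     # station noise amplitudes (nm)
q_corr = np.array([3.4, 3.1, 3.6, 3.9, 3.0])    # Q(delta, h) per station

thresholds = station_threshold(noise, q_corr)
k = 3                                            # stations required for detection
network_threshold = np.sort(thresholds)[k - 1]
print(round(network_threshold, 2))
```

    Evaluating this at every grid point, with real noise levels and reflectivity-method distance corrections, yields the global threshold maps the study describes.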

  2. Advanced Gas Hydrate Reservoir Modeling Using Rock Physics

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    McConnell, Daniel

    Prospecting for high-saturation gas hydrate deposits can be greatly aided by improved approaches to seismic interpretation, especially if sets of seismic attributes can be shown to be diagnostic or direct hydrocarbon indicators for high-saturation gas hydrates in sands, which would be of most interest for gas hydrate production. A large 3D seismic data set in the deepwater Eastern Gulf of Mexico was screened for gas hydrates using a set of techniques and seismic signatures that were developed and proven in the central deepwater Gulf of Mexico in the DOE Gulf of Mexico Joint Industry Project (JIP) Leg II in 2009 and recently confirmed with coring in 2017. A large gas hydrate deposit is interpreted in the data where gas has migrated from one of the few deep-seated faults plumbing the Jurassic hydrocarbon source into the gas hydrate stability zone. The gas hydrate deposit lies within a flat-lying Pliocene Mississippi Fan channel that was deposited outboard in a deep abyssal environment. The uniform architecture of the channel aided the evaluation of a set of seismic attributes related to attenuation and thin-bed energy that could be diagnostic of gas hydrates. Frequency attributes derived from spectral decomposition also proved to be direct hydrocarbon indicators, showing a pseudo-thickness that could only be reconciled by substituting gas hydrate in the pore space. The study emphasizes that gas hydrate exploration and reservoir characterization benefit from a seismic thin-bed approach.

  3. Seismic hazard and risk assessment in the intraplate environment: The New Madrid seismic zone of the central United States

    USGS Publications Warehouse

    Wang, Z.

    2007-01-01

    Although the causes of large intraplate earthquakes are still not fully understood, they pose real hazard and risk to society. Estimating hazard and risk in these regions is difficult because of the lack of earthquake records. The New Madrid seismic zone is one such region where large and rare intraplate earthquakes (M = 7.0 or greater) pose significant hazard and risk. Many different definitions of hazard and risk have been used, and the resulting estimates differ dramatically. In this paper, seismic hazard is defined as the natural phenomenon generated by earthquakes, such as ground motion, and is quantified by two parameters: a level of hazard and its occurrence frequency or mean recurrence interval; seismic risk is defined as the probability of occurrence of a specific level of seismic hazard over a certain time and is quantified by three parameters: probability, a level of hazard, and exposure time. Probabilistic seismic hazard analysis (PSHA), a commonly used method for estimating seismic hazard and risk, derives a relationship between a ground motion parameter and its return period (hazard curve). The return period is not an independent temporal parameter but a mathematical extrapolation of the recurrence interval of earthquakes and the uncertainty of ground motion. Therefore, it is difficult to understand and use PSHA. A new method is proposed and applied here for estimating seismic hazard in the New Madrid seismic zone. This method provides hazard estimates that are consistent with the state of our knowledge and can be easily applied to other intraplate regions. © 2007 The Geological Society of America.
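
    The three-parameter definition of risk (probability, hazard level, exposure time) can be made concrete with a small calculation. The Poisson occurrence assumption and the example numbers below are illustrative, not taken from the paper:

```python
import math

def risk_probability(recurrence_interval_yr, exposure_time_yr):
    """Probability that a hazard level with the given mean recurrence
    interval is reached at least once during the exposure time,
    assuming Poisson occurrence of exceedances."""
    return 1.0 - math.exp(-exposure_time_yr / recurrence_interval_yr)

# e.g. ground motion with a 500-year mean recurrence over a 50-year exposure
p = risk_probability(500.0, 50.0)  # about 0.095, i.e. roughly 10%
```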

  4. Design and development of digital seismic amplifier recorder

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Samsidar, Siti Alaa; Afuar, Waldy; Handayani, Gunawan, E-mail: gunawanhandayani@gmail.com

    2015-04-16

    Digital seismic recording captures seismic data in digital form; it is more convenient and more accurate than other seismic recording methods. To improve the quality of seismic measurements, the signal needs to be amplified to obtain better subsurface images. The purpose of this study is to improve measurement accuracy by amplifying the input signal. We use seismic sensors/geophones with a natural frequency of 4.5 Hz. The signal is amplified by means of 12 non-inverting amplifier stages. The non-inverting amplifiers use the 741 op-amp IC with resistor values of 1 kΩ and 1 MΩ, giving an amplification of about 1,000 times. The amplified signal was converted to digital form using an analog-to-digital converter (ADC). Quantitative analysis in this study was performed using LabVIEW 8.6, which was also used to control the ADC. The results of qualitative analysis showed that the signal conditioning can produce a large output, so that the data obtained are better than conventional data. This application can be used for geophysical methods that have low input voltage, such as microtremor surveys.
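
    The reported 1,000x amplification is consistent with the ideal gain of a non-inverting stage, assuming the 1 MΩ resistor sits in the feedback path and the 1 kΩ goes to ground (the record does not say which is which):

```python
def noninverting_gain(r_feedback_ohms, r_ground_ohms):
    """Ideal closed-loop gain of a non-inverting op-amp stage: 1 + Rf/Rg."""
    return 1.0 + r_feedback_ohms / r_ground_ohms

gain = noninverting_gain(1e6, 1e3)  # 1 MOhm feedback, 1 kOhm to ground
# gain = 1001, i.e. roughly the 1,000x amplification reported
```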

  5. Calibration method helps in seismic velocity interpretation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Guzman, C.E.; Davenport, H.A.; Wilhelm, R.

    1997-11-03

    Acoustic velocities derived from seismic reflection data, when properly calibrated to subsurface measurements, help interpreters make pure velocity predictions. A method of calibrating seismic to measured velocities has improved interpretation of subsurface features in the Gulf of Mexico. In this method, the interpreter in essence creates a kind of gauge. Properly calibrated, the gauge enables the interpreter to match predicted velocities to velocities measured at wells. Slow-velocity zones are of special interest because they sometimes appear near hydrocarbon accumulations. Changes in velocity vary in strength with location; the structural picture is hidden unless the variations are accounted for by mapping in depth instead of time. Preliminary observations suggest that the presence of hydrocarbons alters the lithology in the neighborhood of the trap; this hydrocarbon effect may be reflected in the rock velocity. The effect indicates a direct use of seismic velocity in exploration. This article uses the terms seismic velocity and seismic stacking velocity interchangeably. It uses ground velocity, checkshot average velocity, and well velocity interchangeably. Interval velocities are derived from seismic stacking velocities or well average velocities; they refer to velocities of subsurface intervals or zones. Interval travel time (ITT) is the reciprocal of interval velocity in microseconds per foot.

  6. Modeling of time-lapse multi-scale seismic monitoring of CO2 injected into a fault zone to enhance the characterization of permeability in enhanced geothermal systems

    NASA Astrophysics Data System (ADS)

    Zhang, R.; Borgia, A.; Daley, T. M.; Oldenburg, C. M.; Jung, Y.; Lee, K. J.; Doughty, C.; Altundas, B.; Chugunov, N.; Ramakrishnan, T. S.

    2017-12-01

    Subsurface permeable faults and fracture networks play a critical role for enhanced geothermal systems (EGS) by providing conduits for fluid flow. Characterization of the permeable flow paths before and after stimulation is necessary to evaluate and optimize energy extraction. To provide insight into the feasibility of using CO2 as a contrast agent to enhance fault characterization by seismic methods, we model seismic monitoring of supercritical CO2 (scCO2) injected into a fault. During the CO2 injection, the original brine is replaced by scCO2, which leads to variations in geophysical properties of the formation. To explore the technical feasibility of the approach, we present modeling results for different time-lapse seismic methods including surface seismic, vertical seismic profiling (VSP), and a cross-well survey. We simulate the injection and production of CO2 into a normal fault in a system based on the Brady geothermal field and model pressure and saturation variations in the fault zone using TOUGH2-ECO2N. The simulation results provide changing fluid properties during the injection, such as saturation and salinity changes, which allow us to estimate corresponding changes in seismic properties of the fault and the formation. We model the response of the system to active seismic monitoring in time-lapse mode using an anisotropic finite difference method with modifications for fracture compliance. Results to date show that even narrow fault and fracture zones filled with CO2 can be better detected using the VSP and cross-well survey geometry, while it would be difficult to image the CO2 plume by using surface seismic methods.

  7. Kernel Smoothing Methods for Non-Poissonian Seismic Hazard Analysis

    NASA Astrophysics Data System (ADS)

    Woo, Gordon

    2017-04-01

    For almost fifty years, the mainstay of probabilistic seismic hazard analysis has been the methodology developed by Cornell, which assumes that earthquake occurrence is a Poisson process, and that the spatial distribution of epicentres can be represented by a set of polygonal source zones, within which seismicity is uniform. Based on Vere-Jones' use of kernel smoothing methods for earthquake forecasting, these methods were adapted in 1994 by the author for application to probabilistic seismic hazard analysis. There is no need for ambiguous boundaries of polygonal source zones, nor for the hypothesis of time independence of earthquake sequences. In Europe, there are many regions where seismotectonic zones are not well delineated, and where there is a dynamic stress interaction between events, so that they cannot be described as independent. Following the Amatrice earthquake of 24 August 2016, the damaging earthquakes that struck Central Italy over the subsequent months were not independent events. Removing foreshocks and aftershocks is not only an ill-defined task; it also has a material effect on seismic hazard computation. Because of the spatial dispersion of epicentres, and the clustering of magnitudes for the largest events in a sequence, which might all be around magnitude 6, the specific event causing the highest ground motion can vary from one site location to another. Where significant active faults have been clearly identified geologically, they should be modelled as individual seismic sources. The remaining background seismicity should be modelled as non-Poissonian using statistical kernel smoothing methods. This approach was first applied for seismic hazard analysis at a UK nuclear power plant two decades ago, and should be included within logic-trees for future probabilistic seismic hazard analyses at critical installations within Europe. In this paper, various salient European applications are given.
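
    A minimal sketch of kernel-smoothed seismicity: each epicentre contributes a 2-D Gaussian whose bandwidth grows with magnitude, so no polygonal source zones are needed. The bandwidth parameterisation and constants below are illustrative, not Woo's calibrated values:

```python
import numpy as np

def kernel_rate(epicentres, mags, grid_xy, h0=10.0, c=0.5):
    """Kernel-smoothed seismicity rate: each epicentre contributes a 2-D
    Gaussian with magnitude-dependent bandwidth h = h0 * exp(c * (M - 4)) km."""
    rate = np.zeros(len(grid_xy))
    for (ex, ey), m in zip(epicentres, mags):
        h = h0 * np.exp(c * (m - 4.0))
        d2 = (grid_xy[:, 0] - ex) ** 2 + (grid_xy[:, 1] - ey) ** 2
        rate += np.exp(-d2 / (2.0 * h * h)) / (2.0 * np.pi * h * h)
    return rate

epicentres = np.array([[0.0, 0.0], [5.0, 5.0]])  # km
mags = np.array([4.0, 5.0])
grid = np.array([[0.0, 0.0], [50.0, 50.0]])
rates = kernel_rate(epicentres, mags, grid)      # highest near the epicentres
```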

  8. Time-frequency domain SNR estimation and its application in seismic data processing

    NASA Astrophysics Data System (ADS)

    Zhao, Yan; Liu, Yang; Li, Xuxuan; Jiang, Nansen

    2014-08-01

    Based on an approach for estimating the frequency-domain signal-to-noise ratio (FSNR), we propose a method to evaluate the time-frequency-domain signal-to-noise ratio (TFSNR). This method adopts the short-time Fourier transform (STFT) to estimate the instantaneous power spectra of signal and noise, and uses their ratio to compute the TFSNR. Unlike the FSNR, which describes the variation of SNR with frequency only, the TFSNR depicts the variation of SNR with both time and frequency, and thus better handles non-stationary seismic data. Using the TFSNR, we develop methods to improve inverse Q filtering and high-frequency noise attenuation in seismic data processing. Inverse Q filtering that accounts for the TFSNR better suppresses the amplification of noise. The high-frequency noise attenuation method based on the TFSNR, unlike other de-noising methods, distinguishes and suppresses noise using an explicit criterion. Examples of synthetic and real seismic data illustrate the correctness and effectiveness of the proposed methods.
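
    A toy version of the TFSNR computation, assuming separate estimates of the signal and the noise are already available (the paper's estimation procedure is more involved):

```python
import numpy as np

def stft_power(x, win=64, hop=32):
    """Instantaneous power spectrum via a Hann-windowed short-time FFT."""
    w = np.hanning(win)
    frames = [x[i:i + win] * w for i in range(0, len(x) - win + 1, hop)]
    return np.abs(np.fft.rfft(frames, axis=1)) ** 2

def tfsnr(signal_est, noise_est, eps=1e-12):
    """Time-frequency SNR: signal power over noise power, per frame and bin."""
    return stft_power(signal_est) / (stft_power(noise_est) + eps)

fs = 1000.0
t = np.arange(1024) / fs
sig = np.sin(2 * np.pi * 50.0 * t) * np.exp(-2.0 * t)  # decaying 50 Hz event
noise = 0.1 * np.random.default_rng(0).standard_normal(t.size)
snr = tfsnr(sig, noise)  # shape: (time frames, frequency bins)
# SNR is high in early frames near 50 Hz and falls off as the event decays
```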

  9. The Utility of the Extended Images in Ambient Seismic Wavefield Migration

    NASA Astrophysics Data System (ADS)

    Girard, A. J.; Shragge, J. C.

    2015-12-01

    Active-source 3D seismic migration and migration velocity analysis (MVA) are robust and highly used methods for imaging Earth structure. One class of migration methods uses extended images constructed by incorporating spatial and/or temporal wavefield correlation lags to the imaging conditions. These extended images allow users to directly assess whether images focus better with different parameters, which leads to MVA techniques that are based on the tenets of adjoint-state theory. Under certain conditions (e.g., geographical, cultural or financial), however, active-source methods can prove impractical. Utilizing ambient seismic energy that naturally propagates through the Earth is an alternate method currently used in the scientific community. Thus, an open question is whether extended images are similarly useful for ambient seismic migration processing and verifying subsurface velocity models, and whether one can similarly apply adjoint-state methods to perform ambient migration velocity analysis (AMVA). Herein, we conduct a number of numerical experiments that construct extended images from ambient seismic recordings. We demonstrate that, similar to active-source methods, there is a sensitivity to velocity in ambient seismic recordings in the migrated extended image domain. In synthetic ambient imaging tests with varying degrees of error introduced to the velocity model, the extended images are sensitive to velocity model errors. To determine the extent of this sensitivity, we utilize acoustic wave-equation propagation and cross-correlation-based migration methods to image weak body-wave signals present in the recordings. Importantly, we have also observed scenarios where non-zero correlation lags show signal while zero-lags show none. This may be a valuable missing piece for ambient migration techniques that have yielded largely inconclusive results, and might be an important piece of information for performing AMVA from ambient seismic recordings.

  10. Numerical Modeling of 3D Seismic Wave Propagation around Yogyakarta, the Southern Part of Central Java, Indonesia, Using Spectral-Element Method on MPI-GPU Cluster

    NASA Astrophysics Data System (ADS)

    Sudarmaji; Rudianto, Indra; Eka Nurcahya, Budi

    2018-04-01

    A strong tectonic earthquake of magnitude 5.9 occurred in Yogyakarta and Central Java on May 26, 2006. The earthquake caused severe damage in Yogyakarta and the southern part of Central Java, Indonesia. Understanding the seismic response of an earthquake, i.e., the relationship between ground shaking and the level of building damage, is important. We present numerical modeling of 3D seismic wave propagation around Yogyakarta and the southern part of Central Java using the spectral-element method on an MPI-GPU (Graphics Processing Unit) computer cluster to observe the seismic response due to the earthquake. A homogeneous but realistic 3D model is generated with a detailed topographic surface. The influences of free-surface topography and layer discontinuities of the 3D model on the seismic response are observed. The seismic wave field is discretized using the spectral-element method, which is solved on a mesh of hexahedral elements adapted to the free-surface topography and the internal discontinuities of the model. To increase data processing capability, the simulation is performed on a GPU cluster with an implementation of MPI (Message Passing Interface).

  11. Seismic passive earth resistance using modified pseudo-dynamic method

    NASA Astrophysics Data System (ADS)

    Pain, Anindya; Choudhury, Deepankar; Bhattacharyya, S. K.

    2017-04-01

    In earthquake prone areas, understanding of the seismic passive earth resistance is very important for the design of different geotechnical earth retaining structures. In this study, the limit equilibrium method is used for estimation of critical seismic passive earth resistance for an inclined wall supporting horizontal cohesionless backfill. A composite failure surface is considered in the present analysis. Seismic forces are computed assuming the backfill soil as a viscoelastic material overlying a rigid stratum and the rigid stratum is subjected to a harmonic shaking. The present method satisfies the boundary conditions. The amplification of acceleration depends on the properties of the backfill soil and on the characteristics of the input motion. The acceleration distribution along the depth of the backfill is found to be nonlinear in nature. The present study shows that the horizontal and vertical acceleration distribution in the backfill soil is not always in-phase for the critical value of the seismic passive earth pressure coefficient. The effect of different parameters on the seismic passive earth pressure is studied in detail. A comparison of the present method with other theories is also presented, which shows the merits of the present study.

  12. Seismic gradiometry using ambient seismic noise in an anisotropic Earth

    NASA Astrophysics Data System (ADS)

    de Ridder, S. A. L.; Curtis, A.

    2017-05-01

    We introduce a wavefield gradiometry technique to estimate both isotropic and anisotropic local medium characteristics from short recordings of seismic signals by inverting a wave equation. The method exploits the information in the spatial gradients of a seismic wavefield that are calculated using dense deployments of seismic arrays. The application of the method uses the surface wave energy in the ambient seismic field. To estimate isotropic and anisotropic medium properties we invert an elliptically anisotropic wave equation. The spatial derivatives of the recorded wavefield are evaluated by calculating finite differences over nearby recordings, which introduces a systematic anisotropic error. A two-step approach corrects this error: finite difference stencils are first calibrated, then the output of the wave-equation inversion is corrected using the linearized impulse response to the inverted velocity anomaly. We test the procedure on ambient seismic noise recorded in a large and dense ocean bottom cable array installed over Ekofisk field. The estimated azimuthal anisotropy forms a circular geometry around the production-induced subsidence bowl. This conforms with results from studies employing controlled sources, and with interferometry correlating long records of seismic noise. Yet in this example, the results were obtained using only a few minutes of ambient seismic noise.
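
    The core gradiometry step, inverting a wave equation using finite differences of the recorded wavefield, can be sketched in 1-D for an isotropic medium (the paper's method is 2-D, elliptically anisotropic, and adds a stencil-calibration correction for the finite-difference error):

```python
import numpy as np

def estimate_velocity(u, dx, dt):
    """Invert the 1-D acoustic wave equation u_tt = v^2 * u_xx for v,
    using finite differences of the recorded wavefield u[x, t]."""
    u_tt = (u[:, 2:] - 2 * u[:, 1:-1] + u[:, :-2]) / dt ** 2
    u_xx = (u[2:, :] - 2 * u[1:-1, :] + u[:-2, :]) / dx ** 2
    # align both derivative grids on interior points; least-squares fit of v^2
    a = u_xx[:, 1:-1].ravel()
    b = u_tt[1:-1, :].ravel()
    return np.sqrt(np.dot(a, b) / np.dot(a, a))

v_true, dx, dt = 2000.0, 5.0, 1e-3
x = np.arange(0.0, 500.0, dx)[:, None]
t = np.arange(0.0, 0.25, dt)[None, :]
u = np.sin(2 * np.pi * 10.0 * (t - x / v_true))  # 10 Hz plane wave
v_est = estimate_velocity(u, dx, dt)             # close to 2000 m/s
```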

  13. IMS Seismic and Infrasound Stations Instrumental Challenges

    NASA Astrophysics Data System (ADS)

    Starovoit, Y. O.; Dricker, I. G.; Marty, J.

    2016-12-01

    The IMS seismic network is a set of monitoring facilities comprising 50 primary stations and 120 auxiliary stations. Apart from the difference in the mode of data transmission to the IDC, the technical specifications for seismographic equipment to be installed at both types of stations are essentially the same. The IMS infrasound network comprises 60 facilities with the requirement of continuous data transmission to the IDC. The objective of this presentation is to report instrumental challenges associated with both the seismic and infrasound technologies. In the context of the specifications for IMS seismic stations, it is stressed that verification seismology is concerned with the search for reliable methods of signal detection at high frequencies, while the MS/mb screening criterion for discriminating earthquakes from explosions relies on reliable detection of surface waves. The IMS seismic requirements for instrumental noise and the operational range of the data logger are defined as a certain dB level below the minimum background within the required frequency band from 0.02 to 16 Hz. The sensor response is required to be flat either in velocity or in acceleration. Compliance with IMS specifications can thus be challenging when low-noise conditions have been recorded at a site: as the station noise PSD approaches the NLNM, a highly sensitive sensor must be connected to a quiet digitizer, which can cause the system to clip quickly and waste the available dynamic range. Experience has shown that a hybrid sensor frequency response, combining flat-to-velocity and flat-to-acceleration portions, may provide an optimal solution for utilization of the dynamic range and the low digitizer noise floor.
Vast efforts are also being undertaken, and results achieved, in infrasound technology to standardize and optimize the response of the wind-noise reduction system within the IMS infrasound passband from 0.02 to 4 Hz and to deploy calibration equipment in compliance with IMS requirements. In addition, IMS stations need to meet specific requirements such as data authentication, central-facility data buffering, and precise relative timing accuracy between data samples coming from array elements, as well as delivering more than 97% of data with less than 5 min delay when transmitted to the IDC.

  14. Demonstration of improved seismic source inversion method of tele-seismic body wave

    NASA Astrophysics Data System (ADS)

    Yagi, Y.; Okuwaki, R.

    2017-12-01

    Seismic rupture inversion of tele-seismic body waves has been widely applied to studies of large earthquakes. In general, tele-seismic body waves contain information on the overall rupture process of a large earthquake, but they have been considered inappropriate for analyzing the detailed rupture process of an M6-7-class earthquake. Recently, the quality and quantity of tele-seismic data and the inversion method have been greatly improved. The improved data and method enable us to study the detailed rupture process of an M6-7-class earthquake even if we use only tele-seismic body waves. In this study, we demonstrate the ability of the improved data and method through analyses of the 2016 Rieti, Italy earthquake (Mw 6.2) and the 2016 Kumamoto, Japan earthquake (Mw 7.0), which have been well investigated using InSAR data sets and field observations. We assumed rupture occurring on a single fault plane inferred from the moment tensor solutions and the aftershock distribution. We constructed spatiotemporal discretized slip-rate functions with patches arranged as closely as possible. We performed inversions using several fault models and found that the spatiotemporal location of the large slip-rate area was robust. In the 2016 Kumamoto, Japan earthquake, the slip-rate distribution shows that the rupture propagated to the southwest during the first 5 s; at 5 s after the origin time, the main rupture started to propagate toward the northeast. The first and second episodes correspond to rupture propagation along the Hinagu fault and the Futagawa fault, respectively. In the 2016 Rieti, Italy earthquake, the slip-rate distribution shows that the rupture propagated in the up-dip direction during the first 2 s, and then toward the northwest. From both analyses, we propose that the spatiotemporal slip-rate distribution estimated by the improved inversion method of tele-seismic body waves contains enough information to study the detailed rupture process of an M6-7-class earthquake.

  15. Estimation of the displacements among distant events based on parallel tracking of events in seismic traces under uncertainty

    NASA Astrophysics Data System (ADS)

    Huamán Bustamante, Samuel G.; Cavalcanti Pacheco, Marco A.; Lazo Lazo, Juan G.

    2018-07-01

    The method we propose in this paper seeks to estimate the displacements of interfaces among strata associated with seismic reflection events, relative to the interfaces at other reference points. To do so, we search for reflection events at the reference point of a second seismic trace taken from the same 3D survey, close to a well. However, the nature of the seismic data introduces uncertainty into the results. Therefore, we perform an uncertainty analysis using the standard deviation of results from several experiments with cross-correlation of signals. To estimate the displacements of events in depth between two seismic traces, we create a synthetic seismic trace with an empirical wavelet and the sonic log of the well close to the second seismic trace. Then, we relate the events of the seismic traces to the depth of the sonic log. Finally, we test the method with data from the Namorado Field in Brazil. The results show that the accuracy of the estimated event depth depends on the results of parallel cross-correlation, primarily those from the procedures used in the integration of seismic data with data from the well. The proposed approach can correctly identify several similar events in two seismic traces without requiring all seismic traces between two distant points of interest to correlate strata in the subsurface.
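
    The basic building block, estimating an event displacement from the cross-correlation peak and quantifying its uncertainty by the standard deviation over repeated noisy experiments, can be sketched as follows (synthetic traces, not the Namorado data):

```python
import numpy as np

def xcorr_lag(a, b):
    """Delay (in samples) of trace b relative to trace a, taken from the
    peak of their full cross-correlation (positive: b arrives later)."""
    c = np.correlate(b, a, mode="full")
    return int(np.argmax(c)) - (len(a) - 1)

rng = np.random.default_rng(1)
t = np.arange(500)
reference = np.exp(-((t - 250) / 10.0) ** 2)  # event on the reference trace
shifted = np.roll(reference, 12)              # same event, displaced 12 samples

# repeat the experiment with independent noise; report mean and std (uncertainty)
lags = [xcorr_lag(reference + 0.05 * rng.standard_normal(t.size),
                  shifted + 0.05 * rng.standard_normal(t.size))
        for _ in range(50)]
lag_mean, lag_std = np.mean(lags), np.std(lags)
```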

  16. Study of time dynamics of seismicity for the Mexican subduction zone by means of the visibility graph method.

    NASA Astrophysics Data System (ADS)

    Ramírez-Rojas, Alejandro; Telesca, Luciano; Lovallo, Michele; Flores, Leticia

    2015-04-01

    By using the visibility graph (VG) method, five magnitude time series extracted from the seismic catalog of the Mexican subduction zone were investigated. The five sequences represent the seismicity that occurred between 2005 and 2012 in five seismic areas: Guerrero, Chiapas, Oaxaca, Jalisco and Michoacan. Among the five sequences, the Jalisco sequence shows VG properties significantly different from those of the other four. Such a difference could be inherent in the tectonic setting of Jalisco, which differs from those characterizing the other four areas. The VG properties of the seismic sequences have been related to the more typical seismological characteristics (the b-value and a-value of the Gutenberg-Richter law). The present study was supported by the Bilateral Project Italy-Mexico "Experimental stick-slip models of tectonic faults: innovative statistical approaches applied to synthetic seismic sequences", jointly funded by MAECI (Italy) and AMEXCID (Mexico) in the framework of the Bilateral Agreement for Scientific and Technological Cooperation PE 2014-2016.
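
    The natural visibility graph construction can be sketched directly: two events are connected if the straight line between them clears every intermediate event. Uniform event spacing is assumed here for simplicity; the example magnitudes are made up:

```python
import numpy as np

def visibility_graph_degrees(mags):
    """Natural visibility graph of a magnitude series (uniform spacing):
    events i and j are linked if every intermediate event lies strictly
    below the straight line joining them; returns each node's degree."""
    n = len(mags)
    deg = np.zeros(n, dtype=int)
    for i in range(n):
        for j in range(i + 1, n):
            k = np.arange(i + 1, j)
            line = mags[i] + (mags[j] - mags[i]) * (k - i) / (j - i)
            if np.all(mags[i + 1:j] < line):
                deg[i] += 1
                deg[j] += 1
    return deg

mags = np.array([3.1, 4.7, 3.3, 3.0, 5.2, 3.4])
degrees = visibility_graph_degrees(mags)  # the large events become hubs
```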

  17. Integrating long-offset transient electromagnetics (LOTEM) with seismics in an exploration environment

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Strack, K.M.; Vozoff, K.

    The applications of electromagnetics have increased in the past two decades because of an improved understanding of the methods, improved service availability, and an increased focus of exploration on more complex reservoir characterization issues. Surface applications of electromagnetic methods for hydrocarbon exploration and production are still a special case, while applications in borehole and airborne research and for engineering and environmental objectives are routine. In the past, electromagnetic techniques, in particular deep transient electromagnetics, made up a completely different discipline in geophysics, although many of the principles are similar to those of seismics. With an understanding of the specific problems related first to data processing and then to acquisition, the inclusion of principles learned from seismics happened almost naturally. Initially, the data processing was very similar to seismic full-waveform processing. The hardware was also changed to include multichannel acquisition systems, and the field procedures became very similar to seismic surveying. As a consequence, the integration and synergism of the interpretation process is becoming almost automatic. The long-offset transient electromagnetic (LOTEM) technique is summarized from the viewpoint of its similarity to seismics, and the complete concept of the method is also reviewed. An interpretation case history that integrates seismic and LOTEM data from a hydrocarbon area in China clearly demonstrates the limitations and benefits of the method.

  18. Nowcasting Earthquakes: A Comparison of Induced Earthquakes in Oklahoma and at the Geysers, California

    NASA Astrophysics Data System (ADS)

    Luginbuhl, Molly; Rundle, John B.; Hawkins, Angela; Turcotte, Donald L.

    2018-01-01

    Nowcasting is a new method of statistically classifying seismicity and seismic risk (Rundle et al. 2016). In this paper, the method is applied to the induced seismicity at the Geysers geothermal region in California and to the induced seismicity due to fluid injection in Oklahoma. Nowcasting utilizes the catalogs of seismicity in these regions. Two earthquake magnitudes are selected, one large, say M_λ ≥ 4, and one small, say M_σ ≥ 2. The method utilizes the numbers of small earthquakes that occur between pairs of large earthquakes, and the cumulative probability distribution of these values is obtained. The earthquake potential score (EPS) is defined by the number of small earthquakes that have occurred since the last large earthquake; the point where this number falls on the cumulative probability distribution of interevent counts defines the EPS. A major advantage of nowcasting is that it utilizes "natural time", earthquake counts between events, rather than clock time. Thus, it is not necessary to decluster aftershocks, and the results remain applicable if the level of induced seismicity varies in time. The application of natural time to the accumulation of seismic hazard depends on the applicability of Gutenberg-Richter (GR) scaling: the increasing number of small earthquakes that occur after a large earthquake can be scaled to give the risk of a large earthquake occurring. To illustrate our approach, we utilize the number of M_σ ≥ 2.75 earthquakes in Oklahoma to nowcast the number of M_λ ≥ 4.0 earthquakes in Oklahoma. The applicability of the scaling is illustrated during the rapid build-up of injection-induced seismicity between 2012 and 2016, and the subsequent reduction in seismicity associated with a reduction in fluid injection. The same method is applied to the geothermal-induced seismicity at the Geysers, California, for comparison.
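
    The EPS computation described in the abstract can be sketched as follows; the synthetic catalog and the placement of the large events are illustrative:

```python
import numpy as np

def earthquake_potential_score(times, mags, m_small, m_large):
    """Nowcasting EPS: where the small-event count since the last large
    event falls on the cumulative distribution of small-event counts
    between past pairs of consecutive large events (natural time)."""
    times, mags = np.asarray(times), np.asarray(mags)
    small = (mags >= m_small) & (mags < m_large)
    large_times = times[mags >= m_large]
    counts = [np.sum(small & (times > t0) & (times < t1))
              for t0, t1 in zip(large_times[:-1], large_times[1:])]
    current = np.sum(small & (times > large_times[-1]))
    return np.mean(np.asarray(counts) <= current)  # fraction not exceeding

times = np.arange(1.0, 21.0)          # synthetic event times (days)
mags = np.full(20, 3.0)
mags[[4, 9, 15]] = [4.2, 4.0, 4.5]    # three "large" events
eps = earthquake_potential_score(times, mags, m_small=2.75, m_large=4.0)
```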

  19. Seismic risk assessment and application in the central United States

    USGS Publications Warehouse

    Wang, Z.

    2011-01-01

    Seismic risk is a somewhat subjective, but important, concept in earthquake engineering and related decision-making. Another important concept that is closely related to seismic risk is seismic hazard. Although seismic hazard and seismic risk have often been used interchangeably, they are fundamentally different: seismic hazard describes the natural phenomenon or physical property of an earthquake, whereas seismic risk describes the probability of loss or damage that could be caused by a seismic hazard. The distinction between seismic hazard and seismic risk is of practical significance because measures for seismic hazard mitigation may differ from those for seismic risk reduction. Seismic risk assessment is a complicated process and starts with seismic hazard assessment. Although probabilistic seismic hazard analysis (PSHA) is the most widely used method for seismic hazard assessment, recent studies have found that PSHA is not scientifically valid. Use of PSHA will lead to (1) artifact estimates of seismic risk, (2) misleading use of the annual probability of exceedance (i.e., the probability of exceedance in one year) as a frequency (per year), and (3) numerical creation of extremely high ground motion. An alternative approach, similar to those used for flood and wind hazard assessments, has been proposed. © 2011 ASCE.

  20. Virtual and super-virtual refraction method: Application to synthetic data and 2012 Karangsambung survey data

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Nugraha, Andri Dian; Adisatrio, Philipus Ronnie

    2013-09-09

    The seismic refraction survey is a geophysical method for imaging the Earth's interior, particularly the near surface. A common problem in seismic refraction surveys is weak amplitude at far offsets due to attenuation, which makes it difficult to pick the first refracted arrival and hence to produce a near-surface image. Seismic interferometry is a technique that manipulates seismic traces to obtain the Green's function between a pair of receivers. One of its uses is improving the quality of first refracted arrivals at far offsets. This research shows that physical properties such as seismic velocity and layer thickness can be estimated from virtual refraction processing. Virtual refraction also enhances the far-offset signal amplitude, since a stacking procedure is involved. Our results show that super-virtual refraction processing produces a seismic image with a higher signal-to-noise ratio than the raw seismic image, and the number of reliable first-arrival picks is also increased.
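    The interferometric step at the heart of virtual refraction can be sketched with two synthetic traces: cross-correlating the recordings at two receivers turns one receiver into a virtual source, and the correlation peak gives the inter-receiver traveltime of the refracted wave. The traces, wavelet, and times below are illustrative; field practice correlates and stacks over many shots.

```python
# Minimal interferometry sketch: the cross-correlation of two receivers'
# head-wave arrivals peaks at their traveltime difference (the virtual
# refraction traveltime between the receivers).
import numpy as np

dt = 0.001                      # sample interval, s
t = np.arange(0, 2.0, dt)

def ricker(t, t0, f0=25.0):
    """Ricker wavelet centred at t0."""
    a = (np.pi * f0 * (t - t0)) ** 2
    return (1 - 2 * a) * np.exp(-a)

# Head-wave arrivals at two receivers; the refractor adds a fixed moveout.
trace_a = ricker(t, 0.40)
trace_b = ricker(t, 0.65)       # 0.25 s later at the farther receiver

# Cross-correlate (one shot here; stacking over shots builds the virtual gather)
xcorr = np.correlate(trace_b, trace_a, mode="full")
lags = (np.arange(xcorr.size) - (t.size - 1)) * dt
lag_at_peak = lags[np.argmax(xcorr)]
print(f"virtual-refraction traveltime ~ {lag_at_peak:.3f} s")   # ~0.25 s
```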

  1. Detecting aseismic strain transients from seismicity data

    USGS Publications Warehouse

    Llenos, A.L.; McGuire, J.J.

    2011-01-01

    Aseismic deformation transients such as fluid flow, magma migration, and slow slip can trigger changes in seismicity rate. We present a method that can detect these seismicity rate variations and utilize the anomalies to constrain the underlying variations in stressing rate. Because ordinary aftershock sequences often obscure changes in the background seismicity caused by aseismic processes, we combine the stochastic Epidemic Type Aftershock Sequence (ETAS) model, which describes aftershock sequences well, with the physically based rate- and state-dependent friction seismicity model into a single seismicity rate model that captures both aftershock activity and changes in background seismicity rate. We implement this model in a data assimilation algorithm that inverts seismicity catalogs to estimate space-time variations in stressing rate. We evaluate the method using a synthetic catalog, and then apply it to a catalog of M ≥ 1.5 events that occurred in the Salton Trough from 1990 to 2009. We validate our stressing rate estimates by comparing them to estimates from a geodetically derived slip model for a large creep event on the Obsidian Buttes fault. The results demonstrate that our approach can identify large aseismic deformation transients in a multidecade-long earthquake catalog and roughly constrain the absolute magnitude of the stressing rate transients. Our method can therefore provide a way to detect aseismic transients in regions where geodetic resolution in space or time is poor. Copyright 2011 by the American Geophysical Union.
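    The ETAS half of the hybrid model superposes a background rate with Omori-Utsu aftershock triggering. A minimal conditional-intensity sketch, with illustrative parameter values (the paper's actual model couples this to rate-and-state friction and estimates the parameters by data assimilation):

```python
# ETAS-style conditional intensity: background rate mu plus Omori-law
# contributions from past events, scaled by their magnitudes.
import numpy as np

def etas_rate(t, event_times, event_mags, mu=0.2, K=0.02, alpha=1.0,
              c=0.01, p=1.1, m_ref=1.5):
    """lambda(t) in events/day; parameter values are illustrative only."""
    past = event_times < t
    dt = t - event_times[past]
    trig = K * 10 ** (alpha * (event_mags[past] - m_ref)) / (dt + c) ** p
    return mu + trig.sum()

times = np.array([1.0, 1.5, 3.0])        # event times, days
mags = np.array([3.0, 2.0, 4.0])
for t in (0.5, 1.01, 3.01, 10.0):
    print(f"t = {t:5.2f} d  rate = {etas_rate(t, times, mags):8.3f} /d")
# An anomalously high background rate mu, inferred after accounting for the
# triggered part, is the signature of an aseismic stressing-rate transient.
```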

  2. Alternative Energy Sources in Seismic Methods

    NASA Astrophysics Data System (ADS)

    Tün, Muammer; Pekkan, Emrah; Mutlu, Sunay; Ecevitoğlu, Berkan

    2015-04-01

    When the suitability of a settlement area is investigated, soil-amplification, liquefaction, and fault-related hazards should be defined, and the associated risks clarified. For this reason, the soil engineering parameters and subsurface geological structure of a new settlement area should be investigated. In particular, faults covered with Quaternary alluvium, and the thicknesses, shear-wave velocities, and geometry of subsurface sediments, can lead to soil amplification during an earthquake. Likewise, changes in shear-wave velocity along a basin are also very important. Geophysical methods can be used to determine local soil properties. In this study, the use of alternative seismic energy sources for seismic reflection, seismic refraction, and MASW surveys in the residential areas of Eskisehir, Turkey, is discussed. Two seismic energy sources were developed in-house under a scientific research project of Anadolu University: EAPSG (Electrically-Fired-PS-Gun), capable of firing 2x24 magnum shotgun cartridges at once to generate P and S waves, and WD-500, a 500 kg weight-drop source mounted on a truck. We were able to reach penetration depths of up to 1200 m with EAPSG and 800 m with WD-500 in our seismic reflection surveys. The WD-500 source was also used for MASW surveys with a 24-channel configuration of 4.5 Hz vertical geophones spaced 10 m apart, reaching a penetration depth of 100 m.

  3. Analysing seismic-source mechanisms by linear-programming methods.

    USGS Publications Warehouse

    Julian, B.R.

    1986-01-01

    Linear-programming methods are powerful and efficient tools for objectively analysing seismic focal mechanisms and are applicable to a wide range of problems, including tsunami warning and nuclear explosion identification. The source mechanism is represented as a point in the 6-D space of moment-tensor components. The present method can easily be extended to fit observed seismic-wave amplitudes (either signed or absolute) subject to polarity constraints, and to assess the range of mechanisms consistent with a set of measured amplitudes. -from Author
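    The abstract's framing, fitting amplitudes in the 6-D moment-tensor space by linear programming, can be sketched generically: a least-absolute-deviation (L1) amplitude fit is solvable as an LP. The kernel matrix `G` and data `d` below are synthetic, and this formulation (without the polarity inequality constraints the abstract mentions) is an illustration, not Julian's exact algorithm.

```python
# L1 fit of amplitudes d ~ G m over the 6 moment-tensor components,
# posed as a linear program: minimize sum(e) s.t. -e <= G m - d <= e.
import numpy as np
from scipy.optimize import linprog

rng = np.random.default_rng(1)
n_obs, n_mt = 12, 6
G = rng.standard_normal((n_obs, n_mt))        # excitation kernels (synthetic)
m_true = np.array([1.0, -0.5, -0.5, 0.2, 0.0, 0.1])
d = G @ m_true                                # observed amplitudes, noise-free

# Decision vector x = [m (6 free), e (n_obs nonnegative residual bounds)]
c = np.concatenate([np.zeros(n_mt), np.ones(n_obs)])
A_ub = np.block([[G, -np.eye(n_obs)], [-G, -np.eye(n_obs)]])
b_ub = np.concatenate([d, -d])
bounds = [(None, None)] * n_mt + [(0, None)] * n_obs
res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=bounds)
m_est = res.x[:n_mt]
print("recovered moment tensor:", np.round(m_est, 3))
```

Polarity observations would enter the same LP as extra sign constraints on predicted amplitudes, which is what makes the LP formulation attractive for mixed signed/absolute-amplitude data.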

  4. Method for determining formation quality factor from seismic data

    DOEpatents

    Taner, M. Turhan; Treitel, Sven

    2005-08-16

    A method is disclosed for calculating the quality factor Q from a seismic data trace. The method includes calculating a first and a second minimum-phase inverse wavelet at a first and a second time interval along the seismic data trace, synthetically dividing the first wavelet by the second wavelet, Fourier transforming the result of the synthetic division, calculating the logarithm of the resulting spectral quotient, and determining the slope of a best-fit line through that logarithm.
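    The final steps rest on the standard spectral-ratio relation: for constant Q, the log ratio of the spectra at two times separated by Δt decays linearly in frequency with slope -πΔt/Q, so a line fit yields Q. The sketch below models the two spectra directly (the patent instead derives them from minimum-phase inverse wavelets, which is not reproduced here):

```python
# Spectral-ratio Q estimation: ln(A2/A1) = -pi * f * delta_t / Q (+ const),
# so Q = -pi * delta_t / slope of the log spectral ratio vs frequency.
import numpy as np

Q_true, delta_t = 80.0, 0.5                   # quality factor, window separation (s)
f = np.linspace(5.0, 60.0, 100)               # usable frequency band (Hz)
source = np.exp(-((f - 30.0) / 25.0) ** 2)    # smooth source spectrum
spec1 = source * np.exp(-np.pi * f * 1.0 / Q_true)               # at t = 1.0 s
spec2 = source * np.exp(-np.pi * f * (1.0 + delta_t) / Q_true)   # at t = 1.5 s

log_ratio = np.log(spec2 / spec1)             # source spectrum cancels
slope, _ = np.polyfit(f, log_ratio, 1)
Q_est = -np.pi * delta_t / slope
print(f"estimated Q = {Q_est:.1f}")           # ~80
```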

  5. 3-D Characterization of Seismic Properties at the Smart Weapons Test Range, YPG

    DTIC Science & Technology

    2001-10-01

    confidence limits around each interpolated value. Ground truth was accomplished through cross-hole seismic measurements and borehole logs. Surface wave... seismic method, as well as estimating the optimal orientation and spacing of the seismic array . A variety of sources and receivers was evaluated...location within the array is partially related to at least two seismic lines. Either through good fortune or foresight by the designers of the SWTR site

  6. An evaluation of the applicability of the seismic refraction method in identifying shallow archaeological features: A case study at an archaeological site

    NASA Astrophysics Data System (ADS)

    Jahangardi, Morteza; Hafezi Moghaddas, Naser; Keivan Hosseini, Sayyed; Garazhian, Omran

    2015-04-01

    We applied the seismic refraction method at the archaeological site of Tepe Damghani, located in Sabzevar, NE Iran, in order to determine structures of archaeological interest. This pre-historical site has special conditions with respect to geographical location and geomorphological setting: it is an urban archaeological site, and in recent years it has been used as an agricultural field. In spring and summer of 2012, the third season of archaeological excavation was carried out. Test trenches excavated at this site revealed that the cultural layers were often badly disturbed by recent human activities such as farming and road construction. Conditions of the archaeological cultural layers in the southern and eastern parts of the Tepe are slightly better; for instance, in the 3×3 m² test trench 1S03, the third test trench excavated in the southern part of the Tepe, an in situ adobe architectural structure was discovered that likely belongs to the cultural features of a complex with 5 graves. After conclusion of the third season of excavation, all of the test trenches were backfilled with their own excavated soil. The seismic refraction method was applied with 12 channels of P geophones along three lines over test trench 1S03, with a geophone interval of 0.5 m and 1.5 m between profiles. The goal of this operation was to evaluate the applicability of the seismic method in identifying archaeological features, especially adobe wall structures. Processing of the seismic data was done with the seismic software SeisImager, and the results were presented as a seismic section for every profile; identification of the adobe wall structures was hardly achieved. This could be because the adobe walls were built with the same materials as the natural surrounding earth, so the velocity contrast is low, which adversely affects seismic processing and the identification of archaeological features. Hence the conclusion is that application of the seismic method to detect archaeological features under such conditions is not cost-effective or efficient compared to GPR or magnetic methods, which yield more desirable results.

  7. A comparison of Q-factor estimation methods for marine seismic data

    NASA Astrophysics Data System (ADS)

    Kwon, J.; Ha, J.; Shin, S.; Chung, W.; Lim, C.; Lee, D.

    2016-12-01

    The seismic imaging technique draws information from inside the earth using seismic reflection and transmission data. This technique is an important method in geophysical exploration and has been employed widely as a means of locating oil and gas reservoirs because it offers information on geological media. There is much recent and active research into seismic attenuation and how it determines the quality of seismic imaging. Seismic attenuation is controlled by various geological characteristics, through the absorption or scattering that occurs when a seismic wave passes through a geological medium. Seismic attenuation can be described by an attenuation coefficient and represented as a non-dimensional variable known as the Q-factor. The Q-factor is a characteristic property of a geological medium and a very important material property for oil and gas resource development. It can be used to infer other characteristics of a medium, such as porosity, permeability, and viscosity, and can directly indicate the presence of hydrocarbons, identifying oil- and gas-bearing areas from the seismic data. There are various ways to estimate the Q-factor in three different domains. In the time domain, pulse amplitude decay, pulse rise time, and pulse broadening are representative. Logarithmic spectral ratio (LSR), centroid frequency shift (CFS), and peak frequency shift (PFS) are used in the frequency domain. In the time-frequency domain, the wavelet envelope peak instantaneous frequency (WEPIF) is most frequently employed. In this study, we estimated and analyzed the Q-factor with four methods (LSR, CFS, PFS, and WEPIF), applying them first to a numerical model test and then to observed data.
    The numerical model test data were generated with NORSAR-2D, which is based on a ray-tracing algorithm, and we used reflection and normal-incidence surveys to calculate the Q-factor according to the array of sources and receivers. After the numerical model test, we chose the most accurate of the four methods by comparing the Q-factors obtained from the reflection and normal-incidence surveys, applied it to the observed data, and demonstrated its accuracy.
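    Of the frequency-domain estimators listed, the centroid-frequency-shift (CFS) method is the easiest to sketch: for a Gaussian source spectrum with variance σ², attenuation shifts the spectral centroid down by πtσ²/Q (following Quan & Harris, 1997), so Q = πtσ²/(f_src - f_rec). The spectra below are synthetic, not the paper's NORSAR-2D data:

```python
# Centroid-frequency-shift Q estimation on a synthetic Gaussian spectrum.
import numpy as np

f = np.linspace(0.0, 200.0, 4001)                 # frequency axis, Hz
f0, sigma, Q_true, t_travel = 60.0, 12.0, 50.0, 1.0

def centroid(spec):
    """Spectral centroid frequency."""
    return np.sum(f * spec) / np.sum(spec)

src = np.exp(-((f - f0) ** 2) / (2 * sigma ** 2))           # source spectrum
rec = src * np.exp(-np.pi * f * t_travel / Q_true)          # after attenuation

f_src, f_rec = centroid(src), centroid(rec)
Q_est = np.pi * t_travel * sigma ** 2 / (f_src - f_rec)
print(f"centroid shift {f_src - f_rec:.2f} Hz -> Q ~ {Q_est:.1f}")
```

The method's appeal in practice is that it needs only spectral centroids, which are more stable than the point-wise spectral ratios used by LSR on noisy data.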

  8. Aerospace technology can be applied to exploration 'back on earth'. [offshore petroleum resources]

    NASA Technical Reports Server (NTRS)

    Jaffe, L. D.

    1977-01-01

    Applications of aerospace technology to petroleum exploration are described. Attention is given to seismic reflection techniques, sea-floor mapping, remote geochemical sensing, improved drilling methods, and down-hole acoustic concepts such as down-hole seismic tomography. The seismic reflection techniques include monitoring of swept-frequency explosive or solid-propellant seismic sources, as well as aerial seismic surveys. Telemetry and processing of seismic data may also be performed through use of aerospace technology. Sea-floor sonar imaging and a computer-aided system of geologic analogies for petroleum exploration are also considered.

  9. Inverting seismic data for rock physical properties; Mathematical background and application

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Farfour, Mohammed; Yoon, Wang Jung; Kim, Jinmo

    2016-06-08

    The basic concept behind seismic inversion is that mathematical assumptions can be established to relate seismic responses to the properties of the geological formations that caused them. In this presentation we address some widely used seismic inversion methods for hydrocarbon reservoir identification and characterization. A successful use of inversion in a real example from a gas sand reservoir in the Boonsville field, North Central Texas, is presented. The seismic data alone were not an unambiguous indicator of reservoir facies distribution; the use of inversion removed the ambiguity and revealed clear information about the target.

  10. Nuclear Test Depth Determination with Synthetic Modelling: Global Analysis from PNEs to DPRK-2016

    NASA Astrophysics Data System (ADS)

    Rozhkov, Mikhail; Stachnik, Joshua; Baker, Ben; Epiphansky, Alexey; Bobrov, Dmitry

    2016-04-01

    Seismic event depth determination is critical for the event screening process at the International Data Center (IDC), CTBTO. A thorough determination of event depth can mostly be conducted only through additional special analysis, because the IDC's Event Definition Criteria are based, in particular, on depth estimation uncertainties. This causes a large number of events in the Reviewed Event Bulletin to have depth constrained to the surface, making the depth screening criterion inapplicable, and it may result in a heavier workload to manually distinguish between subsurface and deeper crustal events. Since the shape of the first few seconds of signal from very shallow events is very sensitive to the depth phases, cross-correlation between observed and theoretical seismograms can provide a basis for event depth estimation, and thus an extension of the screening process. We applied this approach mostly to events at teleseismic and, in part, regional distances. The approach was found to be efficient for the seismic event screening process, with certain caveats related mostly to poorly defined source and receiver crustal models, which can shift the depth estimate. An adjustable teleseismic attenuation model (t*) was used for the synthetics, since this characteristic is not known for most of the ray paths we studied. We studied a wide set of historical records of nuclear explosions, including so-called Peaceful Nuclear Explosions (PNEs) with presumably known depths, and the recent DPRK nuclear tests. The teleseismic synthetics are based on the stationary phase approximation with the hudson96 program, and the regional modelling was done with the generalized ray technique of Vlastislav Cerveny, modified to account for complex source topography. The software prototype is designed to be used for Expert Technical Analysis at the IDC; its design effectively reuses the NDC-in-a-Box code and can be comfortably utilized by NDC users.
    The package uses Geotool as a front-end for data retrieval and pre-processing. After the event database is compiled, control is passed to the driver software, which runs the external processing and plotting toolboxes, controls the final stage, and produces the final result. The modules are mostly coded in Python, with C-coded synthetics (Raysynth3D, regional synthetics with complex topography) and FORTRAN-coded synthetics from the CPS330 software package by Robert Herrmann of Saint Louis University. An extension of this single-station depth determination method is under development that uses joint information from all stations participating in processing. It is based on simultaneous depth and moment tensor determination for both short- and long-period seismic phases. A novel approach recently developed for microseismic event location, utilizing only phase waveform information, was migrated to the global scale. It should provide faster computation, as it does not require intensive synthetic modelling, and it might benefit the processing of noisy signals. A consistent depth estimate for all recent nuclear tests was produced for the large number of IMS stations (primary and auxiliary) used in processing.

  11. Discrimination of porosity and fluid saturation using seismic velocity analysis

    DOEpatents

    Berryman, James G.

    2001-01-01

    The method of the invention is employed for determining the state of saturation in a subterranean formation using only seismic velocity measurements (e.g., shear and compressional wave velocity data). Seismic velocity data collected from a region of the formation of like solid material properties can provide relatively accurate partial saturation data derived from a well-defined triangle plotted in the (ρ/μ, λ/μ)-plane. When the seismic velocity data are collected over a large region of a formation having both like and unlike materials, the method first distinguishes the like materials by initially plotting the seismic velocity data in the (ρ/λ, μ/λ)-plane to determine regions of the formation having like solid material properties and porosity.
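    Both plotting coordinates are computable from velocities alone, since μ = ρVs² and λ = ρVp² - 2μ imply λ/μ = (Vp/Vs)² - 2 and ρ/μ = 1/Vs². A minimal sketch of the coordinate computation (the velocity values are illustrative, not from the patent):

```python
# Coordinates of a measurement in the patent's (rho/mu, lambda/mu) plane,
# derived purely from Vp and Vs; density cancels in both ratios.

def lame_plane_point(vp, vs):
    """(rho/mu, lambda/mu) for velocities in m/s; rho/mu has units s^2/m^2."""
    rho_over_mu = 1.0 / vs**2
    lam_over_mu = (vp / vs) ** 2 - 2.0
    return rho_over_mu, lam_over_mu

# e.g. a water-saturated sand (higher Vp/Vs) vs a gas sand (lower Vp/Vs)
for label, vp, vs in [("saturated", 2300.0, 1100.0), ("gas", 1900.0, 1080.0)]:
    x, y = lame_plane_point(vp, vs)
    print(f"{label:>9}: rho/mu = {x:.3e} s^2/m^2, lambda/mu = {y:.3f}")
# Points from like solid material at different saturations trace out the
# triangle the patent describes.
```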

  12. Development of a low-cost method to estimate the seismic signature of a geothermal field from ambient noise analysis.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Tibuleac, Ileana

    2016-06-30

    A new, cost-effective and non-invasive exploration method using ambient seismic noise has been tested at Soda Lake, NV, with promising results. The material included in this report demonstrates that, starting from initial S-velocity models estimated from ambient-noise surface waves, the ambient-noise seismic reflection survey, although of lower resolution, reproduces the results of the active survey when the ambient seismic noise is not contaminated by strong cultural noise. Ambient-noise resolution is lower at depth (below 1000 m) than that of the active survey. In general, the results are promising, and useful information, including dipping features and fault locations, can be recovered from ambient seismic noise.

  13. Large-Scale Test of Dynamic Correlation Processors: Implications for Correlation-Based Seismic Pipelines

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Dodge, D. A.; Harris, D. B.

    Correlation detectors are of considerable interest to the seismic monitoring communities because they offer reduced detection thresholds and combine detection, location and identification functions into a single operation. They appear to be ideal for applications requiring screening of frequent repeating events. However, questions remain about how broadly empirical correlation methods are applicable. We describe the effectiveness of banks of correlation detectors in a system that combines traditional power detectors with correlation detectors in terms of efficiency, which we define to be the fraction of events detected by the correlators. This paper elaborates and extends the concept of a dynamic correlation detection framework – a system which autonomously creates correlation detectors from event waveforms detected by power detectors; and reports observed performance on a network of arrays in terms of efficiency. We performed a large scale test of dynamic correlation processors on an 11 terabyte global dataset using 25 arrays in the single frequency band 1-3 Hz. The system found over 3.2 million unique signals and produced 459,747 screened detections. A very satisfying result is that, on average, efficiency grows with time and, after nearly 16 years of operation, exceeds 47% for events observed over all distance ranges and approaches 70% for near regional and 90% for local events. This observation suggests that future pipeline architectures should make extensive use of correlation detectors, principally for decluttering observations of local and near-regional events. Our results also suggest that future operations based on correlation detection will require commodity large-scale computing infrastructure, since the numbers of correlators in an autonomous system can grow into the hundreds of thousands.
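    A single-channel version of the detectors populating such banks is easy to sketch: slide an event-waveform template along continuous data and declare a detection wherever the normalized correlation exceeds a threshold. The data, template, and threshold below are synthetic and illustrative; operational systems set thresholds statistically and run many templates per array.

```python
# Minimal normalized cross-correlation (template-matching) detector.
import numpy as np

def correlation_detect(data, template, threshold=0.8):
    """Sample indices where the normalized cross-correlation >= threshold."""
    nt = template.size
    tpl = template - template.mean()
    tpl /= np.linalg.norm(tpl)
    hits = []
    for i in range(data.size - nt + 1):
        w = data[i:i + nt] - data[i:i + nt].mean()
        denom = np.linalg.norm(w)
        if denom == 0:
            continue
        if np.dot(w / denom, tpl) >= threshold:
            hits.append(i)
    return hits

rng = np.random.default_rng(2)
template = np.sin(2 * np.pi * 2.0 * np.arange(0, 1, 0.01)) * np.hanning(100)
data = 0.05 * rng.standard_normal(2000)
data[400:500] += template           # a repeating event buried at sample 400
data[1500:1600] += 0.7 * template   # a weaker repeat at sample 1500
hits = correlation_detect(data, template)
print("detections near samples:", hits[:3], "...")
```

Normalizing by both window and template energy is what buys the reduced detection threshold: the weaker 0.7-amplitude repeat correlates nearly as strongly as the full-amplitude one.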

  15. Imaging of 3-D seismic velocity structure of Southern Sumatra region using double difference tomographic method

    NASA Astrophysics Data System (ADS)

    Lestari, Titik; Nugraha, Andri Dian

    2015-04-01

    The Southern Sumatra region has a high level of seismicity due to the influence of the subduction system, the Sumatran fault, the Mentawai fault, and stretching zone activities. The seismic activity of the Southern Sumatra region is recorded by the Meteorological, Climatological and Geophysical Agency (MCGA) seismograph network. In this study, we used an earthquake data catalog compiled by MCGA, comprising 3013 events from 10 seismic stations around the Southern Sumatra region for the period April 2009 - April 2014, to invert for the 3-D seismic velocity structure (Vp, Vs, and Vp/Vs ratio). We applied the double-difference seismic tomography method (tomoDD) to determine Vp, Vs, and Vp/Vs ratio with hypocenter adjustment. For the inversion procedure, we started from the initial 1-D seismic velocity model AK135 and a constant Vp/Vs of 1.73. The synthetic travel times from source to receiver were calculated using the ray pseudo-bending technique, while the main tomographic inversion was performed using the LSQR method. Model resolution was evaluated using a checkerboard test and the derivative weight sum (DWS). Our preliminary results show low Vp and Vs anomalies along Bukit Barisan, which may be associated with the weak zone of the Sumatran fault and the migration of partially melted material. Low velocity anomalies at 30-50 km depth in the fore-arc region may indicate circulation of hydrous material caused by slab dehydration. We also detected low seismicity in the fore-arc region that may indicate a seismic gap; it coincides with the contact zone between high and low velocity anomalies, and two large earthquakes (Jambi and Mentawai) also occurred at this contact of contrasting velocities.
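    The LSQR step at the core of such inversions can be shown on a toy problem. The example below is straight-ray travel-time tomography on three cells, purely to show the mechanics; the real tomoDD problem uses differential times, 3-D bent rays, and damping/smoothing.

```python
# Toy travel-time tomography solved with LSQR: G[i, j] is the length of
# ray i inside cell j, t = G s for slownesses s.
import numpy as np
from scipy.sparse import csr_matrix
from scipy.sparse.linalg import lsqr

G = csr_matrix(np.array([[10.0,  0.0,  0.0],
                         [10.0, 10.0,  0.0],
                         [ 0.0, 10.0, 10.0],
                         [10.0, 10.0, 10.0]]))   # path lengths, km
s_true = np.array([1/6.0, 1/5.5, 1/7.0])          # slowness s/km (Vp 6, 5.5, 7)
t_obs = G @ s_true                                # travel times, s

s_est = lsqr(G, t_obs)[0]
print("recovered Vp (km/s):", np.round(1.0 / s_est, 2))   # ~ [6. 5.5 7.]
```

LSQR is favored for the real problem because the ray-path matrix is huge but extremely sparse, and the solver touches it only through matrix-vector products.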

  17. A Novel Approach to Constrain Near-Surface Seismic Wave Speed Based on Polarization Analysis

    NASA Astrophysics Data System (ADS)

    Park, S.; Ishii, M.

    2016-12-01

    Understanding the seismic responses of cities around the world is essential for the risk assessment of earthquake hazards. One of the important parameters is the elastic structure of the sites, in particular the near-surface seismic wave speed, which influences the level of ground shaking. Many methods have been developed to constrain the elastic structure of populated sites or urban basins, and here we introduce a new technique based on analyzing the polarization content, or three-dimensional particle motion, of seismic phases arriving at the sites. Polarization analysis of three-component seismic data was widely used up to about two decades ago to detect signals and identify different types of seismic arrivals. Today, we have a good understanding of the expected polarization direction and ray parameter for seismic wave arrivals, calculated from a reference seismic model. The polarization of a given phase is also strongly sensitive to the elastic wave speed immediately beneath the station. This allows us to compare the observed and predicted polarization directions of incoming body waves and infer the near-surface wave speed. This approach is applied to the High-Sensitivity Seismograph Network in Japan, where we benchmark the results against the well-log data available at most stations. There is good agreement between our estimates of seismic wave speeds and those from well logs, confirming the efficacy of the new method. In most urban environments, where well logging is not a practical option for measuring seismic wave speeds, this method can provide a reliable, non-invasive, and computationally inexpensive estimate of near-surface elastic properties.
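    The polarization measurement underlying the method can be sketched as follows: the dominant eigenvector of the multi-component covariance matrix gives the particle-motion direction of an arriving P wave, and its apparent incidence angle, combined with the known ray parameter, constrains the wave speed beneath the station (sin i = p·v for a P arrival). The two-component data, noise level, and ray parameter below are synthetic and illustrative, not the paper's procedure in detail.

```python
# Polarization-based near-surface speed estimate from synthetic Z/R data.
import numpy as np

rng = np.random.default_rng(3)
n = 500
incidence_true = np.deg2rad(10.0)                 # from vertical
pulse = np.sin(np.linspace(0, np.pi, n)) ** 2     # simple P pulse shape
z = pulse * np.cos(incidence_true) + 0.02 * rng.standard_normal(n)  # vertical
r = pulse * np.sin(incidence_true) + 0.02 * rng.standard_normal(n)  # radial

cov = np.cov(np.vstack([z, r]))
eigvals, eigvecs = np.linalg.eigh(cov)
v_dom = eigvecs[:, np.argmax(eigvals)]            # dominant polarization
incidence_est = np.arctan2(abs(v_dom[1]), abs(v_dom[0]))

p = 0.06                                          # ray parameter, s/km (assumed)
v_surface = np.sin(incidence_est) / p
print(f"incidence ~ {np.degrees(incidence_est):.1f} deg, "
      f"near-surface Vp ~ {v_surface:.1f} km/s")
```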

  18. High precision gas hydrate imaging of small-scale and high-resolution marine sparker multichannel seismic data

    NASA Astrophysics Data System (ADS)

    Luo, D.; Cai, F.

    2017-12-01

    Small-scale, high-resolution marine multi-channel seismic surveys using large-energy sparkers are characterized by a high dominant source frequency, wide bandwidth, and high resolution. This high-resolution, high-detection-precision technology was designed to improve the imaging quality of shallow sediments. In this study, a 20 kJ sparker and a 24-channel streamer cable with a 6.25 m group interval were used as the seismic source and receiver system, respectively. The key factors for seismic imaging of gas hydrate are enhancement of the S/N ratio, amplitude compensation, and detailed velocity analysis. However, the data in this study have the following characteristics: (1) small maximum offsets, which are adverse to velocity analysis and multiple attenuation; (2) a lack of low-frequency information, with frequencies below 100 Hz absent; and (3) a low S/N ratio owing to the low fold (only 12). These characteristics make it difficult to reach the targets of seismic imaging, so targeted processing methods were used to improve the imaging quality of gas hydrate. First, several noise-suppression technologies were applied in combination to the pre-stack seismic data to suppress seismic noise and improve the S/N ratio, including a spectrum-sharing noise elimination method, median filtering, and an exogenous interference suppression method. Second, a combination of three technologies, SRME, τ-p deconvolution, and high-precision Radon transformation, was used to remove multiples. Third, an accurate velocity field was used in amplitude compensation to highlight the bottom simulating reflector (BSR, the indicator of gas hydrates) and gas migration pathways (such as gas chimneys and hot spots). Fourth, fine velocity analysis technology was used to improve the accuracy of velocity analysis.
    Fifth, pre-stack deconvolution was used to compensate for low-frequency energy and suppress the ghost, highlighting the formation reflection characteristics. The results show that small-scale, high-resolution marine sparker multi-channel seismic surveys are much more effective in improving the resolution and quality of gas hydrate imaging than conventional seismic acquisition technology.

  19. Final Report: Seismic Hazard Assessment at the PGDP

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wang, Zhinmeng

    2007-06-01

    Selecting a level of seismic hazard at the Paducah Gaseous Diffusion Plant for policy considerations and engineering design is not an easy task because it depends not only on seismic hazard, but also on seismic risk and other related environmental, social, and economic issues. Seismic hazard is the main focus. There is no question that there are seismic hazards at the Paducah Gaseous Diffusion Plant because of its proximity to several known seismic zones, particularly the New Madrid Seismic Zone. The issues in estimating seismic hazard are (1) the methods being used and (2) difficulty in characterizing the uncertainties of seismic sources, earthquake occurrence frequencies, and ground-motion attenuation relationships. This report summarizes how the input data were derived, which methodologies were used, and what the hazard estimates at the Paducah Gaseous Diffusion Plant are.

  20. Seismic, satellite, and site observations of internal solitary waves in the NE South China Sea.

    PubMed

    Tang, Qunshu; Wang, Caixia; Wang, Dongxiao; Pawlowicz, Rich

    2014-06-20

    Internal solitary waves (ISWs) in the NE South China Sea (SCS) are tidally generated at the Luzon Strait. Their propagation, evolution, and dissipation involve numerous issues that are still poorly understood. Here, a novel method of seismic oceanography capable of capturing oceanic finescale structures is used to study ISWs in the slope region of the NE SCS. Near-simultaneous observations of two ISWs were acquired using seismic and satellite imaging, together with water column measurements. The vertical and horizontal length scales of the seismically observed ISWs are around 50 m and 1-2 km, respectively. Wave phase speeds calculated from seismic observations, satellite images, and water column data are consistent with each other. Observed waveforms and vertical velocities also correspond well with those estimated using KdV theory. These results suggest that the seismic method, a new option for oceanographers, can be further applied to resolve other important issues related to ISWs.
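
    The KdV comparison invoked above can be illustrated with the standard weakly nonlinear two-layer model, which gives a linear long-wave speed, a nonlinear soliton phase speed, and a sech²-shaped waveform width. The layer depths, reduced gravity, and amplitude below are illustrative values chosen only for plausibility, not the parameters of the study; a minimal sketch:

```python
import math

def kdv_soliton(h1, h2, g_reduced, eta0):
    """Two-layer KdV internal solitary wave (standard conventions):
    returns the linear long-wave speed c0, the nonlinear phase speed c,
    and the characteristic half-width L of the sech^2 waveform.
    A depression wave (eta0 < 0) arises when h1 < h2."""
    c0 = math.sqrt(g_reduced * h1 * h2 / (h1 + h2))   # linear long-wave speed
    alpha = 1.5 * c0 * (h1 - h2) / (h1 * h2)          # nonlinear coefficient
    beta = c0 * h1 * h2 / 6.0                         # dispersive coefficient
    c = c0 + alpha * eta0 / 3.0                       # soliton phase speed
    L = math.sqrt(12.0 * beta / (alpha * eta0))       # half-width scale
    return c0, c, L

# Illustrative numbers: 100 m upper layer over a 900 m lower layer,
# reduced gravity 0.02 m/s^2, 50 m depression amplitude (eta0 < 0).
c0, c, L = kdv_soliton(100.0, 900.0, 0.02, -50.0)
```

    With these illustrative inputs the soliton travels faster than the linear speed and has a horizontal scale of order 1 km, consistent in magnitude with the scales reported in the abstract.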

  1. Pick- and waveform-based techniques for real-time detection of induced seismicity

    NASA Astrophysics Data System (ADS)

    Grigoli, Francesco; Scarabello, Luca; Böse, Maren; Weber, Bernd; Wiemer, Stefan; Clinton, John F.

    2018-05-01

    The monitoring of induced seismicity is a common operation in many industrial activities, such as conventional and non-conventional hydrocarbon production or mining and geothermal energy exploitation, to cite a few. During such operations, we generally collect very large and strongly noise-contaminated data sets that require robust and automated analysis procedures. Induced seismicity data sets are often characterized by sequences of multiple events with short interevent times or overlapping events; in these cases, pick-based location methods may struggle to correctly assign picks to phases and events, and errors can lead to missed detections and/or reduced location resolution and incorrect magnitudes, which can have significant consequences if real-time seismicity information is used in risk assessment frameworks. To overcome these issues, different waveform-based methods for the detection and location of microseismicity have been proposed. The main advantage of waveform-based methods is that they appear to perform better and can simultaneously detect and locate seismic events, providing high-quality locations in a single step, while the main disadvantage is that they are computationally expensive. Although these methods have been applied to different induced seismicity data sets, an extensive comparison with sophisticated pick-based detection methods is still missing. In this work, we introduce our improved waveform-based detector and we compare its performance with two pick-based detectors implemented within the SeiscomP3 software suite. We test the performance of these three approaches with both synthetic and real data sets related to the induced seismicity sequence at the deep geothermal project in the vicinity of the city of St. Gallen, Switzerland.

  2. Constraining shallow seismic event depth via synthetic modeling for Expert Technical Analysis at the IDC

    NASA Astrophysics Data System (ADS)

    Stachnik, J.; Rozhkov, M.; Baker, B.; Bobrov, D.; Friberg, P. A.

    2015-12-01

    Depth of event is an important criterion of seismic event screening at the International Data Center, CTBTO. However, a thorough determination of the event depth can mostly be conducted only through special analysis, because the IDC's Event Definition Criteria are based, in particular, on depth estimation uncertainties. This causes a large number of events in the Reviewed Event Bulletin to have depth constrained to the surface. When the true origin depth is greater than that reasonable for a nuclear test (3 km based on existing observations), this may result in a heavier workload to manually distinguish between shallow and deep events. Also, the IDC depth criterion is not applicable to events with a small t(pP-P) travel-time difference, which is the case for nuclear tests. Since the shape of the first few seconds of signal from very shallow events is very sensitive to the presence of the depth phase, cross-correlation between observed and theoretical seismograms can provide an estimate of the event depth, and so extend the screening process. We exercised this approach mostly with events at teleseismic and partly regional distances. We found that this approach can be very efficient for the seismic event screening process, with certain caveats related mostly to poorly defined crustal models at the source and receiver, which can shift the depth estimate. We used an adjustable t* teleseismic attenuation model for the synthetics, since this characteristic is not determined for most of the rays we studied. We studied a wide set of historical records of nuclear explosions, including so-called Peaceful Nuclear Explosions (PNE) with presumably known depths, and recent DPRK nuclear tests. The teleseismic synthetics are based on the stationary phase approximation implemented in Robert Herrmann's hudson96 program, and the regional modelling was done with the generalized ray technique of Vlastislav Cerveny, modified to handle complex source topography.

  3. Geophysical Monitoring Methods Evaluation for the FutureGen 2.0 Project

    DOE PAGES

    Strickland, Chris E.; USA, Richland Washington; Vermeul, Vince R.; ...

    2014-12-31

    A comprehensive monitoring program will be needed in order to assess the effectiveness of carbon sequestration at the FutureGen 2.0 carbon capture and storage (CCS) field site. Geophysical monitoring methods are sensitive to subsurface changes that result from injection of CO2 and will be used for: (1) tracking the spatial extent of the free-phase CO2 plume, (2) monitoring advancement of the pressure front, (3) identifying or mapping areas where induced seismicity occurs, and (4) identifying and mapping regions of increased risk for brine or CO2 leakage from the reservoir. Site-specific suitability and cost effectiveness were evaluated for a number of geophysical monitoring methods including: passive seismic monitoring, reflection seismic imaging, integrated surface deformation, time-lapse gravity, pulsed neutron capture logging, cross-borehole seismic, electrical resistivity tomography, magnetotellurics and controlled source electromagnetics. The results of this evaluation indicate that CO2 injection monitoring using reflection seismic methods would likely be difficult at the FutureGen 2.0 site. Electrical methods also exhibited low sensitivity to the expected CO2 saturation changes and would be affected by metallic infrastructure at the field site. Passive seismic, integrated surface deformation, time-lapse gravity, and pulsed neutron capture monitoring were selected for implementation as part of the FutureGen 2.0 storage site monitoring program.

  4. Geophysical Monitoring Methods Evaluation for the FutureGen 2.0 Project

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Strickland, Chris E.; USA, Richland Washington; Vermeul, Vince R.

    A comprehensive monitoring program will be needed in order to assess the effectiveness of carbon sequestration at the FutureGen 2.0 carbon capture and storage (CCS) field site. Geophysical monitoring methods are sensitive to subsurface changes that result from injection of CO2 and will be used for: (1) tracking the spatial extent of the free-phase CO2 plume, (2) monitoring advancement of the pressure front, (3) identifying or mapping areas where induced seismicity occurs, and (4) identifying and mapping regions of increased risk for brine or CO2 leakage from the reservoir. Site-specific suitability and cost effectiveness were evaluated for a number of geophysical monitoring methods including: passive seismic monitoring, reflection seismic imaging, integrated surface deformation, time-lapse gravity, pulsed neutron capture logging, cross-borehole seismic, electrical resistivity tomography, magnetotellurics and controlled source electromagnetics. The results of this evaluation indicate that CO2 injection monitoring using reflection seismic methods would likely be difficult at the FutureGen 2.0 site. Electrical methods also exhibited low sensitivity to the expected CO2 saturation changes and would be affected by metallic infrastructure at the field site. Passive seismic, integrated surface deformation, time-lapse gravity, and pulsed neutron capture monitoring were selected for implementation as part of the FutureGen 2.0 storage site monitoring program.

  5. Evaluating geophysical lithology determination methods in the central offshore Nile Delta, Egypt

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Nada, H.; Shrallow, J.

    1994-12-31

    Two post-stack and one pre-stack geophysical techniques were used to extract lithology and fluid information from seismic data. The purpose of this work was to evaluate the effectiveness of such methods in helping to find more hydrocarbons and reduce exploration risk in Egypt's Nile Delta. Amplitude Variation with Offset (AVO) was used as a direct hydrocarbon indicator. CDP gathers were sorted into common-angle gathers. The angle traces from 0-10 degrees were stacked to form a near-angle stack and those from 30-40 degrees were stacked to form a far-angle stack. Comparison of the far- and near-angle stacks indicates areas with seismic responses that match gas-bearing sand models in the Pliocene and Messinian. Seismic Sequence Attribute mapping was used to measure the reflectivity of a seismic sequence; the specific attribute measured in this study was the Maximum Absolute Amplitude of the seismic reflections within a sequence. Post-stack seismic inversion was used to convert zero-phase final migrated data to pseudo acoustic impedance data in order to interpret lithology from seismic data. All three methods are useful in the Nile Delta for identifying sand-prone areas, but only AVO can be used to detect fluid content.

  6. 40 CFR 146.95 - Class VI injection depth waiver requirements.

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ... methods (e.g., seismic, electrical, gravity, or electromagnetic surveys and/or down-hole carbon dioxide... injection zone(s); and indirect methods (e.g., seismic, electrical, gravity, or electromagnetic surveys and...

  7. 40 CFR 146.95 - Class VI injection depth waiver requirements.

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... methods (e.g., seismic, electrical, gravity, or electromagnetic surveys and/or down-hole carbon dioxide... injection zone(s); and indirect methods (e.g., seismic, electrical, gravity, or electromagnetic surveys and...

  8. 40 CFR 146.95 - Class VI injection depth waiver requirements.

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ... methods (e.g., seismic, electrical, gravity, or electromagnetic surveys and/or down-hole carbon dioxide... injection zone(s); and indirect methods (e.g., seismic, electrical, gravity, or electromagnetic surveys and...

  9. Earthquake Building Damage Mapping Based on Feature Analyzing Method from Synthetic Aperture Radar Data

    NASA Astrophysics Data System (ADS)

    An, L.; Zhang, J.; Gong, L.

    2018-04-01

    Playing an important role in gathering information on damage to social infrastructure, Synthetic Aperture Radar (SAR) remote sensing is a useful tool for monitoring earthquake disasters. With the wide application of this technique, a standard method of comparing post-seismic to pre-seismic data has become common. However, multi-temporal SAR processing is not always achievable, so developing a method for building damage detection that uses post-seismic data only is of great importance. In this paper, the authors initiate an experimental investigation to establish an object-based, feature-analysing classification method for building damage recognition.

  10. Study on vulnerability matrices of masonry buildings of mainland China

    NASA Astrophysics Data System (ADS)

    Sun, Baitao; Zhang, Guixin

    2018-04-01

    The degree and distribution of damage to buildings subjected to earthquakes is a concern of the Chinese Government and the public. Seismic damage data indicates that seismic capacities of different types of building structures in various regions throughout mainland China are different. Furthermore, the seismic capacities of the same type of structure in different regions may vary. The contributions of this research are summarized as follows: 1) Vulnerability matrices and earthquake damage matrices of masonry structures in mainland China were chosen as research samples. The aim was to analyze the differences in seismic capacities of sample matrices and to present general rules for categorizing seismic resistance. 2) Curves relating the percentage of damaged masonry structures with different seismic resistances subjected to seismic demand in different regions of seismic intensity (VI to X) have been developed. 3) A method has been proposed to build vulnerability matrices of masonry structures. The damage ratio for masonry structures under high-intensity events such as the Ms 6.1 Panzhihua earthquake in Sichuan province on 30 August 2008, was calculated to verify the applicability of this method. This research offers a significant theoretical basis for predicting seismic damage and direct loss assessment of groups of buildings, as well as for earthquake disaster insurance.

  11. a Comparative Case Study of Reflection Seismic Imaging Method

    NASA Astrophysics Data System (ADS)

    Alamooti, M.; Aydin, A.

    2017-12-01

    Seismic imaging is the most common means of gathering information about subsurface structural features. The accuracy of seismic images may be highly variable depending on the complexity of the subsurface and on how the seismic data are processed. One of the crucial steps in this process, especially in layered sequences with complicated structure, is the time and/or depth migration of the seismic data. The primary purpose of migration is to increase the spatial resolution of seismic images by repositioning the recorded seismic signal back to its original point of reflection in time/space, which enhances information about complex structure. In this study, our objective is to process a seismic data set (courtesy of the University of South Carolina) to generate an image on which the Magruder fault near Allendale, SC, can be clearly distinguished and its attitude accurately depicted. The data were gathered by the common-midpoint method with 60 geophones equally spaced along an approximately 550 m long traverse over nearly flat ground. The results obtained from the application of different migration algorithms (including finite-difference and Kirchhoff) are compared in the time and depth domains to investigate the efficiency of each algorithm in reducing processing time and improving the accuracy of seismic images in recovering the correct position of the Magruder fault.

  12. MASW on the standard seismic prospective scale using full spread recording

    NASA Astrophysics Data System (ADS)

    Białas, Sebastian; Majdański, Mariusz; Trzeciak, Maciej; Gałczyński, Edward; Maksym, Andrzej

    2015-04-01

    Multichannel Analysis of Surface Waves (MASW) is a seismic survey method that uses the dispersion curve of surface waves to describe the stiffness of the near surface. It is used mainly at the geotechnical engineering scale, with total spread lengths of 5-450 m and spread offsets of 1-100 m, and a hammer as the seismic source. The standard MASW procedure is: data acquisition, dispersion analysis, and inversion of the extracted dispersion curve to obtain the closest theoretical curve. The final result includes shear-wave velocity (Vs) values at different depths along the surveyed lines. The main goal of this work is to extend this engineering method to a larger scale, with a standard prospecting spread length of 20 km, using 4.5 Hz vertical-component geophones. Standard vibroseis and explosive methods are used as the seismic source, and the full spread is recorded during every single shot. The seismic data used for this analysis were acquired during the Braniewo 2014 project in northern Poland. The results of the standard MASW procedure show that the method can be used at a much larger scale as well; the modified methodology requires only a much stronger seismic source.
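
    The dispersion-analysis step of MASW is commonly implemented with a phase-shift (frequency-domain slant-stack) approach: for each frequency, the traces are phase-aligned for a range of trial phase velocities and stacked, and the stack maxima trace out the dispersion curve. The sketch below runs that idea on a synthetic, non-dispersive record; the array geometry and velocity are illustrative assumptions, not the Braniewo survey parameters.

```python
import numpy as np

def phase_shift_dispersion(data, dx, dt, freqs, cs):
    """Phase-shift dispersion imaging: for each frequency, apply a
    trial phase shift exp(+i*w*x/c) to the unit-amplitude spectra and
    stack across offsets; the image peaks where c matches the true
    phase velocity at that frequency."""
    nx, nt = data.shape
    U = np.fft.rfft(data, axis=1)
    fax = np.fft.rfftfreq(nt, dt)
    x = np.arange(nx) * dx
    img = np.zeros((len(freqs), len(cs)))
    for i, f in enumerate(freqs):
        k = int(np.argmin(np.abs(fax - f)))      # nearest frequency bin
        spec = U[:, k] / np.abs(U[:, k])         # keep phase only
        for j, c in enumerate(cs):
            shift = np.exp(1j * 2 * np.pi * f * x / c)
            img[i, j] = np.abs(np.sum(spec * shift))
    return img

# Synthetic: non-dispersive 20 Hz surface wave at 400 m/s across
# 24 geophones, 2 m spacing (illustrative geometry).
dx, dt, c_true = 2.0, 0.001, 400.0
x = np.arange(24) * dx
t = np.arange(1000) * dt
data = np.cos(2 * np.pi * 20.0 * (t[None, :] - x[:, None] / c_true))
cs = np.arange(100.0, 801.0, 10.0)
img = phase_shift_dispersion(data, dx, dt, [20.0], cs)
c_peak = cs[int(np.argmax(img[0]))]
```

    For a dispersive field record, scanning many frequencies yields the full dispersion image from which the Vs profile is inverted.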

  13. Evaluating the Reverse Time Migration Method on the dense Lapnet / Polenet seismic array in Europe

    NASA Astrophysics Data System (ADS)

    Dupont, Aurélien; Le Pichon, Alexis

    2013-04-01

    In this study, results obtained using the reverse time migration method are used as a benchmark to evaluate the method implemented by Walker et al. (2010, 2011). Explosion signals recorded by the USArray and extracted from the TAIRED catalogue (TA Infrasound Reference Event Database user community; Vernon et al., 2012) are investigated. The first is an explosion at Camp Minden, Louisiana (2012-10-16 04:25:00 UTC) and the second is a natural gas explosion near Price, Utah (2012-11-20 15:20:00 UTC). We compare our results to automatic solutions (www.iris.edu/spud/infrasoundevent); the good agreement between the two validates our detection method. Next, we analyse data from the dense Lapnet / Polenet seismic network (Kozlovskaya et al., 2008). Detection and location in two-dimensional space and time of infrasound events, presumably due to acoustic-to-seismic coupling, during the 2007-2009 period in Europe are presented. The aim of this work is to integrate near-real-time network performance predictions at regional scales to improve automatic detection of infrasonic sources. Dense seismic networks provide a valuable tool for monitoring infrasonic phenomena, since seismic locations have recently proved to be more accurate than infrasound locations owing to the large number of seismic sensors.

  14. Temblor, an app focused on your seismic risk and how to reduce it

    NASA Astrophysics Data System (ADS)

    Stein, R. S.; Sevilgen, V.; Sevilgen, S.; Kim, A.; Madden, E.

    2015-12-01

    Half of the world's population lives near active faults, and so could suffer earthquake damage. Most do not know they are at risk; many of the rest do too little, too late. So, Temblor is intended to enable everyone in the United States, and eventually the world, to learn their seismic hazard, to determine what most ensures their safety, and to determine the risk reduction measures in their best financial interest. In our free web and mobile app, and Chrome extension for real estate websites, Temblor estimates the likelihood of seismic shaking from all quakes at their occurrence rates, and the consequences of that shaking in terms of home damage. The app then shows how the damage or its costs could be decreased by buying or renting a seismically safer home, securing fragile objects inside your home, retrofitting an older home, or buying earthquake insurance. Temblor uses public data from the USGS in the U.S., SHARE in Europe, and the GEAR model (Bird et al., in press, BSSA) for the globe. Through publicly available modeling methods, the hazard data are combined with public data on homes (construction date and square footage) to make risk calculations, which means that Temblor's results are independently reproducible. The app makes many simplifying assumptions, but users can provide additional information on their site and home for refined estimates. Temblor also lets one see active faults and recent quakes on the screen while driving through an area. Because fear tends to trigger either panic or denial, Temblor seeks to make the world of earthquakes more fascinating than frightening. We are neither scaring nor soothing people, but rather talking straight. Through maps, globes, push notifications, family connections, and cost and benefit estimates, Temblor emphasizes the personal, local, real-time, and, most importantly, rational. Temblor's goal is to distill scientific and engineering information into lucid, trusted, and ideally actionable guidance for renters, home owners, and home buyers, so that we all live more safely in earthquake country.

  15. An examination of the earthquake behaviour of a retaining wall considering soil-structure interaction

    NASA Astrophysics Data System (ADS)

    Köktan, Utku; Demir, Gökhan; Kerem Ertek, M.

    2017-04-01

    The earthquake behavior of retaining walls is commonly calculated with pseudo-static approaches based on the Mononobe-Okabe method. The seismic earth pressure acting on a retaining wall obtained by the Mononobe-Okabe method gives no definite idea of the distribution of that pressure, because it is obtained by balancing the forces acting on the active wedge behind the wall; wave propagation effects and soil-structure interaction are neglected. The purpose of this study is to examine the earthquake behavior of a retaining wall taking soil-structure interaction into account. For this purpose, time-history seismic analyses of the soil-structure interaction system using the finite element method were carried out for three different soil conditions. The seismic analysis of the soil-structure model was performed with the earthquake record "1971, San Fernando Pacoima Dam, 196 degree" from the library of the MIDAS GTS NX software. The results obtained from the analyses show that soil-structure interaction is very important for the seismic design of a retaining wall. Keywords: Soil-structure interaction, Finite element model, Retaining wall
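
    For reference, the pseudo-static approach discussed above rests on the classical Mononobe-Okabe dynamic active earth pressure coefficient, which can be sketched as follows; the parameter values at the bottom are illustrative, not those of the study.

```python
import math

def mononobe_okabe_kae(phi, delta, beta, i, kh, kv=0.0):
    """Mononobe-Okabe dynamic active earth pressure coefficient K_AE.
    phi: soil friction angle, delta: wall friction angle, beta: wall
    batter from vertical, i: backfill slope (all in degrees);
    kh, kv: horizontal/vertical seismic coefficients.
    With kh = kv = 0 this reduces to the static Coulomb K_A."""
    p, d, b, s = (math.radians(a) for a in (phi, delta, beta, i))
    theta = math.atan2(kh, 1.0 - kv)          # seismic inertia angle
    num = math.cos(p - theta - b) ** 2
    root = math.sqrt(max(0.0, math.sin(p + d) * math.sin(p - theta - s)
                         / (math.cos(d + b + theta) * math.cos(s - b))))
    den = (math.cos(theta) * math.cos(b) ** 2
           * math.cos(d + b + theta) * (1.0 + root) ** 2)
    return num / den

# Static check: phi = 30 deg, smooth vertical wall, level backfill
# gives the Coulomb/Rankine value K_A = 1/3; kh = 0.2 raises it.
ka_static = mononobe_okabe_kae(30.0, 0.0, 0.0, 0.0, 0.0)
kae = mononobe_okabe_kae(30.0, 0.0, 0.0, 0.0, 0.2)
```

    The total active thrust is then P_AE = 0.5 * K_AE * gamma * H^2 for unit weight gamma and wall height H; as the abstract notes, the method says nothing about how that thrust is distributed over the wall.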

  16. Post-injection feasibility study with the reflectivity method for the Ketzin pilot site, Germany (CO2 storage in a saline aquifer)

    NASA Astrophysics Data System (ADS)

    Ivanova, Alexandra; Kempka, Thomas; Huang, Fei; Diersch [Gil], Magdalena; Lüth, Stefan

    2016-04-01

    3D time-lapse seismic surveys (4D seismic) have proven to be a suitable technique for monitoring injected CO2, because when CO2 replaces brine as a free gas it considerably affects the elastic properties of the porous medium. Forward modeling of the 4D seismic response to CO2-fluid substitution in a storage reservoir is an essential step in such studies. At the Ketzin pilot site (CO2 storage), 67 kilotons of CO2 were injected into a saline aquifer between 2008 and 2013. In order to track the migration of CO2 at Ketzin, 3D time-lapse seismic data were acquired by means of a baseline pre-injection survey in 2005 and three monitor surveys, in 2009, 2012 and 2015 (the first post-injection survey). Results of 4D seismic forward modeling with the reflectivity method suggest that the effects of the injected CO2 on the 4D seismic data at Ketzin are significant in both seismic amplitudes and time delays. These results confirm the corresponding observations in the real 4D seismic data at the Ketzin pilot site, although reservoir heterogeneity and seismic resolution, as well as random and coherent seismic noise, are negative factors to be considered in this interpretation. The modeling results support the conclusion that even small amounts of injected CO2 can be monitored, both qualitatively and quantitatively, in a post-injection saline aquifer such as the CO2 storage reservoir at the Ketzin pilot site, albeit with considerable uncertainties (Lüth et al., 2015). Reference: Lüth, S., Ivanova, A., Kempka, T. (2015): Conformity assessment of monitoring and simulation of CO2 storage: A case study from the Ketzin pilot site. International Journal of Greenhouse Gas Control, 42, p. 329-339.
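
    The fluid-substitution step in such forward modeling typically rests on Gassmann's equation, which predicts how the saturated bulk modulus, and hence velocity and reflectivity, drops when brine is replaced by a much more compressible fluid such as CO2. The rock and fluid moduli below are generic illustrative sandstone values, not Ketzin parameters.

```python
def gassmann_ksat(k_dry, k_min, k_fl, phi):
    """Gassmann fluid substitution (moduli in GPa, porosity fractional):
    K_sat = K_dry + (1 - K_dry/K_min)^2 /
            (phi/K_fl + (1 - phi)/K_min - K_dry/K_min^2)."""
    num = (1.0 - k_dry / k_min) ** 2
    den = phi / k_fl + (1.0 - phi) / k_min - k_dry / k_min ** 2
    return k_dry + num / den

# Illustrative sandstone: K_dry = 6 GPa, quartz K_mineral = 37 GPa,
# porosity 0.25; brine K_fl ~ 2.5 GPa, free-gas CO2 K_fl ~ 0.05 GPa.
k_brine = gassmann_ksat(6.0, 37.0, 2.5, 0.25)
k_co2 = gassmann_ksat(6.0, 37.0, 0.05, 0.25)
```

    The drop from the brine-saturated to the CO2-saturated modulus is what produces the amplitude changes and time delays that the 4D surveys detect.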

  17. The Global Detection Capability of the IMS Seismic Network in 2013 Inferred from Ambient Seismic Noise Measurements

    NASA Astrophysics Data System (ADS)

    Gaebler, P. J.; Ceranna, L.

    2016-12-01

    All nuclear explosions, whether on the Earth's surface, underground, underwater or in the atmosphere, are banned by the Comprehensive Nuclear-Test-Ban Treaty (CTBT). As part of this treaty, a verification regime was put into place to detect, locate and characterize nuclear explosion tests at any time, by anyone and everywhere on the Earth. The International Monitoring System (IMS) plays a key role in the verification regime of the CTBT. Of the different monitoring techniques used in the IMS, the seismic waveform approach is the most effective technology for monitoring underground nuclear testing and for identifying and characterizing potential nuclear events. This study introduces a method of seismic threshold monitoring to assess an upper magnitude limit of a potential seismic event in a given geographical region. The method is based on ambient seismic background noise measurements at the individual IMS seismic stations as well as on global distance correction terms for body-wave magnitudes, which are calculated using the seismic reflectivity method. From our investigations we conclude that a global detection threshold of around mb 4.0 can be achieved using only stations from the primary seismic network, with a clear latitudinal dependence of the detection threshold between the northern and southern hemispheres. Including the stations of the auxiliary seismic IMS network results in a slight improvement of the global detection capability. However, including wave arrivals from distances greater than 120 degrees, mainly PKP-wave arrivals, leads to a significant improvement in average global detection capability; in particular, it improves the detection threshold in the southern hemisphere. We further investigate the dependence of the detection capability on spatial (latitude and longitude) and temporal parameters, as well as on source type and the percentage of operational IMS stations.
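
    The threshold-monitoring idea can be sketched as follows, assuming the usual body-wave magnitude relation mb = log10(A/T) + Q(delta, h) and a fixed signal-to-noise criterion. The station noise values, Q terms, and the three-station detection rule below are illustrative assumptions, not the configuration used in the study.

```python
import math

def station_mb_threshold(noise_amp_nm, period_s, q_correction, snr=2.0):
    """Smallest body-wave magnitude detectable at one station:
    require the P amplitude to exceed snr times the ambient noise,
    then mb = log10(A/T) + Q(delta, h)."""
    return math.log10(snr * noise_amp_nm / period_s) + q_correction

def network_mb_threshold(stations, n_required=3):
    """Network threshold: the n-th smallest station threshold, i.e.
    the magnitude at which at least n stations detect the event."""
    th = sorted(station_mb_threshold(a, t, q) for a, t, q in stations)
    return th[n_required - 1]

# Illustrative stations: (noise amplitude in nm, period in s, Q(delta, h)).
stations = [(0.5, 1.0, 3.3), (1.0, 1.0, 3.5), (0.3, 1.0, 3.6), (2.0, 1.0, 3.4)]
mb_min = network_mb_threshold(stations, n_required=3)
```

    Mapping this network value over a grid of hypothetical source locations, with Q evaluated per station-source distance, produces the spatially varying detection-threshold maps described above.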

  18. Focal mechanism determination for induced seismicity using the neighbourhood algorithm

    NASA Astrophysics Data System (ADS)

    Tan, Yuyang; Zhang, Haijiang; Li, Junlun; Yin, Chen; Wu, Furong

    2018-06-01

    Induced seismicity is widely detected during hydraulic fracture stimulation. To better understand the fracturing process, a thorough knowledge of the source mechanism is required. In this study, we develop a new method to determine the focal mechanism for induced seismicity. Three misfit functions are used in our method to measure the differences between observed and modeled data from different aspects, including the waveform, P wave polarity and S/P amplitude ratio. We minimize these misfit functions simultaneously using the neighbourhood algorithm. Through synthetic data tests, we show the ability of our method to yield reliable focal mechanism solutions and study the effect of velocity inaccuracy and location error on the solutions. To mitigate the impact of the uncertainties, we develop a joint inversion method to find the optimal source depth and focal mechanism simultaneously. Using the proposed method, we determine the focal mechanisms of 40 stimulation induced seismic events in an oil/gas field in Oman. By investigating the results, we find that the reactivation of pre-existing faults is the main cause of the induced seismicity in the monitored area. Other observations obtained from the focal mechanism solutions are also consistent with earlier studies in the same area.

  19. Joint seismic data denoising and interpolation with double-sparsity dictionary learning

    NASA Astrophysics Data System (ADS)

    Zhu, Lingchen; Liu, Entao; McClellan, James H.

    2017-08-01

    Seismic data quality is vital to geophysical applications, so methods of data recovery, including denoising and interpolation, are common initial steps in the seismic data processing flow. We present a method to perform simultaneous interpolation and denoising, which is based on double-sparsity dictionary learning. This extends previous work that was for denoising only. The original double-sparsity dictionary learning algorithm is modified to track the traces with missing data by defining a masking operator that is integrated into the sparse representation of the dictionary. A weighted low-rank approximation algorithm is adopted to handle the dictionary updating as a sparse recovery optimization problem constrained by the masking operator. Compared to traditional sparse transforms with fixed dictionaries that lack the ability to adapt to complex data structures, the double-sparsity dictionary learning method learns the signal adaptively from selected patches of the corrupted seismic data, while preserving compact forward and inverse transform operators. Numerical experiments on synthetic seismic data indicate that this new method preserves more subtle features in the data set without introducing pseudo-Gibbs artifacts when compared to other directional multi-scale transform methods such as curvelets.
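
    The role of the masking operator in the sparse recovery step can be illustrated with a much-simplified stand-in: orthogonal matching pursuit over a fixed DCT-like dictionary. The paper learns a double-sparsity dictionary instead; this sketch only shows how restricting the fit to observed samples lets the sparse code fill the gaps.

```python
import numpy as np

def masked_omp(y, D, mask, k):
    """Orthogonal Matching Pursuit constrained by a sampling mask:
    fit only the observed entries (mask == True), then synthesize the
    full-length signal from the recovered sparse coefficients."""
    Dm = D[mask]                            # dictionary rows at observed samples
    r = y[mask].astype(float)               # residual on observed samples
    support, coef = [], np.zeros(D.shape[1])
    norms = np.linalg.norm(Dm, axis=0)
    for _ in range(k):
        corr = np.abs(Dm.T @ r) / np.maximum(norms, 1e-12)
        support.append(int(np.argmax(corr)))
        a, *_ = np.linalg.lstsq(Dm[:, support], y[mask], rcond=None)
        r = y[mask] - Dm[:, support] @ a
    coef[support] = a
    return D @ coef                         # reconstruction, gaps filled

# Synthetic trace: 3 atoms of a DCT-II dictionary, ~30% of samples missing.
rng = np.random.default_rng(0)
n = 64
D = np.cos(np.pi * np.outer(np.arange(n) + 0.5, np.arange(n)) / n)
x = 1.0 * D[:, 5] + 0.5 * D[:, 17] - 0.8 * D[:, 40]
mask = rng.random(n) > 0.3                  # True where a sample was recorded
x_rec = masked_omp(x, D, mask, k=3)
```

    In the full method this recovery runs patch-wise with a learned, doubly sparse dictionary rather than a fixed transform, which is what lets it adapt to complex seismic structure.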

  20. Seismic energy data analysis of Merapi volcano to test the eruption time prediction using materials failure forecast method (FFM)

    NASA Astrophysics Data System (ADS)

    Anggraeni, Novia Antika

    2015-04-01

    The test of eruption time prediction is an effort to prepare volcanic disaster mitigation, especially in a volcano's inhabited slope area, such as at Merapi Volcano. The test can be conducted by observing increases in volcanic activity, such as the degree of seismicity, deformation and SO2 gas emission. One method that can be used to predict the time of eruption is the Materials Failure Forecast Method (FFM), a predictive method for determining the time of a volcanic eruption introduced by Voight (1988). This method requires an increase in the rate of change, or acceleration, of the observed volcanic activity parameters. The parameter used in this study is the seismic energy value of Merapi Volcano from 1990-2012. The data were plotted as graphs of the inverse seismic energy rate versus time, and the FFM graphical technique was applied using simple linear regression. For data quality control, the correlation coefficient of the inverse seismic energy rate versus time was used to increase the precision of the prediction. From the graph analysis, the difference between predicted and actual eruption times varies between -2.86 and 5.49 days.

  1. Seismic energy data analysis of Merapi volcano to test the eruption time prediction using materials failure forecast method (FFM)

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Anggraeni, Novia Antika, E-mail: novia.antika.a@gmail.com

    The test of eruption time prediction is an effort to prepare volcanic disaster mitigation, especially in a volcano's inhabited slope area, such as at Merapi Volcano. The test can be conducted by observing increases in volcanic activity, such as the degree of seismicity, deformation and SO2 gas emission. One method that can be used to predict the time of eruption is the Materials Failure Forecast Method (FFM), a predictive method for determining the time of a volcanic eruption introduced by Voight (1988). This method requires an increase in the rate of change, or acceleration, of the observed volcanic activity parameters. The parameter used in this study is the seismic energy value of Merapi Volcano from 1990-2012. The data were plotted as graphs of the inverse seismic energy rate versus time, and the FFM graphical technique was applied using simple linear regression. For data quality control, the correlation coefficient of the inverse seismic energy rate versus time was used to increase the precision of the prediction. From the graph analysis, the difference between predicted and actual eruption times varies between -2.86 and 5.49 days.
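
    The FFM graphical technique described in this record amounts to fitting a straight line to the inverse rate versus time and extrapolating it to its zero crossing, which is the predicted failure/eruption time. A minimal sketch on noise-free synthetic data (the rate model and constants are illustrative, not the Merapi data):

```python
import numpy as np

def ffm_failure_time(t, rate):
    """Linearized Materials Failure Forecast Method: fit a straight
    line to the inverse rate 1/R versus time; the predicted eruption
    time is where the fitted line crosses zero."""
    inv = 1.0 / np.asarray(rate, dtype=float)
    slope, intercept = np.polyfit(np.asarray(t, dtype=float), inv, 1)
    return -intercept / slope              # zero crossing of the fitted line

# Synthetic accelerating sequence: R(t) = A / (tf - t) with tf = 100 days,
# so 1/R decays linearly and reaches zero exactly at the eruption time.
tf, A = 100.0, 50.0
t = np.arange(0.0, 90.0, 5.0)
rate = A / (tf - t)
t_pred = ffm_failure_time(t, rate)
```

    On real seismic energy rates the inverse-rate data are noisy, which is why the study screens windows by the correlation coefficient of the linear fit before trusting the extrapolated time.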

  2. Seismic performance evaluation of RC frame-shear wall structures using nonlinear analysis methods

    NASA Astrophysics Data System (ADS)

    Shi, Jialiang; Wang, Qiuwei

    To further understand the seismic performance of reinforced concrete (RC) frame-shear wall structures, a 1/8-scale model was derived from a main factory structure with seven stories and seven bays. The four-story, two-bay model was pseudo-dynamically tested under six earthquake actions whose peak ground accelerations (PGA) varied from 50 gal to 400 gal. The damage process and failure patterns were investigated. Furthermore, nonlinear dynamic analysis (NDA) and the capacity spectrum method (CSM) were adopted to evaluate the seismic behavior of the model structure. The top displacement curves, story drift curves and distribution of hinges were obtained and discussed. It is shown that the model structure exhibited a beam-hinge failure mechanism. Both methods can be used to evaluate the seismic behavior of RC frame-shear wall structures well; moreover, CSM can to some extent substitute for NDA in the seismic performance evaluation of RC structures.

  3. Research on the spatial analysis method of seismic hazard for island

    NASA Astrophysics Data System (ADS)

    Jia, Jing; Jiang, Jitong; Zheng, Qiuhong; Gao, Huiying

    2017-05-01

    Seismic hazard analysis (SHA) is a key component of earthquake disaster prevention for island engineering: microscopically, its results provide parameters for seismic design, and macroscopically it is requisite work for the earthquake and comprehensive disaster prevention components of island conservation planning, in the exploitation and construction of both inhabited and uninhabited islands. The existing seismic hazard analysis methods are compared in terms of their application, and their applicability and limitations for islands are analysed. A specialized spatial analysis method of seismic hazard for islands (SAMSHI) is then given to support further earthquake disaster prevention planning, based on spatial analysis tools in GIS and a fuzzy comprehensive evaluation model. The basic spatial database of SAMSHI includes fault data, historical earthquake records, geological data and Bouguer gravity anomaly data, which are the data sources for the 11 indices of the fuzzy comprehensive evaluation model; these indices are calculated by the spatial analysis model constructed on ArcGIS's ModelBuilder platform.

  4. Parameter Prediction of Hydraulic Fracture for Tight Reservoir Based on Micro-Seismic and History Matching

    NASA Astrophysics Data System (ADS)

    Zhang, Kai; Ma, Xiaopeng; Li, Yanlai; Wu, Haiyang; Cui, Chenyu; Zhang, Xiaoming; Zhang, Hao; Yao, Jun

    Hydraulic fracturing is an important measure for the development of tight reservoirs. In order to describe the distribution of hydraulic fractures, micro-seismic monitoring was introduced into the petroleum field. Micro-seismic events can reveal important information about the static characteristics of hydraulic fracturing. However, this method only delineates the distribution area of the hydraulic fractures and fails to provide specific parameters. Therefore, in this paper micro-seismic technology is integrated with history matching to predict the hydraulic fracture parameters. Micro-seismic source locations are used to describe the basic shape of the hydraulic fractures. After that, secondary modeling is applied to calibrate the fracture parameters by using a discrete fracture model (DFM) and the history matching method. In consideration of the fractal features of hydraulic fractures, a fractal fracture network model is established to evaluate this method in a numerical experiment. The results clearly show the effectiveness of the proposed approach for estimating the parameters of hydraulic fractures.

  5. Microseismicity of Blawan hydrothermal complex, Bondowoso, East Java, Indonesia

    NASA Astrophysics Data System (ADS)

    Maryanto, S.

    2018-03-01

    The Peak Ground Acceleration (PGA), hypocentres, and epicentres of the Blawan hydrothermal complex have been analysed in order to investigate its seismicity. PGA was determined using the Fukushima-Tanaka attenuation method, and the microseismic source locations were estimated using the particle motion method. PGA ranged between 0.095 and 0.323 g and tends to be higher in formations containing uncompacted rocks. The seismic vulnerability index map indicated that zones with high PGA also have a high seismic vulnerability index, because the rocks making up these zones tend to be soft, low-density rocks. Epicentres and hypocentres of seismic sources around the area were estimated using the single-station particle motion method. The stations used in this study were mobile stations identified as BL01, BL02, BL03, BL05, BL06, BL07 and BL08. The particle motion analysis yielded 44 epicentre points, with source depths of about 15-110 meters below the ground surface.

  6. Final Project Report: Imaging Fault Zones Using a Novel Elastic Reverse-Time Migration Imaging Technique

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Huang, Lianjie; Chen, Ting; Tan, Sirui

    Imaging fault zones and fractures is crucial for geothermal operators, providing important information for reservoir evaluation and management strategies. However, no existing techniques directly and clearly image fault zones, particularly steeply dipping faults and fracture zones. In this project, we developed novel acoustic- and elastic-waveform inversion methods for high-resolution velocity model building. In addition, we developed acoustic and elastic reverse-time migration methods for high-resolution imaging of complex subsurface structures and steeply dipping fault/fracture zones. We first evaluated and verified the improved capabilities of our newly developed seismic inversion and migration imaging methods using synthetic seismic data. Our numerical tests verified that our new methods directly image subsurface fracture/fault zones using surface seismic reflection data. We then applied our novel seismic inversion and migration imaging methods to a field 3D surface seismic dataset acquired at the Soda Lake geothermal field using Vibroseis sources. Our migration images of the Soda Lake geothermal field, obtained using our seismic inversion and migration imaging algorithms, revealed several possible fault/fracture zones. AltaRock Energy, Inc. is working with Cyrq Energy, Inc. to refine the geologic interpretation at the Soda Lake geothermal field. Trenton Cladouhos, Senior Vice President R&D of AltaRock, was very interested in our imaging results from the 3D surface seismic data, and planned to perform detailed interpretation of our images in collaboration with James Faulds and Holly McLachlan of the University of Nevada, Reno. Our high-resolution seismic inversion and migration imaging results can help determine the optimal locations to drill wells for geothermal energy production and reduce the risk of geothermal exploration.

  7. Passive Seismic for Hydrocarbon Indicator: Between Expectation and Reality

    NASA Astrophysics Data System (ADS)

    Pandito, Riky H. B.

    2018-03-01

    Over the past 5-10 years, the passive seismic method has become more popular in our country for finding hydrocarbons. Low cost, nondestructive acquisition and easy mobilization are the main reasons for choosing the method. On the other hand, some people are pessimistic about its results. Instrument specification, data condition and processing methods are several factors that influence the character and interpretation of passive seismic results. In 2010, a prospect in the East Java Basin was surveyed, consisting of 112 objective points and several calibration points. The measurement results indicate a positive response. Furthermore, in 2013 exploration drilling was conducted on the prospect. A drill stem test showed 22 MMCFD in the objective zone (upper to late Oligocene). In 2015, a remeasurement was taken in the objective area and showed responses consistent with the previous measurement. Passive seismic is a unique method: it can give different results over dry, gas and oil areas, in producing fields, and in temporarily suspended areas with hydrocarbon content.

  8. INVESTIGATING THE EFFECT OF MICROBIAL GROWTH AND BIOFILM FORMATION ON SEISMIC WAVE PROPAGATION IN SEDIMENT

    EPA Science Inventory

    Previous laboratory investigations have demonstrated that the seismic methods are sensitive to microbially-induced changes in porous media through the generation of biogenic gases and biomineralization. The seismic signatures associated with microbial growth and biofilm formation...

  9. Simultaneous multi-component seismic denoising and reconstruction via K-SVD

    NASA Astrophysics Data System (ADS)

    Hou, Sian; Zhang, Feng; Li, Xiangyang; Zhao, Qiang; Dai, Hengchang

    2018-06-01

    Data denoising and reconstruction play an increasingly significant role in seismic prospecting for their value in enhancing effective signals, dealing with surface obstacles and reducing acquisition costs. In this paper, we propose a novel method to denoise and reconstruct multi-component seismic data simultaneously. The method lies within the framework of machine learning, and its key points are the definition of a suitable weight function and a modified inner product operator. The purpose of these two elements is to make the learning robust to missing data when the random noise deviation is unknown, and to build a mathematical relationship between the components so that all the information in the multi-component data is incorporated. Two examples, using synthetic and real multi-component data, demonstrate that the new method is a feasible alternative for multi-component seismic data processing.

  10. Continuous Seismic Threshold Monitoring

    DTIC Science & Technology

    1992-05-31

    Continuous threshold monitoring is a technique for using a seismic network to monitor a geographical area continuously in time. The method provides...area. Two approaches are presented. Site-specific monitoring: By focusing a seismic network on a specific target site, continuous threshold monitoring...recorded events at the site. We define the threshold trace for the network as the continuous time trace of computed upper magnitude limits of seismic
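
    The site-specific idea, a continuous upper magnitude limit computed from station noise levels, can be sketched as follows. The attenuation term is a generic Ms-style relation and the numbers are illustrative, not the report's calibrated values:

```python
import numpy as np

def threshold_trace(noise, dist_deg, snr_min=3.0):
    """Continuous upper magnitude limit at a target site. Per station, the
    magnitude whose predicted amplitude equals snr_min * noise (generic
    Ms-style attenuation M = log10(A) + 1.66*log10(dist) + 3.3, purely
    illustrative); the network threshold is the minimum over stations,
    since a larger event would have been detected at that station."""
    per_station = (np.log10(snr_min * noise)
                   + 1.66 * np.log10(dist_deg)[:, None] + 3.3)
    return per_station.min(axis=0)

# Hypothetical 2-station network: noise envelopes (stations x time samples)
# and epicentral distances (degrees) to the monitored site.
noise = np.array([[1.0, 1.0, 4.0],
                  [2.0, 2.0, 2.0]])
dist_deg = np.array([10.0, 20.0])
mlim = threshold_trace(noise, dist_deg)
```

    When noise rises at the station constraining the threshold (third time sample above), the computed upper magnitude limit rises with it, which is exactly the behaviour a threshold trace is meant to expose.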

  11. Seismic reflection response from cross-correlations of ambient vibrations on non-conventional hydrocarbon reservoir

    NASA Astrophysics Data System (ADS)

    Huerta, F. V.; Granados, I.; Aguirre, J.; Carrera, R. Á.

    2017-12-01

    Nowadays, the hydrocarbon industry needs to optimize and reduce exploration costs in the different types of reservoirs, motivating the specialized community to search for and develop alternative geophysical exploration methods. This study shows the reflection response obtained from a shale gas/oil deposit through the method of seismic interferometry of ambient vibrations, in combination with wavelet analysis and conventional seismic reflection techniques (CMP and NMO). The method generates seismic responses from virtual sources through cross-correlation of records of Ambient Seismic Vibrations (ASV) collected at different receivers. The seismic response obtained is interpreted as the response that would be measured at one of the receivers given a virtual source at the other. The ASV records were acquired in northern Mexico using semi-rectangular arrays of multi-component geophones with a 10 Hz instrumental response. The in-line distance between geophones was 40 m and the cross-line distance was 280 m; the sampling interval was 2 ms and the total duration of the records was 6 hours. The results show the reflection response of two lines in the in-line direction and two in the cross-line direction, for which the continuity of coherent events has been identified and interpreted as reflectors. There is certainty that the identified events correspond to reflections, because time-frequency analysis performed with the wavelet transform allowed us to identify the frequency band in which body waves are present. On the other hand, the CMP and NMO techniques allowed us to emphasize and correct the reflection response obtained from the correlation process in the frequency band of interest.
    The processing and analysis of ASV records through the seismic interferometry method, in combination with wavelet analysis and conventional seismic reflection techniques, yielded interesting results: it was possible to recover the seismic response of each analyzed source-receiver pair, allowing us to obtain the reflection response of each analyzed seismic line.
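
    The core cross-correlation step can be sketched on a toy 1-D example: two receivers record the same ambient wavefield with a relative delay, and the correlation peak recovers that delay, i.e. the travel time of the virtual-source response. The setup and numbers are illustrative:

```python
import numpy as np

rng = np.random.default_rng(0)

# One ambient wavefield recorded at two receivers; receiver B records each
# sample `lag` samples later than receiver A (hypothetical 1-D setup).
n, lag = 4096, 25
noise = rng.standard_normal(n + lag)
rec_a = noise[lag:]          # receiver A
rec_b = noise[:n]            # receiver B, delayed copy of A

# Cross-correlation of the two records: the peak lag approximates the
# inter-receiver travel time, i.e. the delay of the virtual-source response.
xcorr = np.correlate(rec_b, rec_a, mode="full")
best_lag = int(np.argmax(xcorr)) - (n - 1)
```

    In a real survey this correlation is stacked over hours of noise and many receiver pairs, and the resulting virtual gathers are then processed with the usual CMP/NMO flow.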

  12. Visualization of volumetric seismic data

    NASA Astrophysics Data System (ADS)

    Spickermann, Dela; Böttinger, Michael; Ashfaq Ahmed, Khawar; Gajewski, Dirk

    2015-04-01

    Mostly driven by demands of high-quality subsurface imaging, highly specialized tools and methods have been developed to support the processing, visualization and interpretation of seismic data. 3D seismic data acquisition and 4D time-lapse seismic monitoring are well-established techniques in academia and industry, producing large amounts of data to be processed, visualized and interpreted. In this context, interactive 3D visualization methods have proved valuable for the analysis of 3D seismic data cubes, especially for sedimentary environments with continuous horizons. In crystalline and hard-rock environments, where hydraulic stimulation techniques may be applied to produce geothermal energy, interpretation of the seismic data is a more challenging problem. Instead of continuous reflection horizons, the imaging targets are often steeply dipping faults, causing many diffractions. Without further preprocessing, these geological structures are often hidden behind the noise in the data. In this PICO presentation we present a workflow consisting of data processing steps that enhance the signal-to-noise ratio, followed by a visualization step based on the use of the commercially available general-purpose 3D visualization system Avizo. Specifically, we have used Avizo Earth, an extension to Avizo, which supports the import of seismic data in SEG-Y format and offers easy access to state-of-the-art 3D visualization methods at interactive frame rates, even for large seismic data cubes. In seismic interpretation using visualization, interactivity is a key requirement for understanding complex 3D structures. To enable easy communication of the insights gained during the interactive visualization process, animations of the visualized data were created to support the spatial understanding of the data.

  13. Continuous micro-earthquake catalogue of the central Southern Alps, New Zealand

    NASA Astrophysics Data System (ADS)

    Michailos, Konstantinos; Townend, John; Savage, Martha; Chamberlain, Calum

    2017-04-01

    The Alpine Fault is one of the most prominent tectonic features of the South Island, New Zealand, and is inferred from paleoseismological evidence to be late in its seismic cycle of M 8 earthquakes. Despite this, the Alpine Fault displays low levels of contemporary seismic activity, with little documented on-fault seismicity. This low-magnitude seismicity, often below the completeness level of the GeoNet national seismic catalogue, may inform us of changes in fault character along strike and might be used for rupture simulations and hazard planning. Thus, compiling a micro-earthquake catalogue for the Southern Alps prior to an expected major earthquake is of great interest. Areas of low seismic activity, like the central part of the Alpine Fault, require data recorded over a long duration to reveal temporal and spatial seismicity patterns and provide a better understanding of the processes controlling seismogenesis. The continuity and density of the Southern Alps Microearthquake Borehole Array (SAMBA; deployed in late 2008) allow us to study seismicity in the Southern Alps over a more extended period than ever before. Furthermore, by using data from other temporary networks (e.g. WIZARD, ALFA08, DFDP-10) we are able to extend the region covered. To generate a spatially and temporally continuous catalogue of seismicity in New Zealand's central Southern Alps, we used automatic detection and phase-picking methods, including an automatic phase picker for both P- and S-wave arrivals (kPick; Rawles and Thurber, 2015). Using almost 8 years of seismic data, we calculated about 9,000 preliminary earthquake locations. The seismicity is both clustered and scattered, and a previously observed seismic gap between the Wanganui and Whataroa rivers is also identified.

  14. Microseismic monitoring of soft-rock landslide: contribution of a 3D velocity model for the location of seismic sources.

    NASA Astrophysics Data System (ADS)

    Floriane, Provost; Jean-Philippe, Malet; Cécile, Doubre; Julien, Gance; Alessia, Maggi; Agnès, Helmstetter

    2015-04-01

    Characterizing the micro-seismic activity of landslides is important for a better understanding of the physical processes controlling landslide behaviour. However, locating seismic sources on landslides is a challenging task, mostly because of (a) the geometry of the recording system, (b) the lack of clear P-wave arrivals and clear wave differentiation, and (c) the heterogeneous velocities of the ground. The objective of this work is therefore to test whether integrating a 3D velocity model into probabilistic seismic source location codes improves the quality of the determination, especially in depth. We studied the clay-rich landslide of Super-Sauze (French Alps). Most of the seismic events (rockfalls, slidequakes, tremors...) are generated in the upper part of the landslide near the main scarp. The seismic recording system is composed of two antennas, each with four vertical seismometers, located on the east and west sides of the seismically active part of the landslide. A seismic refraction campaign was conducted in August 2014, and a 3D P-wave model was estimated using a Quasi-Newton tomography inversion algorithm. The shots of the seismic campaign are used as calibration shots to test the performance of the different location methods and to further update the 3D velocity model. Natural seismic events are detected with a semi-automatic technique using a frequency threshold. The first arrivals are picked using a kurtosis-based method and compared to manual picks. Several location methods were then tested: we compared a non-linear probabilistic method coupled with the 3D P-wave model against a beam-forming method inverted for an apparent velocity. We found that the Quasi-Newton tomography inversion algorithm provides results coherent with the underlying topography. The velocity ranges from 500 m/s at the surface to 3000 m/s in the bedrock.
    For the majority of the calibration shots, the use of a 3D velocity model significantly improves the results of the location procedure using P-wave arrivals. All the shots were fired 50 centimeters below the surface, so the vertical error could not be determined from the seismic campaign. We further discriminate between the rockfalls and the slidequakes occurring on the landslide using the depths computed with the 3D velocity model. This could be an additional criterion to automatically classify the events.
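
    A minimal version of a kurtosis-based first-arrival picker of the kind mentioned above (the window length and synthetic trace are illustrative; this is not the authors' implementation):

```python
import numpy as np

def kurtosis_pick(trace, win):
    """Running-kurtosis characteristic function: an impulsive onset makes
    the amplitude distribution heavy-tailed, so the kurtosis computed in a
    sliding window jumps at the arrival. The pick is the sample where the
    kurtosis increase is steepest."""
    trace = np.asarray(trace, dtype=float)
    k = np.full(trace.size, np.nan)
    for i in range(win, trace.size):
        w = trace[i - win:i]
        m, s = w.mean(), w.std()
        k[i] = ((w - m) ** 4).mean() / (s ** 4 + 1e-12)
    return int(np.nanargmax(np.diff(k)))

# Synthetic trace: low-level noise with an impulsive arrival at sample 600
rng = np.random.default_rng(1)
trace = 0.1 * rng.standard_normal(1000)
trace[600:650] += 2.0 * np.sin(np.linspace(0.0, 6.0 * np.pi, 50))
pick = kurtosis_pick(trace, win=100)
```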

  15. Signal Quality and the Reliability of Seismic Observations

    NASA Astrophysics Data System (ADS)

    Zeiler, C. P.; Velasco, A. A.; Pingitore, N. E.

    2009-12-01

    The ability to detect, time and measure seismic phases depends on the location, size, and quality of the recorded signals. Additional constraints are an analyst's familiarity with a seismogenic zone and with the seismic stations that record the energy. Quantification and qualification of an analyst's ability to detect, time and measure seismic signals have not been calculated or fully assessed. The fundamental measurement for computing the accuracy of a seismic measurement is the signal quality. Several methods have been proposed to measure signal quality; the one generally adopted is the signal-to-noise ratio (SNR), a short-term average over a long-term average. While the standard SNR is an easy and computationally inexpensive measure, its overall statistical significance has not been computed for seismic measurement analysis. The prospect of canonizing the process of cataloging seismic arrivals hinges on the ability to repeat measurements made by different methods and analysts. The first step in canonizing phase measurements has been taken by the IASPEI, which established a reference for accepted practices in naming seismic phases. The New Manual of Seismological Observatory Practice (NMSOP, 2002) outlines key observations for seismic phases recorded at different distances and proposes to quantify timing uncertainty with a user-specified windowing technique. However, this added measurement would not completely remove the bias introduced by the different techniques analysts use to time seismic arrivals. The general guideline for timing a seismic arrival is to record the time at which a noted change in frequency and/or amplitude begins. This is generally achieved by enhancing the arrivals through filtering or beam forming. However, these enhancements can alter the characteristics of the arrival and how it will be measured.
    Furthermore, each enhancement has user-specified parameters that can vary between analysts, which reduces the ability to repeat measurements between analysts. The SPEAR project (Zeiler and Velasco, 2009) has started to explore the effects of comparing measurements from the same seismograms. Initial results showed that experience and signal quality are the leading contributors to pick differences. However, the traditional SNR measure of signal quality was replaced by a Wide-band Spectral Ratio (WSR) due to a decrease in scatter. This observation raises an important question: what is the best way to measure signal quality? We compare various methods that have been proposed to measure signal quality (traditional SNR, WSR, power spectral density plots, Allan variance) and discuss which method provides the best tool for comparing arrival time uncertainty.
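
    The standard short-term-over-long-term SNR the abstract refers to can be computed in a few lines; the window lengths and the synthetic arrival below are illustrative:

```python
import numpy as np

def snr(trace, onset, sta=50, lta=500):
    """Classic signal-quality measure: short-term average power after the
    pick divided by long-term average power before it."""
    trace = np.asarray(trace, dtype=float)
    sta_val = np.mean(trace[onset:onset + sta] ** 2)
    lta_val = np.mean(trace[onset - lta:onset] ** 2)
    return sta_val / lta_val

# Synthetic record: background noise with a 10x-amplitude arrival at 1000
rng = np.random.default_rng(2)
trace = 0.1 * rng.standard_normal(2000)
trace[1000:1200] += rng.standard_normal(200)
quality = snr(trace, onset=1000)
```

    The sensitivity of this number to the `sta`/`lta` choices is precisely the repeatability problem the abstract raises: two analysts using different windows will report different qualities for the same pick.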

  16. Virtual Seismic Observation (VSO) with Sparsity-Promotion Inversion

    NASA Astrophysics Data System (ADS)

    Tiezhao, B.; Ning, J.; Jianwei, M.

    2017-12-01

    A large station interval leads to low-resolution images and sometimes prevents people from obtaining images of the regions of concern. Sparsity-promotion inversion, a useful method to recover missing data in industrial field acquisition, can be borrowed to interpolate seismic data at non-sampled sites, forming Virtual Seismic Observations (VSO). Traditional sparsity-promotion inversion suffers when arrival times differ greatly between adjacent sites, which is the case we care about most, and we use a shift method to improve it. The interpolation procedure is as follows: we first employ a low-pass filter to obtain long-wavelength waveform data and shift the waveforms of the same wave in different seismograms to nearly the same arrival time. Then we use wavelet-transform-based sparsity-promotion inversion to interpolate waveform data at non-sampled sites, filling a phase into each missing trace. Finally, we shift the waveforms back to their original arrival times. We call our method FSIS (Filtering, Shift, Interpolation, Shift) interpolation. In this way, we can insert different virtually observed seismic phases into non-sampled sites and obtain dense seismic observation data. To test our method, we randomly hide the real data at a site and use the rest to interpolate the observation at that site, using either direct interpolation or the FSIS method. Compared with directly interpolated data, data interpolated with FSIS preserve amplitudes better. The results also show that the arrival times and waveforms of the VSOs match the real data well, which convinces us that our method of forming VSOs is applicable. In this way, we can provide the data needed for advanced seismic techniques such as RTM to illuminate shallow structures.
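
    The shift-interpolate-shift idea can be illustrated on a toy trio of sites where the arrival time moves strongly across the array; plain averaging of the aligned neighbours stands in for the wavelet-domain sparsity-promotion step, and all names and numbers are invented for illustration:

```python
import numpy as np

nt = 256
t = np.arange(nt)

def wavelet(t0):
    """Gaussian pulse arriving at time t0 (stand-in for a seismic phase)."""
    return np.exp(-0.5 * ((t - t0) / 4.0) ** 2)

# Three adjacent sites; the middle one is unobserved. The arrival time
# moves strongly across the array, which defeats naive trace averaging.
arrivals = [80, 110, 140]
rec0, rec2 = wavelet(arrivals[0]), wavelet(arrivals[2])

# Naive interpolation: average the neighbours without alignment.
naive = 0.5 * (rec0 + rec2)

# FSIS-style: shift both neighbours to a common arrival time, interpolate
# (plain averaging stands in for the wavelet-domain sparse inversion),
# then shift back to the arrival time expected at the middle site.
aligned = 0.5 * (np.roll(rec0, -arrivals[0]) + np.roll(rec2, -arrivals[2]))
t_mid = (arrivals[0] + arrivals[2]) // 2
virtual = np.roll(aligned, t_mid)

truth = wavelet(t_mid)
err_naive = float(np.linalg.norm(naive - truth))
err_fsis = float(np.linalg.norm(virtual - truth))
```

    Without alignment the naive average smears the phase into two half-amplitude arrivals; aligning first and shifting back reproduces the missing trace, which is the amplitude-preservation effect the abstract reports.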

  17. Extracting physical parameters from marine seismic data: New methods in seismic oceanography and velocity inversion

    NASA Astrophysics Data System (ADS)

    Fortin, Will F. J.

    The utility and meaning of a geophysical dataset is dependent on good interpretation informed by high-quality data, processing, and attribute examination via technical methodologies. Active source marine seismic reflection data contains a great deal of information in the location, phase, and amplitude of both pre- and post-stack seismic reflections. Using pre- and post-stack data, this work has extracted useful information from marine reflection seismic data in novel ways in both the oceanic water column and the sub-seafloor geology. In chapter 1 we develop a new method for estimating oceanic turbulence from a seismic image. This method is tested on synthetic seismic data to show the method's ability to accurately recover both distribution and levels of turbulent diffusivity. Then we apply the method to real data offshore Costa Rica where we observe lee waves. Our results find elevated diffusivities near the seafloor as well as above the lee waves five times greater than surrounding waters and 50 times greater than open ocean diffusivities. Chapter 2 investigates subsurface geology in the Cascadia Subduction Zone and outlines a workflow for using pre-stack waveform inversion to produce highly detailed velocity models and seismic images. Using a newly developed inversion code, we achieve better imaging results as compared to the product of a standard, user-intensive method for building a velocity model. Our results image the subduction interface ~30 km farther landward than previous work and better images faults and sedimentary structures above the oceanic plate as well as in the accretionary prism. The resultant velocity model is highly detailed, inverted every 6.25 m with ~20 m vertical resolution, and will be used to examine the role of fluids in the subduction system. These results help us to better understand the natural hazards risks associated with the Cascadia Subduction Zone. 
Chapter 3 returns to seismic oceanography and examines the dynamics of nonlinear internal wave pulses in the South China Sea. Coupling observations from the seismic images with turbulent patterns, we find no evidence for hydraulic jumps in the Luzon passage. Our data suggests geometric resonance may be the underlying physics behind large amplitude nonlinear internal wave pulses seen in the region. We find increased levels of turbulent diffusivity in deep water below 1000 m, associated with internal tide pulses, and near the steep slopes of both the Heng-Chun and Lan-Yu ridges.

  18. Automated classification of seismic sources in a large database: a comparison of Random Forests and Deep Neural Networks.

    NASA Astrophysics Data System (ADS)

    Hibert, Clement; Stumpf, André; Provost, Floriane; Malet, Jean-Philippe

    2017-04-01

    In the past decades, the increasing quality of seismic sensors and the capability to transfer large quantities of data remotely have led to a fast densification of local, regional and global seismic networks for near-real-time monitoring of crustal and surface processes. This technological advance permits the use of seismology to document geological and natural/anthropogenic processes (volcanoes, ice calving, landslides, snow and rock avalanches, geothermal fields), but has also led to an ever-growing quantity of seismic data. This wealth of data makes the construction of complete seismicity catalogs, which include earthquakes but also other sources of seismic waves, more challenging and very time-consuming, as this critical pre-processing stage is classically done by human operators and hundreds of thousands of seismic signals have to be processed. To overcome this issue, the development of automatic methods for processing continuous seismic data appears to be a necessity. The classification algorithm should be robust, precise and versatile enough to be deployed for monitoring seismicity in very different contexts. In this study, we evaluate the ability of two machine learning algorithms, Random Forest and Deep Neural Network classifiers, to analyse the seismic sources at Piton de la Fournaise volcano. We gather a catalog of more than 20,000 events belonging to 8 classes of seismic sources. We define 60 attributes, based on the waveform, the frequency content and the polarization of the seismic waves, to parameterize the recorded seismic signals. We show that both algorithms provide similar positive classification rates, exceeding 90% of the events; when trained with a sufficient number of events, the rate of positive identification can reach 99%.
    These very high rates of positive identification open up the prospect of an operational implementation of these algorithms for near-real-time monitoring of mass movements and other environmental sources at local, regional and even global scales.
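
    A hedged sketch of the Random Forest part of such a workflow, with synthetic stand-ins for the 60 waveform attributes (scikit-learn assumed; the features and the two classes are invented for illustration):

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(3)

# Invented stand-ins for waveform/frequency/polarization attributes:
# two event classes separable in a 10-dimensional feature space.
n_per_class = 200
X = np.vstack([rng.normal(0.0, 1.0, size=(n_per_class, 10)),
               rng.normal(1.5, 1.0, size=(n_per_class, 10))])
y = np.repeat([0, 1], n_per_class)

# Random train/test split, then fit and score the classifier.
idx = rng.permutation(len(y))
train, test = idx[:300], idx[300:]

clf = RandomForestClassifier(n_estimators=100, random_state=0)
clf.fit(X[train], y[train])
accuracy = clf.score(X[test], y[test])
```

    In the real setting X would hold the 60 computed attributes per detected event and y the 8 source classes; the fit/score pattern is unchanged.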

  19. Estimation of bedrock depth using the horizontal‐to‐vertical (H/V) ambient‐noise seismic method

    USGS Publications Warehouse

    Lane, John W.; White, Eric A.; Steele, Gregory V.; Cannia, James C.

    2008-01-01

    Estimating sediment thickness and the geometry of the bedrock surface is a key component of many hydrogeologic studies. The horizontal‐to‐vertical (H/V) ambient‐noise seismic method is a novel, non‐invasive technique that can be used to rapidly estimate the depth to bedrock. The H/V method uses a single, broad‐band three‐component seismometer to record ambient seismic noise. The ratio of the averaged horizontal‐to‐vertical frequency spectrum is used to determine the fundamental site resonance frequency, which can be interpreted using regression equations to estimate sediment thickness and depth to bedrock. The U.S. Geological Survey used the H/V seismic method during fall 2007 at 11 sites in Cape Cod, Massachusetts, and 13 sites in eastern Nebraska. In Cape Cod, H/V measurements were acquired along a 60‐kilometer (km) transect between Chatham and Provincetown, where glacial sediments overlie metamorphic rock. In Nebraska, H/V measurements were acquired along approximately 11‐ and 14‐km transects near Firth and Oakland, respectively, where glacial sediments overlie weathered sedimentary rock. The ambient‐noise seismic data from Cape Cod produced clear, easily identified resonance frequency peaks. The interpreted depth and geometry of the bedrock surface correlate well with boring data and previously published seismic refraction surveys. Conversely, the ambient‐noise seismic data from eastern Nebraska produced subtle resonance frequency peaks, and correlation of the interpreted bedrock surface with bedrock depths from borings is poor, which may indicate a low acoustic impedance contrast between the weathered sedimentary rock and overlying sediments and/or the effect of wind noise on the seismic records. Our results indicate the H/V ambient‐noise seismic method can be used effectively to estimate the depth to rock where there is a significant acoustic impedance contrast between the sediments and underlying rock. 
However, effective use of the method is challenging in the presence of gradational contacts such as gradational weathering or cementation. Further work is needed to optimize interpretation of resonance frequencies in the presence of extreme wind noise. In addition, local estimates of bedrock depth likely could be improved through development of regional or study‐area‐specific regression equations relating resonance frequency to bedrock depth.
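
    The core H/V computation, averaged amplitude spectra, their ratio, and a resonance peak, can be sketched as follows. The quarter-wavelength depth conversion at the end is one common relation with an assumed shear velocity, not the site-calibrated regression equations the study uses:

```python
import numpy as np

def avg_amp_spectrum(x, nseg, nfft):
    """Amplitude spectrum averaged over nseg non-overlapping segments."""
    segs = np.asarray(x, dtype=float)[:nseg * nfft].reshape(nseg, nfft)
    return np.abs(np.fft.rfft(segs, axis=1)).mean(axis=0)

def hv_f0(h1, h2, v, fs, nseg=8, nfft=2000):
    """Fundamental site resonance: peak of the averaged horizontal
    spectrum divided by the averaged vertical spectrum."""
    H = 0.5 * (avg_amp_spectrum(h1, nseg, nfft) +
               avg_amp_spectrum(h2, nseg, nfft))
    V = avg_amp_spectrum(v, nseg, nfft)
    freqs = np.fft.rfftfreq(nfft, d=1.0 / fs)
    band = (freqs >= 0.5) & (freqs <= 20.0)
    return freqs[band][np.argmax(H[band] / V[band])]

# Synthetic ambient noise: horizontals resonate at 2 Hz, vertical broadband
fs, n = 100.0, 16000
rng = np.random.default_rng(4)
tt = np.arange(n) / fs
v = rng.standard_normal(n)
h = 0.2 * rng.standard_normal(n) + np.sin(2.0 * np.pi * 2.0 * tt)
f0 = hv_f0(h, h, v, fs)

# Quarter-wavelength thickness estimate z = Vs / (4 f0), with an assumed
# Vs = 400 m/s (illustrative; field studies calibrate depth against f0).
depth = 400.0 / (4.0 * f0)
```

    The weak, hard-to-pick peaks reported for eastern Nebraska correspond to the ratio H/V barely rising above its background, which is why a low impedance contrast defeats the method.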

  20. Near‐surface evaluation of Ball Mountain Dam, Vermont, using multi‐channel analysis of surface waves (MASW) and refraction tomography seismic methods on land‐streamer data

    USGS Publications Warehouse

    Ivanov, Julian M.; Johnson, Carole D.; Lane, John W.; Miller, Richard D.; Clemens, Drew

    2009-01-01

    A limited seismic investigation of Ball Mountain Dam, an earthen dam near Jamaica, Vermont, was conducted using multiple seismic methods including multi‐channel analysis of surface waves (MASW), refraction tomography, and vertical seismic profiling (VSP). The refraction and MASW data were efficiently collected in one survey using a towed land streamer containing vertical‐displacement geophones and two seismic sources, a 9‐kg hammer at the beginning of the spread and a 40‐kg accelerated weight drop one spread length from the geophones, to obtain near‐ and far‐offset data sets. The quality of the seismic data for the purposes of both refraction and MASW analyses was good for near offsets, decreasing in quality at farther offsets, thus limiting the depth of investigation to about 12 m. Refraction tomography and MASW analyses provided 2D compressional (Vp) and shear‐wave (Vs) velocity sections along the dam crest and access road, which are consistent with the corresponding VSP seismic velocity estimates from nearby wells. The velocity sections helped identify zonal variations in both Vp and Vs (rigidity) properties, indicative of material heterogeneity or dynamic processes (e.g. differential settlement) at specific areas of the dam. The results indicate that refraction tomography and MASW methods are tools with significant potential for economical, non‐invasive characterization of construction materials at earthen dam sites.
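The dispersion measurement underlying MASW can be illustrated in a minimal two-station form: the cross-spectral phase between two receivers gives the surface-wave phase velocity at a chosen frequency. This is a sketch of the principle only; actual MASW processing stacks many traces and assumes here that the phase difference stays within one cycle.

```python
import numpy as np

def two_station_phase_velocity(tr_near, tr_far, dx, fs, f_target):
    """Phase velocity at one frequency from the cross-spectral phase
    between two receivers separated by dx. Assumes the unwrapped phase
    lag is less than one cycle (dphi in (-pi, 0))."""
    n = len(tr_near)
    freqs = np.fft.rfftfreq(n, d=1.0 / fs)
    i = np.argmin(np.abs(freqs - f_target))
    cross = np.fft.rfft(tr_far)[i] * np.conj(np.fft.rfft(tr_near)[i])
    dphi = np.angle(cross)                 # phase lag accumulated over dx
    return 2.0 * np.pi * freqs[i] * dx / -dphi
```

Repeating this over frequency yields a dispersion curve, which an MASW inversion would then convert to a shear-wave velocity profile.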

  1. Method for using global optimization to the estimation of surface-consistent residual statics

    DOEpatents

    Reister, David B.; Barhen, Jacob; Oblow, Edward M.

    2001-01-01

    An efficient method for generating residual statics corrections to compensate for surface-consistent static time shifts in stacked seismic traces. The method includes a step of framing the residual static corrections as a global optimization problem in a parameter space. The method also includes decoupling the global optimization problem involving all seismic traces into several one-dimensional problems. The method further utilizes a Stochastic Pijavskij Tunneling search to eliminate regions in the parameter space where a global minimum is unlikely to exist so that the global minimum may be quickly discovered. The method finds the residual statics corrections by maximizing the total stack power. The stack power is a measure of seismic energy transferred from energy sources to receivers.
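The stack-power objective and the decoupling into one-dimensional sub-problems can be sketched as follows. The brute-force line search below is a stand-in for the Stochastic Pijavskij Tunneling search of the patent, and the circular shift is a simplification for brevity.

```python
import numpy as np

def stack_power(traces, shifts):
    """Total power of the stack after applying per-trace static shifts
    (in samples). Maximizing this over the shifts is the objective used
    for surface-consistent residual statics."""
    stack = np.zeros(traces.shape[1])
    for trace, s in zip(traces, shifts):
        stack += np.roll(trace, -int(s))
    return float(np.sum(stack ** 2))

def best_shift_1d(traces, trace_index, shifts, search):
    """Decoupled one-dimensional sub-problem: optimize a single trace's
    static while holding the others fixed."""
    def power_with(s):
        trial = list(shifts)
        trial[trace_index] = s
        return stack_power(traces, trial)
    return max(search, key=power_with)
```

Cycling `best_shift_1d` over all traces illustrates how the global problem decomposes into fast one-dimensional searches.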

  2. Reflection imaging of the Moon's interior using deep-moonquake seismic interferometry

    NASA Astrophysics Data System (ADS)

    Nishitsuji, Yohei; Rowe, C. A.; Wapenaar, Kees; Draganov, Deyan

    2016-04-01

    The internal structure of the Moon has been investigated over many years using a variety of seismic methods, such as travel time analysis, receiver functions, and tomography. Here we propose to apply body-wave seismic interferometry to deep moonquakes in order to retrieve zero-offset reflection responses (and thus images) beneath the Apollo stations on the nearside of the Moon from virtual sources colocated with the stations. This method is called deep-moonquake seismic interferometry (DMSI). Our results show a laterally coherent acoustic boundary around 50 km depth beneath all four Apollo stations. We interpret this boundary as the lunar seismic Moho. This depth agrees with Japan Aerospace Exploration Agency's (JAXA) SELenological and Engineering Explorer (SELENE) result and previous travel time analysis at the Apollo 12/14 sites. The deeper part of the image we obtain from DMSI shows laterally incoherent structures. Such lateral inhomogeneity we interpret as representing a zone characterized by strong scattering and constant apparent seismic velocity at our resolution scale (0.2-2.0 Hz).
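In its simplest 1-D acoustic form, the interferometric step amounts to autocorrelating the transmitted wavefield to obtain the zero-offset reflection response at the station, i.e. a virtual source colocated with the receiver. This is a minimal sketch of the principle, not the full DMSI processing.

```python
import numpy as np

def zero_offset_response(trace):
    """Causal autocorrelation of a transmission recording, normalized to
    the zero lag. In 1-D acoustic interferometry this approximates the
    zero-offset reflection response beneath the receiver."""
    n = len(trace)
    ac = np.correlate(trace, trace, mode="full")[n - 1:]
    return ac / ac[0]
```

A reflector shows up as a peak at its two-way traveltime, which is how the boundary near 50 km depth would appear beneath each Apollo station.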

  3. Seismic, satellite, and site observations of internal solitary waves in the NE South China Sea

    PubMed Central

    Tang, Qunshu; Wang, Caixia; Wang, Dongxiao; Pawlowicz, Rich

    2014-01-01

    Internal solitary waves (ISWs) in the NE South China Sea (SCS) are tidally generated at the Luzon Strait. Their propagation, evolution, and dissipation processes involve numerous issues still poorly understood. Here, a novel method of seismic oceanography capable of capturing oceanic finescale structures is used to study ISWs in the slope region of the NE SCS. Near-simultaneous observations of two ISWs were acquired using seismic and satellite imaging, and water column measurements. The vertical and horizontal length scales of the seismically observed ISWs are around 50 m and 1–2 km, respectively. Wave phase speeds calculated from seismic observations, satellite images, and water column data are consistent with each other. Observed waveforms and vertical velocities also correspond well with those estimated using KdV theory. These results suggest that the seismic method, a new option for oceanographers, can be further applied to resolve other important issues related to ISWs. PMID:24948180
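The KdV phase-speed estimate referred to above can be sketched for a two-layer ocean: the linear long-wave speed plus the standard amplitude correction. The coefficients follow textbook weakly nonlinear theory, and the example values in the test are hypothetical, not the study's measurements.

```python
import math

def kdv_isw_speed(h1, h2, drho_over_rho, eta0, g=9.81):
    """Two-layer KdV estimate of internal solitary wave phase speed:
    c = c0 + alpha*eta0/3, with c0 the linear long-wave speed and
    alpha the standard nonlinear coefficient. eta0 < 0 for a wave of
    depression (thin upper layer)."""
    c0 = math.sqrt(g * drho_over_rho * h1 * h2 / (h1 + h2))
    alpha = 1.5 * c0 * (h1 - h2) / (h1 * h2)
    return c0 + alpha * eta0 / 3.0
```

For a depression wave in a thin-over-thick stratification the correction is positive, so larger-amplitude ISWs travel faster than the linear speed, consistent with the comparisons made in the study.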

  4. Fluid-structure interaction dynamic simulation of spring-loaded pressure relief valves under seismic wave

    NASA Astrophysics Data System (ADS)

    Lv, Dongwei; Zhang, Jian; Yu, Xinhai

    2018-05-01

    In this paper, a fluid-structure interaction dynamic simulation method for spring-loaded pressure relief valves was established. The dynamic performance of the fluid regions and the stress and strain of the structural regions were calculated simultaneously by accurately setting up the contact pairs between the solid parts and the coupling surfaces between the fluid and structural regions. A two-way fluid-structure interaction dynamic simulation of a simplified pressure relief valve model was carried out. The influence of vertical sinusoidal seismic waves on the performance of the pressure relief valve was preliminarily investigated by loading sine waves. Under vertical seismic waves, the pressure relief valve fluttered, and the reseating pressure was affected by the amplitude and frequency of the seismic waves. This simulation method provides an effective means of investigating the seismic performance of such valves and compensates for the limitations of experimental testing.

  5. Seismic Hazard Assessment for a Characteristic Earthquake Scenario: Probabilistic-Deterministic Method

    NASA Astrophysics Data System (ADS)

    mouloud, Hamidatou

    2016-04-01

    The objective of this paper is to analyze the seismic activity and provide a statistical treatment of the seismicity catalogue of the Constantine region between 1357 and 2014, which contains 7007 seismic events. Our research contributes to improved seismic risk management by evaluating the seismic hazard in north-east Algeria. In the present study, earthquake hazard maps for the Constantine region are calculated. Probabilistic seismic hazard analysis (PSHA) is classically performed through the Cornell approach, using a uniform earthquake distribution over the source area and a given magnitude range. This study aims at extending the PSHA approach to the case of a characteristic earthquake scenario associated with an active fault. The approach integrates PSHA with a high-frequency deterministic technique for the prediction of peak and spectral ground motion parameters for a characteristic earthquake. The method is based on the site-dependent evaluation of the probability of exceedance for the chosen strong-motion parameter. We propose five seismotectonic zones. Five steps are necessary: (i) identification of potential sources of future earthquakes, (ii) assessment of their geological, geophysical and geometric characteristics, (iii) identification of the attenuation pattern of seismic motion, (iv) calculation of the hazard at a site, and finally (v) hazard mapping for the region. In this study, the earthquake hazard evaluation procedure developed by Kijko and Sellevoll (1992) is used to estimate seismic hazard parameters in the northern part of Algeria.
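The probabilistic building blocks of a Cornell-type PSHA, a Gutenberg-Richter recurrence rate and a Poisson probability of exceedance, can be sketched as follows; this shows the generic relations only, not the study's zone-specific parameters.

```python
import math

def gr_annual_rate(m, a, b):
    """Gutenberg-Richter annual rate of events with magnitude >= m,
    from log10 N = a - b*m."""
    return 10.0 ** (a - b * m)

def prob_exceedance(annual_rate, years):
    """Poisson probability of at least one exceedance in the given time
    window, the standard building block of Cornell-type PSHA."""
    return 1.0 - math.exp(-annual_rate * years)
```

The familiar design benchmark follows directly: a 475-year return period gives roughly a 10% probability of exceedance in 50 years.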

  6. A first step to compare geodynamical models and seismic observations of the inner core

    NASA Astrophysics Data System (ADS)

    Lasbleis, M.; Waszek, L.; Day, E. A.

    2016-12-01

    Seismic observations have revealed a complex inner core, with lateral and radial heterogeneities at all observable scales. The dominant feature is the east-west hemispherical dichotomy in seismic velocity and attenuation. Several geodynamical models have been proposed to explain the observed structure: convective instabilities, external forces, crystallisation processes, or the influence of outer core convection. However, interpreting such geodynamical models in terms of the seismic observations is difficult, and has been performed only for very specific models (Geballe 2013; Lincot 2014, 2016). Here, we propose a common framework for making such comparisons. We have developed a Python code that propagates seismic ray paths through kinematic geodynamical models of the inner core, computing a synthetic seismic data set that can be compared with seismic observations. Following the method of Geballe 2013, we start with the simple model of translation. Here, the seismic velocity is taken to be a function of the age or initial growth rate of the material (since no deformation is included in our models); the assumption is reasonable when considering translation, growth, and super-rotation of the inner core. Using both artificial (random) seismic ray data sets and a real inner core data set (from Waszek et al. 2011), we compare these different models. Our goal is to determine the model that best matches the seismic observations. Preliminary results show that super-rotation successfully creates an eastward shift in properties with depth, as has been observed seismically. Neither the growth rate of inner core material nor the relationship between crystal size and seismic velocity is well constrained; consequently, our method does not directly compute seismic travel times. Instead, we use age, growth rate, and other parameters as proxies for the seismic properties, which represents a good first step in comparing geodynamical models and seismic observations. Ultimately, we aim to release our codes to the broader scientific community, allowing researchers from all disciplines to test their models of inner core growth against seismic observations, or to create a kinematic model for the evolution of the inner core that matches new geophysical observations.

  7. Automated seismic detection of landslides at regional scales: a Random Forest based detection algorithm

    NASA Astrophysics Data System (ADS)

    Hibert, C.; Michéa, D.; Provost, F.; Malet, J. P.; Geertsema, M.

    2017-12-01

    Detection of landslide occurrences and measurement of their dynamic properties during run-out is a high research priority but a logistical and technical challenge. Seismology has started to help in several important ways. Taking advantage of the densification of global, regional and local networks of broadband seismic stations, recent advances now permit the seismic detection and location of landslides in near-real-time. Seismic detection could greatly increase the spatio-temporal resolution at which we study landslide triggering, which is critical to better understanding the influence of external forcings such as rainfall and earthquakes. However, automatically detecting seismic signals generated by landslides remains a challenge, especially for events with small mass. The low signal-to-noise ratio classically observed for landslide-generated seismic signals, and the difficulty of discriminating these signals from those generated by regional earthquakes or by anthropogenic and natural noise, are some of the obstacles that must be circumvented. We present a new method for automatically constructing instrumental landslide catalogues from continuous seismic data. We developed a robust and versatile solution that can be implemented in any context where seismic detection of landslides or other mass movements is relevant. The method is based on spectral detection of the seismic signals and identification of the sources with a Random Forest machine-learning algorithm. The spectral detection allows signals with low signal-to-noise ratio to be detected, while the Random Forest algorithm achieves a high rate of positive identification of the seismic signals generated by landslides and other seismic sources. The processing chain is implemented on a high-performance computing centre, which permits years of continuous seismic data to be explored rapidly. We present preliminary results from applying this processing chain to years of continuous seismic records from the Alaskan permanent seismic network and the Hi-CLIMB trans-Himalayan seismic network. The processing chain also opens the possibility of near-real-time seismic detection of landslides, in association with automated remote-sensing detection, for example from Sentinel-2 images.
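A feature-extraction step of the kind that feeds such a Random Forest classifier might look like the following; the feature set (normalized per-band spectral energies plus kurtosis) is illustrative, not the authors' exact choice.

```python
import numpy as np

def spectral_features(signal, nbands=8):
    """Normalized per-band spectral energies plus kurtosis, typical of
    the features used to separate landslide signals from earthquakes
    and noise. Illustrative only."""
    spec = np.abs(np.fft.rfft(signal)) ** 2
    bands = np.array_split(spec, nbands)
    energy = np.array([b.sum() for b in bands])
    frac = energy / max(energy.sum(), 1e-20)
    x = signal - signal.mean()
    kurt = np.mean(x ** 4) / (np.mean(x ** 2) ** 2 + 1e-20)
    return np.concatenate([frac, [kurt]])
```

Feature vectors like these would then be passed to a classifier such as scikit-learn's `RandomForestClassifier` for training and prediction.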

  8. Seismic Waves in Rocks with Fluids and Fractures

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Berryman, J G

    2006-02-06

    Seismic wave propagation through the earth is often strongly affected by the presence of fractures. When these fractures are filled with fluids (oil, gas, water, CO₂, etc.), the type and state of the fluid (liquid or gas) can make a large difference in the response of the seismic waves. This paper will summarize some early work of the author on methods of deconstructing the effects of fractures, and any fluids within these fractures, on seismic wave propagation as observed in reflection seismic data. Methods to be explored here include Thomsen's anisotropy parameters for wave moveout (since fractures often induce elastic anisotropy), and some very convenient fracture parameters introduced by Sayers and Kachanov that permit a relatively simple deconstruction of the elastic behavior in terms of fracture parameters (whenever this is appropriate).
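Thomsen's anisotropy parameters mentioned above have standard closed-form definitions in terms of the five independent stiffnesses of a transversely isotropic (VTI) medium, which can be written directly:

```python
def thomsen_parameters(c11, c33, c44, c66, c13):
    """Thomsen's weak-anisotropy parameters for a VTI medium, from the
    standard definitions: epsilon (P-wave anisotropy), gamma (SH-wave
    anisotropy), delta (near-vertical P moveout)."""
    epsilon = (c11 - c33) / (2.0 * c33)
    gamma = (c66 - c44) / (2.0 * c44)
    delta = ((c13 + c44) ** 2 - (c33 - c44) ** 2) / (2.0 * c33 * (c33 - c44))
    return epsilon, gamma, delta
```

All three parameters vanish for an isotropic medium, and fracture-induced anisotropy shows up as non-zero values whose size and sign carry the fracture and fluid information discussed in the paper.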

  9. Investigating the Capability to Extract Impulse Response Functions From Ambient Seismic Noise Using a Mine Collapse Event

    NASA Astrophysics Data System (ADS)

    Kwak, Sangmin; Song, Seok Goo; Kim, Geunyoung; Cho, Chang Soo; Shin, Jin Soo

    2017-10-01

    Using recordings of a mine collapse event (Mw 4.2) in South Korea in January 2015, we demonstrated that the phase and amplitude information of impulse response functions (IRFs) can be effectively retrieved using seismic interferometry. This event is equivalent to a single downward force at shallow depth. Using quantitative metrics, we compared three different seismic interferometry techniques—deconvolution, coherency, and cross correlation—to extract the IRFs between two distant stations with ambient seismic noise data. The azimuthal dependency of the source distribution of the ambient noise was also evaluated. We found that deconvolution is the best method for extracting IRFs from ambient seismic noise within the period band of 2-10 s. The coherency method is also effective if appropriate spectral normalization or whitening schemes are applied during the data processing.
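The three estimators compared in the study can be sketched in the frequency domain as follows; the water-level regularization `eps` is an assumption of this sketch, not the authors' exact scheme.

```python
import numpy as np

def irf_estimates(u_a, u_b, eps=0.05):
    """Cross-correlation, coherency, and deconvolution estimates of the
    inter-station response between records u_a and u_b, mirroring the
    three interferometry variants compared in the study."""
    A, B = np.fft.rfft(u_a), np.fft.rfft(u_b)
    cross = B * np.conj(A)
    coh = cross / (np.abs(A) * np.abs(B) + eps * np.mean(np.abs(A) * np.abs(B)))
    dec = cross / (np.abs(A) ** 2 + eps * np.mean(np.abs(A) ** 2))
    n = len(u_a)
    return (np.fft.irfft(cross, n), np.fft.irfft(coh, n), np.fft.irfft(dec, n))
```

For a simple delayed copy of a noise record, all three estimators peak at the true inter-station lag; the study's comparison concerns how well each preserves phase and amplitude in realistic noise conditions.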

  10. Ray Tracing Methods in Seismic Emission Tomography

    NASA Astrophysics Data System (ADS)

    Chebotareva, I. Ya.

    2018-03-01

    Highly efficient approximate ray tracing techniques which can be used in seismic emission tomography and in other methods requiring a large number of raypaths are described. The techniques are applicable for the gradient and plane-layered velocity sections of the medium and for the models with a complicated geometry of contrasting boundaries. The empirical results obtained with the use of the discussed ray tracing technologies and seismic emission tomography results, as well as the results of numerical modeling, are presented.

  11. Interactive 3D visualization speeds well, reservoir planning

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Petzet, G.A.

    1997-11-24

    Texaco Exploration and Production has begun making expeditious analyses and drilling decisions that result from interactive, large-screen visualization of seismic and other three-dimensional data. A pumpkin-shaped room, or pod, inside a 3,500 sq ft, state-of-the-art facility in southwest Houston houses a supercomputer and projection equipment Texaco said will help its people sharply reduce 3D seismic project cycle time, boost production from existing fields, and find more reserves. Oil and gas related applications of the visualization center include reservoir engineering, plant walkthrough simulation for facilities/piping design, and new field exploration. The center houses a Silicon Graphics Onyx2 InfiniteReality supercomputer configured with 8 processors, 3 graphics pipelines, and 6 gigabytes of main memory.

  12. Military applications and examples of near-surface seismic surface wave methods (Invited)

    NASA Astrophysics Data System (ADS)

    Sloan, S.; Stevens, R.

    2013-12-01

    Although not always widely known or publicized, the military uses a variety of geophysical methods for a wide range of applications, some already common practice in the industry and others truly novel. Some of those applications include unexploded ordnance detection, general site characterization, anomaly detection, countering improvised explosive devices (IEDs), and security monitoring, to name a few. Techniques used may include, but are not limited to, ground-penetrating radar, seismic, electrical, gravity, and electromagnetic methods. Seismic methods employed include surface wave analysis, refraction tomography, and high-resolution reflection methods. Although the military employs geophysical methods, that does not necessarily mean those methods enable or support combat operations; often they are used for humanitarian applications within the military's area of operations to support local populations. The work presented here focuses on the applied use of seismic surface wave methods, including multichannel analysis of surface waves (MASW) and backscattered surface waves, often in conjunction with other methods such as refraction tomography or body-wave diffraction analysis. Multiple field examples will be shown, including explosives testing, tunnel detection, pre-construction site characterization, and cavity detection.

  13. High lateral resolution exploration using surface waves from noise records

    NASA Astrophysics Data System (ADS)

    Chávez-García, Francisco José; Yokoi, Toshiaki

    2016-04-01

    Determination of the shear-wave velocity structure at shallow depths is a constant necessity in engineering or environmental projects. Given the sensitivity of Rayleigh waves to shear-wave velocity, subsoil structure exploration using surface waves is frequently used. Methods such as the spectral analysis of surface waves (SASW) or multi-channel analysis of surface waves (MASW) determine phase velocity dispersion from surface waves generated by an active source recorded on a line of geophones. Using MASW, it is important that the receiver array be as long as possible to increase the precision at low frequencies. However, this implies that possible lateral variations are discarded. Hayashi and Suzuki (2004) proposed a different way of stacking shot gathers to increase lateral resolution. They combined strategies used in MASW with the common mid-point (CMP) summation currently used in reflection seismology. In their common mid-point with cross-correlation method (CMPCC), they cross-correlate traces sharing CMP locations before determining phase velocity dispersion. Another recent approach to subsoil structure exploration is based on seismic interferometry. It has been shown that cross-correlation of a diffuse field, such as seismic noise, allows the estimation of the Green's Function between two receivers. Thus, a virtual-source seismic section may be constructed from the cross-correlation of seismic noise records obtained in a line of receivers. In this paper, we use the seismic interferometry method to process seismic noise records obtained in seismic refraction lines of 24 geophones, and analyse the results using CMPCC to increase the lateral resolution of the results. Cross-correlation of the noise records allows reconstructing seismic sections with virtual sources at each receiver location. The Rayleigh wave component of the Green's Functions is obtained with a high signal-to-noise ratio. 
Using CMPCC analysis of the virtual-source seismic lines, we are able to identify lateral variations of phase velocity inside the seismic line, and increase the lateral resolution compared with results of conventional analysis.

  14. Unsupervised seismic facies analysis with spatial constraints using regularized fuzzy c-means

    NASA Astrophysics Data System (ADS)

    Song, Chengyun; Liu, Zhining; Cai, Hanpeng; Wang, Yaojun; Li, Xingming; Hu, Guangmin

    2017-12-01

    Seismic facies analysis techniques combine classification algorithms and seismic attributes to generate a map that describes the main reservoir heterogeneities. However, most current classification algorithms view the seismic attributes as isolated data points regardless of their spatial locations, and the resulting map is generally sensitive to noise. In this paper, a regularized fuzzy c-means (RegFCM) algorithm is used for unsupervised seismic facies analysis. Owing to the regularized term of the RegFCM algorithm, data whose adjacent locations belong to the same class play a more important role in the iterative process than other data. The method can therefore reduce the effect of seismic noise in discontinuous regions. Synthetic data with different signal-to-noise ratios are used to demonstrate the noise tolerance of the RegFCM algorithm. The fuzzy factor, the neighbourhood window size, and the regularization weight are tested over various values to provide guidance on setting these parameters. The new approach is also applied to a real seismic data set from the F3 block of the Netherlands. The results show improved spatial continuity, with clear facies boundaries and channel morphology, which demonstrates that the method is an effective seismic facies analysis tool.
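A minimal 1-D version of the idea, standard fuzzy c-means plus a simple neighbourhood blending of the memberships, can be sketched as follows. This follows the spirit of the RegFCM approach rather than the paper's exact formulation; the quantile-based initialization is also an assumption of the sketch.

```python
import numpy as np

def regfcm_1d(x, k=2, m=2.0, beta=0.4, iters=50):
    """Fuzzy c-means on a 1-D attribute trace with a simple spatial term:
    each membership update is blended with the neighbours' average
    (weight beta), so spatially coherent samples reinforce each other."""
    centers = np.quantile(x, np.linspace(0.1, 0.9, k))
    u = np.full((len(x), k), 1.0 / k)
    for _ in range(iters):
        d = np.abs(x[:, None] - centers[None, :]) + 1e-9
        u = 1.0 / d ** (2.0 / (m - 1.0))
        u /= u.sum(axis=1, keepdims=True)
        nb = 0.5 * (np.roll(u, 1, axis=0) + np.roll(u, -1, axis=0))
        u = (1.0 - beta) * u + beta * nb     # spatial regularization
        um = u ** m
        centers = (um * x[:, None]).sum(axis=0) / um.sum(axis=0)
    return centers, u
```

Because the neighbourhood term pulls memberships toward their neighbours' average, isolated noisy samples inside a facies body tend to be absorbed into the surrounding class.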

  15. Evaluation of the site effect with Heuristic Methods

    NASA Astrophysics Data System (ADS)

    Torres, N. N.; Ortiz-Aleman, C.

    2017-12-01

    The seismic site response in an area depends mainly on the local geological and topographical conditions. Estimation of variations in ground motion can contribute significantly to seismic hazard assessment, helping to reduce human and economic losses. Site response estimation can be posed as a parameterized inversion problem that allows source and path effects to be separated. Generalized inversion (Field and Jacob, 1995) is one of the alternative methods for estimating local seismic response, and involves solving a strongly non-linear multiparametric problem. In this work, local seismic response was estimated using global optimization methods (genetic algorithms and simulated annealing), which allowed us to explore a wider range of solutions in a nonlinear search than conventional linear methods. Using velocity records from the VEOX network collected from August 2007 to March 2009, we estimate the source, path, and site parameters corresponding to the S-wave amplitude spectra of the records. The parameters resulting from this simultaneous inversion show excellent agreement, both in terms of the fit between observed and calculated spectra and in comparison with previous work by several authors.

  16. Application of the Radon-FCL approach to seismic random noise suppression and signal preservation

    NASA Astrophysics Data System (ADS)

    Meng, Fanlei; Li, Yue; Liu, Yanping; Tian, Yanan; Wu, Ning

    2016-08-01

    The fractal conservation law (FCL) is a linear partial differential equation modified by an anti-diffusive term of lower order. Analysis indicates that this algorithm can eliminate high frequencies while preserving or amplifying low- and medium-frequency components, making it well suited to simultaneous noise suppression and enhancement or preservation of seismic signals. However, conventional FCL filters seismic data only along the time direction, ignoring the spatial coherence between neighbouring traces, which leads to the loss of directional information. We therefore extend the conventional FCL to the time-space domain and propose a Radon-FCL approach. In this article, we apply a Radon transform to implement the FCL method; performing FCL filtering in the Radon domain achieves a higher level of noise attenuation. Using this method, seismic reflection events can be recovered with the sacrifice of fewer frequency components while attenuating more random noise than conventional FCL filtering. Experiments using both synthetic and common-shot-point data demonstrate the advantages of the Radon-FCL approach over the conventional FCL method for both random noise attenuation and seismic signal preservation.
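The Radon-domain step can be illustrated with a basic slant-stack (linear tau-p) transform; in a Radon-FCL workflow, the FCL filtering would then be applied to each slowness trace before inverse transforming. Nearest-sample shifting is used here for brevity.

```python
import numpy as np

def linear_radon(data, dt, dx, slownesses):
    """Forward slant-stack (linear tau-p) transform: each output trace
    sums the input gather (nt x nx) along lines t = tau + p*x, so a
    linear event focuses at its slowness."""
    nt, nx = data.shape
    out = np.zeros((nt, len(slownesses)))
    for j, p in enumerate(slownesses):
        for ix in range(nx):
            shift = int(round(p * ix * dx / dt))
            if 0 <= shift < nt:
                out[:nt - shift, j] += data[shift:, ix]
    return out
```

A coherent linear event stacks constructively only at its true slowness, which is why filtering in this domain separates signal from spatially incoherent random noise.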

  17. Seismic Hazard Analysis — Quo vadis?

    NASA Astrophysics Data System (ADS)

    Klügel, Jens-Uwe

    2008-05-01

    The paper is dedicated to a review of the methods of seismic hazard analysis currently in use, analyzing the strengths and weaknesses of the different approaches. The review is performed from the perspective of a user of the results of seismic hazard analysis for different applications, such as the design of critical and general (non-critical) civil infrastructures and technical and financial risk analysis. A set of criteria is developed for, and applied to, an objective assessment of the capabilities of the different analysis methods. It is demonstrated that traditional probabilistic seismic hazard analysis (PSHA) methods have significant deficiencies that limit their practical application. These deficiencies have their roots in the use of inadequate probabilistic models and an insufficient understanding of modern concepts of risk analysis, as has been revealed in some recent large-scale studies. They result in an inability to treat dependencies between physical parameters correctly and, finally, in an incorrect treatment of uncertainties. As a consequence, the results of PSHA studies have been found to be unrealistic in comparison with empirical information from the real world. The attempt to compensate for these problems through the systematic use of expert elicitation has, so far, not improved the situation. It is also shown that scenario earthquakes developed by disaggregation from the results of a traditional PSHA may not be conservative with respect to energy conservation and should not be used for the design of critical infrastructures without validation. Because the assessment of the technical as well as the financial risks associated with potential earthquake damage requires a risk analysis, current methods are based on a probabilistic approach with its unresolved deficiencies. Traditional deterministic or scenario-based seismic hazard analysis methods provide a reliable and generally robust design basis for applications such as the design of critical infrastructures, especially when combined with systematic sensitivity analyses based on validated phenomenological models. Deterministic seismic hazard analysis incorporates uncertainties in safety factors, which are derived from experience as well as expert judgment. Deterministic methods associated with high safety factors may lead to overly conservative results, especially if applied to generally short-lived civil structures. Scenarios used in deterministic seismic hazard analysis have a clear physical basis: they are related to seismic sources discovered by geological, geomorphological, geodetic and seismological investigations, or derived from historical references. Scenario-based methods can be expanded for risk analysis applications with an extended data analysis providing the frequency of seismic events. Such an extension provides a better-informed risk model suitable for risk-informed decision making.

  18. A simplified method in comparison with comprehensive interaction incremental dynamic analysis to assess seismic performance of jacket-type offshore platforms

    NASA Astrophysics Data System (ADS)

    Zolfaghari, M. R.; Ajamy, A.; Asgarian, B.

    2015-12-01

    The primary goal of seismic reassessment procedures in oil platform codes is to determine the reliability of a platform under extreme earthquake loading. In this paper, a simplified method is therefore proposed to assess the seismic performance of existing jacket-type offshore platforms (JTOPs) in regimes ranging from near-elastic response to global collapse. The simplified method exploits the good agreement between the static pushover (SPO) curve and the summarized comprehensive interaction incremental dynamic analysis (CI-IDA) curve of the platform. Although the CI-IDA method offers better understanding and better modelling of the phenomenon, it is a time-consuming and challenging task. To overcome these challenges, the simplified procedure, a fast and accurate approach, is introduced based on SPO analysis. An existing JTOP in the Persian Gulf is then presented to illustrate the procedure, and finally a comparison is made between the simplified method and the CI-IDA results. The simplified method is informative and practical for current engineering purposes: it predicts seismic performance from elasticity to global dynamic instability with reasonable accuracy and little computational effort.

  19. ConvNetQuake: Convolutional Neural Network for Earthquake Detection and Location

    NASA Astrophysics Data System (ADS)

    Denolle, M.; Perol, T.; Gharbi, M.

    2017-12-01

    Over the last decades, the volume of seismic data has increased exponentially, creating a need for efficient algorithms that reliably detect and locate earthquakes. Today's most elaborate methods scan through continuous seismic records searching for repeating seismic signals. In this work, we leverage recent advances in artificial intelligence and present ConvNetQuake, a highly scalable convolutional neural network for probabilistic earthquake detection and location from single stations. We apply our technique to study two years of induced seismicity in Oklahoma (USA). We detect 20 times more earthquakes than previously cataloged by the Oklahoma Geological Survey, and our algorithm is at least one order of magnitude faster than other established detection methods.
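The flavour of such an architecture can be sketched with an untrained miniature forward pass: stacked strided 1-D convolutions with ReLU, a pooled feature vector, and a softmax over classes. The layer sizes, pooling, and weights below are placeholders, not ConvNetQuake's actual configuration.

```python
import numpy as np

def conv1d(x, kernel, stride=2):
    """Valid strided 1-D convolution (single channel)."""
    k = len(kernel)
    n = (len(x) - k) // stride + 1
    return np.array([np.dot(x[i * stride:i * stride + k], kernel)
                     for i in range(n)])

def tiny_detector(window, kernels, dense_w):
    """Forward pass of a miniature 1-D CNN: stacked strided convolutions
    with ReLU, global pooling into a feature vector, then a dense layer
    and softmax over classes (e.g. noise vs. event clusters)."""
    x = window
    for ker in kernels:
        x = np.maximum(conv1d(x, ker), 0.0)
    feats = np.array([x.mean(), x.std(), np.abs(x).max()])
    logits = dense_w @ feats
    e = np.exp(logits - logits.max())
    return e / e.sum()
```

The strided convolutions halve the window length at each layer, which is what makes this style of detector cheap enough to slide over years of continuous data.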

  20. Seismic refraction analysis: the path forward

    USGS Publications Warehouse

    Haines, Seth S.; Zelt, Colin; Doll, William

    2012-01-01

    Seismic Refraction Methods: Unleashing the Potential and Understanding the Limitations; Tucson, Arizona, 29 March 2012. A workshop focused on seismic refraction methods took place on 29 March 2012, in association with the 2012 Symposium on the Application of Geophysics to Engineering and Environmental Problems. The workshop was convened to assess the current state of the science and discuss paths forward, with a primary focus on near-surface problems but with an eye on all applications. The agenda included talks on these topics from a number of experts, interspersed with discussion, and a dedicated discussion period to finish the day. Discussion proved lively at times, and workshop participants delved into many topics central to seismic refraction work.

  1. Analysis and Modeling of Shear Waves Generated by Explosions at the San Andreas Fault Observatory at Depth

    DTIC Science & Technology

    2011-09-01

    Using multiple deployments of an 80-element, three-component borehole seismic array stretching from the surface to 2.3 km depth ... generated using the direct Green's function (DGF) method of Friederich and Dalkolmo (1995). This method synthesizes the seismic wavefield for a spherically

  2. Numerical simulation of bubble plumes and an analysis of their seismic attributes

    NASA Astrophysics Data System (ADS)

    Li, Canping; Gou, Limin; You, Jiachun

    2017-04-01

    To study the seismic response characteristics of bubble plumes, a plume water-body model is built in this article using the acoustic velocity model for bubble-containing media and stochastic medium theory, based on an analysis of both the acoustic characteristics of a bubble-containing water body and the actual features of a plume. The finite difference method is used for forward modelling, and the single-shot seismic record exhibits the characteristics of the scattered wavefield generated by a plume. A meaningful conclusion is obtained by extracting seismic attributes from the pre-stack shot gather of a plume: the values of the amplitude-related seismic attributes increase greatly as the bubble content goes up, while changes in bubble radius do not cause the seismic attributes to change. This is primarily because the bubble content has a strong impact on the plume's acoustic velocity, whereas the bubble radius has only a weak impact. This conclusion provides a theoretical reference for identifying hydrate plumes using seismic methods and contributes to further study of hydrate decomposition and migration, as well as of the distribution of methane bubbles in seawater.
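    A minimal 1-D version of the finite-difference forward-modelling step might look like the sketch below. The grid size, Ricker wavelet, receiver position, and the reduced velocity assigned to the "bubble" zone are all assumed values for illustration, not those used in the paper (which models a full 2-D stochastic plume).

    ```python
    import numpy as np

    def fd_1d(velocity, nt, dt, dx, src_idx, f0=25.0):
        """Second-order 1-D acoustic finite-difference modelling.
        velocity: 1-D array of acoustic speeds (m/s)."""
        nx = velocity.size
        u_prev = np.zeros(nx)
        u = np.zeros(nx)
        c2 = (velocity * dt / dx) ** 2
        assert c2.max() <= 1.0, "CFL stability condition violated"
        record = np.zeros(nt)
        for it in range(nt):
            lap = np.zeros(nx)
            lap[1:-1] = u[2:] - 2 * u[1:-1] + u[:-2]   # discrete Laplacian
            u_next = 2 * u - u_prev + c2 * lap
            t = it * dt
            a = (np.pi * f0 * (t - 1.2 / f0)) ** 2      # Ricker wavelet source
            u_next[src_idx] += (1 - 2 * a) * np.exp(-a)
            u_prev, u = u, u_next
            record[it] = u[src_idx + 40]                # hypothetical receiver
        return record

    # water column with an assumed low-velocity bubble-containing zone
    v = np.full(300, 1500.0)
    v[150:180] = 900.0   # bubbles lower the effective acoustic velocity
    trace = fd_1d(v, nt=600, dt=1e-3, dx=2.0, src_idx=50)
    ```

    The velocity contrast at the plume boundary is what generates the scattered arrivals whose amplitude attributes the study analyzes.
    
    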

  3. A multi-disciplinary approach for the structural monitoring of Cultural Heritages in a seismic area

    NASA Astrophysics Data System (ADS)

    Fabrizia Buongiorno, Maria; Musacchio, Massimo; Guerra, Ignazio; Porco, Giacinto; Stramondo, Salvatore; Casula, Giuseppe; Caserta, Arrigo; Speranza, Fabio; Doumaz, Fawzi; Giovanna Bianchi, Maria; Luzi, Guido; Ilaria Pannaccione Apa, Maria; Montuori, Antonio; Gaudiosi, Iolanda; Vecchio, Antonio; Gervasi, Anna; Bonali, Elena; Romano, Dolores; Falcone, Sergio; La Piana, Carmelo

    2014-05-01

    In recent years, the concepts of seismic risk vulnerability and structural health monitoring have become very important topics in both structural and civil engineering for the identification of appropriate risk indicators and risk assessment methodologies in Cultural Heritage monitoring. The latter, which includes objects, buildings and sites with historical, architectural and/or engineering relevance, concerns the management, preservation and maintenance of the heritage within its surrounding environmental context, in response to climate change and natural hazards (e.g. seismic, volcanic, landslide and flooding hazards). Within such a framework, the complexity and the great number of variables to be considered require a multi-disciplinary approach including strategies, methodologies and tools able to provide effective monitoring of Cultural Heritage from both scientific and operational viewpoints. Based on this rationale, in this study an advanced, technological and operationally-oriented approach is presented and tested, which enables measuring and monitoring the conservation state of Cultural Heritage and the geophysical/geological setting of the area, in order to mitigate the seismic risk of historical public goods at different spatial scales*. The integration of classical geophysical methods with new emerging sensing techniques enables multi-depth, multi-resolution, and multi-scale monitoring in both space and time. An integrated system of methodologies, instrumentation and data-processing approaches for non-destructive Cultural Heritage investigation is proposed, which concerns, in detail, the analysis of seismogenetic sources, the geological-geotechnical setting of the area and the evaluation of site seismic effects, proximal remote sensing techniques (e.g. terrestrial laser scanner, ground-based radar systems, thermal cameras), high-resolution aerial and satellite-based remote sensing methodologies (e.g. aeromagnetic surveys, synthetic aperture radar, optical, multispectral and panchromatic measurements), and static and dynamic structural health monitoring analysis (e.g. screening tests with georadar, sonic instruments, sclerometers and optical fibers). The final purpose of the proposed approach is the development of an investigation methodology for short- and long-term Cultural Heritage preservation in response to seismic stress, with specific features of scalability, modularity and exportability for every possible monitoring configuration. Moreover, it allows gathering useful information to furnish guidelines for institutions and local administrations to plan consolidation actions and therefore prevention activities. Some preliminary results will be presented for the test site of the Calabria Region, where selected architectural heritage sites serve as case studies for monitoring purposes. *The present work is supported and funded by Ministero dell'Università, dell'Istruzione e della Ricerca (MIUR) under the research project PON01-02710 "MASSIMO" - "Monitoraggio in Area Sismica di Sistemi Monumentali".

  4. The damping of seismic waves and its determination from reflection seismograms

    NASA Technical Reports Server (NTRS)

    Engelhard, L.

    1979-01-01

    The damping in theoretical waveforms is described phenomenologically and a classification is proposed. A method for studying the Earth's crust was developed which includes this damping as derived from reflection seismograms. Seismic wave propagation by absorption, attenuation of seismic waves by scattering, and dispersion relations are considered. Absorption of seismic waves within the Earth as well as reflection and transmission of elastic waves seen through boundary layer absorption are also discussed.

  5. An Ensemble Approach for Improved Short-to-Intermediate-Term Seismic Potential Evaluation

    NASA Astrophysics Data System (ADS)

    Yu, Huaizhong; Zhu, Qingyong; Zhou, Faren; Tian, Lei; Zhang, Yongxian

    2017-06-01

    Pattern informatics (PI), load/unload response ratio (LURR), state vector (SV), and accelerating moment release (AMR) are four previously unrelated approaches that are sensitive, in varying ways, to the earthquake source. Previous studies have indicated that the spatial extent of the stress perturbation caused by an earthquake scales with the moment of the event, allowing us to combine these methods for seismic hazard evaluation. The long-range earthquake forecasting method PI is applied to search for seismic hotspots and identify the areas where large earthquakes could be expected. The LURR and SV methods are then adopted to assess the short-to-intermediate-term seismic potential in each of the critical regions derived from the PI hotspots, while the AMR method provides asymptotic estimates of the time and magnitude of the potential earthquakes. This new approach, combining the LURR, SV and AMR methods with the identified PI hotspot areas, is devised to augment current techniques for seismic hazard estimation. Using the approach, we tested the strong earthquakes that occurred in the Yunnan-Sichuan region, China, between January 1, 2013 and December 31, 2014. We found that most of the large earthquakes, especially those with magnitude greater than 6.0, occurred in the predicted seismic hazard regions. Similar results were obtained in the prediction of annual earthquake tendency in the Chinese mainland in 2014 and 2015. These studies show that the ensemble approach can be a useful tool for detecting short-to-intermediate-term precursory information about future large earthquakes.

  6. Optimized suppression of coherent noise from seismic data using the Karhunen-Loève transform

    NASA Astrophysics Data System (ADS)

    Montagne, Raúl; Vasconcelos, Giovani L.

    2006-07-01

    Signals obtained in land seismic surveys are usually contaminated with coherent noise, among which the ground roll (Rayleigh surface waves) is of major concern, for it can severely degrade the quality of the information obtained from the seismic record. This paper presents an optimized filter based on the Karhunen-Loève transform for processing seismic images contaminated with ground roll. In this method, the contaminated region of the seismic record, to be processed by the filter, is selected in such a way as to correspond to the maximum of a properly defined coherence index. The main advantages of the method are that the ground roll is suppressed with negligible distortion of the remnant reflection signals and that the filtering procedure can be automated. The image processing technique described in this study should also be relevant for other applications where coherent structures embedded in a complex spatiotemporal pattern need to be identified in a more refined way. In particular, it is argued that the method is appropriate for processing optical coherence tomography images, whose quality is often degraded by coherent noise (speckle).
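    The core Karhunen-Loève idea, removing the most coherent (largest-eigenvalue) components of a trace panel, can be sketched with an SVD. The coherence-index window selection of the paper is omitted here, and the synthetic ground roll is idealized as perfectly aligned across traces, so this is only a minimal sketch of the principle, not the optimized filter itself.

    ```python
    import numpy as np

    def kl_filter(panel, k=1):
        """Subtract the k most coherent components of a trace panel via SVD,
        a minimal Karhunen-Loève-style coherent-noise filter."""
        u, s, vt = np.linalg.svd(panel, full_matrices=False)
        coherent = (u[:, :k] * s[:k]) @ vt[:k]
        return panel - coherent

    rng = np.random.default_rng(1)
    t = np.linspace(0, 1, 200)
    ground_roll = np.sin(2 * np.pi * 8 * t)          # identical on every trace
    reflections = 0.2 * rng.standard_normal((24, 200))  # stand-in reflection energy
    panel = ground_roll + reflections                # 24 contaminated traces
    cleaned = kl_filter(panel, k=1)
    ```

    Because the aligned ground roll dominates the first eigenimage, subtracting the rank-k reconstruction removes it while leaving the incoherent (reflection-like) energy largely intact; in practice the record is first flattened along the ground-roll moveout so that the noise becomes coherent across traces.
    
    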

  7. Mathematical model of the seismic electromagnetic signals (SEMS) in non crystalline substances

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Dennis, L. C. C.; Yahya, N.; Daud, H.

    The mathematical model of seismic electromagnetic waves in non-crystalline substances is developed and the solutions are discussed to show the possibility of improving the electromagnetic waves, especially the electric field. The shear stress of the medium, expressed as a fourth-order tensor, gives the equation of motion. Analytic methods are selected for the solutions, written in Hansen vector form. From the simulated SEMS, the frequency of the seismic waves has significant effects on the SEMS propagation characteristics. EM waves transform into SEMS, or energized seismic waves. The travelling distance increases as the frequency of the seismic waves increases from 100% to 1000%. SEMS with greater seismic frequency give seismic-like waves, but greater energy is embedded by the EM waves and hence the waves travel farther.

  8. An efficient repeating signal detector to investigate earthquake swarms

    NASA Astrophysics Data System (ADS)

    Skoumal, Robert J.; Brudzinski, Michael R.; Currie, Brian S.

    2016-08-01

    Repetitive earthquake swarms have been recognized as key signatures in fluid injection induced seismicity, precursors to volcanic eruptions, and slow slip events preceding megathrust earthquakes. We investigate earthquake swarms by developing a Repeating Signal Detector (RSD), a computationally efficient algorithm utilizing agglomerative clustering to identify similar waveforms buried in years of seismic recordings using a single seismometer. Instead of relying on existing earthquake catalogs of larger earthquakes, RSD identifies characteristic repetitive waveforms by rapidly identifying signals of interest above a low signal-to-noise ratio and then grouping based on spectral and time domain characteristics, resulting in dramatically shorter processing time than more exhaustive autocorrelation approaches. We investigate seismicity in four regions using RSD: (1) volcanic seismicity at Mammoth Mountain, California, (2) subduction-related seismicity in Oaxaca, Mexico, (3) induced seismicity in Central Alberta, Canada, and (4) induced seismicity in Harrison County, Ohio. In each case, RSD detects a similar or larger number of earthquakes than existing catalogs created using more time intensive methods. In Harrison County, RSD identifies 18 seismic sequences that correlate temporally and spatially to separate hydraulic fracturing operations, 15 of which were previously unreported. RSD utilizes a single seismometer for earthquake detection which enables seismicity to be quickly identified in poorly instrumented regions at the expense of relying on another method to locate the new detections. Due to the smaller computation overhead and success at distances up to ~50 km, RSD is well suited for real-time detection of low-magnitude earthquake swarms with permanent regional networks.
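    A toy version of the grouping step (not the RSD implementation itself) could use greedy clustering by waveform correlation against each cluster's representative. The 0.8 similarity threshold and the synthetic waveforms below are assumptions for illustration; RSD additionally pre-screens candidates by spectral characteristics before grouping.

    ```python
    import numpy as np

    def ncc(a, b):
        """Pearson correlation between two equal-length windows."""
        a = (a - a.mean()) / (a.std() * len(a))
        b = (b - b.mean()) / b.std()
        return float(np.dot(a, b))

    def cluster_waveforms(windows, threshold=0.8):
        """Greedy grouping: each window joins the first cluster whose
        representative it correlates with above `threshold`."""
        clusters = []  # list of (representative waveform, member indices)
        for i, w in enumerate(windows):
            for rep, members in clusters:
                if ncc(rep, w) >= threshold:
                    members.append(i)
                    break
            else:
                clusters.append((w, [i]))
        return clusters

    rng = np.random.default_rng(2)
    template = np.sin(2 * np.pi * 5 * np.linspace(0, 1, 100)) * np.hanning(100)
    windows = [template + 0.1 * rng.standard_normal(100) for _ in range(5)]
    windows += [rng.standard_normal(100) for _ in range(3)]  # unrelated noise
    clusters = cluster_waveforms(windows, threshold=0.8)
    ```

    The five noisy copies of the repeating signal collapse into one cluster while the unrelated windows stay isolated; avoiding an all-pairs autocorrelation is what keeps this style of detector fast.
    
    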

  9. Seismic-monitoring changes and the remote deployment of seismic stations (seismic spider) at Mount St. Helens, 2004-2005: Chapter 7 in A volcano rekindled: the renewed eruption of Mount St. Helens, 2004-2006

    USGS Publications Warehouse

    McChesney, Patrick J.; Couchman, Marvin R.; Moran, Seth C.; Lockhart, Andrew B.; Swinford, Kelly J.; LaHusen, Richard G.; Sherrod, David R.; Scott, William E.; Stauffer, Peter H.

    2008-01-01

    The instruments in place at the start of volcanic unrest at Mount St. Helens in 2004 were inadequate to record the large earthquakes and monitor the explosions that occurred as the eruption developed. To remedy this, new instruments were deployed and the short-period seismic network was modified. A new method of establishing near-field seismic monitoring was developed, using remote deployment by helicopter. The remotely deployed seismic sensor was a piezoelectric accelerometer mounted on a surface-coupled platform. Remote deployment enabled placement of stations within 250 m of the active vent.

  10. Fault specific GIS based seismic hazard maps for the Attica region, Greece

    NASA Astrophysics Data System (ADS)

    Deligiannakis, G.; Papanikolaou, I. D.; Roberts, G.

    2018-04-01

    Traditional seismic hazard assessment methods are based on the historical seismic records for the calculation of an annual probability of exceedance for a particular ground motion level. A new fault-specific seismic hazard assessment method is presented, in order to address problems related to the incompleteness and the inhomogeneity of the historical records and to obtain higher spatial resolution of hazard. This method is applied to the region of Attica, which is the most densely populated area in Greece, as nearly half of the country's population lives in Athens and its surrounding suburbs, in the Greater Athens area. The methodology is based on a database of 24 active faults that could cause damage to Attica in case of seismic rupture. This database provides information about the faults slip rates, lengths and expected magnitudes. The final output of the method is four fault-specific seismic hazard maps, showing the recurrence of expected intensities for each locality. These maps offer a high spatial resolution, as they consider the surface geology. Despite the fact that almost half of the Attica region lies on the lowest seismic risk zone according to the official seismic hazard zonation of Greece, different localities have repeatedly experienced strong ground motions during the last 15 kyrs. Moreover, the maximum recurrence for each intensity occurs in different localities across Attica. Highest recurrence for intensity VII (151-156 times over 15 kyrs, or up to a 96 year return period) is observed in the central part of the Athens basin. The maximum intensity VIII recurrence (115 times over 15 kyrs, or up to a 130 year return period) is observed in the western part of Attica, while the maximum intensity IX (73-77/15 kyrs, or a 195 year return period) and X (25-29/15 kyrs, or a 517 year return period) recurrences are observed near the South Alkyonides fault system, which dominates the strong ground motions hazard in the western part of the Attica mainland.
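    The return periods quoted above follow directly from the recurrence counts over the 15 kyr window (return period = window length divided by the number of exceedances), for instance:

    ```python
    def return_period(count, window_years=15000):
        """Average return period implied by `count` exceedances in `window_years`."""
        return window_years / count

    # recurrence counts quoted in the abstract for the Attica maps
    print(round(return_period(156)))  # intensity VII  -> 96 years
    print(round(return_period(115)))  # intensity VIII -> 130 years
    print(round(return_period(77)))   # intensity IX   -> 195 years
    print(round(return_period(29)))   # intensity X    -> 517 years
    ```

    
    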

  11. 3D basin structure of the Santa Clara Valley constrained by ambient noise tomography

    NASA Astrophysics Data System (ADS)

    Cho, H.; Lee, S. J.; Rhie, J.; Kim, S.

    2017-12-01

    Basin structure is an important factor that controls the intensity and duration of ground shaking during an earthquake. It is therefore important to study basin structure, both to better understand seismic hazard and to improve earthquake preparedness. An active-source seismic survey is the most appropriate method to determine basin structure in detail, but its applicability, especially in urban areas, is limited. In this study, we tested the potential of ambient noise tomography, which can be a cheaper and more easily applicable method than a traditional active-source survey, for constructing the velocity model of a basin. Our test region is the Santa Clara Valley, one of the major urban sedimentary basins in the United States; we selected this region because continuous seismic recordings and well-defined velocity models are available. Six months of continuous recordings from the short-period array of the Santa Clara Valley Seismic Experiment are cross-correlated with a 1-hour time window, and the fast marching method and the subspace method are jointly applied to construct 2-D group velocity maps between 0.2 and 4.0 Hz. A shear-wave velocity model of the Santa Clara Valley is then calculated down to 5 km depth using a Bayesian inversion technique. Although our model cannot depict detailed structures, it is roughly comparable with the velocity model of the US Geological Survey, which is constrained by active seismic surveys and field studies. This result indicates that ambient noise tomography can replace, at least in part, an active seismic survey for constructing the velocity model of a basin.
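    The principle behind the cross-correlation step, that correlating long records of a shared ambient-noise field at two stations produces a peak at the inter-station traveltime, can be illustrated with a toy example. The sampling rate, record length, and 25-sample propagation delay below are arbitrary, and a real plane-wave noise field would of course be band-limited and multi-directional.

    ```python
    import numpy as np

    rng = np.random.default_rng(3)
    noise_src = rng.standard_normal(100_000)  # long ambient-noise source record
    delay = 25                                # samples of propagation between stations

    sta_a = noise_src[:-delay]
    sta_b = noise_src[delay:]                 # station B sees the same field `delay` later

    # cross-correlation via FFT; the peak lag approximates the traveltime
    n = len(sta_a)
    spec = np.fft.rfft(sta_a) * np.conj(np.fft.rfft(sta_b))
    cc = np.fft.irfft(spec, n)
    lag = int(np.argmax(cc))
    ```

    Stacking many such hour-long correlations, as done in the study, suppresses the incoherent part and leaves an empirical Green's function whose dispersion yields the group-velocity maps.
    
    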

  12. Micro-seismic imaging using a source function independent full waveform inversion method

    NASA Astrophysics Data System (ADS)

    Wang, Hanchen; Alkhalifah, Tariq

    2018-03-01

    At the heart of micro-seismic event measurement is the task of estimating the locations of the micro-seismic sources, as well as their ignition times. The accuracy of locating the sources is highly dependent on the velocity model. On the other hand, conventional micro-seismic source-locating methods require, in many cases, manual picking of traveltime arrivals, which not only demands manual effort and human interaction but is also prone to errors. Using full waveform inversion (FWI) to locate and image micro-seismic events allows for an automatic process (free of picking) that utilizes the full wavefield. However, full waveform inversion of micro-seismic events faces considerable nonlinearity due to the unknown source locations (in space) and source functions (in time). We developed a source-function-independent full waveform inversion of micro-seismic events to invert for the source image, the source function and the velocity model. It is based on convolving reference traces with the observed and modeled data to mitigate the effect of an unknown source ignition time. The adjoint-state method is used to derive the gradients for the source image, source function and velocity updates. The extended image for the source wavelet along the Z axis is extracted to check the accuracy of the inverted source image and velocity model. Angle gathers are also calculated to assess the quality of the long-wavelength component of the velocity model. By inverting for the source image, source wavelet and velocity model simultaneously, the proposed method produces good estimates of the source location, ignition time and background velocity for the synthetic examples used here, such as those based on the Marmousi model and the SEG/EAGE overthrust model.
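    The source-independence trick can be checked on synthetic traces: if observed and modeled data share the same Earth response and differ only in the source function, then convolving each trace with the other data set's reference trace cancels the source term exactly, by commutativity of convolution. The sketch below uses random stand-ins for the Green's functions and wavelets; it demonstrates only the identity underlying the misfit, not the full inversion.

    ```python
    import numpy as np

    rng = np.random.default_rng(4)

    def conv(a, b):
        return np.convolve(a, b)

    # Earth responses (Green's functions) at a reference and another receiver
    g_ref = rng.standard_normal(64)
    g_i = rng.standard_normal(64)

    w_true = rng.standard_normal(16)   # unknown true source function
    w_guess = rng.standard_normal(16)  # wrong source used in modelling

    obs_ref, obs_i = conv(w_true, g_ref), conv(w_true, g_i)
    syn_ref, syn_i = conv(w_guess, g_ref), conv(w_guess, g_i)

    # obs_i * syn_ref = (w_true * g_i) * (w_guess * g_ref)
    # syn_i * obs_ref = (w_guess * g_i) * (w_true * g_ref)  -> identical
    lhs = conv(obs_i, syn_ref)
    rhs = conv(syn_i, obs_ref)
    residual = np.abs(lhs - rhs).max()
    ```

    Because the residual vanishes whenever the Green's functions (i.e. the velocity model and source image) are correct, the convolved misfit is insensitive to the unknown source function and ignition time.
    
    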

  13. Study of iron deposit using seismic refraction and resistivity in Carajás Mineral Province, Brazil

    NASA Astrophysics Data System (ADS)

    Nogueira, Pedro Vencovsky; Rocha, Marcelo Peres; Borges, Welitom Rodrigues; Silva, Adalene Moreira; Assis, Luciano Mozer de

    2016-10-01

    This work comprises the acquisition, processing and interpretation of 2D shallow seismic refraction (P-wave) and resistivity profiles located in the N4WS iron ore deposit, Carajás Mineral Province (CMP), northern Brazil. The geophysical methods were used to identify the boundaries of the iron ore deposit; another objective was to evaluate the potential of these geophysical methods in that geological context. In order to validate the results, the geophysical lines were located to match a geological borehole line. For the seismic refraction, we used 120 channels, spaced 10 m apart, along a line of 1190 m with seven shot points. The resistivity acquisition used electrical resistivity imaging with a pole-pole array, in order to reach greater depths; the resistivity line had a length of 1430 m, with 10 m spacing between electrodes. The seismic results produced a model with two distinct layers. Based on the velocity values, the first layer was interpreted as altered rocks and the second layer as more preserved rocks; it was not possible to discriminate different lithologies within each layer using the seismic method alone. From the resistivity results, a zone of higher resistivity (> 3937 Ω·m) was interpreted as iron ore, and a region of intermediate resistivity (from 816 to 2330 Ω·m) as altered rocks; these two regions correspond to the first seismic layer. Within the second seismic layer, an area with intermediate resistivity values (from 483 to 2330 Ω·m) was interpreted as mafic rocks, and the area with lower resistivity (< 483 Ω·m) as jaspilite. Our results were compared with geological boreholes and show reasonable correlation, suggesting that the geophysical anomalies correspond to the main variations in composition and physical properties of the rocks.

  14. Data-Intensive Discovery Methods for Seismic Monitoring

    NASA Astrophysics Data System (ADS)

    Richards, P. G.; Schaff, D. P.; Young, C. J.; Slinkard, M.; Heck, S.; Ammon, C. J.; Cleveland, M.

    2011-12-01

    For most regions of our planet, earthquakes and explosions are still located one at a time using seismic phase picks, a procedure that has not fundamentally changed for more than a century. But methods that recognize and use seismogram archives as a major resource, enabling comparisons of waveforms recorded from neighboring events and relocating numerous events relative to each other, have been successfully demonstrated, especially for California, where they have enabled new insights into earthquake physics and Earth structure, and have raised seismic monitoring to new levels. We are beginning a series of projects to evaluate such data-intensive methods on ever-larger scales, using cross correlation (CC) to analyze seismicity in three different ways: (1) to find repeating earthquakes (whose waveforms are very similar, so the CC value measured over long windows must be high); (2) to measure time differences and amplitude differences to enable precise relocations and relative amplitude studies of seismic events with respect to their neighboring events (here CC can be much lower, yet still give a better estimate of arrival-time differences and relative amplitudes than differencing phase picks and magnitudes); and, perhaps most importantly, (3) as a detector, to find new events in current data streams that are similar to events already in the archive, or to add to the number of detections of an already known event. Experience documented by Schaff and Waldhauser (2005) for California and Schaff (2009) for China indicates that the great majority of events in seismically active regions generate waveforms that are sufficiently similar to those of neighboring events to allow CC methods to be used to obtain relative locations. 
    Schaff (2008, 2010) has demonstrated the capability of CC methods to achieve detections, with minimal false alarms, down to more than a magnitude unit below conventional STA/LTA detectors, though CC methods are far more computationally intensive. Elsewhere at this meeting, Cleveland, Ammon, and Van DeMark report in more detail on greatly improved event locations along oceanic fracture zones using CC methods applied to 40-80 s Rayleigh waves; and Slinkard, Carr, Heck and Young at Sandia have reported greatly improved computational approaches that reduce CPU demands from hours on a fast workstation to minutes on a GPU when a continuous data stream lasting several days is searched (using CC methods) for seismic signals similar to those of hundreds of previously documented events. From diverse results such as these, it seems appropriate to consider the future possibility of radical improvement in monitoring virtually all seismically active areas, using archives of prior events as the major resource, though we recognize that such an approach does not directly help to characterize seismic events in inactive regions, or events in active regions that are dissimilar to previously recorded events.
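    The detector use of CC described in (3) amounts to sliding a normalized cross-correlation of a cataloged template along the continuous stream and flagging windows above a threshold. The sketch below is a minimal illustration; the 0.7 threshold, template, and noise level are invented, and production systems use FFT-based correlation and per-station stacking for speed and robustness.

    ```python
    import numpy as np

    def ncc_scan(stream, template):
        """Sliding normalized cross-correlation of `template` against `stream`."""
        m = len(template)
        t = template - template.mean()
        t /= np.linalg.norm(t)
        out = np.empty(len(stream) - m + 1)
        for i in range(len(out)):
            w = stream[i:i + m] - stream[i:i + m].mean()
            nrm = np.linalg.norm(w)
            out[i] = np.dot(w, t) / nrm if nrm > 0 else 0.0
        return out

    rng = np.random.default_rng(5)
    template = np.sin(2 * np.pi * np.linspace(0, 4, 200)) * np.hanning(200)
    stream = 0.3 * rng.standard_normal(5000)
    stream[1200:1400] += template        # buried repeat of a cataloged event
    cc = ncc_scan(stream, template)
    detections = np.flatnonzero(cc > 0.7)
    ```

    Even though the repeat is partially masked by noise, its correlation peak stands far above the background distribution, which is what allows CC detectors to reach well below STA/LTA detection thresholds.
    
    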

  15. Tomographic imaging of the shallow crustal structure of the East Pacific Rise at 9 deg 30 min N

    NASA Astrophysics Data System (ADS)

    Toomey, Douglas R.; Solomon, Sean C.; Purdy, G. M.

    1994-12-01

    Compressional wave travel times from a seismic tomography experiment at 9 deg 30 min N on the East Pacific Rise are analyzed by a new tomographic method to determine the three-dimensional seismic velocity structure of the upper 2.5 km of oceanic crust within a 20 x 18 km area centered on the rise axis. The data comprise the travel times and associated uncertainties of 1459 compressional waves that have propagated above the axial magma chamber. A careful analysis of source and receiver parameters, in conjunction with an automated method of picking P wave onsets and assigning uncertainties, constrains the prior uncertainty in the data to 5 to 20 ms. The new tomographic method employs graph theory to estimate ray paths and travel times through strongly heterogeneous and densely parameterized seismic velocity models. The nonlinear inverse method uses a jumping strategy to minimize a functional that includes the penalty function, horizontal and vertical smoothing constraints, and prior model assumptions; all constraints applied to model perturbations are normalized to remove bias. We use the tomographic method to reject the null hypothesis that the axial seismic structure is two-dimensional. Three-dimensional models reveal a seismic structure that correlates well with cross- and along-axis variations in seafloor morphology, the location of the axial summit caldera, and the distribution of seafloor hydrothermal activity. The along-axis segmentation of the seismic structure above the axial magma chamber is consistent with the hypothesis that mantle-derived melt is preferentially injected midway along a locally linear segment of the rise and that the architecture of the crustal section is characterized by an en echelon series of elongate axial volcanoes approximately 10 km in length. 
    The seismic data are compatible with a 300- to 500-m-thick thermal anomaly above a midcrustal melt lens; such an interpretation suggests that hydrothermal fluids may not have penetrated this region in the last 10^3 years. Asymmetries in the seismic structure across the rise support the inferences that the thickness of seismic layer 2 and the average midcrustal temperature increase to the west of the rise axis. These anomalies may be the result of off-axis magmatism; alternatively, the asymmetric thermal anomaly may be the consequence of differences in the depth extent of hydrothermal cooling.
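    The graph-theory ray-path estimation mentioned above is, at heart, a shortest-path problem: nodes sample the velocity model, edge weights are segment traveltimes (length divided by local velocity), and Dijkstra's algorithm yields first-arrival times and ray paths. A minimal sketch on a hypothetical four-node graph with made-up traveltimes:

    ```python
    import heapq

    def shortest_traveltime(nodes, edges, src, dst):
        """Dijkstra's algorithm on a graph whose edge weights are traveltimes,
        as used by graph-theory ray tracers to approximate first arrivals."""
        dist = {n: float("inf") for n in nodes}
        dist[src] = 0.0
        pq = [(0.0, src)]
        while pq:
            d, n = heapq.heappop(pq)
            if d > dist[n]:
                continue  # stale queue entry
            for nb, w in edges.get(n, []):
                nd = d + w
                if nd < dist[nb]:
                    dist[nb] = nd
                    heapq.heappush(pq, (nd, nb))
        return dist[dst]

    # tiny graph; weights are hypothetical segment traveltimes in seconds
    edges = {
        "A": [("B", 0.4), ("C", 0.7)],
        "B": [("A", 0.4), ("D", 0.5)],
        "C": [("A", 0.7), ("D", 0.1)],
        "D": [],
    }
    t = shortest_traveltime(list(edges), edges, "A", "D")   # A->C->D beats A->B->D
    ```

    On a dense grid, the same search handles strongly heterogeneous velocity models gracefully, which is why graph methods suit the densely parameterized tomographic models used in the study.
    
    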

  16. Applying Binary Forecasting Approaches to Induced Seismicity in the Western Canada Sedimentary Basin

    NASA Astrophysics Data System (ADS)

    Kahue, R.; Shcherbakov, R.

    2016-12-01

    The Western Canada Sedimentary Basin has been chosen as a focus due to an increase in the recent observed seismicity there which is most likely linked to anthropogenic activities related to unconventional oil and gas exploration. Seismicity caused by these types of activities is called induced seismicity. The occurrence of moderate to larger induced earthquakes in areas where critical infrastructure is present can be potentially problematic. Here we use a binary forecast method to analyze past seismicity and well production data in order to quantify future areas of increased seismicity. This method splits the given region into spatial cells. The binary forecast method used here has been suggested in the past to retroactively forecast large earthquakes occurring globally in areas called alarm cells. An alarm cell, or alert zone, is a bin in which there is a higher likelihood for earthquakes to occur based on previous data. The first method utilizes the cumulative Benioff strain, based on earthquakes that had occurred in each bin above a given magnitude over a time interval called the training period. The second method utilizes the cumulative well production data within each bin. Earthquakes that occurred within an alert zone in the retrospective forecast period contribute to the hit rate, while alert zones that did not have an earthquake occur within them in the forecast period contribute to the false alarm rate. In the resulting analysis the hit rate and false alarm rate are determined after optimizing and modifying the initial parameters using the receiver operating characteristic diagram. It is found that when modifying the cell size and threshold magnitude parameters within various training periods, hit and false alarm rates are obtained for specific regions in Western Canada using both recent seismicity and cumulative well production data. Certain areas are thus shown to be more prone to potential larger earthquakes based on both datasets. 
This has implications for the potential link between oil and gas production and induced seismicity observed in the Western Canada Sedimentary Basin.
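    The hit rate and false alarm rate described above can be computed per alarm threshold to trace out the receiver operating characteristic. A minimal sketch on a hypothetical ten-cell grid (the scores and event flags below are invented for illustration):

    ```python
    import numpy as np

    def binary_forecast_rates(score, events, threshold):
        """score: per-cell precursor measure (e.g. cumulative Benioff strain
        or cumulative well production); events: per-cell flag marking whether
        a target earthquake occurred in the forecast period."""
        alarm = score >= threshold
        hit_rate = events[alarm].sum() / events.sum()
        false_alarm_rate = (~events[alarm]).sum() / (~events).sum()
        return hit_rate, false_alarm_rate

    # hypothetical 10-cell example
    score = np.array([5.0, 1.0, 7.0, 0.0, 9.0, 2.0, 8.0, 1.0, 3.0, 6.0])
    events = np.array([1, 0, 1, 0, 1, 0, 0, 0, 0, 1], dtype=bool)
    h, f = binary_forecast_rates(score, events, threshold=5.0)
    ```

    Sweeping the threshold (and re-running over different cell sizes and training periods, as in the study) gives one ROC point per parameter choice, from which the optimal alarm configuration is selected.
    
    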

  17. A mean-based filter to remove power line harmonic noise from seismic reflection data

    NASA Astrophysics Data System (ADS)

    Karslı, Hakan; Dondurur, Derman

    2018-06-01

    Power line harmonic noise, generated by power lines during seismic data acquisition in land and marine surveys, generally appears at a single frequency of 50/60 Hz (or multiples of these frequencies) and contaminates seismic data, complicating the identification of fine details. The method commonly applied during seismic data processing to remove harmonic noise is the classical notch filter (or very narrow band-stop filter); however, it also attenuates all recorded data around the notch frequencies, resulting in a complete loss of the available information corresponding to fine details in the seismic data. In this study, we introduce an application of the iterative trimmed and truncated mean (ITTM) filter algorithm to remove harmonic noise from seismic data; we name the method local ITTM (LITTM) since we apply it to the seismic data locally in the spectral domain. In this method, an optimal value is iteratively searched for, depending on a threshold value, by trimming and truncating the spectral amplitude samples within a specified spectral window. The LITTM filter therefore converges to the median, but there is no need to sort the data as in conventional median filters. Moreover, the LITTM filtering process does not require any reference signal or a precise estimate of the fundamental frequency of the harmonic noise; only the approximate frequency band of the noise within the amplitude spectrum is considered, and the only required parameter of the method is the width of this frequency band in the spectral domain. The LITTM filter is first applied to synthetic data, and we then analyze a real marine dataset to compare the quality of the output after removing the power line noise with the classical notch, median and proposed LITTM filters. 
    We observe that the power line harmonic noise is completely filtered out by the LITTM filter and, unlike the conventional notch filter, without any damage to the available frequencies around the notch frequency band. It also provides a more balanced amplitude spectrum, since it does not produce amplitude notches in the spectrum.
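    One simple reading of an iterative trimmed/truncated mean, offered here only as a sketch and not necessarily the exact ITTM algorithm of the paper, is to average, at each iteration, only the samples lying within a threshold of the current estimate; the estimate then settles near the median without ever sorting the data. The spectral amplitudes and harmonic-noise outliers below are synthetic.

    ```python
    import numpy as np

    def ittm(samples, tau, n_iter=20):
        """Simplified iterative trimmed/truncated mean: repeatedly average
        only the samples within `tau` of the current estimate."""
        est = samples.mean()
        for _ in range(n_iter):
            kept = samples[np.abs(samples - est) <= tau]   # trim/truncate
            if kept.size == 0:
                break
            new = kept.mean()
            if abs(new - est) < 1e-12:
                break  # converged
            est = new
        return est

    rng = np.random.default_rng(6)
    spectra = np.abs(rng.standard_normal(199))   # smooth background amplitudes
    spectra[::20] += 50.0                        # strong harmonic-noise outliers
    robust = ittm(spectra, tau=3.0)
    ```

    The outlier spikes pull the plain mean far above the background level, while the iterated trimmed estimate stays close to the median, which is the property the LITTM filter exploits to rebuild clean spectral amplitudes inside the noise band.
    
    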

  18. Instantaneous Frequency Attribute Comparison

    NASA Astrophysics Data System (ADS)

    Yedlin, M. J.; Margrave, G. F.; Ben Horin, Y.

    2013-12-01

    The instantaneous frequency attribute provides a different means of seismic interpretation for all types of seismic data. It first came to the fore in exploration seismology in the classic paper of Taner et al. (1979), entitled "Complex seismic trace analysis". Subsequently, a vast literature has accumulated on the subject, which was given an excellent review by Barnes (1992). In this research we compare two different methods of computing the instantaneous frequency. The first method is based on the original idea of Taner et al. (1979) and utilizes the derivative of the instantaneous phase of the analytic signal. The second method is based on the computation of the power centroid of the time-frequency spectrum, obtained using either the Gabor transform as computed by Margrave et al. (2011) or the Stockwell transform as described by Stockwell et al. (1996). We apply both methods to exploration seismic data and to the DPRK events recorded in 2006 and 2013. In applying the classical analytic-signal technique, which is known to be unstable owing to the division by the square of the envelope, we incorporate the stabilization and smoothing method proposed in the two papers of Fomel (2007). This method employs linear inverse-theory regularization coupled with the application of an appropriate data smoother. The centroid method is straightforward to apply and is based on the very complete theoretical analysis provided in elegant fashion by Cohen (1995). While the results of the two methods are very similar, noticeable differences are seen at the data edges. This is most likely due to the edge effects of the smoothing operator in the Fomel method, which is more computationally intensive when an optimal search for the regularization parameter is done. An advantage of the centroid method is the intrinsic smoothing of the data, which is inherent in the sliding-window application used in all short-time Fourier transform methods. 
The Fomel technique has a larger CPU run-time, resulting from the necessary matrix inversion.
References:
Barnes, A. E., "The calculation of instantaneous frequency and instantaneous bandwidth," Geophysics 57.11 (1992): 1520-1524.
Cohen, L., Time-Frequency Analysis: Theory and Applications, Prentice Hall (1995).
Fomel, S., "Local seismic attributes," Geophysics 72.3 (2007): A29-A33.
Fomel, S., "Shaping regularization in geophysical-estimation problems," Geophysics 72.2 (2007): R29-R36.
Margrave, G. F., M. P. Lamoureux, and D. C. Henley, "Gabor deconvolution: Estimating reflectivity by nonstationary deconvolution of seismic data," Geophysics 76.3 (2011): W15-W30.
Stockwell, R. G., L. Mansinha, and R. P. Lowe, "Localization of the complex spectrum: the S transform," IEEE Transactions on Signal Processing 44.4 (1996): 998-1001.
Taner, M. T., F. Koehler, and R. E. Sheriff, "Complex seismic trace analysis," Geophysics 44.6 (1979): 1041-1063.
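The two computations being compared can be sketched as follows, in their plain (unstabilized, unsmoothed) forms; the Fomel regularization and the Gabor/Stockwell transform specifics are omitted, and an ordinary short-time Fourier transform stands in for the time-frequency spectrum:

```python
import numpy as np
from scipy.signal import hilbert, stft

fs = 1000.0
t = np.arange(0, 1.0, 1 / fs)
x = np.sin(2 * np.pi * 50.0 * t)          # 50 Hz test tone

# Method 1: derivative of the instantaneous phase of the analytic signal
phase = np.unwrap(np.angle(hilbert(x)))
f_analytic = np.gradient(phase) * fs / (2 * np.pi)

# Method 2: power centroid of a short-time Fourier (Gabor-like) spectrum
f, tau, S = stft(x, fs=fs, nperseg=256)
power = np.abs(S) ** 2
f_centroid = (f[:, None] * power).sum(axis=0) / power.sum(axis=0)

print(f_analytic[100:-100].mean())   # ~50 Hz away from the edges
print(f_centroid[1:-1].mean())       # ~50 Hz, edge windows excluded
```

For this clean tone both estimates agree; the differences the abstract describes appear at the data edges and in noisy records, where the phase derivative needs stabilization while the centroid is smoothed intrinsically by the sliding window.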

  19. Seismometer Self-Noise and Measuring Methods

    USGS Publications Warehouse

    Ringler, Adam; Sleeman, R.; Hutt, Charles R.; Gee, Lind S.

    2014-01-01

    Seismometer self-noise is usually not considered when selecting and using seismic waveform data in scientific research as it is typically assumed that the self-noise is negligibly small compared to seismic signals. However, instrumental noise is part of the noise in any seismic record, and in particular, at frequencies below a few mHz, the instrumental noise has a frequency-dependent character and may dominate the noise. When seismic noise itself is considered as a carrier of information, as in seismic interferometry (e.g., Chaput et al. 2012), it becomes extremely important to estimate the contribution of instrumental noise to the recordings.

  20. Intelligent inversion method for pre-stack seismic big data based on MapReduce

    NASA Astrophysics Data System (ADS)

    Yan, Xuesong; Zhu, Zhixin; Wu, Qinghua

    2018-01-01

    Seismic exploration is a method of oil exploration that uses seismic information; that is, through the inversion of seismic information, useful information about the reservoir parameters can be obtained to carry out exploration effectively. Pre-stack data are characterised by a large volume of data and abundant information, and from their inversion abundant information about the reservoir parameters can be obtained. Owing to the large amount of pre-stack seismic data, existing single-machine environments are unable to meet the computational needs of such data volumes; thus, an efficient and fast method for solving the inversion problem of pre-stack seismic data is urgently needed. Optimisation of the elastic parameters using a genetic algorithm easily falls into a local optimum, which weakens the inversion results, especially for the density. Therefore, an intelligent optimisation algorithm is proposed in this paper and used for the elastic parameter inversion of pre-stack seismic data. This algorithm improves the population initialisation strategy by using the Gardner formula, as well as the genetic operators of the algorithm, and the improved algorithm obtains better inversion results in a model test with logging data. All of the elastic parameters obtained by inversion fit the logging curves of the theoretical model well, which effectively improves the inversion precision of the density. The algorithm was implemented with a MapReduce model to solve the seismic big-data inversion problem. The experimental results show that the parallel model can effectively reduce the running time of the algorithm.
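The population-initialisation idea based on the Gardner formula can be sketched as follows; the coefficients (a = 0.31, b = 0.25 for Vp in m/s and density in g/cc) are the common defaults, and the perturbation scheme is an illustrative assumption rather than the paper's exact strategy:

```python
import numpy as np

def gardner_density(vp, a=0.31, b=0.25):
    """Gardner's empirical relation rho = a * Vp**b
    (Vp in m/s, rho in g/cc; coefficients are the common defaults)."""
    return a * np.power(vp, b)

rng = np.random.default_rng(0)

def init_population(vp_profile, size=50, spread=0.05):
    """Seed a GA population: densities from Gardner's relation, perturbed
    by a few percent so the search starts near a physically plausible
    region instead of a purely random one (illustrative)."""
    base = gardner_density(np.asarray(vp_profile, dtype=float))
    return base * (1.0 + spread * rng.standard_normal((size, base.size)))

pop = init_population([2500.0, 3000.0, 3500.0])
print(pop.shape)                 # (50, 3): 50 candidates, 3 layers
print(gardner_density(3000.0))   # ~2.29 g/cc
```

Seeding the density individuals this way ties them to the velocity model from the start, which is one plausible reading of why the improved initialisation helps the density inversion in particular.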

  1. Forecasting of Energy Expenditure of Induced Seismicity with Use of Artificial Neural Network

    NASA Astrophysics Data System (ADS)

    Cichy, Tomasz; Banka, Piotr

    2017-12-01

    Coal mining in many Polish mines in the Upper Silesian Coal Basin is accompanied by high levels of induced seismicity. In mining plants, the methods of tremor monitoring are being improved, allowing more accurate localization of the occurring phenomena and determination of their seismic energy. Equally important is the development of methods for forecasting the seismic hazards that may occur while implementing mine design projects. These methods, depending on the length of time for which the forecasts are made, can be divided into long-term, medium-term, short-term and so-called alarm forecasts. Long-term forecasts are particularly useful for the design of seam exploitation. The paper presents a method of predicting changes in the energy expenditure of tremors using a properly trained artificial neural network. This method allows long-term forecasts to be made at the design stage of the mine's exploitation, enabling mining work plans to be reviewed so as to minimize the potential for tremors. The information given at the input of the neural network is indicative of the specific energy changes of the elastic deformation occurring in selected thick, resistant rock layers (tremor-prone layers). Energy changes taking place in one or more tremor-prone layers are considered. These indicators describe only the specific energy changes of the elastic deformation accumulating in the rock as a consequence of the mining operation, but do not determine the amount of energy released during the destruction of a given volume of rock. In this process, the potential energy of elastic strain transforms into other, non-measurable energy types, including the seismic energy of recorded tremors. In this way, potential energy changes affect the observed induced seismicity. The parameters used characterize increases (or declines) of specific energy, separated into those occurring before the hypothetical destruction of the rock and those occurring after it. 
Additional input information is an index characterizing tectonic faults. This parameter was not included in the authors' previous research. At the output of the artificial neural network, the values of the energy density of the mining tremors [J/m3] are obtained. An example of the predicted change in induced seismicity for a highly threatened region is presented. Relatively good agreement between the predicted and observed energy expenditure of tremors was obtained. The presented method can complement existing (analytical and geophysical) methods of forecasting seismic hazard. It can be used primarily in those areas where the seismicity level is determined by the configuration of the edges and remnants in the operating seam, as well as in adjacent seams, and to a lesser extent by the geological structure of the rock mass. The method is local, meaning that the neural network prediction can only be performed for the region whose data were used for training. The developed method cannot be used in areas where mining is only beginning; it is not possible to predict the level of induced seismicity in areas where no mining tremors have been recorded so far.

  2. Wave-propagation formulation of seismic response of multistory buildings

    USGS Publications Warehouse

    Safak, E.

    1999-01-01

    This paper presents a discrete-time wave-propagation method to calculate the seismic response of multistory buildings founded on layered soil media and subjected to vertically propagating shear waves. Buildings are modeled as an extension of the layered soil media by considering each story as another layer in the wave-propagation path. The seismic response is expressed in terms of wave travel times between the layers and wave reflection and transmission coefficients at the layer interfaces. The method accounts for the filtering effects of the concentrated foundation and floor masses. Compared with the commonly used vibration formulation, the wave-propagation formulation provides several advantages, including simplicity, improved accuracy, better representation of damping, the ability to incorporate the soil layers under the foundation, and better tools for identification and damage detection from seismic records. Examples are presented to show the versatility and superiority of the method.
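The basic quantities of such a formulation, per-layer travel times and interface reflection/transmission coefficients, can be sketched as follows; this simplified version ignores the concentrated floor masses and damping that the paper's method accounts for:

```python
def layer_quantities(thicknesses, velocities, densities):
    """One-way travel time per layer and displacement reflection /
    transmission coefficients for a vertically propagating shear wave
    incident from layer i onto layer i+1 (simplified sketch)."""
    Z = [rho * v for rho, v in zip(densities, velocities)]   # shear impedances
    travel = [h / v for h, v in zip(thicknesses, velocities)]
    refl = [(Z[i] - Z[i + 1]) / (Z[i] + Z[i + 1]) for i in range(len(Z) - 1)]
    trans = [1.0 + r for r in refl]                          # T = 1 + R
    return travel, refl, trans

# Two soil layers topped by a light "building layer" (illustrative numbers)
travel, refl, trans = layer_quantities(
    thicknesses=[30.0, 20.0, 3.0],      # m
    velocities=[400.0, 250.0, 100.0],   # m/s
    densities=[1800.0, 1700.0, 300.0],  # kg/m^3
)
print(travel)  # one-way travel time per layer, seconds
print(refl)    # sign indicates impedance decrease/increase at each interface
```

Stacking such coefficients and delays layer by layer is what lets a story of a building be treated exactly like another soil layer in the propagation path.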

  3. Permafrost Active Layer Seismic Interferometry Experiment (PALSIE).

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Abbott, Robert; Knox, Hunter Anne; James, Stephanie

    2016-01-01

    We present findings from a novel field experiment conducted at Poker Flat Research Range in Fairbanks, Alaska, that was designed to monitor changes in active layer thickness in real time. Results are derived primarily from seismic data streaming from seven Nanometrics Trillium Posthole seismometers directly buried in the upper section of the permafrost. The data were evaluated using two analysis methods: Horizontal to Vertical Spectral Ratio (HVSR) and ambient noise seismic interferometry. Results from the HVSR conclusively illustrated the method's effectiveness at determining the active layer's thickness with a single station. Investigations with the multi-station method (ambient noise seismic interferometry) are continuing at the University of Florida and have not yet conclusively determined active layer thickness changes. Further work continues with the Bureau of Land Management (BLM) to determine whether the ground-based measurements can constrain satellite imagery, which provides measurements on a much larger spatial scale.
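Single-station HVSR thickness estimates are often based on the quarter-wavelength resonance rule; whether PALSIE used exactly this relation is an assumption here, but the idea can be sketched as:

```python
import numpy as np

def active_layer_thickness(f0_hz, vs_layer):
    """Quarter-wavelength interpretation of an HVSR peak:
    resonance f0 = Vs / (4 h)  =>  h = Vs / (4 f0).
    A common single-station interpretation; its applicability to a
    given permafrost site is an assumption of this sketch."""
    return vs_layer / (4.0 * f0_hz)

# Locate the HVSR peak on a precomputed H/V curve (synthetic here)
freqs = np.linspace(1.0, 50.0, 200)
hv = np.exp(-0.5 * ((freqs - 25.0) / 3.0) ** 2) + 0.1   # toy H/V curve
f0 = freqs[np.argmax(hv)]
print(active_layer_thickness(f0, vs_layer=150.0))  # thickness in metres
```

The appeal for real-time monitoring is clear from the formula: tracking the drift of a single spectral peak per station is enough to track the thickness, with no inter-station processing.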

  4. Research on response spectrum of dam based on scenario earthquake

    NASA Astrophysics Data System (ADS)

    Zhang, Xiaoliang; Zhang, Yushan

    2017-10-01

    Taking a large hydropower station as an example, the response spectrum based on a scenario earthquake is determined. First, the potential source zone of greatest contribution to the site is determined on the basis of the results of probabilistic seismic hazard analysis (PSHA). Second, the magnitude and epicentral distance of the scenario earthquake are calculated according to the main faults and historical earthquakes of the potential seismic source zone. Finally, the response spectrum of the scenario earthquake is calculated using the Next Generation Attenuation (NGA) relations. The response spectrum obtained with the scenario-earthquake method is lower than the probability-consistent response spectrum obtained with the PSHA method. The analysis shows that the scenario-earthquake response spectrum considers both the probability level and the structural factors, combining the advantages of the deterministic and probabilistic seismic hazard analysis methods. The result is easier for practitioners to accept and provides a basis for the seismic design of hydraulic engineering works.
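The final step, evaluating a ground-motion relation for the scenario (M, R) pair, can be sketched with a toy attenuation relation; the functional form and coefficients below are illustrative stand-ins, not an actual NGA model, which carries many more terms (site, style of faulting, hanging wall, etc.):

```python
import numpy as np

def scenario_spectrum(periods, mag, rrup_km, coeffs):
    """Median response spectrum for a scenario earthquake from a toy
    relation ln(SA) = a + b*M - c*ln(R + 10); coefficients per period.
    Purely illustrative: a stand-in for a real NGA relation."""
    sa = []
    for T in periods:
        a, b, c = coeffs[T]
        sa.append(np.exp(a + b * mag - c * np.log(rrup_km + 10.0)))
    return np.array(sa)

coeffs = {0.2: (-2.0, 0.9, 1.1), 1.0: (-3.5, 1.1, 1.2)}  # hypothetical
sa = scenario_spectrum([0.2, 1.0], mag=6.5, rrup_km=20.0, coeffs=coeffs)
print(sa)  # spectral accelerations (g) at 0.2 s and 1.0 s
```

Evaluated at the deaggregated controlling magnitude and distance, such a relation yields one spectrum for one physically meaningful event, which is why it tends to sit below the envelope-like probability-consistent spectrum.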

  5. Improving Thin Bed Identification in Sarawak Basin Field using Short Time Fourier Transform Half Cepstrum (STFTHC) method

    NASA Astrophysics Data System (ADS)

    Nizarul, O.; Hermana, M.; Bashir, Y.; Ghosh, D. P.

    2016-02-01

    In delineating complex subsurface geological features, a broad band of frequencies is needed to unveil the often hidden features of a hydrocarbon basin, such as thin bedding. The ability to resolve thin geological horizons in seismic data is recognized to be of fundamental importance for hydrocarbon exploration, seismic interpretation and reserve prediction. For thin bedding, high frequency content is needed to enable tuning, which can be achieved by applying a bandwidth extension technique. This paper shows an application of the Short Time Fourier Transform Half Cepstrum (STFTHC) method, a frequency bandwidth expansion technique for non-stationary seismic signals, in increasing the temporal resolution to uncover thin beds and improve characterization of the basin. A wedge model and synthetic seismic data are used to validate the algorithm, and real data from the Sarawak basin are used to show the effectiveness of this method in enhancing resolution.

  6. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chen, Yu; Gao, Kai; Huang, Lianjie

    Accurate imaging and characterization of fracture zones is crucial for geothermal energy exploration. Aligned fractures within fracture zones behave as anisotropic media for seismic-wave propagation. The anisotropic properties in fracture zones introduce extra difficulties for seismic imaging and waveform inversion. We have recently developed a new anisotropic elastic-waveform inversion method using a modified total-variation regularization scheme and a wave-energy-based preconditioning technique. Our new inversion method uses the parameterization of elasticity constants to describe anisotropic media, and hence it can properly handle arbitrary anisotropy. We apply our new inversion method to a seismic velocity model along a 2D seismic line acquired at Eleven-Mile Canyon, located in the southern Dixie Valley in Nevada, for geothermal energy exploration. Our inversion results show that anisotropic elastic-waveform inversion has the potential to reconstruct subsurface anisotropic elastic parameters for imaging and characterization of fracture zones.

  7. 2D Time-lapse Seismic Tomography Using An Active Time Constraint (ATC) Approach

    EPA Science Inventory

    We propose a 2D seismic time-lapse inversion approach to image the evolution of seismic velocities over time and space. The forward modeling is based on solving the eikonal equation using a second-order fast marching method. The wave-paths are represented by Fresnel volumes rathe...

  8. Reversible rigid coupling apparatus and method for borehole seismic transducers

    DOEpatents

    Owen, Thomas E.; Parra, Jorge O.

    1992-01-01

    An apparatus and method for high-resolution reverse vertical seismic profile (VSP) measurements are shown. By encapsulating the seismic detector and heaters in a meltable substance (such as wax), the seismic detector can be removably secured in a borehole in a manner capable of measuring high-resolution signals in the 100 to 1000 hertz range and higher. The meltable substance is selected to match the overall density of the detector package with the underground formation, yet still have a relatively low melting point and be rigid enough to transmit vibrations to accelerometers in the seismic detector. To minimize voids in the meltable substance upon solidification, the meltable substance is selected for minimum shrinkage while retaining the other desirable characteristics. Heaters are arranged in the meltable substance in such a manner as to allow the lowermost portion of the meltable substance to cool and solidify first. Solidification continues upwards from bottom to top until the top of the meltable substance is solidified and the seismic detector is ready for use. To remove the detector package, the heaters melt the meltable substance and the package is pulled from the borehole.

  9. Numerical modeling of time-lapse monitoring of CO2 sequestration in a layered basalt reservoir

    USGS Publications Warehouse

    Khatiwada, M.; Van Wijk, K.; Clement, W.P.; Haney, M.

    2008-01-01

    As part of preparations in plans by The Big Sky Carbon Sequestration Partnership (BSCSP) to inject CO2 in layered basalt, we numerically investigate seismic methods as a noninvasive monitoring technique. Basalt seems to have geochemical advantages as a reservoir for CO2 storage (CO2 mineralizes quite rapidly while exposed to basalt), but poses a considerable challenge in terms of seismic monitoring: strong scattering from the layering of the basalt complicates surface seismic imaging. We perform numerical tests using the Spectral Element Method (SEM) to identify possibilities and limitations of seismic monitoring of CO2 sequestration in a basalt reservoir. While surface seismic is unlikely to detect small physical changes in the reservoir due to the injection of CO2, the results from Vertical Seismic Profiling (VSP) simulations are encouraging. As a perturbation, we make a 5% change in wave velocity, which produces significant changes in VSP images of pre-injection and post-injection conditions. Finally, we perform an analysis using Coda Wave Interferometry (CWI) to quantify these changes in the reservoir properties due to CO2 injection.

  10. Sensitivity Kernels of Seismic Traveltimes and Amplitudes for Quality Factor and Boundary Topography

    NASA Astrophysics Data System (ADS)

    Hsieh, M.; Zhao, L.; Ma, K.

    2010-12-01

    The finite-frequency approach enables seismic tomography to fully utilize the spatial and temporal distributions of the seismic wavefield to improve resolution. In achieving this goal, one of the most important tasks is to compute efficiently and accurately the (Fréchet) sensitivity kernels of finite-frequency seismic observables, such as traveltime and amplitude, with respect to perturbations of the model parameters. In the scattering-integral approach, the Fréchet kernels are expressed in terms of the strain Green tensors (SGTs), and a pre-established SGT database is necessary to achieve practical efficiency for a three-dimensional reference model in which the SGTs must be calculated numerically. Methods for computing Fréchet kernels for seismic velocities have long been established. In this study, we develop algorithms based on the finite-difference method for calculating Fréchet kernels for the quality factor Qμ and seismic boundary topography. Kernels for the quality factor can be obtained in a way similar to those for seismic velocities with the help of the Hilbert transform. The effects of seismic velocities and the quality factor on either traveltime or amplitude are coupled. Kernels for boundary topography involve the spatial gradient of the SGTs, and they also exhibit interesting finite-frequency characteristics. Examples of quality factor and boundary topography kernels are shown for a realistic model of the Taiwan region with three-dimensional velocity variation as well as surface and Moho discontinuity topography.

  11. Gabor Deconvolution as Preliminary Method to Reduce Pitfall in Deeper Target Seismic Data

    NASA Astrophysics Data System (ADS)

    Oktariena, M.; Triyoso, W.

    2018-03-01

    The anelastic attenuation process during seismic wave propagation is the trigger of the non-stationary character of seismic data. Absorption and scattering of energy cause seismic energy loss as depth increases. A series of thin reservoir layers found in the study area is located within the Talang Akar Fm. level, showing an indication of an interpretation pitfall due to the attenuation effect that commonly occurs in deeper-level seismic data. The attenuation effect greatly influences the seismic images of deeper target levels, creating pitfalls in several respects. Seismic amplitude at a deeper target level often cannot represent the real subsurface character, owing to low amplitude values or chaotic events near the Basement. Frequency-wise, the decay can be seen as diminishing frequency content in the deeper target. Meanwhile, seismic amplitude is a simple tool for pointing out a Direct Hydrocarbon Indicator (DHI) in a preliminary geophysical study, before further advanced interpretation methods are applied. A quick look at the post-stack seismic data shows the reservoir associated with a bright-spot DHI, while another, bigger bright-spot body is detected in the north-east area near the field edge. A horizon slice confirms the possibility that the other bright-spot zone has a smaller delineation; an interpretation pitfall that commonly occurs at deeper levels of seismic data. We evaluate this pitfall by applying Gabor deconvolution to address the attenuation problem. Gabor deconvolution forms a partition of unity to factorize the trace into smaller convolution windows that can be processed as stationary packets. It estimates both the magnitude of the source signature and its attenuation function. The enhanced seismic data show better imaging in the pitfall area that was previously detected as a vast bright-spot zone. 
When the enhanced seismic data are used for further advanced reprocessing, the seismic impedance and Vp/Vs ratio slices show a better reservoir delineation, in which the pitfall area is reduced and partly merges into the background lithology. Gabor deconvolution removes the attenuation by performing spectral division in the Gabor domain, which in turn also reduces interpretation pitfalls in deeper-target seismic data.
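The Gabor-domain spectral division can be sketched in a strongly simplified form; real Gabor deconvolution factorizes the time-frequency spectrum into wavelet and attenuation components via hyperbolic smoothing, which is replaced here by a crude 2-D smoother and a stability constant, with an ordinary STFT standing in for the Gabor transform:

```python
import numpy as np
from scipy.signal import stft, istft
from scipy.ndimage import uniform_filter

def gabor_decon(trace, fs, nperseg=64, stab=0.01):
    """Simplified Gabor-domain deconvolution: divide each time-frequency
    sample by a smoothed magnitude estimate (a crude stand-in for the
    wavelet/attenuation factorization), with a stability floor so weak
    samples are not blown up."""
    f, t, S = stft(trace, fs=fs, nperseg=nperseg)
    smooth = uniform_filter(np.abs(S), size=5)   # crude 2-D smoother
    floor = stab * smooth.max()                  # water-level stabilization
    _, out = istft(S / (smooth + floor), fs=fs, nperseg=nperseg)
    return out

fs = 500.0
t = np.arange(0, 2.0, 1 / fs)
rng = np.random.default_rng(1)
trace = np.exp(-2.0 * t) * rng.standard_normal(t.size)  # decaying, "attenuated"
out = gabor_decon(trace, fs)
```

Because each time-frequency cell is normalized by its local smoothed magnitude, late (attenuated) samples are boosted relative to early ones, which is the amplitude-balancing behaviour the abstract relies on below the gas cloud.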

  12. Multivariate Formation Pressure Prediction with Seismic-derived Petrophysical Properties from Prestack AVO inversion and Poststack Seismic Motion Inversion

    NASA Astrophysics Data System (ADS)

    Yu, H.; Gu, H.

    2017-12-01

    A novel multivariate seismic formation pressure prediction methodology is presented, which incorporates high-resolution seismic velocity data from prestack AVO inversion and petrophysical data (porosity and shale volume) derived from poststack seismic motion inversion. In contrast to traditional seismic formation pressure prediction methods, the proposed methodology is based on a multivariate pressure prediction model and uses a trace-by-trace multivariate regression analysis on seismic-derived petrophysical properties to calibrate the model parameters, in order to make accurate predictions with higher resolution in both the vertical and lateral directions. With the prestack time migration velocity as the initial velocity model, AVO inversion was first applied to the prestack dataset to obtain high-resolution, higher-frequency seismic velocities to be used as the velocity input for seismic pressure prediction, and a density dataset from which to calculate an accurate overburden pressure (OBP). Seismic Motion Inversion (SMI) is an inversion technique based on Markov chain Monte Carlo simulation. Both the structural variability and the similarity of seismic waveforms are used to incorporate well log data to characterize the variability of the property to be obtained. In this research, porosity and shale volume are first interpreted on well logs and then combined with poststack seismic data using SMI to build porosity and shale volume datasets for seismic pressure prediction. A multivariate effective stress model is used to convert the velocity, porosity and shale volume datasets to effective stress. After a thorough study of the regional stratigraphic and sedimentary characteristics, a regional normally compacted interval model is built, and the coefficients in the multivariate prediction model are then determined in a trace-by-trace multivariate regression analysis on the petrophysical data. 
The coefficients are used to convert the velocity, porosity and shale volume datasets to effective stress and then to calculate formation pressure with the OBP. Application of the proposed methodology to a research area in the East China Sea has shown that the method can bridge the gap between seismic and well log pressure prediction and gives predicted pressure values close to pressure measurements from well testing.
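The calibration step can be sketched as an ordinary least-squares fit of an effective-stress model to seismic-derived properties; the linear form used below is an illustrative assumption (the paper's multivariate model may differ), and all numbers are synthetic:

```python
import numpy as np

# Assumed linear form: sigma_eff = a0 + a1*V + a2*phi + a3*Vsh
# (illustrative stand-in for the paper's multivariate effective-stress model)
rng = np.random.default_rng(2)
n = 200
V = rng.uniform(2000.0, 4000.0, n)   # velocity, m/s
phi = rng.uniform(0.05, 0.35, n)     # porosity, fraction
vsh = rng.uniform(0.0, 0.6, n)       # shale volume, fraction
true = np.array([5.0, 0.004, -20.0, -8.0])   # synthetic "calibrated" coefficients
sigma = true[0] + true[1] * V + true[2] * phi + true[3] * vsh \
        + 0.1 * rng.standard_normal(n)       # effective stress, MPa, with noise

# Trace-by-trace regression: recover the coefficients from the samples
A = np.column_stack([np.ones(n), V, phi, vsh])
coef, *_ = np.linalg.lstsq(A, sigma, rcond=None)
print(coef)   # close to [5.0, 0.004, -20.0, -8.0]

# Pore pressure then follows from P = OBP - sigma_eff
obp = 55.0                         # overburden pressure, MPa (illustrative)
pore_pressure = obp - A @ coef
print(pore_pressure[:3])
```

Running such a fit trace by trace is what gives the method its lateral resolution: each trace gets its own locally calibrated coefficients rather than one basin-wide trend.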

  13. Full Waveform Modelling for Subsurface Characterization with Converted-Wave Seismic Reflection

    NASA Astrophysics Data System (ADS)

    Triyoso, Wahyu; Oktariena, Madaniya; Sinaga, Edycakra; Syaifuddin, Firman

    2017-04-01

    While a large number of reservoirs have been explored using P-wave seismic data, P-wave seismic surveys cease to provide adequate results in seismically and geologically challenging areas, such as gas clouds, shallow drilling hazards, strong multiples, highly fractured zones and anisotropy. Most of these reservoir problems can be addressed using a combination of P and PS seismic data. A multicomponent seismic survey records both P-waves and S-waves, unlike a conventional survey that records only the compressional P-wave. Under certain conditions, a conventional energy source can be used to record P and PS data, exploiting the fact that compressional wave energy partly converts into shear waves at the reflector. The shear component can be recorded from the downgoing P-wave and upcoming S-wave by placing a horizontal-component geophone on the ocean floor. A synthetic model is created based on real data to analyze the effect of a gas cloud on PP and PS wave reflections, a problem with characteristics similar to sub-volcanic imaging. The challenge in multicomponent seismic is the different travel times of the P-wave and S-wave; therefore the converted-wave seismic data should be processed with a different approach. This research provides a method to determine an optimum conversion point, known as the Common Conversion Point (CCP), that can handle the asymmetrical conversion point of PS data. The value of γ (Vp/Vs) is essential to estimate the right CCP to be used in converted-wave seismic processing. This research also continues to an advanced processing method for converted-wave seismic data by applying joint inversion to the PP and PS data. Joint inversion is a simultaneous model-based inversion that estimates the P- and S-wave impedances consistent with the PP and PS amplitude data. The result reveals a more complex structure mirrored in the PS data below the gas cloud area. 
Through the estimated γ section resulting from the joint inversion, we obtain improved imaging below the gas cloud area, owing to the converted-wave seismic data acting as an additional constraint.
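The role of γ = Vp/Vs in conversion-point binning can be illustrated with the asymptotic conversion point (ACP), the deep-reflector limit of the depth-dependent CCP:

```python
def asymptotic_conversion_point(offset, gamma):
    """Horizontal distance from the source to the asymptotic conversion
    point for a P-down / S-up ray: x_acp = offset * gamma / (1 + gamma),
    where gamma = Vp/Vs. The true common conversion point is
    depth-dependent; the ACP is its deep-reflector limit."""
    return offset * gamma / (1.0 + gamma)

# With gamma = 2 the conversion point sits two thirds of the way from the
# source to the receiver, i.e. well past the P-wave midpoint.
print(asymptotic_conversion_point(300.0, 2.0))  # 200.0 m from the source
```

This asymmetry, with the conversion point pulled toward the receiver, is exactly why PS data cannot be binned at the source-receiver midpoint and why an accurate γ estimate is essential.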

  14. Environmental protection problems in the vicinity of the Zelazny most flotation wastes depository in Poland.

    PubMed

    Lasocki, Stanislaw; Antoniuk, Janusz; Moscicki, Jerzy

    2003-08-01

    The Zelazny Most depository of wastes from copper-ore processing, located in southwest Poland, is the largest mineral wastes repository in Europe. Moreover, it is located in a seismically active area. The seismicity is induced and is connected with mining works in the nearby underground copper mines. Any release of the contents of the repository to the environment could have devastating and even catastrophic consequences. For this reason, geophysical methods are used for continuous monitoring of the state of the repository containment dams. The article presents examples of the application of geoelectric methods for detecting sites of leakage of contaminated water, and a sketch of the seismic hazard analysis that was used to predict future seismic vibrations of the repository dams.

  15. Viability of using seismic data to predict hydrogeological parameters

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Mela, K.

    1997-10-01

    Design of modern contaminant mitigation and fluid extraction projects makes use of solutions from stochastic hydrogeologic models. These models rely heavily on the hydraulic parameters of hydraulic conductivity and the correlation length of hydraulic conductivity. Reliable values of these parameters must be acquired to successfully predict the flow of fluids through the aquifer of interest. An inexpensive method of acquiring these parameters by means of seismic reflection surveying would be beneficial. Relationships between seismic velocity and porosity, together with empirical observations relating porosity to permeability, may lead to a method of extracting the correlation length of hydraulic conductivity from shallow high-resolution seismic data, making the use of inexpensive high-density data sets commonplace for these studies.

  16. A seismic survey of the Manson disturbed area

    NASA Technical Reports Server (NTRS)

    Sendlein, L. V. A.; Smith, T. A.

    1971-01-01

    The region in north-central Iowa referred to as the Manson disturbed area was investigated with the seismic refraction method, and the bedrock configuration was mapped. The area is approximately 30 km in diameter and is not detectable from the surface topography; however, water wells that penetrate the bedrock indicate that the bedrock is composed of disturbed Cretaceous sediments, with a central region approximately 6 km in diameter composed of Precambrian crystalline rock. Seismic velocity differences between the overlying glacial till and the Cretaceous sediments were so small that a statistical program was developed to analyze the data. The program utilizes existing two-segment regression analyses and extends the method to fit three or more regression lines to seismic data.
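The two-segment traveltime regression at the heart of such a program can be sketched as a breakpoint scan; the statistical details of the authors' program (and its extension to three or more segments) are not reproduced here:

```python
import numpy as np

def two_segment_fit(x, t):
    """Scan all break positions, fit a line to each side, and keep the
    split with the smallest total squared residual. Returns (slope,
    intercept) for the two branches (direct and refracted arrivals)."""
    best = None
    for k in range(2, len(x) - 2):
        p1 = np.polyfit(x[:k], t[:k], 1)
        p2 = np.polyfit(x[k:], t[k:], 1)
        sse = (np.square(np.polyval(p1, x[:k]) - t[:k]).sum()
               + np.square(np.polyval(p2, x[k:]) - t[k:]).sum())
        if best is None or sse < best[0]:
            best = (sse, p1, p2)
    return best[1], best[2]

# Synthetic two-layer traveltimes: v1 = 1500 m/s, v2 = 3000 m/s, h = 30 m
v1, v2, h = 1500.0, 3000.0, 30.0
ti = 2 * h * np.sqrt(v2**2 - v1**2) / (v1 * v2)   # intercept time
x = np.linspace(10, 400, 40)
t = np.minimum(x / v1, ti + x / v2)               # first arrivals

p_direct, p_refr = two_segment_fit(x, t)
v1_est, v2_est = 1 / p_direct[0], 1 / p_refr[0]
h_est = p_refr[1] * v1_est * v2_est / (2 * np.sqrt(v2_est**2 - v1_est**2))
print(v1_est, v2_est, h_est)   # ~1500, ~3000, ~30
```

Slopes give the layer velocities, and the refracted branch's intercept time gives the depth via the intercept-time method; when velocity contrasts are small, as at Manson, an objective residual-based split like this beats picking the crossover by eye.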

  17. Use of NDT equipment for construction quality control of hot mix asphalt pavements

    DOT National Transportation Integrated Search

    2006-08-01

    The focus of the study has been to evaluate the utility of seismic methods in the quality management of the hot mix asphalt layers. Procedures are presented to measure the target field moduli of hot mix asphalt (HMA) with laboratory seismic methods, ...

  18. Time-Lapse Acoustic Impedance Inversion in CO2 Sequestration Study (Weyburn Field, Canada)

    NASA Astrophysics Data System (ADS)

    Wang, Y.; Morozov, I. B.

    2016-12-01

    Acoustic-impedance (AI) pseudo-logs are useful for characterising subtle variations of fluid content during seismic monitoring of reservoirs undergoing enhanced oil recovery and/or geologic CO2 sequestration. However, highly accurate AI images are required for time-lapse analysis, which may be difficult to achieve with conventional inversion approaches. In this study, two enhancements of time-lapse AI analysis are proposed. First, a well-known uncertainty of AI inversion is caused by the lack of low-frequency signal in reflection seismic data. To resolve this difficulty, we utilize an integrated AI inversion approach combining seismic data, acoustic well logs and seismic-processing velocities. The use of well logs helps to stabilize the recursive AI inverse, and seismic-processing velocities are used to complement the low-frequency information in seismic records. To derive the low-frequency AI from seismic-processing velocity data, an empirical relation is determined using the available acoustic logs. This method is simple and does not require the subjective choices of parameters and regularization schemes found in more sophisticated joint inversion methods. The second improvement to accurate time-lapse AI imaging consists of time-variant calibration of the reflectivity. Calibration corrections consist of time shifts, amplitude corrections, spectral shaping and phase rotations. Following the calibration, average and differential reflection amplitudes are calculated, from which the average and differential AI are obtained. The approaches are applied to a time-lapse 3-D 3-C dataset from the Weyburn CO2 sequestration project in southern Saskatchewan, Canada. High-quality time-lapse AI volumes are obtained. Comparisons with traditional recursive and colored AI inversions (obtained without using seismic-processing velocities) show that the new method gives a better representation of spatial AI variations. 
Although only data from the early stages of seismic monitoring are available, the time-lapse AI variations mapped within and near the reservoir zone suggest correlations with CO2 injection. By extending this procedure to elastic impedances, additional constraints on the variations of physical properties within the reservoir can be obtained.

  19. Time-lapse Seismic Refraction Monitoring of an Active Landslide in Lias Group Mudrocks, North Yorkshire, UK

    NASA Astrophysics Data System (ADS)

    Uhlemann, S.; Whiteley, J.; Chambers, J. E.; Inauen, C.; Swift, R. T.

    2017-12-01

    Geophysical monitoring of the internal moisture content and processes of landslides is an increasingly common approach to the characterisation and assessment of the hydrogeological condition of rainfall-triggered landslides. Geoelectrical monitoring methods are sensitive to changes in the subsurface moisture conditions that cause the failure of unstable slopes, typically through the increase of pore water pressures and softening of materials within the landslide system. Seismic methods have not been applied to landslide monitoring as extensively as geoelectrical approaches, but they can determine elastic properties of landslide materials that characterise and identify changes in the geomechanical condition of landslide systems that also lead to slope failure. We present the results of a seismic refraction monitoring campaign undertaken at the Hollin Hill Landslide Observatory in North Yorkshire, UK. This campaign involved the repeat acquisition of high-resolution surface P- and S-wave seismic refraction data. The monitoring profile traverses a 142 m long section from the crest to the toe of an active landslide comprising mudstone and sandstone. Data were acquired at six to nine week intervals between October 2016 and October 2017. This repeat acquisition approach allowed for the imaging of seismically determined property changes of the landslide throughout the annual climatic cycle. Initial results showed that changes in the moisture dynamics of the landslide are reflected by changes in the seismic character of the inverted tomograms. Changes in the seismic properties are linked to the changes in the annual climatic cycle, particularly in relation to effective rainfall.
The results indicate that the incorporation of seismic monitoring data into ongoing geoelectrical monitoring programmes can provide complementary geomechanical data to enhance our understanding of the internal condition of landslide systems. Future development of this integrated approach will allow for the imaging and monitoring of these systems at unprecedented spatial and temporal scales.

  20. Calibration of Seismic Sources during a Test Cruise with the new RV SONNE

    NASA Astrophysics Data System (ADS)

    Engels, M.; Schnabel, M.; Damm, V.

    2015-12-01

    During autumn 2014, several test cruises of the brand-new German research vessel SONNE were carried out before the first official scientific cruise started in December. In September 2014, BGR conducted a seismic test cruise in the British North Sea. RV SONNE is a multipurpose research vessel and was also designed for the mobile BGR 3D seismic equipment, which was tested successfully during the cruise. We spent two days on the calibration of the following BGR seismic sources: a G-gun array (50 l @ 150 bar), a G-gun array (50 l @ 207 bar), and a single GI-gun (3.4 l @ 150 bar). For this experiment, two hydrophones (TC4042 from Reson Teledyne) sampling up to 48 kHz were fixed below a drifting buoy at 20 m and 60 m water depth; the sea bottom was at 80 m depth. The vessel with the seismic sources sailed several profiles, up to 7 km long, around the buoy in order to cover many different azimuths and distances. We aimed to measure sound pressure level (SPL) and sound exposure level (SEL) under the conditions of the shallow North Sea. Total reflections and refracted waves dominate the recorded wave field, enhance the noise level and partly screen the direct wave, in contrast to 'true' deep-water calibration based solely on the direct wave. Presented are SPL and RMS power results in the time domain, the decay with distance along profiles, and the somewhat complicated 2D sound radiation pattern modulated by topography. The shading effect of the vessel's hull is significant. In the frequency domain we consider 1/3-octave levels and estimate the amount of energy in frequency ranges not used for reflection seismic processing. Results are compared among the three sources listed above. We also compare the measured SPL decay with distance during this experiment with deep-water modeling of seismic sources (Gundalf software) and with published results from calibrations of other marine seismic sources under different conditions, e.g. Breitzke et al. (2008, 2010) with RV Polarstern, Tolstoy et al. (2004) with RV Ewing, Tolstoy et al. (2009) with RV Langseth, and Crone et al. (2014) with RV Langseth.

  1. Anomalies of rupture velocity in deep earthquakes

    NASA Astrophysics Data System (ADS)

    Suzuki, M.; Yagi, Y.

    2010-12-01

    Explaining deep seismicity is a long-standing challenge in earth science. Deeper than 300 km, the occurrence rate of earthquakes with depth remains at a low level until ~530 km depth, then rises until ~600 km, and finally terminates near 700 km. Given the difficulty of estimating fracture properties and observing the stress field in the mantle transition zone (410-660 km), the seismic source processes of deep earthquakes are the most important information for understanding the distribution of deep seismicity. However, in compilations of seismic source models of deep earthquakes, the source parameters for individual events are quite varied [Frohlich, 2006]. Rupture velocities for deep earthquakes estimated using seismic waveforms range from 0.3 to 0.9 Vs, where Vs is the shear wave velocity, a considerably wider range than for shallow earthquakes. The uncertainty of seismic source models prevents us from determining the main characteristics of the rupture process and understanding the physical mechanisms of deep earthquakes. Recently, the back-projection method has been used to derive a detailed and stable seismic source image from dense seismic network observations [e.g., Ishii et al., 2005; Walker et al., 2005]. Using this method, we can obtain an image of the seismic source process from the observed data without a priori constraints or discarded parameters. We applied the back-projection method to teleseismic P-waveforms of 24 large, deep earthquakes (moment magnitude Mw ≥ 7.0, depth ≥ 300 km) recorded since 1994 by the Data Management Center of the Incorporated Research Institutions for Seismology (IRIS-DMC) and reported in the U.S. Geological Survey (USGS) catalog, and constructed seismic source models of deep earthquakes. By imaging the seismic rupture process for this set of recent deep earthquakes, we found that the rupture velocities are less than about 0.6 Vs except in the depth range of 530 to 600 km. This is consistent with the depth variation of deep seismicity: it peaks between about 530 and 600 km, where the fast-rupture earthquakes (greater than 0.7 Vs) are observed. Similarly, aftershock productivity is particularly low from 300 to 550 km depth and increases markedly at depths greater than 550 km [e.g., Persh and Houston, 2004]. We propose that large fracture surface energy (Gc) values for deep earthquakes generally prevent the acceleration of dynamic rupture propagation and the generation of earthquakes between 300 and 700 km depth, whereas small Gc values in the exceptional depth range promote dynamic rupture propagation and explain the seismicity peak near 600 km.
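    The back-projection idea, shifting each station's waveform by the predicted travel time to a candidate grid point and stacking, can be sketched as follows; the grid, travel times and waveforms here are all hypothetical inputs, not the study's data:

```python
import numpy as np

def back_project(waveforms, dt, travel_times):
    """Stack station waveforms aligned on the predicted travel time to
    each candidate grid point; the stack energy images the source.

    waveforms    : (n_sta, n_samp) array of P waveforms
    dt           : sample interval in seconds
    travel_times : (n_grid, n_sta) predicted travel times in seconds
    """
    n_grid, n_sta = travel_times.shape
    n_samp = waveforms.shape[1]
    energy = np.zeros(n_grid)
    for g in range(n_grid):
        stack = np.zeros(n_samp)
        for s in range(n_sta):
            shift = int(round(travel_times[g, s] / dt))
            if shift >= n_samp:
                continue  # predicted arrival beyond the record
            stack[:n_samp - shift] += waveforms[s, shift:]
        energy[g] = np.sum(stack ** 2)
    return energy
```

    With impulses placed at the travel times predicted for one grid point, the stack energy peaks at that grid point, which is the basic detection principle.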

  2. Very-long-period seismic signals - filling the gap between deformation and seismicity

    NASA Astrophysics Data System (ADS)

    Neuberg, Jurgen; Smith, Paddy

    2013-04-01

    Good broadband seismic sensors are capable of recording seismic transients with dominant wavelengths of several tens or even hundreds of seconds. This allows us to generate a multi-component record of seismic volcanic events that lie between the conventional high- to low-frequency seismic spectrum and deformation signals. With a much higher temporal resolution and accuracy than, e.g., GPS records, these signals fill the gap between seismicity and deformation studies. In this contribution we review the non-trivial processing steps necessary to retrieve ground deformation from the original velocity seismogram and explore what role the resulting displacement signals play in the analysis of volcanic events. We use examples from Soufriere Hills volcano in Montserrat, West Indies, to discuss the benefits and shortcomings of such methods regarding new insights into volcanic processes.

  3. Multiple field-based methods to assess the potential impacts of seismic surveys on scallops.

    PubMed

    Przeslawski, Rachel; Huang, Zhi; Anderson, Jade; Carroll, Andrew G; Edmunds, Matthew; Hurt, Lynton; Williams, Stefan

    2018-04-01

    Marine seismic surveys are an important tool to map geology beneath the seafloor and manage petroleum resources, but they are also a source of underwater noise pollution. A mass mortality of scallops in the Bass Strait, Australia occurred a few months after a marine seismic survey in 2010, and fishing groups were concerned about the potential relationship between the two events. The current study used three field-based methods to investigate the potential impact of marine seismic surveys on scallops in the region: 1) dredging and 2) deployment of Autonomous Underwater Vehicles (AUVs) were undertaken to examine the potential response of two species of scallops (Pecten fumatus, Mimachlamys asperrima) before, two months after, and ten months after a 2015 marine seismic survey; and 3) MODIS satellite data revealed patterns of sea surface temperatures from 2006-2016. Results from the dredging and AUV components show no evidence of scallop mortality attributable to the seismic survey, although sub-lethal effects cannot be excluded. The remote sensing revealed a pronounced thermal spike in the eastern Bass Strait between February and May 2010, overlapping the scallop beds that suffered extensive mortality and coinciding almost exactly with the dates of operation for the 2010 seismic survey. The acquisition of in situ data coupled with consideration of commercial seismic arrays meant that results were ecologically realistic, while the paired field-based components (dredging, AUV imagery) provided a failsafe against challenges associated with working wholly in the field. This study expands our knowledge of the potential environmental impacts of marine seismic surveys and will inform future applications for marine seismic surveys, as well as the assessment of such applications by regulatory authorities. Copyright © 2017 The Authors. Published by Elsevier Ltd. All rights reserved.

  4. Field test investigation of high sensitivity fiber optic seismic geophone

    NASA Astrophysics Data System (ADS)

    Wang, Meng; Min, Li; Zhang, Xiaolei; Zhang, Faxiang; Sun, Zhihui; Li, Shujuan; Wang, Chang; Zhao, Zhong; Hao, Guanghu

    2017-10-01

    Seismic reflection, which measures artificial seismic waves, is the most effective and most widely used method in geophysical prospecting, and it can be used for the exploration of oil, gas and coal. When a seismic wave travelling through the Earth encounters an interface between two materials with different acoustic impedances, some of the wave energy will reflect off the interface and some will refract through it. At its most basic, the seismic reflection technique consists of generating seismic waves and measuring the time taken for the waves to travel from the source, reflect off an interface and be detected by an array of geophones at the surface. Compared to traditional geophones such as electric, magnetic, mechanical and gas geophones, optical fiber geophones have many advantages; in particular, they can achieve sensing and signal transmission simultaneously. With the development of fiber grating sensor technology, the fiber Bragg grating (FBG) is being applied in seismic exploration and is drawing increasing attention for its anti-electromagnetic interference, high sensitivity and insensitivity to meteorological conditions. In this paper, we designed a high-sensitivity geophone based on the theory of FBG sensing and tested its sensitivity. The frequency response range is from 10 Hz to 100 Hz and the acceleration sensitivity of the fiber optic seismic geophone is over 1000 pm/g. A sixteen-element fiber optic seismic geophone array system is presented, and a field test was performed in the Shengli oilfield of China. The field test shows that: (1) the fiber optic seismic geophone has a higher sensitivity than the traditional geophone between 1-100 Hz; and (2) the low-frequency reflection-wave continuity of the FBG geophone is better.

  5. Seismic Monitoring of Permafrost During Controlled Thaw: An Active-Source Experiment Using a Surface Orbital Vibrator and Fiber-Optic DAS Arrays

    NASA Astrophysics Data System (ADS)

    Dou, S.; Wood, T.; Lindsey, N.; Ajo Franklin, J. B.; Freifeld, B. M.; Gelvin, A.; Morales, A.; Saari, S.; Ekblaw, I.; Wagner, A. M.; Daley, T. M.; Robertson, M.; Martin, E. R.; Ulrich, C.; Bjella, K.

    2016-12-01

    Thawing of permafrost can cause ground deformations that threaten the integrity of civil infrastructure. It is essential to develop early warning systems that can identify critically warmed permafrost and issue warnings for hazard prevention and control. Seismic methods can play a pivotal role in such systems for at least two reasons: First, seismic velocities are indicative of mechanical strength of the subsurface and thus are directly relevant to engineering properties; Second, seismic velocities in permafrost systems are sensitive to pre-thaw warming, which makes it possible to issue early warnings before the occurrence of hazardous subsidence events. However, several questions remain: What are the seismic signatures that can be effectively used for early warning of permafrost thaw? Can seismic methods provide enough warning times for hazard prevention and control? In this study, we investigate the feasibility of using permanently installed seismic networks for early warnings of permafrost thaw. We conducted continuous active-source seismic monitoring of permafrost that was under controlled heating at CRREL's Fairbanks permafrost experiment station. We used a permanently installed surface orbital vibrator (SOV) as source and surface-trenched DAS arrays as receivers. The SOV is characterized by its excellent repeatability, automated operation, high energy level, and the rich frequency content (10-100 Hz) of the generated wavefields. The fiber-optic DAS arrays allow continuous recording of seismic data with dense spatial sampling (1-meter channel spacing), low cost, and low maintenance. This combination of SOV-DAS provides unique seismic datasets for observing time-lapse changes of warming permafrost at the field scale, hence providing an observational basis for design and development of early warning systems for permafrost thaw.

  6. Seismic data interpolation and denoising by learning a tensor tight frame

    NASA Astrophysics Data System (ADS)

    Liu, Lina; Plonka, Gerlind; Ma, Jianwei

    2017-10-01

    Seismic data interpolation and denoising plays a key role in seismic data processing. These problems can be understood as sparse inverse problems, where the desired data are assumed to be sparsely representable within a suitable dictionary. In this paper, we present a new method based on a data-driven tight frame (DDTF) of Kronecker type (KronTF) that avoids the vectorization step and considers the multidimensional structure of data in a tensor-product way. It takes advantage of the structure contained in all different modes (dimensions) simultaneously. In order to overcome the limitations of a usual tensor-product approach we also incorporate data-driven directionality. The complete method is formulated as a sparsity-promoting minimization problem. It includes two main steps. In the first step, a hard thresholding algorithm is used to update the frame coefficients of the data in the dictionary; in the second step, an iterative alternating method is used to update the tight frame (dictionary) in each different mode. The dictionary that is learned in this way contains the principal components in each mode. Furthermore, we apply the proposed KronTF to seismic interpolation and denoising. Examples with synthetic and real seismic data show that the proposed method achieves better results than the traditional projection onto convex sets method based on the Fourier transform and the previous vectorized DDTF methods. In particular, the simple structure of the new frame construction makes it essentially more efficient.
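    The hard-thresholding step at the heart of such sparsity-promoting denoising can be illustrated with a fixed Fourier dictionary standing in for the learned Kronecker tight frame; this is a deliberate simplification, since the paper's method learns the dictionary in each mode:

```python
import numpy as np

def hard_threshold_denoise(data, keep_ratio=0.05):
    """Simplified sparsity-based denoising: transform a 2-D section to a
    fixed (Fourier) domain, keep only the largest coefficients, invert.
    A learned Kronecker tight frame would replace the FFT here."""
    coeffs = np.fft.fft2(data)
    mags = np.abs(coeffs).ravel()
    k = max(1, int(keep_ratio * mags.size))
    thresh = np.partition(mags, -k)[-k]  # k-th largest magnitude
    coeffs[np.abs(coeffs) < thresh] = 0.0
    return np.real(np.fft.ifft2(coeffs))
```

    On data that are sparse in the chosen domain (e.g. a plane wave plus noise), thresholding reduces the error relative to the noisy input; the learned frame extends the same principle to data that are not Fourier-sparse.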

  7. Real-time eruption forecasting using the material Failure Forecast Method with a Bayesian approach

    NASA Astrophysics Data System (ADS)

    Boué, A.; Lesage, P.; Cortés, G.; Valette, B.; Reyes-Dávila, G.

    2015-04-01

    Many attempts at deterministic forecasting of eruptions and landslides have been made using the material Failure Forecast Method (FFM). This method consists of fitting an empirical power law to precursory patterns of seismicity or deformation. Until now, most studies have presented hindsight forecasts based on complete time series of precursors and do not evaluate the ability of the method to carry out real-time forecasting with partial precursory sequences. In this study, we present a rigorous approach to the FFM designed for real-time applications on volcano-seismic precursors. We use a Bayesian approach based on the FFM theory and an automatic classification of seismic events. The probability distributions of the data deduced from the performance of this classification are used as input. As output, it provides the probability of the forecast time at each observation time before the eruption. The spread of the a posteriori probability density function of the prediction time and its stability with respect to the observation time are used as criteria to evaluate the reliability of the forecast. We test the method on precursory accelerations of long-period seismicity prior to vulcanian explosions at Volcán de Colima (Mexico). For explosions preceded by a single phase of seismic acceleration, we obtain accurate and reliable forecasts using approximately 80% of the whole precursory sequence. It is, however, more difficult to apply the method to multiple acceleration patterns.
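    In its classical deterministic form, the FFM fits the power law dΩ/dt = k (tf - t)^(-α) to a precursor rate; for the common case α = 2 the inverse rate decays linearly with time, and the forecast failure time tf is the zero crossing of a straight-line fit. A minimal sketch of that deterministic step (not the Bayesian formulation developed in the paper):

```python
import numpy as np

def ffm_failure_time(t, rate):
    """Deterministic FFM with exponent alpha = 2: the inverse of the
    precursor rate decays linearly, so the forecast failure time is the
    zero crossing of a least-squares line fit to 1/rate."""
    inv = 1.0 / np.asarray(rate, dtype=float)
    slope, intercept = np.polyfit(t, inv, 1)
    return -intercept / slope
```

    The Bayesian extension described in the abstract replaces this single point estimate with a posterior distribution over tf that can be updated as each new observation arrives.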

  8. Robust method to detect and locate local earthquakes by means of amplitude measurements.

    NASA Astrophysics Data System (ADS)

    del Puy Papí Isaba, María; Brückl, Ewald

    2016-04-01

    In this study we present a robust new method to detect and locate medium and low magnitude local earthquakes. This method is based on an empirical model of the ground motion obtained from amplitude data of earthquakes in the area of interest, which were located using traditional methods. The first step of our method is the computation of maximum resultant ground velocities in sliding time windows covering the whole period of interest. In the second step, these maximum resultant ground velocities are back-projected to every point of a grid covering the whole area of interest while applying the empirical amplitude - distance relations. We refer to these back-projected ground velocities as pseudo-magnitudes. The number of operating seismic stations in the local network equals the number of pseudo-magnitudes at each grid-point. Our method introduces the new idea of selecting the minimum pseudo-magnitude at each grid-point for further analysis instead of searching for a minimum of the L2 or L1 norm. In case no detectable earthquake occurred, the spatial distribution of the minimum pseudo-magnitudes constrains the magnitude of weak earthquakes hidden in the ambient noise. In the case of a detectable local earthquake, the spatial distribution of the minimum pseudo-magnitudes shows a significant maximum at the grid-point nearest to the actual epicenter. The application of our method is restricted to the area confined by the convex hull of the seismic station network. Additionally, one must ensure that there are no dead traces involved in the processing. Compared to methods based on L2 and even L1 norms, our new method is almost wholly insensitive to outliers (data from locally disturbed seismic stations). A further advantage is the fast determination of the epicenter and magnitude of a seismic event located within a seismic network. 
    This is possible because a back-projected matrix, independent of the registered amplitude, is computed and stored for each seismic station. As a direct consequence, we save computing time in the calculation of the final back-projected maximum resultant amplitude at every grid-point. The capability of the method was demonstrated first using synthetic data. In the next step, the method was applied to data from 43 local earthquakes of low and medium magnitude (1.7 < magnitude < 4.3). These earthquakes were recorded and detected by the seismic network ALPAACT (seismological and geodetic monitoring of Alpine PAnnonian ACtive Tectonics) in the period 2010/06/11 to 2013/09/20. Data provided by the ALPAACT network are used to understand seismic activity in the Mürz Valley - Semmering - Vienna Basin transfer fault system in Austria and why it is a relatively high earthquake hazard and risk area. The method will substantially support our efforts to involve scholars from polytechnic schools in seismological work within the Sparkling Science project Schools & Quakes.
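    The back-projection of station amplitudes to pseudo-magnitudes, and the selection of the minimum at each grid point, can be sketched as follows. The amplitude-distance relation and its coefficients (c, b) are placeholders for the empirical relation calibrated in the study:

```python
import numpy as np

def min_pseudo_magnitude(peak_vel, stations, grid, c=1.66, b=0.0):
    """For each grid point, back-project every station's peak ground
    velocity through a (hypothetical) amplitude-distance relation
    M = log10(v) + c*log10(dist) + b, and keep the MINIMUM
    pseudo-magnitude over stations, as the method prescribes.
    Assumes grid points do not coincide with station locations."""
    mins = np.empty(len(grid))
    for g, gp in enumerate(grid):
        dists = np.linalg.norm(stations - gp, axis=1)
        pm = np.log10(peak_vel) + c * np.log10(dists) + b
        mins[g] = pm.min()
    return mins
```

    For an event at a grid point, the pseudo-magnitudes of all stations agree there, so the minimum is maximal at the true epicenter; a single outlier station only lowers values elsewhere, which is the source of the method's robustness.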

  9. Instantaneous Attributes Applied to Full Waveform Sonic Log and Seismic Data in Integration of Elastic Properties of Shale Gas Formations in Poland

    NASA Astrophysics Data System (ADS)

    Wawrzyniak-Guz, Kamila

    2018-03-01

    Seismic attributes calculated from the full waveform sonic log are proposed as a method that may enhance the interpretation of data acquired at log and seismic scales. Though the attributes calculated in this study were mathematical transformations of the amplitude, frequency, phase or time of the acoustic full waveforms and seismic traces, they could be related to geological factors and/or petrophysical properties of rock formations. Attributes calculated from acoustic full waveforms were combined with selected attributes obtained for seismic traces recorded in the vicinity of the borehole and with petrophysical parameters. Such relations may be helpful in estimating elastic and reservoir properties over the area covered by the seismic survey.

  10. Time-reversibility in seismic sequences: Application to the seismicity of Mexican subduction zone

    NASA Astrophysics Data System (ADS)

    Telesca, L.; Flores-Márquez, E. L.; Ramírez-Rojas, A.

    2018-02-01

    In this paper we investigate the time-reversibility of series associated with the seismicity of five seismic areas of the subduction zone beneath the Southwest Pacific Mexican coast, applying the horizontal visibility graph method to the series of earthquake magnitudes, interevent times, interdistances and magnitude increments. We applied the Kullback-Leibler divergence D, a metric for quantifying the degree of time-irreversibility in time series. Our findings suggest that among the five seismic areas, Jalisco-Colima is characterized by time-reversibility in all four seismic series. Our results are consistent with the peculiar seismo-tectonic characteristics of Jalisco-Colima, which is the closest to the Middle American Trench and belongs to the Mexican volcanic arc.
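    The degree-based irreversibility measure can be sketched as follows: build the directed horizontal visibility graph, then compare the out- and in-degree distributions with the Kullback-Leibler divergence. Masking empty bins is a practical simplification for short series, not part of the formal definition:

```python
import numpy as np

def hvg_degrees(x):
    """Directed horizontal visibility graph: node i sees node j > i if
    every sample strictly between them is lower than both x[i] and x[j]."""
    n = len(x)
    k_out = np.zeros(n, dtype=int)
    k_in = np.zeros(n, dtype=int)
    for i in range(n):
        top = -np.inf  # highest intermediate sample so far
        for j in range(i + 1, n):
            if x[j] > top:          # nothing in between blocks the view
                k_out[i] += 1
                k_in[j] += 1
            top = max(top, x[j])
            if top >= x[i]:         # view from i is blocked from here on
                break
    return k_out, k_in

def kld_irreversibility(x):
    """Kullback-Leibler divergence between the out- and in-degree
    distributions; zero for a time-reversible series."""
    k_out, k_in = hvg_degrees(x)
    kmax = max(k_out.max(), k_in.max())
    p = np.bincount(k_out, minlength=kmax + 1) / len(x)
    q = np.bincount(k_in, minlength=kmax + 1) / len(x)
    mask = (p > 0) & (q > 0)  # crude regularization for finite samples
    return float(np.sum(p[mask] * np.log(p[mask] / q[mask])))
```

    A palindromic series is statistically identical to its time reverse, so its divergence vanishes, while an asymmetric sawtooth yields a clearly positive value.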

  11. Quake Final Video

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    None

    Critical infrastructures around the world are at constant risk from earthquakes. Most of these critical structures were designed using archaic seismic simulation methods developed on early digital computers in the 1970s. Idaho National Laboratory's Seismic Research Group is working to modernize these simulation methods through computational research and large-scale laboratory experiments.

  12. Seismic stability of the survey areas of potential sites for the deep geological repository of the spent nuclear fuel

    NASA Astrophysics Data System (ADS)

    Kaláb, Zdeněk; Šílený, Jan; Lednická, Markéta

    2017-07-01

    This paper deals with the seismic stability of the survey areas of potential sites for the deep geological repository of spent nuclear fuel in the Czech Republic. The basic source of data for historical earthquakes up to 1990 was the seismic website [1-]. The most intense earthquake described in the historical period occurred on September 15, 1590 in the Niederroesterreich region (Austria); its reported intensity is Io = 8-9. The source of contemporary seismic data for the period from 1991 to the end of 2014 was the website [11]. Based on the databases and a literature review, it may be stated that no earthquake exceeding magnitude 5.1 has originated in the territory of the Czech Republic since 1900. In order to evaluate seismicity and to assess the impact of seismic effects at the depths of a hypothetical deep geological repository over the next time period, the neo-deterministic method was selected as an extension of the probabilistic method. Each of the seven survey areas was assessed by neo-deterministic evaluation of the seismic wave-field excited by selected individual events, determining the maximum loading. Results of the seismological database studies and the neo-deterministic analysis of the Čihadlo locality are presented.

  13. Seismicity, shear failure and modes of deformation in deep subduction zones

    NASA Technical Reports Server (NTRS)

    Lundgren, Paul R.; Giardini, Domenico

    1992-01-01

    The joint hypocentral determination method is used to relocate deep seismicity reported in the International Seismological Center catalog for earthquakes deeper than 400 km in the Honshu, Bonin, Marianas, Java, Banda, and South America subduction zones. Each deep seismic zone is found to display planar features of seismicity parallel to the Harvard centroid-moment tensor nodal planes, which are identified as planes of shear failure. The sense of displacement on these planes is one of resistance to deeper penetration.

  14. Seismic properties of fluid bearing formations in magmatic geothermal systems: can we directly detect geothermal activity with seismic methods?

    NASA Astrophysics Data System (ADS)

    Grab, Melchior; Scott, Samuel; Quintal, Beatriz; Caspari, Eva; Maurer, Hansruedi; Greenhalgh, Stewart

    2016-04-01

    Seismic methods are amongst the most common techniques to explore the earth's subsurface. Seismic properties such as velocities, impedance contrasts and attenuation enable the characterization of the rocks in a geothermal system. The most important goal of geothermal exploration, however, is to describe the enthalpy state of the pore fluids, which act as the main transport medium for the geothermal heat, and to detect permeable structures such as fracture networks, which control the movement of these pore fluids in the subsurface. Since the quantities measured with seismic methods are only indirectly related to the fluid state and the rock permeability, the interpretation of seismic datasets is difficult and usually delivers ambiguous results. To help overcome this problem, we use a numerical modeling tool that quantifies the seismic properties of fractured rock formations that are typically found in magmatic geothermal systems. We incorporate the physics of the pore fluids, ranging from the liquid to the boiling and ultimately vapor state. Furthermore, we consider the hydromechanics of permeable structures at different scales, from small cooling joints to large caldera faults, as are known to be present in volcanic systems. Our modeling techniques simulate oscillatory compressibility and shear tests and yield the P- and S-wave velocities and attenuation factors of fluid-saturated fractured rock volumes. To apply this modeling technique to realistic scenarios, numerous input parameters need to be identified. The properties of the rock matrix and individual fractures were derived from extensive literature research including a large number of laboratory-based studies. The geometries of fracture networks were provided by structural geologists from their published studies of outcrops.
Finally, the physical properties of the pore fluid, ranging from those at ambient pressures and temperatures up to the supercritical conditions, were taken from the fluid physics literature. The results of this study allow us to describe the seismic properties as a function of hydrothermal and geological features. We use it in a forward seismic modeling study to examine how the seismic response changes with temporally and/or spatially varying fluid properties.

  15. Experience from the ECORS program in regions of complex geology

    NASA Astrophysics Data System (ADS)

    Damotte, B.

    1993-04-01

    The French ECORS program was launched in 1983 by a cooperation agreement between universities and petroleum companies. Crustal surveys have tried to find explanations for the formation of geological features such as rifts, mountain ranges or subsidence in sedimentary basins. Several seismic surveys were carried out, some across areas with complex geological structures. The seismic techniques and equipment used were those developed by petroleum geophysicists, adapted to the depth aimed at (30-50 km) and to various physical constraints encountered in the field. In France, ECORS has recorded 850 km of deep seismic lines onshore across plains and mountains, on various kinds of geological formations. Different variations of the seismic method (reflection, refraction, long-offset seismic) were used, often simultaneously. Multiple coverage profiling constitutes the essential part of this data acquisition. Vibrators and dynamite shots were employed with a spread generally 15 km long, but sometimes 100 km long. Some typical seismic examples show that obtaining crustal reflections essentially depends on two factors: (1) the type and structure of shallow formations, and (2) the sources used. Thus, when seismic energy is strongly absorbed across the first kilometers in shallow formations, or when these formations are highly structured, standard multiple-coverage profiling is not able to provide results beyond a few seconds. In this case, it is recommended to simultaneously carry out long-offset seismic in low multiple coverage. Other more methodological examples show: how the impact on the crust of a surface fault may be evaluated according to the seismic method implemented (VIBROSEIS 96-fold coverage or single dynamite shot); that vibrators make it possible to implement wide-angle seismic surveying with an offset 80 km long; and how to implement the seismic reflection method on complex formations in high mountains. All data were processed using industrial seismic software, which was not always appropriate for records at least 20 s long. Therefore, a specific procedure adapted to deep seismic surveys was developed for several processing steps. The long duration of the VIBROSEIS sweeps often makes it impossible to perform correlation and stack in the recording truck in the field. Such field records were first preprocessed, in order to be later correlated and stacked in the processing center. Because of the long duration of the recordings and the great length of the spread, several types of final sections were replayed, such as: (1) detailed surface sections (0-5 s), (2) entire sections (0-20 s) after data compression, and (3) near-trace sections and far-trace sections, which often yield complementary information. Standard methods of reflection migration gave unsatisfactory results: velocities at depth are inaccurate, the many diffractions do not all come from the vertical plane of the line, and the migration software is poorly adapted to deep crustal reflections. Therefore, migration is often performed graphically from arrivals picked in the time section. Line-drawings of various onshore lines, especially those across the Alps and the Pyrenees, make it possible to judge the results obtained by ECORS.
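    The VIBROSEIS correlation step described above, which collapses the long sweep into a compact wavelet per reflector, can be sketched as follows; this is an illustrative reconstruction, not the ECORS processing code:

```python
import numpy as np

def vibroseis_correlate(record, sweep):
    """Correlate a raw field record with the pilot sweep. Reflections
    buried in the long sweep collapse into compact (Klauder) wavelets
    whose peak lag gives the two-way travel time."""
    n = len(record)
    corr = np.correlate(record, sweep, mode="full")
    # In 'full' mode, index len(sweep)-1 is zero lag; keep lags 0..n-1.
    return corr[len(sweep) - 1:len(sweep) - 1 + n]
```

    For a record that is simply the sweep delayed by some number of samples, the correlated trace peaks exactly at that delay, which is why correlation can be deferred to the processing center without losing timing information.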

  16. Characterizing the Temporal and Spatial Distribution of Earthquake Swarms in the Puerto Rico - Virgin Island Block

    NASA Astrophysics Data System (ADS)

    Hernandez, F. J.; Lopez, A. M.; Vanacore, E. A.

    2017-12-01

    The presence of earthquake swarms and clusters north and northeast of the island of Puerto Rico, in the northeastern Caribbean, has been recorded by the Puerto Rico Seismic Network (PRSN) since it started operations in 1974. Although clusters in the Puerto Rico-Virgin Island (PRVI) block have been observed for over forty years, the nature of their enigmatic occurrence is still poorly understood. In this study, the entire seismic catalog of the PRSN, of approximately 31,000 seismic events, has been limited to a sub-set of 18,000 events located along the north of Puerto Rico in an effort to characterize and understand the underlying mechanism of these clusters. This research uses two de-clustering methods to identify cluster events in the PRVI block. The first method, known as Model Independent Stochastic Declustering (MISD), filters the catalog sub-set into cluster and background seismic events, while the second method uses a spatio-temporal algorithm applied to the catalog in order to link separate seismic events into clusters. After using these two methods, identified clusters were classified as either earthquake swarms or seismic sequences. Once identified, each cluster was analyzed to identify correlations against other clusters in its geographic region. Results from this research seek to: (1) unravel the earthquake clustering behavior through the use of different statistical methods and (2) better understand the mechanism behind this clustering of earthquakes. Preliminary results have allowed us to identify and classify 128 clusters in 11 distinct regions based on their centers, and their spatio-temporal distribution has been used to determine intra- and interplate dynamics.
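
    The spatio-temporal linking step can be sketched as a single-linkage pass over a space-time window. The window sizes, the flat-earth distance, and the swarm-versus-sequence rule below are illustrative assumptions for the sketch, not the PRSN's actual parameters:

```python
import math
from dataclasses import dataclass

@dataclass
class Event:
    t: float    # origin time in days
    lat: float
    lon: float
    mag: float

def linked(a, b, d_km=10.0, t_days=30.0):
    # crude flat-earth distance in km; adequate at block scale
    dx = (a.lon - b.lon) * 111.0 * math.cos(math.radians(a.lat))
    dy = (a.lat - b.lat) * 111.0
    return math.hypot(dx, dy) <= d_km and abs(a.t - b.t) <= t_days

def cluster(events, **window):
    # single-linkage union-find over the space-time window
    parent = list(range(len(events)))
    def find(i):
        while parent[i] != i:
            parent[i] = parent[parent[i]]
            i = parent[i]
        return i
    for i in range(len(events)):
        for j in range(i + 1, len(events)):
            if linked(events[i], events[j], **window):
                parent[find(i)] = find(j)
    groups = {}
    for i in range(len(events)):
        groups.setdefault(find(i), []).append(i)
    return list(groups.values())

def classify(events, members):
    # swarm if no single event dominates the cluster's magnitudes
    mags = sorted((events[i].mag for i in members), reverse=True)
    if len(mags) > 1 and mags[0] - mags[1] >= 1.0:
        return "mainshock-aftershock sequence"
    return "swarm"
```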

  17. Method for identifying subsurface fluid migration and drainage pathways in and among oil and gas reservoirs using 3-D and 4-D seismic imaging

    DOEpatents

    Anderson, R.N.; Boulanger, A.; Bagdonas, E.P.; Xu, L.; He, W.

    1996-12-17

    The invention utilizes 3-D and 4-D seismic surveys as a means of deriving information useful in petroleum exploration and reservoir management. The methods use both single seismic surveys (3-D) and multiple seismic surveys separated in time (4-D) of a region of interest to determine large scale migration pathways within sedimentary basins, and fine scale drainage structure and oil-water-gas regions within individual petroleum producing reservoirs. Such structure is identified using pattern recognition tools which define the regions of interest. The 4-D seismic data sets may be used for data completion for large scale structure where time intervals between surveys do not allow for dynamic evolution. The 4-D seismic data sets also may be used to find variations over time of small scale structure within individual reservoirs which may be used to identify petroleum drainage pathways, oil-water-gas regions and, hence, attractive drilling targets. After spatial orientation, and amplitude and frequency matching of the multiple seismic data sets, High Amplitude Event (HAE) regions consistent with the presence of petroleum are identified using seismic attribute analysis. High Amplitude Regions are grown and interconnected to establish plumbing networks on the large scale and reservoir structure on the small scale. Small scale variations over time between seismic surveys within individual reservoirs are identified and used to identify drainage patterns and bypassed petroleum to be recovered. The location of such drainage patterns and bypassed petroleum may be used to site wells. 22 figs.
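
    The "grown and interconnected" step can be illustrated with a plain flood fill over a thresholded amplitude volume; the fixed threshold and 6-connectivity below are simplifying assumptions for the sketch, not the patent's actual attribute analysis:

```python
import numpy as np
from collections import deque

def grow_hae_regions(amplitude, threshold):
    """Label connected High Amplitude Event (HAE) regions in a 3-D
    amplitude volume by 6-connected flood fill (illustrative only)."""
    mask = amplitude >= threshold
    labels = np.zeros(amplitude.shape, dtype=int)
    current = 0
    for start in zip(*np.nonzero(mask)):
        if labels[start]:
            continue                      # already part of a region
        current += 1
        labels[start] = current
        q = deque([start])
        while q:
            z, y, x = q.popleft()
            for dz, dy, dx in ((1, 0, 0), (-1, 0, 0), (0, 1, 0),
                               (0, -1, 0), (0, 0, 1), (0, 0, -1)):
                n = (z + dz, y + dy, x + dx)
                if all(0 <= n[i] < amplitude.shape[i] for i in range(3)) \
                        and mask[n] and not labels[n]:
                    labels[n] = current
                    q.append(n)
    return labels, current
```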

  18. Method for identifying subsurface fluid migration and drainage pathways in and among oil and gas reservoirs using 3-D and 4-D seismic imaging

    DOEpatents

    Anderson, Roger N.; Boulanger, Albert; Bagdonas, Edward P.; Xu, Liqing; He, Wei

    1996-01-01

    The invention utilizes 3-D and 4-D seismic surveys as a means of deriving information useful in petroleum exploration and reservoir management. The methods use both single seismic surveys (3-D) and multiple seismic surveys separated in time (4-D) of a region of interest to determine large scale migration pathways within sedimentary basins, and fine scale drainage structure and oil-water-gas regions within individual petroleum producing reservoirs. Such structure is identified using pattern recognition tools which define the regions of interest. The 4-D seismic data sets may be used for data completion for large scale structure where time intervals between surveys do not allow for dynamic evolution. The 4-D seismic data sets also may be used to find variations over time of small scale structure within individual reservoirs which may be used to identify petroleum drainage pathways, oil-water-gas regions and, hence, attractive drilling targets. After spatial orientation, and amplitude and frequency matching of the multiple seismic data sets, High Amplitude Event (HAE) regions consistent with the presence of petroleum are identified using seismic attribute analysis. High Amplitude Regions are grown and interconnected to establish plumbing networks on the large scale and reservoir structure on the small scale. Small scale variations over time between seismic surveys within individual reservoirs are identified and used to identify drainage patterns and bypassed petroleum to be recovered. The location of such drainage patterns and bypassed petroleum may be used to site wells.

  19. Frozen Gaussian approximation for 3D seismic tomography

    NASA Astrophysics Data System (ADS)

    Chai, Lihui; Tong, Ping; Yang, Xu

    2018-05-01

    Three-dimensional (3D) wave-equation-based seismic tomography is computationally challenging at large scales and in the high-frequency regime. In this paper, we apply the frozen Gaussian approximation (FGA) method to compute 3D sensitivity kernels and high-frequency seismic tomography. Rather than the standard ray theory used in seismic inversion (e.g. Kirchhoff migration and Gaussian beam migration), FGA is used to compute the 3D high-frequency sensitivity kernels for travel-time or full waveform inversions. Specifically, we reformulate the equations of the forward and adjoint wavefields so that FGA can be applied conveniently; with this reformulation, one can efficiently compute the Green’s functions whose convolutions with the source time function produce the wavefields needed for the construction of 3D kernels. Moreover, a fast summation method is proposed based on a local fast Fourier transform, which greatly improves the speed of reconstruction in the last step of the FGA algorithm. We apply FGA to both travel-time adjoint tomography and full waveform inversion (FWI) on synthetic crosswell seismic data with dominant frequencies as high as those of real crosswell data, and confirm again that FWI requires a more sophisticated initial velocity model for convergence than travel-time adjoint tomography. We also numerically test the accuracy of applying FGA to local earthquake tomography. This study paves the way to directly applying wave-equation-based seismic tomography methods to real data around their dominant frequencies.
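
    The fast-summation idea, replacing a direct superposition of many Gaussians with a single FFT convolution, can be illustrated in one dimension. The on-grid packet centers and periodic wrap-around are simplifying assumptions of this sketch, not the paper's algorithm:

```python
import numpy as np

def direct_sum(x, centers, weights, sigma):
    # O(N * M) superposition of M Gaussians on an N-point grid
    return sum(w * np.exp(-(x - q) ** 2 / (2 * sigma ** 2))
               for q, w in zip(centers, weights))

def fft_sum(x, centers, weights, sigma):
    # bin the weights onto the grid, then convolve once with a sampled
    # Gaussian kernel via FFT: O((N + M) log N) instead of O(N * M)
    dx = x[1] - x[0]
    grid = np.zeros_like(x)
    np.add.at(grid, np.round((centers - x[0]) / dx).astype(int), weights)
    # periodized Gaussian kernel centered at index 0
    r = np.minimum(np.arange(x.size), x.size - np.arange(x.size)) * dx
    kernel = np.exp(-r ** 2 / (2 * sigma ** 2))
    return np.real(np.fft.ifft(np.fft.fft(grid) * np.fft.fft(kernel)))
```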

  20. Multi-Phenomenological Analysis of the 12 August 2015 Tianjin, China Chemical Explosion

    NASA Astrophysics Data System (ADS)

    Pasyanos, M.; Kim, K.; Park, J.; Stump, B. W.; Hayward, C.; Che, I. Y.; Zhao, L.; Myers, S. C.

    2016-12-01

    We perform a multi-phenomenological analysis of the massive near-surface chemical explosions that occurred in Tianjin, China on 12 August 2015. A recent assessment of these events was performed by Zhao et al. (2016) using local (< 100 km) seismic data. This study considers a regional assessment of the same sequence in the absence of any local data. We provide additional insight by combining regional seismic analysis with the use of infrasound signals and an assessment of the event crater. Event locations using infrasound signals recorded at Korean and IMS arrays are estimated based on the Bayesian Infrasonic Source Location (BISL) method (Modrak et al., 2010) and improved with azimuthal corrections using ray tracing (Blom and Waxler, 2012) and Ground-to-Space (G2S) atmospheric models (Drob et al., 2003). The location information provided by the infrasound signals is then merged with the regional seismic arrivals to produce a joint event location. The yields of the events are estimated from seismic and infrasonic observations: a seismic waveform envelope method (Pasyanos et al., 2012), including the free surface effect (Pasyanos and Ford, 2015), is applied to the regional seismic signals, and a waveform inversion method (Kim and Rodgers, 2016) is used for the infrasound signals. Combining the seismic and acoustic signals can provide insight into the energy partitioning and break the trade-offs between the yield and the depth/height of the explosions, resulting in a more robust estimate of event yield. The yield information from the different phenomenologies is combined through the use of likelihood functions.
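
    Combining phenomenologies through likelihood functions can be sketched on a yield grid; the estimates, uncertainties, and Gaussian-in-log10 form below are hypothetical, chosen only to show the mechanics, not values from this study:

```python
import numpy as np

def log_likelihood(yields_t, estimate_t, sigma_log10):
    # Gaussian log-likelihood in log10(yield) for one phenomenology
    return -0.5 * ((np.log10(yields_t) - np.log10(estimate_t)) / sigma_log10) ** 2

yields = np.logspace(0, 3, 601)                     # candidate yields, 1-1000 t
ll_seismic = log_likelihood(yields, 200.0, 0.30)    # hypothetical seismic estimate
ll_infra = log_likelihood(yields, 120.0, 0.20)      # hypothetical infrasound estimate
ll_joint = ll_seismic + ll_infra                    # combine independent constraints
best_yield = yields[np.argmax(ll_joint)]            # joint maximum-likelihood yield
```

The joint maximum lands between the two single-phenomenology estimates, pulled toward the better-constrained one, which is the intended effect of the combination.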

  1. Estimation of the behavior factor of existing RC-MRF buildings

    NASA Astrophysics Data System (ADS)

    Vona, Marco; Mastroberti, Monica

    2018-01-01

    In recent years, several research groups have studied a new generation of analysis methods for the seismic response assessment of existing buildings. Nevertheless, many important developments are still needed in order to define more reliable and effective assessment procedures. Moreover, regarding existing buildings, it should be highlighted that, due to the low knowledge level, linear elastic analysis is the only analysis method allowed. The same codes (such as NTC2008 and EC8) consider linear dynamic analysis with a behavior factor as the reference method for the evaluation of seismic demand. This type of analysis is based on a linear-elastic structural model subjected to a design spectrum, obtained by reducing the elastic spectrum through a behavior factor. The behavior factor (reduction factor or q factor in some codes) is used to reduce the elastic spectrum ordinates, or the forces obtained from a linear analysis, in order to take into account the non-linear structural capacity. Behavior factors should be defined based on the several parameters that influence the seismic nonlinear capacity, such as the mechanical characteristics of materials, the structural system, irregularity, and design procedures. In practical applications, there is still an evident lack of detailed rules and accurate behavior factor values adequate for existing buildings. In this work, investigations of the seismic capacity of the main existing RC-MRF building types have been carried out. In order to make a correct evaluation of the seismic force demand, behavior factor values consistent with a force-based seismic safety assessment procedure have been proposed and compared with the values reported in the Italian seismic code, NTC08.
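
    The reduction of the elastic spectrum by the behavior factor can be sketched as follows; the Type-1 spectral shape and parameter values are schematic, and the codes' lower bounds and soil-dependent branches are omitted:

```python
import numpy as np

def elastic_spectrum(T, ag=0.25, S=1.2, TB=0.15, TC=0.5, TD=2.0, eta=1.0):
    """Schematic EC8-style Type-1 elastic acceleration spectrum Se(T)."""
    T = np.asarray(T, dtype=float)
    return np.where(T <= TB, ag * S * (1 + T / TB * (eta * 2.5 - 1)),
           np.where(T <= TC, ag * S * eta * 2.5,
           np.where(T <= TD, ag * S * eta * 2.5 * TC / T,
                    ag * S * eta * 2.5 * TC * TD / T ** 2)))

def design_spectrum(T, q, **kw):
    # force-based assessment: elastic ordinates reduced by behavior factor q
    return elastic_spectrum(T, **kw) / q
```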

  2. The Prospect of using Three-Dimensional Earth Models To Improve Nuclear Explosion Monitoring and Ground Motion Hazard Assessment

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zucca, J J; Walter, W R; Rodgers, A J

    2008-11-19

    The last ten years have brought rapid growth in the development and use of three-dimensional (3D) seismic models of Earth structure at crustal, regional and global scales. In order to explore the potential for 3D seismic models to contribute to important societal applications, Lawrence Livermore National Laboratory (LLNL) hosted a 'Workshop on Multi-Resolution 3D Earth Models to Predict Key Observables in Seismic Monitoring and Related Fields' on June 6 and 7, 2007 in Berkeley, California. The workshop brought together academic, government and industry leaders in the research programs developing 3D seismic models and methods for the nuclear explosion monitoring and seismic ground motion hazard communities. The workshop was designed to assess the current state of work in 3D seismology and to discuss a path forward for determining if and how 3D Earth models and techniques can be used to achieve measurable increases in our capabilities for monitoring underground nuclear explosions and characterizing seismic ground motion hazards. This paper highlights some of the presentations, issues, and discussions at the workshop and proposes two specific paths by which to begin quantifying the potential contribution of progressively refined 3D seismic models in critical applied arenas. Seismic monitoring agencies are tasked with detection, location, and characterization of seismic activity in near real time. In the case of nuclear explosion monitoring or seismic hazard, decisions to further investigate a suspect event or to launch disaster relief efforts may rely heavily on real-time analysis and results. Because these are weighty decisions, monitoring agencies are regularly called upon to meticulously document and justify every aspect of their monitoring system.
In order to meet this level of scrutiny and maintain operational robustness requirements, only mature technologies are considered for operational monitoring systems, and operational technology necessarily lags contemporary research. Current monitoring practice is to use relatively simple Earth models that generally afford analytical prediction of seismic observables (see Examples of Current Monitoring Practice below). Empirical relationships or corrections to predictions are often used to account for unmodeled phenomena, such as the generation of S-waves from explosions or the effect of 3-dimensional Earth structure on wave propagation. This approach produces fast and accurate predictions in areas where empirical observations are available. However, accuracy may diminish away from empirical data. Further, much of the physics is wrapped into an empirical relationship or correction, which limits the ability to fully understand the physical processes underlying the seismic observation. Each generation of seismology researchers has worked toward quantitative results, with its leaders active at or near the forefront of what is computationally possible. While recognizing that only a 3-dimensional model can capture the full physics of seismic wave generation and propagation in the Earth, computational seismology has, until recently, been limited to simplified model parameterizations (e.g. 1D Earth models) that lead to efficient algorithms. What is different today is that the largest and fastest machines are at last capable of evaluating the effects of generalized 3D Earth structure, at levels of detail that improve significantly over past efforts, with potentially wide application. Advances in numerical methods to compute travel times and complete seismograms for 3D models are enabling new ways to interpret available data.
This includes algorithms such as the Fast Marching Method (Rawlinson and Sambridge, 2004) for travel time calculations and full waveform methods such as the spectral element method (SEM; Komatitsch et al., 2002; Tromp et al., 2005), higher order Galerkin methods (Kaser and Dumbser, 2006; Dumbser and Kaser, 2006) and advances in more traditional Cartesian finite difference methods (e.g. Pitarka, 1999; Nilsson et al., 2007). The ability to compute seismic observables using a 3D model is only half of the challenge; models must be developed that accurately represent true Earth structure. Indeed, advances in seismic imaging have followed improvements in 3D computing capability (e.g. Tromp et al., 2005; Rawlinson and Urvoy, 2006). Advances in seismic imaging methods have been fueled in part by theoretical developments and the introduction of novel approaches for combining different seismological observables, both of which can increase the sensitivity of observations to Earth structure. Examples of such developments are finite-frequency sensitivity kernels for body-wave tomography (e.g. Marquering et al., 1998; Montelli et al., 2004) and joint inversion of receiver functions and surface wave group velocities (e.g. Julia et al., 2000).

  3. The source parameters of 2013 Mw6.6 Lushan earthquake constrained with the restored local clipped seismic waveforms

    NASA Astrophysics Data System (ADS)

    Hao, J.; Zhang, J. H.; Yao, Z. X.

    2017-12-01

    We developed a method to restore clipped seismic waveforms near the epicenter using the projection-onto-convex-sets (POCS) method (Zhang et al., 2016). This method was applied to recover the local clipped waveforms of the 2013 Mw 6.6 Lushan earthquake. We restored 88 out of 93 clipped waveforms from 38 broadband seismic stations of the China Earthquake Networks (CEN). The epicentral distance of the nearest station whose records we can faithfully restore is only about 32 km. In order to investigate whether the source parameters of an earthquake can be determined accurately with the restored data, the restored waveforms were used to obtain the mechanism of the Lushan earthquake. We apply the generalized reflection-transmission coefficient matrix method to calculate the synthetic seismic records, and the simulated annealing method in the inversion (Yao and Harkrider, 1983; Hao et al., 2012). We select 5 CEN stations with epicentral distances of about 200 km whose records are not clipped, and use their three-component velocity records. The result shows that the strike, dip and rake angles of the Lushan earthquake are 200°, 51° and 87°, respectively, hereinafter the "standard result". Then the clipped and restored seismic waveforms are applied in turn. The strike, dip and rake angles from the clipped seismic waveforms are 184°, 53° and 72°, respectively; the largest angular misfit is 16°. In contrast, the strike, dip and rake angles from the restored seismic waveforms are 198°, 51° and 87°, respectively, very close to the "standard result". We also study the rupture history of the Lushan earthquake constrained with the restored local broadband and teleseismic waves based on a finite fault method (Hao et al., 2013). The result is consistent with that constrained with the strong motion and teleseismic waves (Hao et al., 2013), especially the location of the patch with larger slip. In real-time seismology, determining the source parameters as soon as possible is important.
This method will help us determine earthquake mechanisms using local clipped waveforms. Strong-motion stations in China do not have good coverage at present, so this method will also help us investigate the rupture history of large earthquakes in China using the local clipped data of broadband stations.

  4. A Global Sensitivity Analysis Method on Maximum Tsunami Wave Heights to Potential Seismic Source Parameters

    NASA Astrophysics Data System (ADS)

    Ren, Luchuan

    2015-04-01

    Luchuan Ren, Jianwei Tian, Mingli Hong (Institute of Disaster Prevention, Sanhe, Hebei Province, 065201, P.R. China). It is obvious that the uncertainties of the maximum tsunami wave heights in offshore areas come partly from the uncertainties of the potential seismic tsunami source parameters. A global sensitivity analysis method for the maximum tsunami wave heights with respect to the potential seismic source parameters is put forward in this paper. The tsunami wave heights are calculated by COMCOT (the Cornell Multi-grid Coupled Tsunami Model), on the assumption that an earthquake with magnitude Mw 8.0 occurred at the northern fault segment along the Manila Trench and triggered a tsunami in the South China Sea. We select the simulated results of maximum tsunami wave heights at specific sites in the offshore area to verify the validity of the method proposed in this paper. To rank the importance of the uncertainties of the potential seismic source parameters (the earthquake's magnitude, the focal depth, the strike angle, dip angle and slip angle, etc.) in generating uncertainties of the maximum tsunami wave heights, we chose the Morris method to analyze the sensitivity of the maximum tsunami wave heights to the aforementioned parameters, and give several qualitative descriptions of their nonlinear or linear effects on the maximum tsunami wave heights. We then quantitatively analyze the sensitivity of the maximum tsunami wave heights to these parameters, and the interaction effects among these parameters, by means of the extended FAST method.
The results show that the maximum tsunami wave heights are very sensitive to the earthquake magnitude, followed successively by the epicenter location, the strike angle and the dip angle; the interaction effects between the sensitive parameters are very pronounced at specific sites in the offshore area; and the importance order in generating uncertainties of the maximum tsunami wave heights differs for the same group of parameters at different sites. These results are helpful for a deeper understanding of the relationship between the tsunami wave heights and the seismic tsunami source parameters. Keywords: Global sensitivity analysis; Tsunami wave height; Potential seismic tsunami source parameter; Morris method; Extended FAST method
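
    The Morris screening step can be sketched with elementary effects on the unit hypercube; the trajectory count and step size below are illustrative, and the extended FAST stage is not shown:

```python
import numpy as np

rng = np.random.default_rng(0)

def morris_mu_star(f, n_params, n_traj=50, delta=0.25):
    """Morris screening sketch: mean absolute elementary effect (mu*)
    per parameter, from one-at-a-time trajectories on the unit cube."""
    effects = [[] for _ in range(n_params)]
    for _ in range(n_traj):
        x = rng.uniform(0.0, 1.0 - delta, n_params)
        fx = f(x)
        for i in rng.permutation(n_params):
            x_step = x.copy()
            x_step[i] += delta            # perturb one parameter at a time
            f_step = f(x_step)
            effects[i].append((f_step - fx) / delta)
            x, fx = x_step, f_step        # continue the trajectory
    return np.array([np.mean(np.abs(e)) for e in effects])
```

A large mu* flags a parameter whose uncertainty matters; for a linear model mu* recovers the coefficients exactly.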

  5. Gas Reservoir Identification Basing on Deep Learning of Seismic-print Characteristics

    NASA Astrophysics Data System (ADS)

    Cao, J.; Wu, S.; He, X.

    2016-12-01

    Reservoir identification based on seismic data analysis is the core task in oil and gas geophysical exploration. The essence of reservoir identification is to identify the properties of the rock pore fluid. We developed a novel gas reservoir identification method named seismic-print analysis, in imitation of the voice-print analysis techniques used in speaker identification. The term "seismic-print" refers to the characteristics of the seismic waveform which can definitively identify the property of a geological objective, for instance, a natural gas reservoir. A seismic-print can be characterized by one or a few parameters named seismic-print parameters. It has been shown that gas reservoirs exhibit a negative first-order cepstrum coefficient anomaly together with a positive second-order cepstrum coefficient anomaly. The method is valid for sandstone, carbonate and shale gas reservoirs, and the accuracy rate may reach up to 90%. There are two main problems to deal with in the application of the seismic-print analysis method. One is to identify the "ripple" of a reservoir on the seismogram, and the other is to construct the mapping relationship between the seismic-print and the gas reservoirs. Deep learning, developed in recent years, has the ability to reveal the complex non-linear relationship between attributes and data, and to automatically extract the features of the objective from the data. Thus, deep learning can be used to deal with these two problems. There are many algorithms for deep learning, which can be roughly divided into two categories: Deep Belief Networks (DBNs) and Convolutional Neural Networks (CNNs). A DBN is a probabilistic generative model, which can establish a joint distribution of the observed data and tags. A CNN is a feedforward neural network, which can be used to extract the 2D structural features of the input data.
Both DBNs and CNNs can be used to deal with seismic data. We used an improved DBN to identify carbonate rocks from log data; the accuracy rate can reach up to 83%. When DBNs are used to deal with seismic waveform data, more information is obtained. The work was supported by NSFC under grants No. 41430323 and No. 41274128, and by the State Key Lab. of Oil and Gas Reservoir Geology and Exploration.
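
    The cepstrum coefficients at the heart of seismic-print analysis can be computed from a trace in a few lines; the authors' exact attribute definitions may differ, so treat this as an illustrative definition:

```python
import numpy as np

def cepstrum_coefficients(trace, n=3, eps=1e-12):
    """Real cepstrum of a trace, c = IFFT(log|FFT(trace)|); the low-order
    coefficients c[1], c[2] play the role of the first- and second-order
    seismic-print attributes discussed above (illustrative definition)."""
    spectrum = np.abs(np.fft.fft(trace)) + eps   # eps guards log(0)
    c = np.real(np.fft.ifft(np.log(spectrum)))
    return c[1:n]
```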

  6. The instrumental seismicity of the Barents and Kara sea region: relocated event catalog from early twentieth century to 1989

    NASA Astrophysics Data System (ADS)

    Morozov, Alexey Nikolaevich; Vaganova, Natalya V.; Asming, Vladimir E.; Konechnaya, Yana V.; Evtyugina, Zinaida A.

    2018-05-01

    We have relocated seismic events registered within the Barents and Kara sea region from the early twentieth century to 1989 with a view to creating a relocated catalog. For the relocation, we collected all available seismic bulletins from the global network, using data from the ISC Bulletin (International Seismological Centre), the ISC-GEM project (International Seismological Centre-Global Earthquake Model), the EuroSeismos project, and Soviet seismic stations of the Geophysical Survey of the Russian Academy of Sciences. The location was performed by applying a modified method of generalized beamforming. We considered several travel time models and selected the one with the best location accuracy for ground truth events. Verification of the modified method and selection of the travel time model were performed using data on four nuclear explosions that occurred in the area of the Novaya Zemlya Archipelago and in the north of the European part of Russia. The modified method and the Barents travel time model provide sufficient accuracy for event location in the region. The relocation procedure was applied to 31 of the 36 seismic events registered within the Barents and Kara sea region.
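
    Generalized beamforming stacks detections over a trial grid; a minimal cousin is a grid search that back-projects arrival times with a uniform-velocity model and keeps the node where the implied origin times agree best. The velocity, flat geometry, and grid below are hypothetical, not the paper's Barents model:

```python
import numpy as np

def locate_grid_search(stations_km, arrivals_s, v_km_s=8.0, extent=100.0, step=1.0):
    """Pick the grid node minimizing the spread of back-projected
    origin times t_i - d_i / v (flat geometry, uniform velocity)."""
    best_spread, best_node = np.inf, None
    for gx in np.arange(0.0, extent + step, step):
        for gy in np.arange(0.0, extent + step, step):
            d = np.hypot(stations_km[:, 0] - gx, stations_km[:, 1] - gy)
            spread = (arrivals_s - d / v_km_s).std()
            if spread < best_spread:
                best_spread, best_node = spread, (gx, gy)
    return best_node
```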

  7. Application of Gumbel I and Monte Carlo methods to assess seismic hazard in and around Pakistan

    NASA Astrophysics Data System (ADS)

    Rehman, Khaista; Burton, Paul W.; Weatherill, Graeme A.

    2018-05-01

    A proper assessment of seismic hazard is of considerable importance in order to establish suitable building construction criteria. This paper presents a probabilistic seismic hazard assessment for the region in and around Pakistan (23° N-39° N; 59° E-80° E) in terms of peak ground acceleration (PGA). Ground motion is calculated in terms of PGA for a return period of 475 years using a seismogenic zone-free method based on Gumbel's first asymptotic distribution of extreme values, and using Monte Carlo simulation. Appropriate attenuation relations of universal and local types have been used in this study. The results show that for many parts of Pakistan, the expected seismic hazard is comparable with the level specified in the existing PGA maps.
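
    Gumbel's first asymptotic distribution can be fitted to annual maximum magnitudes by the method of moments, and a 475-year value read off the fitted curve; this sketch shows the distribution step only, not the paper's full hazard computation:

```python
import numpy as np

EULER_GAMMA = 0.5772156649015329

def fit_gumbel1(annual_maxima):
    """Method-of-moments fit of the Gumbel type-I distribution
    F(m) = exp(-exp(-(m - u) / a)) to annual maximum magnitudes."""
    x = np.asarray(annual_maxima, dtype=float)
    a = np.sqrt(6.0) * x.std(ddof=1) / np.pi   # scale from the variance
    u = x.mean() - EULER_GAMMA * a             # mode from the mean
    return u, a

def magnitude_for_return_period(u, a, T_years):
    # magnitude with annual exceedance probability 1/T
    return u - a * np.log(-np.log(1.0 - 1.0 / T_years))
```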

  8. Methods for use in detecting seismic waves in a borehole

    DOEpatents

    West, Phillip B.; Fincke, James R.; Reed, Teddy R.

    2007-02-20

    The invention provides methods and apparatus for detecting seismic waves propagating through a subterranean formation surrounding a borehole. In a first embodiment, a sensor module uses the rotation of bogey wheels to extend and retract a sensor package for selective contact and magnetic coupling to casing lining the borehole. In a second embodiment, a sensor module is magnetically coupled to the casing wall during its travel and dragged therealong while maintaining contact therewith. In a third embodiment, a sensor module is interfaced with the borehole environment to detect seismic waves using coupling through liquid in the borehole. Two or more of the above embodiments may be combined within a single sensor array, combining the optimal outputs of each embodiment into a single data set.

  9. Sparse decomposition of seismic data and migration using Gaussian beams with nonzero initial curvature

    NASA Astrophysics Data System (ADS)

    Liu, Peng; Wang, Yanfei

    2018-04-01

    We study problems associated with seismic data decomposition and migration imaging. We first represent the seismic data utilizing Gaussian beam basis functions, which have nonzero curvature, and then consider the sparse decomposition technique. The sparse decomposition problem is an l0-norm constrained minimization problem. In solving the l0-norm minimization, a polynomial Radon transform is performed to achieve sparsity, and a fast gradient descent method is used to calculate the waveform functions. The waveform functions can subsequently be used for sparse Gaussian beam migration. Compared with traditional sparse Gaussian beam methods, the seismic data can be properly reconstructed employing fewer Gaussian beams with nonzero initial curvature. The migration approach described in this paper is more efficient than the traditional sparse Gaussian beam migration.
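
    The l0-style greedy selection can be illustrated with matching pursuit over a small dictionary of Gaussian-envelope atoms, standing in for the paper's Gaussian-beam basis with nonzero initial curvature; the atoms and sizes below are illustrative:

```python
import numpy as np

def make_atom(n, center, width, freq):
    # Gaussian-envelope oscillatory atom, normalized to unit energy
    t = np.arange(n)
    atom = np.exp(-0.5 * ((t - center) / width) ** 2) \
        * np.cos(2 * np.pi * freq * (t - center))
    return atom / np.linalg.norm(atom)

def matching_pursuit(signal, atoms, n_terms):
    """Greedy l0-style decomposition: repeatedly pick the dictionary atom
    most correlated with the residual and subtract its contribution."""
    residual = signal.astype(float).copy()
    coefs = np.zeros(len(atoms))
    for _ in range(n_terms):
        corr = np.array([a @ residual for a in atoms])
        k = np.argmax(np.abs(corr))
        coefs[k] += corr[k]
        residual -= corr[k] * atoms[k]
    return coefs, residual
```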

  10. Machine learning reveals cyclic changes in seismic source spectra in Geysers geothermal field.

    PubMed

    Holtzman, Benjamin K; Paté, Arthur; Paisley, John; Waldhauser, Felix; Repetto, Douglas

    2018-05-01

    The earthquake rupture process comprises complex interactions of stress, fracture, and frictional properties. New machine learning methods demonstrate great potential to reveal patterns in time-dependent spectral properties of seismic signals and enable identification of changes in faulting processes. Clustering of 46,000 earthquakes of 0.3 < ML < 1.5 from the Geysers geothermal field (CA) yields groupings that have no reservoir-scale spatial patterns but clear temporal patterns. Events with similar spectral properties repeat on annual cycles within each cluster and track changes in the water injection rates into the Geysers reservoir, indicating that changes in acoustic properties and faulting processes accompany changes in thermomechanical state. The methods open new means to identify and characterize subtle changes in seismic source properties, with applications to tectonic and geothermal seismicity.
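
    The clustering of spectral properties can be sketched with a minimal 2-means on synthetic spectra; the paper's actual method is a more sophisticated probabilistic model, so this is only a stand-in:

```python
import numpy as np

def kmeans_two(X, n_iter=50):
    """Minimal 2-means with farthest-point initialization, standing in
    for a probabilistic clustering of normalized event spectra."""
    c0 = X[0]
    c1 = X[np.argmax(np.linalg.norm(X - c0, axis=1))]
    centers = np.stack([c0, c1]).astype(float)
    for _ in range(n_iter):
        # assign each spectrum to the nearest center, then update centers
        d = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2)
        labels = d.argmin(axis=1)
        for j in (0, 1):
            if np.any(labels == j):
                centers[j] = X[labels == j].mean(axis=0)
    return labels
```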

  11. Seismic source characterization for the 2014 update of the U.S. National Seismic Hazard Model

    USGS Publications Warehouse

    Moschetti, Morgan P.; Powers, Peter; Petersen, Mark D.; Boyd, Oliver; Chen, Rui; Field, Edward H.; Frankel, Arthur; Haller, Kathleen; Harmsen, Stephen; Mueller, Charles S.; Wheeler, Russell; Zeng, Yuehua

    2015-01-01

    We present the updated seismic source characterization (SSC) for the 2014 update of the National Seismic Hazard Model (NSHM) for the conterminous United States. Construction of the seismic source models employs the methodology that was developed for the 1996 NSHM but includes new and updated data, data types, source models, and source parameters that reflect the current state of knowledge of earthquake occurrence and state of practice for seismic hazard analyses. We review the SSC parameterization and describe the methods used to estimate earthquake rates, magnitudes, locations, and geometries for all seismic source models, with an emphasis on new source model components. We highlight the effects that two new model components—incorporation of slip rates from combined geodetic-geologic inversions and the incorporation of adaptively smoothed seismicity models—have on probabilistic ground motions, because these sources span multiple regions of the conterminous United States and provide important additional epistemic uncertainty for the 2014 NSHM.

  12. The discrimination of man-made explosions from earthquakes using seismo-acoustic analysis in the Korean Peninsula

    NASA Astrophysics Data System (ADS)

    Che, Il-Young; Jeon, Jeong-Soo

    2010-05-01

    The Korea Institute of Geoscience and Mineral Resources (KIGAM) operates an infrasound network consisting of seven seismo-acoustic arrays in South Korea. Development of the arrays began in 1999, partially in collaboration with Southern Methodist University, with the goal of detecting distant infrasound signals from natural and anthropogenic phenomena in and around the Korean Peninsula. The main operational purpose of this network is to discriminate man-made seismic events from a regional seismicity that includes thousands of seismic events per year. Man-made seismic events are a major cause of error in estimating natural seismicity, especially where seismic activity is weak or moderate, as in the Korean Peninsula. In order to discriminate man-made explosions from earthquakes, we have applied seismo-acoustic analysis, associating seismic and infrasonic signals generated by surface explosions. Observations of infrasound at multiple arrays make it possible to discriminate surface explosions, because small or moderate earthquakes do not generate sufficient infrasound. To date we have annually discriminated hundreds of seismic events in the seismological catalog as surface explosions using this seismo-acoustic analysis. Besides surface explosions, the network has also detected infrasound signals from other sources, such as bolides, typhoons, rocket launches, and an underground nuclear test in and around the Korean Peninsula. In this study, ten years of seismo-acoustic data are reviewed with a recent infrasonic detection algorithm and association method, finally linked to the seismic monitoring system of KIGAM to increase the detection rate of surface explosions. We present the long-term results of the seismo-acoustic analysis, the detection capability of the multiple arrays, and implications for seismic source location.
Since the seismo-acoustic analysis is proved as a definite method to discriminate surface explosion, the analysis will be continuously used for estimating natural seismicity and understanding infrasonic sources.

  13. Research on Influencing Factors and Generalized Power of Synthetic Artificial Seismic Wave

    NASA Astrophysics Data System (ADS)

    Jiang, Yanpei

    2018-05-01

    In this paper, the author uses the trigonometric series method with different envelope functions and the acceleration design spectrum in the Seismic Code for Urban Bridge Design to simulate seismic acceleration time histories that meet engineering accuracy requirements by modifying and iterating an initial wave. Spectral analysis is carried out to find the distribution law of the energy of the seismic time history across frequencies and to determine the main factors that affect the acceleration amplitude spectrum and energy spectral density. A generalized power formula for the seismic time history is derived from the discrete energy integral formula, and the author studies how the generalized power of the seismic time history changes under different envelope functions. Worked examples illustrate that generalized power can measure the seismic performance of bridges.
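The trigonometric-series synthesis described above can be sketched as a stationary sum of cosines with random phases, scaled by an intensity envelope. The envelope shape, frequency grid, and amplitudes below are illustrative assumptions, not the paper's calibrated values (which are iterated to match the code design spectrum):

```python
import numpy as np

def synthetic_accelerogram(freqs, amps, duration, dt, seed=0):
    """Trigonometric-series synthesis: a(t) = E(t) * sum_k A_k cos(2*pi*f_k*t + phi_k).
    freqs/amps: frequency grid (Hz) and target amplitudes (assumed inputs;
    in practice amps are iterated to match a design response spectrum)."""
    rng = np.random.default_rng(seed)
    t = np.arange(0.0, duration, dt)
    phases = rng.uniform(0.0, 2.0 * np.pi, size=len(freqs))
    stationary = sum(A * np.cos(2.0 * np.pi * f * t + p)
                     for f, A, p in zip(freqs, amps, phases))
    # Jennings-type envelope (assumed shape): parabolic rise,
    # strong-motion plateau, exponential decay.
    t1, t2, c = 0.1 * duration, 0.6 * duration, 3.0
    env = np.where(t < t1, (t / t1) ** 2,
                   np.where(t < t2, 1.0,
                            np.exp(-c * (t - t2) / (duration - t2))))
    return t, env * stationary
```

Repeating the synthesis with new random phases yields an ensemble of spectrum-compatible records, which is how the envelope's influence on energy distribution can be studied.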

  14. Seismic safety in conducting large-scale blasts

    NASA Astrophysics Data System (ADS)

    Mashukov, I. V.; Chaplygin, V. V.; Domanov, V. P.; Semin, A. A.; Klimkin, M. A.

    2017-09-01

    Mining enterprises use drilling and blasting to prepare hard rock for excavation. As mining operations approach settlements, the negative effects of large-scale blasts increase. To assess the seismic impact of large-scale blasts, the scientific staff of Siberian State Industrial University carried out assessments for coal mines and iron-ore enterprises. The magnitude of surface seismic vibrations caused by mass explosions was determined using seismic receivers and an analog-to-digital converter recording to a laptop. Results are presented for surface seismic vibrations recorded during more than 280 large-scale blasts at 17 mining enterprises in 22 settlements. The maximum velocities of the Earth's surface vibrations are determined, and the seismic effect is evaluated for safety against the permissible vibration velocity. For cases exceeding the permissible values, recommendations were developed to reduce the level of seismic impact.

  15. Seismic analysis of the frame structure reformed by cutting off column and jacking based on stiffness ratio

    NASA Astrophysics Data System (ADS)

    Zhao, J. K.; Xu, X. S.

    2017-11-01

    Cutting off columns and jacking is a method for increasing story height that has been widely used, and has attracted much attention, in engineering. The stiffness of the structure changes after the columns are cut and the frame is jacked, which directly affects overall seismic performance, so seismic strengthening measures are usually needed to restore stiffness. A five-story frame-structure jacking project in the Jinan High-tech Zone was taken as an example, and three finite element models were established: the frame before lifting, after lifting, and after strengthening. Based on the stiffness ratios, dynamic time-history analyses were carried out to study the seismic performance under the El Centro record, the Taft record, and a Tianjin artificial seismic wave. The research can provide guidance for the design and construction of jack-lifted structures.

  16. Deghosting based on the transmission matrix method

    NASA Astrophysics Data System (ADS)

    Wang, Benfeng; Wu, Ru-Shan; Chen, Xiaohong

    2017-12-01

    As seismic exploration and subsequent exploitation advance, marine acquisition systems with towed streamers have become an important means of acquiring seismic data. However, the reflective air-water interface generates surface-related multiples, including ghosts, which degrade the accuracy and performance of subsequent seismic data processing algorithms. We therefore derive a deghosting method from a new perspective, using the transmission matrix (T-matrix) method instead of inverse scattering series. The T-matrix-based deghosting algorithm includes all scattering effects and converges absolutely. The effectiveness of the proposed method is first demonstrated on synthetic data from a layered model, and its noise resistance is illustrated on synthetic data contaminated by random noise. Numerical examples on complicated data from the open SMAART Pluto model and on field marine data further demonstrate the validity and flexibility of the method. After deghosting, low-frequency components are recovered reasonably and spurious high-frequency components are attenuated; the recovered low frequencies will be useful for subsequent full waveform inversion. The proposed method is currently suitable for two-dimensional towed-streamer cases with accurate constant depth information; its extension to variable-depth streamers and three-dimensional cases will be studied in the future.
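The ghost mechanism the abstract targets can be illustrated with the classic flat-sea receiver-ghost model, where the frequency-domain ghost operator is G(f) = 1 - r*exp(-2*pi*i*f*tau) with tau = 2*depth/c. The sketch below deghosts by stabilized division; it is a textbook baseline under assumed parameters (receiver depth, water velocity, surface reflectivity r), not the paper's T-matrix scheme:

```python
import numpy as np

def deghost_trace(trace, dt, depth, c=1500.0, r=1.0, eps=0.1):
    """Frequency-domain receiver deghosting for a flat sea surface.
    Ghost operator G(f) = 1 - r*exp(-2j*pi*f*tau), tau = 2*depth/c;
    the trace is divided by G with stabilization eps to avoid the
    spectral notches where |G| -> 0."""
    n = len(trace)
    f = np.fft.rfftfreq(n, dt)
    tau = 2.0 * depth / c
    G = 1.0 - r * np.exp(-2j * np.pi * f * tau)
    spec = np.fft.rfft(trace)
    deghosted = spec * np.conj(G) / (np.abs(G) ** 2 + eps)
    return np.fft.irfft(deghosted, n)
```

The stabilization constant trades notch noise against residual ghost energy; the T-matrix formulation in the paper avoids this explicit division.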

  17. Seismic activity preceding the 2016 Kumamoto earthquakes: Multiple approaches to recognizing possible precursors

    NASA Astrophysics Data System (ADS)

    Nanjo, K.; Izutsu, J.; Orihara, Y.; Furuse, N.; Togo, S.; Nitta, H.; Okada, T.; Tanaka, R.; Kamogawa, M.; Nagao, T.

    2016-12-01

    We show the first results of recognizing seismic patterns as possible precursory episodes to the 2016 Kumamoto earthquakes, using four existing methods: the b-value method (e.g., Schorlemmer and Wiemer, 2005; Nanjo et al., 2012), two seismic quiescence evaluation methods (the RTM algorithm, Nagao et al., 2011, and the Z-value method, Wiemer and Wyss, 1994), and foreshock seismic density analysis based on Lippiello et al. (2012). We used the earthquake catalog maintained by the Japan Meteorological Agency (JMA). To ensure data quality, we performed a catalog completeness check as a pre-processing step for each analysis. Our findings indicate that the methods we adopted would not have allowed the Kumamoto earthquakes to be predicted exactly. However, we found that the spatial extent of the possible precursory patterns differs from one method to another, ranging from local scales (typically asperity size) to regional scales (e.g., 2° × 3° around the source zone). The earthquakes were preceded by periods of pronounced anomalies lasting from decadal scales (e.g., 20 years or longer) down to yearly scales (e.g., 1-2 years). Our results demonstrate that combining multiple methods detects different signals prior to the Kumamoto earthquakes with considerably more reliability than any single method. This suggests great potential for narrowing the possible future sites of earthquakes relative to long-term seismic hazard assessment. This study was partly supported by MEXT under its Earthquake and Volcano Hazards Observation and Research Program and Grant-in-Aid for Scientific Research (C), No. 26350483, 2014-2017, by Chubu University under the Collaboration Research Program of IDEAS, IDEAS201614, and by Tokai University under Project Research of IORD. A part of this presentation is given in Nanjo et al. (2016, submitted).
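Of the four methods, the b-value analysis is the simplest to sketch. A common estimator is Aki's maximum-likelihood formula with Utsu's correction for magnitude binning; this is a standard choice, though not necessarily the exact implementation behind the abstract:

```python
import numpy as np

def b_value_mle(mags, mc, dm=0.1):
    """Aki (1965) maximum-likelihood b-value estimate:
    b = log10(e) / (mean(M) - (Mc - dm/2)),
    where Mc is the completeness magnitude and dm the catalog's
    magnitude binning (Utsu's correction)."""
    m = np.asarray(mags, dtype=float)
    m = m[m >= mc]                       # keep only complete part of catalog
    return np.log10(np.e) / (m.mean() - (mc - dm / 2.0))
```

A drop in b before a large event is the kind of anomaly such studies track; the quiescence methods (RTM, Z-value) instead monitor rate changes.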

  18. Studies of the Correlation Between Ionospheric Anomalies and Seismic Activities in the Indian Subcontinent

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sasmal, S.; Chakrabarti, S. K.; S. N. Bose National Centre for Basic Sciences, JD Block, Salt-Lake Kolkata-70098

    2010-10-20

    VLF (Very Low Frequency) signals have long been thought to carry important information about lithosphere-ionosphere coupling. It has recently been established that the ionosphere may be perturbed by seismic activity, and the effects of this perturbation can be detected in the VLF wave amplitude. Several methods exist to find these correlations, and they can be used for the prediction of seismic events. In this paper, we first present a brief history of the use of the VLF propagation method for studying seismo-ionospheric correlations. We then present the different methods we have proposed to find such correlations. At the Indian Centre for Space Physics, Kolkata, we have been monitoring the VTX station at Vijayanarayanam since 2002; initially we received the 17 kHz signal, and later the 18.2 kHz signal. We first present results for the 17 kHz signal during the 2004 Sumatra earthquake obtained from the terminator time analysis method. We then present a more detailed statistical analysis using new methods and give results for the 18.2 kHz signal. To establish the correlation between ionospheric activity and earthquakes, we need to understand the reference behavior of the signals throughout the year. We present the sunrise and sunset terminators for the 18.2 kHz signal as a function of day of year over a four-year period, 2005 to 2008, when solar activity was very low. In this case, the signal is primarily affected by normal sunrise and sunset effects. Any deviation from this standardized calibration curve would point to influences that are terrestrial (such as earthquakes) or extra-terrestrial (such as solar activity and other high-energy phenomena).
We present examples of deviations occurring over a period of sixteen months and show that the correlations with seismic events are significant; typically, the largest deviation in terminator shift takes place up to a couple of days prior to the seismic event. We introduce a new method in which we examine the effects of seismic activity on the D-layer preparation time (DLPT) and the D-layer disappearance time (DLDT). We identify days on which DLPT and DLDT deviate from their average values and correlate those days with seismic events. Separately, we compute the energy released by earthquakes and, from this, the total energy released locally by distant earthquakes, and we find correlations of the deviations with these quantities. In this case too, we find precursors a few days before the seismic events. In a third approach, we consider the nighttime fluctuation method (quantified differently from the conventional way). We analyzed nighttime data for the year 2007 to check the correlation between nighttime fluctuations of the signal amplitude and seismic events. Using this statistical method for all events of the year and for individual earthquakes (magnitude > 5), we found that the nighttime signal amplitude becomes very high three days prior to the seismic events.

  19. Reflection seismic imaging in the volcanic area of the geothermal field Wayang Windu, Indonesia

    NASA Astrophysics Data System (ADS)

    Polom, Ulrich; Wiyono, Wiyono; Pramono, Bambang; Krawczyk, Charlotte M.

    2014-05-01

    Reflection seismic exploration in volcanic areas is still a scientific challenge and requires major efforts to develop imaging workflows suitable for economic use, e.g., for geothermal exploration. The SESaR (Seismic Exploration and Safety Risk study for decentral geothermal plants in Indonesia) project therefore tackles still-unresolved issues concerning wave propagation and energy absorption in areas covered by pyroclastic sediments, using both active P-wave and S-wave seismics. Site-specific exploration procedures were tested in different tectonic and lithological regimes to compare imaging conditions. Based on the results of a small-scale active seismic pre-site survey in the area of the Wayang Windu geothermal field in November 2012, an additional medium-scale active seismic experiment using P-waves was carried out in August 2013. The latter experiment was designed to investigate local changes in seismic subsurface response, to expand knowledge about the capabilities of the vibroseis method for seismic surveying in regions covered by pyroclastic material, and to achieve greater depth penetration. Thus, for the first time in the Wayang Windu geothermal area, a powerful, hydraulically driven seismic mini-vibrator of 27 kN peak force (LIAG's mini-vibrator MHV2.7) was used as the seismic source instead of the weaker hammer blows applied in former field surveys. Aiming to acquire parameter-test and production data southeast of the Wayang Windu geothermal power plant, a 48-channel GEODE recording instrument of the Badan Geologi was used in a high-resolution configuration, with receiver-group intervals of 5 m and source intervals of 10 m. The LIAG field crew, Star Energy, GFZ Potsdam, and ITB Bandung thereby acquired a nearly 600 m long profile. In general, we observe the successful applicability of the vibroseis method in such a difficult seismic acquisition environment.
Taking into account the local conditions at Wayang Windu, the method is superior to common seismic explosive-source techniques with respect to production rate as well as resolution and data quality. Source signal frequencies of 20-80 Hz are most efficient for the targeted depth penetration, even though they were influenced by the dry subsurface conditions during the experiment. Depth penetration ranges between 0.5 and 1 km. Based on these new experimental data, processing workflows can be tested for the first time for adapted imaging strategies. This will not only allow a focus on larger exploration depths covering the geothermal reservoir at the Wayang Windu power plant site itself, but also opens the possibility of transferring the lessons learned to other sites.

  20. Tests of remote aftershock triggering by small mainshocks using Taiwan's earthquake catalog

    NASA Astrophysics Data System (ADS)

    Peng, W.; Toda, S.

    2014-12-01

    To understand earthquake interaction and forecast time-dependent seismic hazard, it is essential to evaluate which stress transfer, static or dynamic, plays the major role in triggering aftershocks and subsequent mainshocks. Felzer and Brodsky focused on small mainshocks (2≤M<3) and their aftershocks and argued that only dynamic stress change produces earthquake-to-earthquake triggering, whereas Richards-Dinger et al. (2010) claimed that those selected small mainshock-aftershock pairs do not represent earthquake-to-earthquake triggering but rather the simultaneous occurrence of independent aftershocks following a larger earthquake or during a significant swarm sequence. We test these hypotheses using Taiwan's earthquake catalog, taking advantage of the absence of any larger event and of the significant seismic swarms typically associated with active volcanoes. Using Felzer and Brodsky's method and their standard parameters, we found only 14 mainshock-aftershock pairs within 20 km in Taiwan's catalog from 1994 to 2010. Although Taiwan's catalog contains a similar number of earthquakes to California's, the number of pairs is about 10% of that found in the California catalog, which may reflect the lack of large earthquakes and significant seismic swarms in the catalog. To better understand the properties of the Taiwan catalog, we loosened the screening parameters to obtain more pairs and found a linear aftershock density with a power-law decay exponent of -1.12±0.38, very similar to that of Felzer and Brodsky. However, none of those mainshock-aftershock pairs was associated with a M7 rupture or M6 events. To identify what mechanism controls the aftershock density triggered by small mainshocks in Taiwan, we randomized earthquake magnitudes and locations. We found that the short-term density decay resembles randomized behavior more than mainshock-aftershock triggering. Moreover, 5 out of 6 pairs were found in swarm-like temporal increases of seismicity rate.
These pairs are located mostly in areas of high geothermal gradient and were probably triggered by a small-scale aseismic process. This rather supports the argument of Richards-Dinger et al. that dynamic triggering by small mainshocks is untenable.
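The quoted decay exponent of -1.12 comes from fitting a power law to the linear aftershock density as a function of mainshock-aftershock distance. A minimal least-squares fit in log-log space captures the arithmetic (the published analyses use more careful likelihood-based fitting, so this is only a generic sketch):

```python
import numpy as np

def powerlaw_slope(r, density):
    """Fit log10(density) = a + n*log10(r) by least squares
    and return the power-law exponent n."""
    r = np.asarray(r, dtype=float)
    density = np.asarray(density, dtype=float)
    mask = density > 0                     # log is undefined for empty bins
    return np.polyfit(np.log10(r[mask]), np.log10(density[mask]), 1)[0]
```

Comparing the exponent fitted from real pairs against the exponent from magnitude- and location-randomized catalogs is the essence of the test described above.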

  1. A seismic optimization procedure for reinforced concrete framed buildings based on eigenfrequency optimization

    NASA Astrophysics Data System (ADS)

    Arroyo, Orlando; Gutiérrez, Sergio

    2017-07-01

    Several seismic optimization methods have been proposed to improve the performance of reinforced concrete framed (RCF) buildings; however, they have not been widely adopted by practising engineers because they require complex nonlinear models and are computationally expensive. This article presents a procedure to improve the seismic performance of RCF buildings based on eigenfrequency optimization, which is effective, simple to implement, and efficient. The method is used to optimize a 10-storey regular building, and its effectiveness is demonstrated by nonlinear time-history analyses, which show substantial reductions in storey drifts and lateral displacements compared to a non-optimized building. A second example, an irregular six-storey building, demonstrates that the method benefits a wide range of RCF structures and supports its general applicability.
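The eigenfrequency objective can be illustrated on a lumped shear-building model, where the fundamental frequency follows from the generalized eigenproblem K·φ = ω²·M·φ. This toy model (storey stiffnesses and floor masses are assumed inputs; the article optimizes member dimensions of a full RC frame, not this lumped system) shows the quantity such a procedure works with:

```python
import numpy as np

def fundamental_frequency(stiff, mass):
    """Fundamental natural frequency (Hz) of a shear-building model.
    stiff[i]: lateral stiffness of storey i (N/m), mass[i]: floor mass (kg).
    Solves K*phi = w^2 * M * phi for the lowest eigenvalue."""
    n = len(mass)
    K = np.zeros((n, n))
    for i in range(n):                     # assemble tridiagonal stiffness
        K[i, i] += stiff[i]
        if i + 1 < n:
            K[i, i] += stiff[i + 1]
            K[i, i + 1] = K[i + 1, i] = -stiff[i + 1]
    M = np.diag(mass)
    evals = np.linalg.eigvals(np.linalg.solve(M, K))
    return float(np.sqrt(min(evals.real)) / (2.0 * np.pi))
```

Raising this frequency by redistributing stiffness is, loosely, what an eigenfrequency-based optimization pursues, since a stiffer lateral system reduces drifts.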

  2. 1D Seismic reflection technique to increase depth information in surface seismic investigations

    NASA Astrophysics Data System (ADS)

    Camilletti, Stefano; Fiera, Francesco; Umberto Pacini, Lando; Perini, Massimiliano; Prosperi, Andrea

    2017-04-01

    1D seismic methods, such as MASW, Re.Mi., and HVSR, have been used extensively over the past 20 years in engineering investigations, bedrock research, Vs profiling, and, to some extent, hydrologic applications. Recent advances in equipment, sound sources, and computer interpretation techniques make 1D seismic methods highly effective in shallow subsoil modeling. Classical 1D seismic surveys allow economical collection of subsurface data; however, they fail to return accurate information for depths greater than 50 meters. Using a particular acquisition technique, it is possible to collect data that can be quickly processed with the reflection technique to obtain more accurate velocity information at depth. Furthermore, the data processing returns a narrow stratigraphic section, alongside the 1D velocity model, in which lithological boundaries are represented. This work shows how to collect a single-CMP gather to determine: (1) the depth of bedrock; (2) gravel layers in clayey domains; (3) an accurate Vs profile. Seismic traces were processed using new software developed in collaboration with SARA electronic instruments S.r.l., Perugia, Italy. This software has the great advantage that it can be used directly in the field, reducing the time elapsed between acquisition and processing.

  3. Detection of Natural Fractures from Observed Surface Seismic Data Based on a Linear-Slip Model

    NASA Astrophysics Data System (ADS)

    Chen, Huaizhen; Zhang, Guangzhi

    2018-03-01

    Natural fractures play an important role in the migration of hydrocarbon fluids. Based on a rock-physics effective model, the linear-slip model, which defines fracture parameters (fracture compliances) that quantitatively characterize the effect of fractures on total rock compliance, we propose a method to detect natural fractures from observed seismic data via inversion for the fracture compliances. We first derive an approximate PP-wave reflection coefficient in terms of fracture compliances. Using this approximate reflection coefficient, we derive the azimuthal elastic impedance as a function of fracture compliances. An inversion method to estimate fracture compliances from seismic data is then presented, based on a Bayesian framework and the azimuthal elastic impedance, implemented as a two-step procedure: a least-squares inversion for azimuthal elastic impedance followed by an iterative inversion for fracture compliances. We apply the inversion method to synthetic and real data to verify its stability and reliability. Synthetic tests confirm that the method yields stable estimates of fracture compliances for seismic data with a moderate signal-to-noise ratio of Gaussian noise, and the test on real data shows that reasonable fracture compliances are obtained with the proposed method.

  4. Seismic Fracture Characterization Methodologies for Enhanced Geothermal Systems

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Queen, John H.

    2016-05-09

    Executive Summary: The overall objective of this work was the development of surface and borehole seismic methodologies, using both compressional and shear waves, for characterizing faults and fractures in Enhanced Geothermal Systems (EGS). We used both surface seismic and vertical seismic profile (VSP) methods, adapting them to the unique conditions encountered in EGS creation. These conditions include geological environments with volcanic cover, highly altered rocks, severe structure, extreme near-surface velocity contrasts, and a lack of distinct velocity contrasts at depth. One objective was the development of methods for identifying more appropriate seismic acquisition parameters to overcome problems associated with these geological factors. Because temperatures up to 300 °C are often encountered in these systems, another objective was the testing of VSP borehole tools capable of operating at depths in excess of 1,000 m and at temperatures in excess of 200 °C. A final objective was the development of new processing and interpretation techniques based on scattering and time-frequency analysis, as well as the application of modern seismic migration imaging algorithms to seismic data acquired over geothermal areas. Surface seismic reflection data at Brady's Hot Springs were found useful for building a geological model, but only when combined with other extensive geological and geophysical data. Fine source and geophone spacing was critical to producing useful images. The surface seismic reflection data gave no information about the internal structure (extent, thickness, and filling) of faults and fractures, and modeling suggests that they are unlikely to do so. Time-frequency analysis was applied to these data but was not found to be significantly useful in their interpretation.
Modeling does indicate that VSP and other seismic methods with sensors located at depth in wells will be the most effective seismic tools for obtaining information on the internal structure of faults and fractures in support of fluid-flow pathway management and EGS treatment. Scattered events similar to those expected from faults and fractures are seen in the VSP reported here. Unfortunately, the source offsets and well-depth coverage do not allow detailed analysis of these events, and this limited coverage also precluded the use of advanced migration and imaging algorithms. More extensive acquisition is needed to support fault and fracture characterization in the geothermal reservoir at Brady's Hot Springs. The VSP was effective in generating interval velocity estimates over the depths covered by the array. Upgoing reflection events are also visible in the VSP results at locations corresponding to reflection events in the surface seismic data. Overall, the high-temperature-rated fiber-optic sensors used in the VSP produced useful results. Modeling was found useful in the interpretation of both surface reflection seismic and VSP data: it helped identify possible near-surface scattering in the surface seismic data and highlighted potential scattering events from deeper faults in the VSP data. Inclusion of more detailed fault- and fracture-specific stiffness parameters is needed to fully interpret fault- and fracture-scattered events for flow properties (Pyrak-Nolte and Morris, 2000; Zhu and Snieder, 2002). Shear-wave methods were applied in both the surface seismic reflection and VSP work but were not found to be effective in the Brady's Hot Springs area, owing to the extreme attenuation of shear waves in the near surface at Brady's. This does not imply that they will be ineffective in general; in geothermal areas where good shear waves can be recorded, modeling suggests they should be very useful for characterizing faults and fractures.

  5. Full waveform seismic AVAZ signatures of anisotropic shales by integrated rock physics and the reflectivity method

    NASA Astrophysics Data System (ADS)

    Liu, Xiwu; Guo, Zhiqi; Han, Xu

    2018-06-01

    A set of parallel vertical fractures embedded in a vertically transverse isotropic (VTI) background leads to orthorhombic anisotropy and corresponding azimuthal seismic responses. We conducted seismic modeling of full waveform amplitude variation with azimuth (AVAZ) responses of anisotropic shale by integrating a rock physics model with a reflectivity method. The results indicate that the azimuthal variation of P-wave velocity tends to be more complicated for an orthorhombic medium than for the horizontally transverse isotropic (HTI) case, especially at high polar angles. Correspondingly, for the HTI layer in the theoretical model, the short axis of the azimuthal PP amplitudes at the top interface is parallel to the fracture strike, while the long axis at the bottom reflection points along the fracture strike. In contrast, the orthorhombic layer in the theoretical model shows distinct AVAZ responses in terms of PP reflections. Nevertheless, the azimuthal signatures of the R- and T-components of the mode-converted PS reflections show similar AVAZ features for the HTI and orthorhombic layers, which may imply that the PS responses are dominated by fractures. For the application to real data, a seismic-well tie based on upscaled data and a reflectivity method shows good agreement between the reference layers and the corresponding reflected events. Finally, the full waveform seismic AVAZ responses of the Longmaxi shale formation are computed for the HTI and orthorhombic anisotropy cases for comparison. For the two cases, the azimuthal features differ mainly in amplitude and only slightly in the phase of the reflected waveforms. Azimuthal variations in the PP reflections from the reference layers show distinct behaviors for the HTI and orthorhombic cases, while the mode-converted PS reflections in terms of the R- and T-components show little difference in azimuthal features.
This suggests that the behavior of the PS waves is dominated by vertically aligned fractures. This work provides further insight into the azimuthal seismic response of orthorhombic shales, and the proposed method may help improve seismic-well ties, seismic interpretation, and inversion results using azimuthally anisotropic datasets.

  6. Structural building screening and evaluation

    NASA Astrophysics Data System (ADS)

    Kurniawandy, Alex; Nakazawa, Shoji; Hendry, Andy; Ridwan, Firdaus, Rahmatul

    2017-10-01

    An earthquake is a disaster that can harm a community through financial loss, injuries, and deaths. Pekanbaru is a city located in the middle of Sumatera Island. Although earthquakes rarely occur in Pekanbaru, the city felt the impact of the large earthquake that struck West Sumatera in September 2009. Indonesia is located at the junction of the Eurasian, Pacific, and Indo-Australian plates. Sumatera Island in particular is crossed by the Semangko fault, or great Sumatran fault, which runs along the island from north to south due to the shear between the Eurasian and Indo-Australian plates. Earthquakes themselves do not kill people; the buildings around them can. Building failure can be prevented early through evaluation. In this research, the evaluations follow the guidelines of the Federal Emergency Management Agency (FEMA) P-154 and the Applied Technology Council (ATC) 40. FEMA P-154 is a rapid visual screening of buildings for potential seismic hazards, while ATC-40 covers seismic evaluation and retrofit of concrete buildings and is a more complex evaluation than FEMA P-154. The samples evaluated were taken from the surroundings of the Universitas Riau facilities in Pekanbaru. Four buildings serve as case studies: the student rental building, the building of the mathematics and natural science faculty, the building of the teacher training and education faculty, and the building of the faculty of social and political sciences. Each building's vulnerability to an earthquake is different, depending on its structural and non-structural components. Among all the samples, only the building of the mathematics and natural science faculty is in critical condition according to the FEMA P-154 evaluation.
Furthermore, the ATC-40 evaluation places the teacher training building in the damage control condition, while the other three buildings are in the immediate occupancy condition.
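The FEMA P-154 Level 1 screening mentioned above reduces to simple arithmetic: a basic score for the building type is adjusted by score modifiers (vertical irregularity, soil class, etc.), and a final score below a cut-off (commonly 2.0) flags the building for detailed evaluation. The numbers in the sketch below are illustrative placeholders, not values taken from the actual P-154 scoring forms:

```python
def rvs_final_score(basic_score, modifiers, s_min=0.3, cutoff=2.0):
    """FEMA P-154 style rapid visual screening arithmetic.
    basic_score: base score for the building type and seismicity region;
    modifiers: list of score modifiers (negative for hazards);
    s_min: floor below which the score is not allowed to drop;
    returns (final_score, passes_screening)."""
    s = max(basic_score + sum(modifiers), s_min)
    return s, s >= cutoff
```

A building that fails this quick screen would then move on to the more detailed ATC-40 style evaluation described in the abstract.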

  7. Recent Seismicity in Texas and Research Design and Progress of the TexNet-CISR Collaboration

    NASA Astrophysics Data System (ADS)

    Hennings, P.; Savvaidis, A.; Rathje, E.; Olson, J. E.; DeShon, H. R.; Datta-Gupta, A.; Eichhubl, P.; Nicot, J. P.; Kahlor, L. A.

    2017-12-01

    The recent increase in the rate of seismicity in Texas has prompted the establishment of an interdisciplinary, interinstitutional collaboration led by the Texas Bureau of Economic Geology, which includes the TexNet Seismic Monitoring and Research project funded by the State of Texas (roughly two-thirds of our funding) and the industry-funded Center for Integrated Seismicity Research (CISR) (one-third of funding). TexNet is monitoring and cataloging seismicity across Texas using a new backbone seismic network, investigating site-specific earthquake sequences by deploying temporary seismic monitoring stations, and conducting reservoir modeling studies. CISR expands TexNet research into the interdisciplinary realm to more thoroughly study the factors that contribute to seismicity, characterize the associated hazard and risk, develop strategies for mitigation and management, and develop methods of effective communication for all stakeholders. The TexNet-CISR research portfolio has six themes: seismicity monitoring, seismology, geologic and hydrologic description, geomechanics and reservoir modeling, seismic hazard and risk assessment, and the social science of seismic risk. More than twenty specific research projects span and connect these themes. We provide a synopsis of research progress, including recent seismicity trends in Texas; Fort Worth Basin integrated studies covering geological modeling and fault characterization, fluid-injection data syntheses, and reservoir and geomechanical modeling; regional ground-shaking characterization and mapping and infrastructure vulnerability assessment; and the social science topics of public perception and information-seeking behavior.

  8. Seismic site survey investigations in urban environments: The case of the underground metro project in Copenhagen, Denmark.

    NASA Astrophysics Data System (ADS)

    Martínez, K.; Mendoza, J. A.; Colberg-Larsen, J.; Ploug, C.

    2009-05-01

    Near surface geophysics applications are gaining more widespread use in geotechnical and engineering projects. The development of data acquisition, processing tools and interpretation methods have optimized survey time, reduced logistics costs and increase results reliability of seismic surveys during the last decades. However, the use of wide-scale geophysical methods under urban environments continues to face great challenges due to multiple noise sources and obstacles inherent to cities. A seismic pre-investigation was conducted to investigate the feasibility of using seismic methods to obtain information about the subsurface layer locations and media properties in Copenhagen. Such information is needed for hydrological, geotechnical and groundwater modeling related to the Cityringen underground metro project. The pre-investigation objectives were to validate methods in an urban environment and optimize field survey procedures, processing and interpretation methods in urban settings in the event of further seismic investigations. The geological setting at the survey site is characterized by several interlaced layers of clay, till and sand. These layers are found unevenly distributed throughout the city and present varying thickness, overlaying several different unit types of limestone at shallow depths. Specific results objectives were to map the bedrock surface, ascertain a structural geological framework and investigate bedrock media properties relevant to the construction design. The seismic test consisted of a combined seismic reflection and refraction analyses of a profile line conducted along an approximately 1400 m section in the northern part of Copenhagen, along the projected metro city line. The data acquisition was carried out using a 192 channels array, receiver groups with 5 m spacing and a Vibroseis as a source at 10 m spacing. Complementarily, six vertical seismic profiles (VSP) were performed at boreholes located along the line. 
The reflection data underwent standard processing and interpretation, and the refraction analysis included wavepath Eikonal traveltime tomography. The reflection results indicate the presence of horizontal reflectors with discontinuities likely related to structural features in the deeper-lying chalk layers. The refraction interpretation allowed identification of the upper limestone surface, which is relevant to map for tunneling design. The VSP provided additional information on limestone quality as well as correlation data for improved refraction interpretation. In general, the pre-investigation results demonstrated that it is possible to image the limestone surface using the seismic method. The satisfactory results led to the implementation of a 15 km survey planned for spring 2009. The survey will combine reflection, refraction, walkaway VSP and electrical resistivity tomography (ERT). The authors wish to acknowledge Metroselskabet I/S for permission to present the preliminary results and the Cityringen Joint Venture partners Arup and Systra.

  9. Statistical methods for investigating quiescence and other temporal seismicity patterns

    USGS Publications Warehouse

    Matthews, M.V.; Reasenberg, P.A.

    1988-01-01

    We propose a statistical model and a technique for objective recognition of one of the most commonly cited seismicity patterns: microearthquake quiescence. We use a Poisson process model for seismicity and define a process with quiescence as one with a particular type of piece-wise constant intensity function. From this model, we derive a statistic for testing stationarity against a 'quiescence' alternative. The large-sample null distribution of this statistic is approximated from simulated distributions of appropriate functionals applied to Brownian bridge processes. We point out the restrictiveness of the particular model we propose and of the quiescence idea in general. The fact that there are many point processes which have neither constant nor quiescent rate functions underscores the need to test for and describe nonuniformity thoroughly. We advocate the use of the quiescence test in conjunction with various other tests for nonuniformity and with graphical methods such as density estimation. Ideally, these methods may promote accurate description of temporal seismicity distributions and useful characterizations of interesting patterns. © 1988 Birkhäuser Verlag.
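The core idea, that a stationary Poisson process has uniformly distributed event times, can be illustrated with a generic uniformity check. This is a plain Kolmogorov-Smirnov-style distance, not the authors' Brownian-bridge functional, and the catalogs below are synthetic:

```python
import numpy as np

def ks_uniform(times, T):
    """Kolmogorov-Smirnov distance of event times from uniformity on [0, T]."""
    t = np.sort(np.asarray(times)) / T
    n = len(t)
    ecdf_hi = np.arange(1, n + 1) / n
    ecdf_lo = np.arange(0, n) / n
    return max(np.max(ecdf_hi - t), np.max(t - ecdf_lo))

rng = np.random.default_rng(1)
T = 1000.0
steady = rng.uniform(0.0, T, 400)        # constant-rate catalog
quiet = rng.uniform(0.0, 0.6 * T, 400)   # activity ceases after 0.6 T: quiescence
print(ks_uniform(steady, T), ks_uniform(quiet, T))
```

A catalog that shuts off partway through the observation window produces a large distance (here at least 0.4, the fraction of quiet time), while a constant-rate catalog stays near zero.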

  10. 3D shallow velocity model in the area of Pozzo Pitarrone, NE flank of Mt. Etna Volcano, by using SPAC array method.

    NASA Astrophysics Data System (ADS)

    Zuccarello, Luciano; Paratore, Mario; La Rocca, Mario; Ferrari, Ferruccio; Messina, Alfio; Contrafatto, Danilo; Galluzzo, Danilo; Rapisarda, Salvatore

    2016-04-01

    In volcanic environments the propagation of seismic signals through the shallowest layers is strongly affected by lateral heterogeneity, attenuation, scattering, and interaction with the free surface. Therefore, tracing a seismic ray from the recording site back to the source is a complex matter, with obvious implications for source location. For this reason, knowledge of the shallow velocity structure may improve the location of shallow volcano-tectonic earthquakes and volcanic tremor, thus contributing to improved monitoring of volcanic activity. This work focuses on the analysis of seismic noise and volcanic tremor recorded in 2014 by a temporary array installed around Pozzo Pitarrone, on the NE flank of Mt. Etna. Several methods permit a reliable estimation of the shear wave velocity in the shallowest layers through the analysis of a stationary random wavefield such as seismic noise. We applied the single-station HVSR method and the SPAC array method to seismic noise to investigate the local shallow structure. The inversion of dispersion curves produced a shear wave velocity model of the area reliable down to a depth of about 130 m. We also applied the beam-forming array method in the 0.5 Hz - 4 Hz frequency range to both seismic noise and volcanic tremor. The apparent velocity of coherent tremor signals fits quite well the dispersion curve estimated from the analysis of seismic noise, thus providing a further constraint on the estimated velocity model. Moreover, taking advantage of a borehole station installed at 130 m depth in the same area as the array, we obtained a direct estimate of the P-wave velocity by comparing the borehole recordings of local earthquakes with the same events recorded at the surface. Further insight on the P-wave velocity in the upper 130 m layer comes from the surface reflected wave visible in some cases at the borehole station. 
From this analysis we obtained an average P-wave velocity of about 1.2 km/s, in good agreement with the shear wave velocity found from the analysis of seismic noise. To better constrain the inversion we used the HVSR computed at each array station, which also gives a lateral extension to the final 3D velocity model. The obtained results indicate that site effects in the investigated area are quite homogeneous among the array stations.
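The single-station HVSR step can be sketched on synthetic data. The resonance frequency, amplitudes and smoothing window below are arbitrary choices for illustration, not values from the Etna experiment:

```python
import numpy as np

rng = np.random.default_rng(0)
fs, n = 128.0, 2 ** 14                    # sampling rate (Hz), samples
t = np.arange(n) / fs
f0 = 2.0                                  # assumed site resonance (Hz)
# Horizontal components carry extra energy near f0; vertical is plain noise.
h_ns = rng.standard_normal(n) + 5.0 * np.sin(2.0 * np.pi * f0 * t)
h_ew = rng.standard_normal(n) + 5.0 * np.cos(2.0 * np.pi * f0 * t)
v = rng.standard_normal(n)

freqs = np.fft.rfftfreq(n, d=1.0 / fs)
amp = lambda x: np.abs(np.fft.rfft(x))
smooth = lambda s: np.convolve(s, np.ones(31) / 31.0, mode="same")
h = np.sqrt((amp(h_ns) ** 2 + amp(h_ew) ** 2) / 2.0)  # combined horizontals
hv = smooth(h) / smooth(amp(v))                        # smoothed H/V ratio
peak_f = freqs[1:][np.argmax(hv[1:])]                  # skip the DC bin
print(f"H/V peak near {peak_f:.2f} Hz")
```

The spectral ratio of the horizontal to the vertical components peaks at the site's fundamental resonance frequency, which is the observable the HVSR method inverts for.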

  11. Study of blasting seismic effects of underground powerhouse of pumped storage project in granite condition

    NASA Astrophysics Data System (ADS)

    Wan, Sheng; Li, Hui

    2018-03-01

    Through blasting vibration tests, the propagation laws of blasting seismic waves at a pumped storage power project in southern granite conditions are studied. The attenuation coefficient of the seismic wave and the site factor coefficient are obtained by least-squares regression analysis based on Sadovsky's empirical formula, and an empirical formula for the seismic wave is derived. This paper mainly discusses the blasting vibration test and the calculation procedure. Our practice may serve as a reference for similar projects.
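The regression step can be sketched as follows. Sadovsky's formula relates peak particle velocity v to charge mass Q and distance R as v = K (Q^(1/3)/R)^alpha, which is linear in log space; the charges, distances and coefficients below are hypothetical placeholders, not project data:

```python
import numpy as np

# Illustrative fit of Sadovsky's attenuation law v = K * (Q^(1/3)/R)^alpha
# by least squares in log space. All numbers are invented for the sketch.
Q = np.array([50.0, 50.0, 100.0, 100.0, 200.0, 200.0])   # charge mass (kg)
R = np.array([30.0, 60.0, 40.0, 80.0, 50.0, 100.0])      # distance (m)
K_true, alpha_true = 150.0, 1.6
rho = Q ** (1.0 / 3.0) / R                   # scaled-distance term
v = K_true * rho ** alpha_true               # synthetic PPV (cm/s)

# ln v = ln K + alpha * ln rho, so a degree-1 polyfit recovers both.
alpha, lnK = np.polyfit(np.log(rho), np.log(v), 1)
K = np.exp(lnK)
print(f"K = {K:.1f}, alpha = {alpha:.2f}")
```

With field measurements in place of the synthetic v, the same two-line fit yields the site factor K and attenuation exponent alpha for the empirical formula.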

  12. Improvements of Real Time First Motion Focal Mechanism and Noise Characteristics of New Sites at the Puerto Rico Seismic Network

    NASA Astrophysics Data System (ADS)

    Williams, D. M.; Lopez, A. M.; Huerfano, V.; Lugo, J.; Cancel, J.

    2011-12-01

    Seismic networks need quick and efficient ways to obtain information related to seismic events for the purposes of seismic activity monitoring, risk assessment, and scientific knowledge, among others. As part of an IRIS summer internship program, two projects were performed to provide a tool for quick faulting-mechanism determination and to improve seismic data at the Puerto Rico Seismic Network (PRSN). First, a simple routine to obtain a focal mechanism, the geometry of the fault, based on first motions was developed and implemented for data analysts' routine operations at PRSN. The new tool provides the analyst a quick way to assess the probable faulting mechanism while performing the interactive earthquake location procedure. The focal mechanism is generated on the fly as data analysts pick P-wave arrival onsets and first motions. Once first motions have been identified, an in-house PRSN utility is employed to obtain the double-couple representation, which is then plotted using GMT's psmeca utility. Second, we addressed the issue of seismic noise related to thermal fluctuations inside seismic vaults. Seismic sites can be extremely noisy due to proximity to cultural activity and unattended thermal fluctuations inside sensor housings, resulting in skewed readings. In the past, seismologists have used different insulation materials, such as Styrofoam and fiberglass, to reduce the unwanted noise that seismometers experience due to these thermal changes. PRSN traditionally uses Styrofoam boxes to cover its seismic sensors; however, a proper procedure to test how this method compares with other techniques had never been attempted. Properly testing these techniques matters especially in the Caribbean, and in Puerto Rico in particular, because the intense sun and humidity produce large thermal fluctuations. 
We conducted a test based on the methods employed by the IRIS Transportable Array, which insulates the sensor by burying it in sand. Two Güralp CMG-3T sensors connected to RefTek 150 digitizers were used at PRSN's MPR site seismic vault to compare the two types of insulation. Temperature loggers were placed alongside each seismic sensor for a period of one week to observe how much thermal fluctuation occurred with each insulation method and to compare their capability for noise reduction due to thermal fluctuations. With only a single degree Celsius of fluctuation inside the sand (compared to almost twice that value for the foam), the sensor buried in sand provided the best insulation for the seismic vault. In addition, the quality of the data was analyzed by comparing both sensors using PQLX. We show the results of this analysis and also provide site characteristics of new stations to be included in the daily earthquake location operations at the PRSN.

  13. Archaeological Graves Revealing By Means of Seismic-electric Effect

    NASA Astrophysics Data System (ADS)

    Boulytchov, A.

    The seismic-electric effect was applied in the field to detect subsurface archaeological cultural objects. The source of seismic waves was repeated blows of a heavy hammer or powerful signals from a magnetostrictive installation; the main frequency used was 500 Hz. After passing through a soil layer and reaching a second boundary between the upper clayey-sand sediments and an archaeological object, the seismic wave caused electromagnetic fields on both boundaries, which in general is due to dipole charge separation owing to an imbalance of streaming currents induced by the seismic wave on opposite sides of a boundary interface. According to the theoretical work of Pride, an electromagnetic field appears on a boundary between two layers with different physical properties during seismic wave propagation. Electric responses of the electromagnetic fields were measured at the surface by a pair of grounded dipole antennas or by one pivot and a long wire antenna acting as a capacitive pickup. The arrival times of the first series of responses correspond to the seismic wave travel time from the source to the boundary between the soil and clayey-sand layers. The arrival times of the second series of responses correspond to the travel time from the source to the boundary between the clayey-sand layer and the archaeological object. Depths between 0.5 and 10 m were successfully investigated with the method. A similar electromagnetic field on another type of geological structure was also observed by Mikhailov et al. in Massachusetts, but their signals from the two boundaries were faint and much less evident than ours, which were clear and distinct. These field experiments applied the seismic-electric method to archaeological objects for the first time.

  14. Effect of strong elastic contrasts on the propagation of seismic wave in hard-rock environments

    NASA Astrophysics Data System (ADS)

    Saleh, R.; Zheng, L.; Liu, Q.; Milkereit, B.

    2013-12-01

    Understanding the propagation of seismic waves in the presence of strong elastic contrasts, such as topography, tunnels and ore bodies, is still a challenge. Safety in mining is a major concern, and seismic monitoring is the main tool here. For engineering purposes, amplitudes (peak particle velocity/acceleration) and travel times of seismic events (mostly blasts or microseismic events) are critical parameters that have to be determined at various locations in a mine. These parameters are useful in preparing risk maps and in better understanding the spatial and temporal distribution of stress in a mine. Simple constant-velocity models used in mining monitoring studies cannot explain the observed complexities in scattered seismic waves. In hard-rock environments, modeling the elastic seismic wavefield requires detailed 3D petrophysical, infrastructure and topographical data to simulate the propagation of seismic waves with frequencies up to a few kilohertz. With the development of efficient numerical techniques and parallel computation facilities, a solution to such a problem is achievable. In this study, the effects of strong elastic contrasts such as ore bodies, rough topography and tunnels are illustrated using 3D modeling methods. The main tools here are the finite-difference code SOFI3D [1], which has been benchmarked for engineering studies, and the spectral-element code SPECFEM [2], which was developed for global seismology problems. The modeling results show locally enhanced peak particle velocity due to the presence of strong elastic contrasts and topography in the models. [1] Bohlen, T., Parallel 3-D viscoelastic finite difference seismic modeling, Computers & Geosciences 28 (2002) 887-899. [2] Komatitsch, D., and J. Tromp, Introduction to the spectral-element method for 3-D seismic wave propagation, Geophys. J. Int., 139, 806-822, 1999.
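A minimal 1D analogue of such finite-difference modeling (far simpler than SOFI3D or SPECFEM, and purely illustrative) shows a pulse partially reflecting at a strong velocity contrast; grid size, velocities and time step are arbitrary choices:

```python
import numpy as np

# 1D second-order finite-difference wave equation with a strong
# velocity contrast. A pulse launched in the slow medium splits into
# transmitted and reflected energy at the interface.
nx, nt = 600, 700                     # grid points, time steps
dx, dt = 1.0, 2e-4                    # m, s
c = np.full(nx, 2000.0)               # slow medium (m/s)
c[300:] = 5000.0                      # strong contrast at x = 300 m
r = (c * dt / dx) ** 2                # squared Courant numbers
assert (c * dt / dx).max() <= 1.0     # CFL stability condition

u_prev = np.zeros(nx)
u = np.zeros(nx)
u[100] = 1.0                          # impulsive disturbance at x = 100 m
for _ in range(nt):
    lap = np.zeros(nx)
    lap[1:-1] = u[2:] - 2.0 * u[1:-1] + u[:-2]
    u_next = 2.0 * u - u_prev + r * lap
    u_prev, u = u, u_next

# Energy appears on both sides of the contrast: part of the pulse is
# transmitted into the fast medium, part is reflected back.
print(np.max(np.abs(u[310:550])), np.max(np.abs(u[50:290])))
```

The 3D codes cited in the abstract solve the same wave equation with far more physics (viscoelasticity, topography, free surfaces), but the partitioning of energy at strong contrasts is already visible in this toy setting.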

  15. Inferring crustal viscosity from seismic velocity: Application to the lower crust of Southern California

    NASA Astrophysics Data System (ADS)

    Shinevar, William J.; Behn, Mark D.; Hirth, Greg; Jagoutz, Oliver

    2018-07-01

    We investigate the role of composition on the viscosity of the lower crust through a joint inversion of seismic P-wave (Vp) and S-wave (Vs) velocities. We determine the efficacy of using seismic velocity to constrain viscosity, extending previous research demonstrating robust relationships between seismic velocity and crustal composition, as well as crustal composition and viscosity. First, we calculate equilibrium mineral assemblages and seismic velocities for a global compilation of crustal rocks at relevant pressures and temperatures. Second, we use a rheological mixing model that incorporates single-phase flow laws for major crust-forming minerals to calculate aggregate viscosity from predicted mineral assemblages. We find a robust correlation between crustal viscosity and Vp together with Vs in the α-quartz regime. Using seismic data, geodetic surface strain rates, and heat flow measurements from Southern California, our method predicts that lower crustal viscosity varies regionally by four orders of magnitude, and lower crustal stress varies by three orders of magnitude at 25 km depth. At least half of the total variability in stress can be attributed to composition, implying that regional lithology has a significant effect on lower crustal geodynamics. Finally, we use our method to predict the depth of the brittle-ductile transition and compare this to regional variations of the seismic-aseismic transition. The variations in the seismic-aseismic transition are not explained by the variations in our model rheology inferred from the geophysical observations. Thus, we conclude that fabric development, in conjunction with compositional variations (i.e., quartz and mica content), is required to explain the regional changes in the seismic-aseismic transition.

  16. Combining Real-time Seismic and Geodetic Data to Improve Rapid Earthquake Information

    NASA Astrophysics Data System (ADS)

    Murray, M. H.; Neuhauser, D. S.; Gee, L. S.; Dreger, D. S.; Basset, A.; Romanowicz, B.

    2002-12-01

    The Berkeley Seismological Laboratory operates seismic and geodetic stations in the San Francisco Bay area and northern California for earthquake and deformation monitoring. The seismic systems, part of the Berkeley Digital Seismic Network (BDSN), include strong motion and broadband sensors, and 24-bit dataloggers. The data from 20 GPS stations, part of the Bay Area Regional Deformation (BARD) network of more than 70 stations in northern California, are acquired in real-time. We have developed methods to acquire GPS data at 12 stations that are collocated with the seismic systems using the seismic dataloggers, which have large on-site data buffering and storage capabilities, merge them with the seismic data stream in MiniSeed format, and continuously stream both data types using reliable frame relay and/or radio modem telemetry. Currently, the seismic data are incorporated into the Rapid Earthquake Data Integration (REDI) project to provide notification of earthquake magnitude, location, moment tensor, and strong motion information for hazard mitigation and emergency response activities. The geodetic measurements can provide complementary constraints on earthquake faulting, including the location and extent of the rupture plane, unambiguous resolution of the nodal plane, and the distribution of slip on the fault plane, which can be used, for example, to refine strong motion shake maps. We are developing methods to rapidly process the geodetic data to monitor transient deformation, such as coseismic station displacements, and to combine this information with the seismic observations to improve finite-fault characterization of large earthquakes. The GPS data are currently processed at hourly intervals with 2-cm precision in horizontal position, and we are beginning a pilot project in the Bay Area in collaboration with the California Spatial Reference Center to do epoch-by-epoch processing with greater precision.

  17. Detailed Velocity and Density models of the Cascadia Subduction Zone from Prestack Full-Waveform Inversion

    NASA Astrophysics Data System (ADS)

    Fortin, W.; Holbrook, W. S.; Mallick, S.; Everson, E. D.; Tobin, H. J.; Keranen, K. M.

    2014-12-01

    Understanding the geologic composition of the Cascadia Subduction Zone (CSZ) is critically important in assessing seismic hazards in the Pacific Northwest. Despite being a potential earthquake and tsunami threat to millions of people, key details of the structure and fault mechanisms remain poorly understood in the CSZ. In particular, the position and character of the subduction interface remain elusive due to its relative aseismicity and low seismic reflectivity, making imaging difficult for both passive and active source methods. Modern active-source reflection seismic data acquired as part of the COAST project in 2012 provide an opportunity to study the transition from the Cascadia basin, across the deformation front, and into the accretionary prism. Coupled with advances in seismic inversion methods, these new data allow us to produce detailed velocity models of the CSZ and accurate pre-stack depth migrations for studying geologic structure. While still computationally expensive, current computing clusters can perform seismic inversions at resolutions that match that of the seismic image itself. Here we present pre-stack full waveform inversions of the central seismic line of the COAST survey offshore Washington state. The resultant velocity model is produced by inversion at every CMP location, 6.25 m laterally, with vertical resolution of 0.2 times the dominant seismic frequency. We report an average correlation value above 0.8 across the entire seismic line, determined by comparing synthetic gathers to the real pre-stack gathers. These detailed velocity models, both Vp and Vs, along with the density model, are a necessary step toward a detailed porosity cross section to be used to determine the role of fluids in the CSZ. Additionally, the P-velocity model is used to produce a pre-stack depth migration image of the CSZ.

  18. Reservoir Identification: Parameter Characterization or Feature Classification

    NASA Astrophysics Data System (ADS)

    Cao, J.

    2017-12-01

    The ultimate goal of oil and gas exploration is to find the oil or gas reservoirs with industrial mining value. Therefore, the core task of modern oil and gas exploration is to identify oil or gas reservoirs on seismic profiles. Traditionally, the reservoir is identified by seismic inversion of a series of physical parameters such as porosity, saturation, permeability, formation pressure, and so on. Due to the heterogeneity of the geological medium, the approximations of the inversion model, and the incompleteness and noisiness of the data, the inversion results are highly uncertain and must be calibrated or corrected with well data. In areas with few or no wells, reservoir identification based on seismic inversion is high-risk. Reservoir identification is essentially a classification problem: in the identification process, the underground rocks are divided into reservoirs with industrial mining value and host rocks without industrial mining value. In addition to the traditional classification by physical parameters, the classification may be achieved using one or a few comprehensive features. By introducing the concept of the seismic-print, we have developed a new reservoir identification method based on seismic-print analysis. Furthermore, we explore the possibility of using deep learning to discover the seismic-print characteristics of oil and gas reservoirs. Preliminary experiments have shown that deep learning on seismic data can distinguish gas reservoirs from host rocks. The combination of seismic-print analysis and seismic deep learning is expected to be a more robust reservoir identification method. The work was supported by NSFC under grants No. 41430323 and No. U1562219, and the National Key Research and Development Program under Grant No. 2016YFC0601

  19. Identification of ground motion features for high-tech facility under far field seismic waves using wavelet packet transform

    NASA Astrophysics Data System (ADS)

    Huang, Shieh-Kung; Loh, Chin-Hsiung; Chen, Chin-Tsun

    2016-04-01

    Seismic records from large-magnitude, distant earthquakes may contain long-period seismic waves that have small amplitude but dominant periods of up to 10 s. In general, such long-period waves will not endanger the safety of structural systems or cause any discomfort to human activity. However, for distant earthquakes, this type of seismic wave may cause glitches or even breakdowns in important equipment/facilities (such as the high-precision facilities in high-tech fabs) and eventually damage the interests of the company if the amplitude becomes significant. A previous study showed that ground motion features such as time-variant dominant frequencies, extracted using moving-window singular spectrum analysis (MWSSA), and the amplitude characteristics of long-period waves, identified from slope changes of the ground motion Arias intensity, can efficiently indicate the damage severity to high-precision facilities. However, embedding a large Hankel matrix to extract long-period seismic waves makes MWSSA a time-consuming process. In this study, seismic ground motion data collected from the broadband seismometer network located in Taiwan were used (with epicenter distances over 1000 km). To monitor significant long-period waves, the low-frequency components of these data are extracted using the wavelet packet transform (WPT), and the wavelet entropy of the coefficients is used to identify the amplitude characteristics of long-period waves. The proposed method is time-saving compared to MWSSA and can easily be implemented for real-time detection. Comparisons of this method across different seismic events, and of the corresponding damage severity to high-precision facilities in high-tech fabs, are made.
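The wavelet-entropy idea can be sketched with a plain Haar decomposition standing in for the authors' wavelet packet transform (this is not their pipeline, and the signals are synthetic): a monochromatic long-period wave concentrates energy in a few wavelet bands and yields low entropy, while broadband noise spreads energy across bands and yields high entropy.

```python
import numpy as np

def haar_level_energies(x, levels):
    """Energies of Haar detail coefficients per level, plus the final approximation."""
    a = np.asarray(x, dtype=float)
    energies = []
    for _ in range(levels):
        a = a[: len(a) // 2 * 2]
        d = (a[0::2] - a[1::2]) / np.sqrt(2.0)   # detail band
        a = (a[0::2] + a[1::2]) / np.sqrt(2.0)   # approximation
        energies.append(np.sum(d ** 2))
    energies.append(np.sum(a ** 2))
    return np.array(energies)

def wavelet_entropy(x, levels=8):
    """Shannon entropy of the relative energies across wavelet bands."""
    e = haar_level_energies(x, levels)
    p = e / e.sum()
    p = p[p > 0]
    return float(-np.sum(p * np.log(p)))

rng = np.random.default_rng(4)
n = 2 ** 12
t = np.arange(n)
long_period = np.sin(2.0 * np.pi * t / 1024.0)   # dominant 1024-sample period
noise = rng.standard_normal(n)
print(wavelet_entropy(long_period), wavelet_entropy(noise))
```

A low entropy value thus flags records dominated by a narrow long-period band, which is the detection criterion the abstract describes.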

  20. Seismic design verification of LMFBR structures

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Not Available

    1977-07-01

    The report provides an assessment of the seismic design verification procedures currently used for nuclear power plant structures, a comparison of the dynamic test methods available, and conclusions and recommendations for future LMFBR structures.

  1. Automated seismic detection of landslides at regional scales: a Random Forest based detection algorithm for Alaska and the Himalaya.

    NASA Astrophysics Data System (ADS)

    Hibert, Clement; Malet, Jean-Philippe; Provost, Floriane; Michéa, David; Geertsema, Marten

    2017-04-01

    Detection of landslide occurrences and measurement of their dynamic properties during run-out is a high research priority but a logistical and technical challenge. Seismology has started to help in several important ways. Taking advantage of the densification of global, regional and local networks of broadband seismic stations, recent advances now permit the seismic detection and location of landslides in near-real-time. Seismic detection could greatly increase the spatio-temporal resolution at which we study landslide triggering, which is critical to better understand the influence of external forcings such as rainfall and earthquakes. However, automatically detecting seismic signals generated by landslides remains a challenge, especially for events with volumes below one million cubic meters. The low signal-to-noise ratio classically observed for landslide-generated seismic signals and the difficulty of discriminating these signals from those generated by regional earthquakes or by anthropogenic and natural noise are some of the obstacles that must be circumvented. We present a new method for automatically constructing instrumental landslide catalogues from continuous seismic data. We developed a robust and versatile solution that can be implemented in any context where seismic detection of landslides or other mass movements is relevant. The method is based on spectral detection of the seismic signals and identification of the sources with a Random Forest algorithm. The spectral detection allows detecting signals with low signal-to-noise ratio, while the Random Forest algorithm achieves a high rate of positive identification of the seismic signals generated by landslides and other seismic sources. 
We present here the preliminary results of applying this processing chain in two contexts: i) in the Himalaya, with data acquired between 2002 and 2005 by the Hi-Climb network; ii) in Alaska, using data recorded by the permanent regional network and the USArray, which is currently being deployed in this region. The landslide seismic catalogues are compared to geomorphological catalogues in terms of number of events and, when possible, dates.
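The spectral-detection stage (only; the Random Forest classification stage is not reproduced here) can be sketched on a synthetic trace: band-limited spectral energy in sliding windows is compared against a noise-adaptive threshold. The sampling rate, band, window length and "event" are all invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(2)
fs = 50.0
n = int(fs * 120.0)                       # 120 s trace at 50 Hz
trace = rng.standard_normal(n)            # background noise
i0, i1 = int(55 * fs), int(65 * fs)
seg_t = np.arange(i0, i1) / fs
trace[i0:i1] += 3.0 * np.sin(2.0 * np.pi * 1.0 * seg_t)   # 1 Hz "event"

win = int(5 * fs)                         # 5 s analysis windows
nwin = n // win
segs = trace[: nwin * win].reshape(nwin, win)
freqs = np.fft.rfftfreq(win, d=1.0 / fs)
band = (freqs >= 0.5) & (freqs <= 2.0)    # target low-frequency band
energy = (np.abs(np.fft.rfft(segs, axis=1)[:, band]) ** 2).sum(axis=1)
thresh = 3.0 * np.median(energy)          # noise-adaptive threshold
hits = np.where(energy > thresh)[0] * 5.0 # window start times (s)
print(hits)
```

In the full method, windows flagged this way would then be passed to the trained classifier to separate landslides from earthquakes and noise sources.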

  2. Microseismic monitoring: a tool for reservoir characterization.

    NASA Astrophysics Data System (ADS)

    Shapiro, S. A.

    2011-12-01

    Characterization of the fluid-transport properties of rocks is one of the most important, yet one of the most challenging, goals of reservoir geophysics. There are fundamental difficulties in using active seismic methods to estimate fluid mobility. However, it would be very attractive to be able to explore the hydraulic properties of rocks using seismic methods because of their large penetration range and high resolution. Microseismic monitoring of borehole fluid injections is exactly the tool that provides such a possibility. Stimulation of rocks by fluid injection belongs to standard development practice for hydrocarbon and geothermal reservoirs. Production of shale gas and heavy oil, CO2 sequestration, and enhanced recovery of oil and geothermal energy are branches that require broad application of this technology. The fact that fluid injection causes seismicity has been well established for several decades. Observations and data analyses show that seismicity is triggered by different processes, ranging from linear pore-pressure diffusion to the non-linear impact of fluids on rocks, leading to hydraulic fracturing and strong changes in their structure and permeability. Understanding and monitoring fluid-induced seismicity is necessary for hydraulic characterization of reservoirs, for assessment of reservoir stimulation, and for controlling the related seismic hazard. This presentation provides an overview of several theoretical, numerical, laboratory and field studies of fluid-induced microseismicity and gives an introduction to the principles of seismicity-based reservoir characterization.
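The linear pore-pressure diffusion end member is often summarized by a parabolic triggering front, r(t) = sqrt(4*pi*D*t), whose envelope over a cloud of induced events encodes the hydraulic diffusivity D. A synthetic sketch of this envelope estimate (all values are illustrative, not field data):

```python
import numpy as np

rng = np.random.default_rng(5)
D_true = 0.5                                # hydraulic diffusivity (m^2/s)
t = rng.uniform(60.0, 3600.0, 500)          # event occurrence times (s)
# Events scatter at distances below the triggering front r(t) = sqrt(4*pi*D*t).
r = np.sqrt(4.0 * np.pi * D_true * t) * rng.uniform(0.2, 1.0, 500)

# Each event implies D >= r^2 / (4*pi*t); the maximum over the cloud
# approximates the true diffusivity from below.
D_est = np.max(r ** 2 / (4.0 * np.pi * t))
print(f"estimated D ~ {D_est:.3f} m^2/s")
```

Fitting the distance-versus-time envelope of real induced-seismicity clouds in this spirit is one way seismicity-based reservoir characterization extracts hydraulic properties from microseismic catalogs.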

  3. New methods for engineering site characterization using reflection and surface wave seismic survey

    NASA Astrophysics Data System (ADS)

    Chaiprakaikeow, Susit

    This study presents two new seismic testing methods for engineering applications: a new shallow seismic reflection method and Time Filtered Analysis of Surface Waves (TFASW). Both methods are described in this dissertation. The new shallow seismic reflection method was developed to measure reflection at a single point using two to four receivers, assuming homogeneous, horizontal layering. It uses one or more shakers driven by a swept-sine function as the source, and the cross-correlation technique to identify wave arrivals. The phase difference between the source forcing function and the ground motion due to the dynamic response of the shaker-ground interface was corrected by using a reference geophone. Attenuated high-frequency energy was also recovered using whitening in the frequency domain. The new shallow seismic reflection test was performed at the crest of Porcupine Dam in Paradise, Utah, using two horizontal Vibroseis sources and four receivers at spacings between 6 and 300 ft. Unfortunately, the results showed no clear evidence of reflectors despite correction of the magnitude and phase of the signals. However, an improvement in the shape of the cross-correlations was noticed after the corrections: the corrected cross-correlated signals showed distinct primary lobes up to 150 ft offset, and more consistent maximum peaks were observed in the corrected waveforms. TFASW is a new surface (Rayleigh) wave method for determining the shear wave velocity profile at a site. It is a time-domain method, as opposed to the Spectral Analysis of Surface Waves (SASW) method, which is a frequency-domain method. It uses digital filtering to optimize the bandwidth used to determine the dispersion curve. Tests at three different sites in Utah showed good agreement between the dispersion curves measured with the TFASW and SASW methods. 
The advantage of the TFASW method is that the dispersion curves showed less scatter at long wavelengths, as a result of the wider bandwidth used in those tests.
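The cross-correlation step used to identify wave arrivals from a swept-sine source can be sketched as follows; the sweep parameters, delay and noise level are invented for illustration, not taken from the Porcupine Dam test:

```python
import numpy as np

fs = 1000.0                                  # sampling rate (Hz)
T, f0, f1 = 4.0, 10.0, 100.0                 # sweep length (s) and band (Hz)
t = np.arange(0.0, T, 1.0 / fs)
sweep = np.sin(2.0 * np.pi * (f0 + (f1 - f0) * t / (2.0 * T)) * t)  # linear chirp

delay = 0.25                                 # "true" travel time (s)
d = int(delay * fs)
trace = np.zeros(len(sweep) + d)
trace[d:] = sweep                            # delayed arrival
trace += 0.5 * np.random.default_rng(3).standard_normal(len(trace))  # noise

# Correlating the trace with the pilot sweep collapses the chirp into
# a compact (Klauder) wavelet whose peak marks the travel time.
xc = np.correlate(trace, sweep, mode="full")
lag = (np.argmax(xc) - (len(sweep) - 1)) / fs
print(f"picked travel time: {lag:.3f} s")
```

This is why swept-sine sources work well with cross-correlation: the long, low-amplitude sweep is compressed into a sharp arrival even when the raw trace is buried in noise.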

  4. Possibility of Earthquake-prediction by analyzing VLF signals

    NASA Astrophysics Data System (ADS)

    Ray, Suman; Chakrabarti, Sandip Kumar; Sasmal, Sudipta

    2016-07-01

    Prediction of seismic events is one of the most challenging tasks for the scientific community. The conventional way to predict earthquakes is to monitor crustal structure movements, though this method has not yet yielded satisfactory results; furthermore, it fails to give any short-term prediction. Recently, it has been noticed that prior to a seismic event a huge amount of energy is released, which may create disturbances in the lower part of the D-layer/E-layer of the ionosphere. This ionospheric disturbance may be used as a precursor of earthquakes. Since VLF radio waves propagate inside the waveguide formed by the lower ionosphere and the Earth's surface, these signals may be used to identify ionospheric disturbances due to seismic activity. We have analyzed VLF signals to find correlations, if any, between VLF signal anomalies and seismic activity. We carried out both case-by-case studies and a statistical analysis using a whole year of data. In both approaches we found that the nighttime amplitude of the VLF signals fluctuated anomalously three days before seismic events. We also found that the terminator time of the VLF signals shifted anomalously towards nighttime a few days before major seismic events. We calculated the D-layer preparation time and D-layer disappearance time from the VLF signals and observed that both became anomalously high 1-2 days before seismic events. We also found strong evidence indicating that it may be possible in the future to predict the locations of earthquake epicenters by analyzing VLF signals along multiple propagation paths.

  5. EMERALD: A Flexible Framework for Managing Seismic Data

    NASA Astrophysics Data System (ADS)

    West, J. D.; Fouch, M. J.; Arrowsmith, R.

    2010-12-01

    The seismological community is challenged by the vast quantity of new broadband seismic data provided by large-scale seismic arrays such as EarthScope’s USArray. While this bonanza of new data enables transformative scientific studies of the Earth’s interior, it also illuminates limitations in the methods used to prepare and preprocess those data. At a recent seismic data processing focus group workshop, many participants expressed the need for better systems to minimize the time and tedium spent on data preparation in order to increase the efficiency of scientific research. Another challenge related to data from all large-scale transportable seismic experiments is that there currently exists no system for discovering and tracking changes in station metadata. This critical information, such as station location, sensor orientation, instrument response, and clock timing data, may change over the life of an experiment and/or be subject to post-experiment correction. Yet nearly all researchers utilize metadata acquired with the downloaded data, even though subsequent metadata updates might alter or invalidate results produced with older metadata. A third long-standing issue for the seismic community is the lack of easily exchangeable seismic processing codes. This problem stems directly from the storage of seismic data as individual time series files, and the history of each researcher developing his or her preferred data file naming convention and directory organization. Because most processing codes rely on the underlying data organization structure, such codes are not easily exchanged between investigators. To address these issues, we are developing EMERALD (Explore, Manage, Edit, Reduce, & Analyze Large Datasets). The goal of the EMERALD project is to provide seismic researchers with a unified, user-friendly, extensible system for managing seismic event data, thereby increasing the efficiency of scientific enquiry. 
EMERALD stores seismic data and metadata in a state-of-the-art open source relational database (PostgreSQL), and can, on a timed basis or on demand, download the most recent metadata, compare it with previously acquired values, and alert the user to changes. The backend relational database is capable of easily storing and managing many millions of records. The extensible, plug-in architecture of the EMERALD system allows any researcher to contribute new visualization and processing methods written in any of 12 programming languages, and a central Internet-enabled repository for such methods provides users with the opportunity to download, use, and modify new processing methods on demand. EMERALD includes data acquisition tools allowing direct importation of seismic data, and also imports data from a number of existing seismic file formats. Pre-processed clean sets of data can be exported as standard SAC files with user-defined file naming and directory organization, for use with existing processing codes. The EMERALD system incorporates existing acquisition and processing tools, including SOD, TauP, GMT, and FISSURES/DHI, making much of the functionality of those tools available in a unified system with a user-friendly web browser interface. EMERALD is now in beta test. See emerald.asu.edu or contact john.d.west@asu.edu for more details.
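
The metadata change tracking EMERALD performs can be illustrated with a minimal snapshot comparison; the station record and field names below are hypothetical, and EMERALD's actual checks run against its PostgreSQL backend rather than in-memory dictionaries.

```python
# Illustrative sketch of a metadata-change check: compare a freshly downloaded
# station-metadata snapshot against the stored one and report changed fields.
def metadata_changes(stored, fresh):
    """Return {station: {field: (old, new)}} for every differing field."""
    changes = {}
    for station, new_meta in fresh.items():
        old_meta = stored.get(station, {})
        diff = {k: (old_meta.get(k), v)
                for k, v in new_meta.items() if old_meta.get(k) != v}
        if diff:
            changes[station] = diff
    return changes

# Hypothetical station record: a sensor-orientation correction was published.
stored = {"TA.109C": {"lat": 32.889, "lon": -117.105, "azimuth": 0.0}}
fresh  = {"TA.109C": {"lat": 32.889, "lon": -117.105, "azimuth": 1.5}}
print(metadata_changes(stored, fresh))
# -> {'TA.109C': {'azimuth': (0.0, 1.5)}}
```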

  6. 2.5D S-wave velocity model of the TESZ area in northern Poland from receiver function analysis

    NASA Astrophysics Data System (ADS)

    Wilde-Piorko, Monika; Polkowski, Marcin; Grad, Marek

    2016-04-01

    Receiver function (RF) analysis locally provides the signature of sharp seismic discontinuities and information about the shear wave (S-wave) velocity distribution beneath the seismic station. The data recorded by the "13 BB Star" broadband seismic stations (Grad et al., 2015) and by a few PASSEQ broadband seismic stations (Wilde-Piórko et al., 2008) are analysed to investigate the crustal and upper mantle structure in the Trans-European Suture Zone (TESZ) in northern Poland. The TESZ is one of the most prominent suture zones in Europe, separating the young Palaeozoic platform from the much older Precambrian East European craton. Compilation of over thirty deep seismic refraction and wide-angle reflection profiles, vertical seismic profiling in over one hundred thousand boreholes, and magnetic, gravity, magnetotelluric and thermal methods allowed for the creation of a high-resolution 3D P-wave velocity model down to 60 km depth in the area of Poland (Grad et al. 2016). The receiver function methods, in turn, provide an opportunity to create an S-wave velocity model. A modified ray-tracing method (Langston, 1977) is used to calculate the response of a structure with dipping interfaces to an incoming plane wave with fixed slowness and back-azimuth. The 3D P-wave velocity model is interpolated to a 2.5D P-wave velocity model beneath each seismic station, and synthetic back-azimuthal sections of receiver functions are calculated for different Vp/Vs ratios. Densities are calculated with the combined formulas of Berteussen (1977) and Gardner et al. (1974). Next, the synthetic back-azimuthal sections of RF are compared with observed back-azimuthal sections of RF for the "13 BB Star" and PASSEQ seismic stations to find the best 2.5D S-wave models down to 60 km depth. National Science Centre Poland provided financial support for this work through NCN grant DEC-2011/02/A/ST10/00284.

  7. Methods and apparatus for use in detecting seismic waves in a borehole

    DOEpatents

    West, Phillip B.; Fincke, James R.; Reed, Teddy R.

    2006-05-23

    The invention provides methods and apparatus for detecting seismic waves propagating through a subterranean formation surrounding a borehole. In a first embodiment, a sensor module uses the rotation of bogey wheels to extend and retract a sensor package for selective contact and magnetic coupling to casing lining the borehole. In a second embodiment, a sensor module is magnetically coupled to the casing wall during its travel and dragged therealong while maintaining contact therewith. In a third embodiment, a sensor module is interfaced with the borehole environment to detect seismic waves using coupling through liquid in the borehole. Two or more of the above embodiments may be combined within a single sensor array to produce a seismic survey that merges the best outputs of each embodiment into a single data set.

  8. Method for determining formation quality factor from well log data and its application to seismic reservoir characterization

    DOEpatents

    Walls, Joel; Taner, M. Turhan; Dvorkin, Jack

    2006-08-08

    A method for seismic characterization of subsurface Earth formations includes determining at least one of compressional velocity and shear velocity, and determining reservoir parameters of subsurface Earth formations, at least including density, from data obtained from a wellbore penetrating the formations. A quality factor for the subsurface formations is calculated from the velocity, the density and the water saturation. A synthetic seismogram is calculated from the calculated quality factor and from the velocity and density. The synthetic seismogram is compared to a seismic survey made in the vicinity of the wellbore. At least one parameter is adjusted. The synthetic seismogram is recalculated using the adjusted parameter, and the adjusting, recalculating and comparing are repeated until a difference between the synthetic seismogram and the seismic survey falls below a selected threshold.
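
The adjust-recalculate-compare loop in the claim can be sketched generically. The forward model and the simple descent rule below are stand-ins for illustration only, not the patented quality-factor, velocity, and density computation.

```python
# Generic sketch of the patent's fitting loop: adjust a parameter, recompute a
# synthetic from it, compare with the observed data, and repeat until the
# misfit falls below a threshold. The forward model here is a toy stand-in.
def fit_parameter(observed, forward, p0, step=0.1, tol=1e-3, max_iter=1000):
    """Crude coordinate descent on a single scalar parameter p."""
    def misfit(p):
        return sum((s - o) ** 2 for s, o in zip(forward(p), observed))
    p = p0
    current = misfit(p)
    for _ in range(max_iter):
        if current < tol:          # difference below selected threshold: stop
            break
        trials = [(misfit(c), c) for c in (p + step, p - step)]
        best, p_best = min(trials)
        if best < current:
            current, p = best, p_best
        else:
            step *= 0.5            # refine the adjustment when stuck
    return p

# Toy forward model: the synthetic trace is a scaled wavelet.
wavelet = [0.0, 1.0, -0.5, 0.25, 0.0]
observed = [2.0 * w for w in wavelet]
p_fit = fit_parameter(observed, lambda p: [p * w for w in wavelet], p0=0.5)
print(round(p_fit, 2))  # -> 2.0
```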

  9. Machine learning reveals cyclic changes in seismic source spectra in Geysers geothermal field

    PubMed Central

    Paisley, John

    2018-01-01

    The earthquake rupture process comprises complex interactions of stress, fracture, and frictional properties. New machine learning methods demonstrate great potential to reveal patterns in time-dependent spectral properties of seismic signals and enable identification of changes in faulting processes. Clustering of 46,000 earthquakes of 0.3 < ML < 1.5 from the Geysers geothermal field (CA) yields groupings that have no reservoir-scale spatial patterns but clear temporal patterns. Events with similar spectral properties repeat on annual cycles within each cluster and track changes in the water injection rates into the Geysers reservoir, indicating that changes in acoustic properties and faulting processes accompany changes in thermomechanical state. The methods open new means to identify and characterize subtle changes in seismic source properties, with applications to tectonic and geothermal seismicity. PMID:29806015

  10. Seismic profile analysis of the Kangra and Dehradun re-entrant of NW Himalayan Foreland thrust belt, India: A new approach to delineate subsurface geometry

    NASA Astrophysics Data System (ADS)

    Dey, Joyjit; Perumal, R. Jayangonda; Sarkar, Subham; Bhowmik, Anamitra

    2017-08-01

    In the NW Sub-Himalayan frontal thrust belt of India, seismic interpretations of the subsurface geometry of the Kangra and Dehradun re-entrants do not match previously proposed models, because those models lack the direct quantitative measurements on the seismic profile required to constrain the subsurface structural architecture. Here we use a predictive angular function to establish quantitative geometric relationships between fault and fold shapes with the 'distance-displacement method' (D-d method), a straightforward predictive means of probing the possible structural network from a seismic profile. Two seismic profiles, Kangra-2 and Kangra-4, of the Kangra re-entrant, Himachal Pradesh (India), are investigated for the fault-related folds associated with the Balh and Paror anticlines. For the Paror anticline, a final cut-off angle β = 35° was obtained by transforming the seismic time profile into a depth profile to corroborate the interpreted structures. The estimated shortening along the Jawalamukhi Thrust and Jhor Fault, lying between the Himalayan Frontal Thrust (HFT) and the Main Boundary Thrust (MBT) in the frontal fold-thrust belt, was found to be 6.06 and 0.25 km, respectively. Lastly, the geometric method of fold-fault relationships has been applied to document the existence of a fault-bend fold above the HFT. Measurement of shortening along the fault plane is employed as an ancillary tool to demonstrate the multi-bending geometry of the blind thrust of the Dehradun re-entrant.

  11. Application of USNRC NUREG/CR-6661 and draft DG-1108 to evolutionary and advanced reactor designs

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chang 'Apollo', Chen

    2006-07-01

    For the seismic design of evolutionary and advanced nuclear reactor power plants, there are definite financial advantages in the application of USNRC NUREG/CR-6661 and draft Regulatory Guide DG-1108. NUREG/CR-6661, 'Benchmark Program for the Evaluation of Methods to Analyze Non-Classically Damped Coupled Systems', was prepared by Brookhaven National Laboratory (BNL) for the USNRC, and draft Regulatory Guide DG-1108 is the proposed revision to the current Regulatory Guide (RG) 1.92, Revision 1, 'Combining Modal Responses and Spatial Components in Seismic Response Analysis'. The draft Regulatory Guide DG-1108 is available at http://members.cox.net/apolloconsulting, which also provides a link to the USNRC ADAMS site to search for NUREG/CR-6661 in text or image format. Draft Regulatory Guide DG-1108 removes unnecessary conservatism in the modal combinations for closely spaced modes in seismic response spectrum analysis. Its application will be very helpful in coupled seismic analysis of structures and heavy equipment to reduce seismic responses, and in piping system seismic design. In the NUREG/CR-6661 benchmark program, which investigated coupled seismic analysis of structures and equipment or piping systems with different damping values, three of the four participants applied the complex-mode solution method to handle different damping values for structures, equipment, and piping systems. The fourth participant applied the classical normal-mode method with equivalent weighted damping values. Coupled analysis will reduce equipment responses when the equipment or piping system and the structure are in or close to resonance. However, this reduction occurs only if the realistic DG-1108 modal response combination method is applied, because closely spaced modes are produced when structure and equipment or piping systems are in or close to resonance. Otherwise, the conservatism in the current Regulatory Guide 1.92, Revision 1, will overshadow the advantage of coupled analysis. All four participants applied the realistic modal combination method of DG-1108; consequently, more realistic and reduced responses were obtained. (authors)

  12. Feasibility of using a seismic surface wave method to study seasonal and weather effects on shallow surface soils

    USDA-ARS?s Scientific Manuscript database

    The objective of the paper is to study the temporal variations of the subsurface soil properties due to seasonal and weather effects using a combination of a new seismic surface method and an existing acoustic probe system. A laser Doppler vibrometer (LDV) based multi-channel analysis of surface wav...

  13. Evaluation of Cross-Correlation Methods on a Massive Scale for Accurate Relocation of Seismic Events in East Asia

    DTIC Science & Technology

    2006-04-21

    purposes, such as scientific study of earthquake interactions in a fault zone or seismic sources associated with magma conduits in a volcano, relative... Kilauea, J. Geophys. Res., 99, 375-393. HARRIS, D.B. (1991), A waveform correlation method for identifying quarry explosions, Bull. Seismol. Soc. Am

  14. Strong ground motion prediction using virtual earthquakes.

    PubMed

    Denolle, M A; Dunham, E M; Prieto, G A; Beroza, G C

    2014-01-24

    Sedimentary basins increase the damaging effects of earthquakes by trapping and amplifying seismic waves. Simulations of seismic wave propagation in sedimentary basins capture this effect; however, there exists no method to validate these results for earthquakes that have not yet occurred. We present a new approach for ground motion prediction that uses the ambient seismic field. We apply our method to a suite of magnitude 7 scenario earthquakes on the southern San Andreas fault and compare our ground motion predictions with simulations. Both methods find strong amplification and coupling of source and structure effects, but they predict substantially different shaking patterns across the Los Angeles Basin. The virtual earthquake approach thus provides a new means of predicting long-period strong ground motion.

  15. SHAKING TABLE TEST AND EFFECTIVE STRESS ANALYSIS ON SEISMIC PERFORMANCE WITH SEISMIC ISOLATION RUBBER TO THE INTERMEDIATE PART OF PILE FOUNDATION IN LIQUEFACTION

    NASA Astrophysics Data System (ADS)

    Uno, Kunihiko; Otsuka, Hisanori; Mitou, Masaaki

    The pile foundation is heavily damaged at the boundary between ground types, liquefied and non-liquefied, during an earthquake, and there is a possibility of collapse of the piles. In this study, we conduct a shaking table test and an effective stress analysis of the influence of soil liquefaction and the seismic inertial force exerted on the pile foundation. When the intermediate part of the pile, located at this boundary, is subjected to sectional forces, they can in certain instances exceed those at the pile head. Further, we develop a seismic resistance method for pile foundations in liquefiable ground using seismic isolation rubber, and show that the mid-pile seismic isolation system is very effective.

  16. A simple algorithm for sequentially incorporating gravity observations in seismic traveltime tomography

    USGS Publications Warehouse

    Parsons, T.; Blakely, R.J.; Brocher, T.M.

    2001-01-01

    The geologic structure of the Earth's upper crust can be revealed by modeling variation in seismic arrival times and in potential field measurements. We demonstrate a simple method for sequentially satisfying seismic traveltime and observed gravity residuals in an iterative 3-D inversion. The algorithm is portable to any seismic analysis method that uses a gridded representation of velocity structure. Our technique calculates the gravity anomaly resulting from a velocity model by converting to density with Gardner's rule. The residual between calculated and observed gravity is minimized by weighted adjustments to the model velocity-depth gradient where the gradient is steepest and where seismic coverage is least. The adjustments are scaled by the sign and magnitude of the gravity residuals, and a smoothing step is performed to minimize vertical streaking. The adjusted model is then used as a starting model in the next seismic traveltime iteration. The process is repeated until one velocity model can simultaneously satisfy both the gravity anomaly and seismic traveltime observations within acceptable misfits. We test our algorithm with data gathered in the Puget Lowland of Washington state, USA (Seismic Hazards Investigation in Puget Sound [SHIPS] experiment). We perform resolution tests with synthetic traveltime and gravity observations calculated with a checkerboard velocity model using the SHIPS experiment geometry, and show that the addition of gravity significantly enhances resolution. We calculate a new velocity model for the region using SHIPS traveltimes and observed gravity, and show examples where correlation between surface geology and modeled subsurface velocity structure is enhanced.
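
The density-conversion step uses Gardner's rule; a minimal sketch of that conversion and a residual-proportional velocity nudge follows. The gain constant and the plain additive adjustment are illustrative assumptions; the published algorithm additionally weights the adjustment by velocity-depth gradient and seismic coverage and applies vertical smoothing.

```python
# Sketch of the sequential step described above: convert model velocity to
# density with Gardner's rule, then nudge velocities in proportion to the sign
# and size of the gravity residual (observed minus calculated gravity).
def gardner_density(vp_ms):
    """Gardner's rule: density (g/cc) ~= 0.31 * Vp^0.25, with Vp in m/s."""
    return 0.31 * vp_ms ** 0.25

def adjust_velocities(vp_column, gravity_residual_mgal, gain=5.0):
    """Shift a 1-D velocity column in proportion to the residual (gain is
    an illustrative scaling, not the published weighting scheme)."""
    return [vp + gain * gravity_residual_mgal for vp in vp_column]

print(round(gardner_density(3000.0), 3))  # -> 2.294 g/cc
print(adjust_velocities([3000.0, 3500.0], gravity_residual_mgal=2.0))
# -> [3010.0, 3510.0]
```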

  17. Repeating ice-earthquakes beneath David Glacier from the 2012-2015 TAMNNET array

    NASA Astrophysics Data System (ADS)

    Walter, J. I.; Peng, Z.; Hansen, S. E.

    2017-12-01

    The continent of Antarctica has approximately the same surface area as the continental United States, though we know significantly less about its underlying geology and seismic activity. In recent years, improvements in seismic instrumentation, battery technology, and field deployment practices have allowed for continuous broadband stations throughout the dark Antarctic winter. We utilize broadband seismic data from a recent experiment (TAMNNET), which was originally proposed as a structural seismology experiment, for seismic event detection. Our target is to address fundamental questions about regional-scale crustal and environmental seismicity in the study region that comprises the Transantarctic Mountain area of Victoria and Oates Land. We identify most seismicity emanating from David Glacier, upstream of the Drygalski Ice Tongue, which has been documented by several other studies. In order to improve the catalog completeness for the David Glacier area, we utilize a matched-filter technique to identify potential missing earthquakes that may not have been originally detected. This technique utilizes existing cataloged waveforms as templates to scan through continuous data and to identify repeating or nearby earthquakes. With a more robust catalog, we evaluate relative changes in icequake positions, recurrence intervals, and other first-order information. In addition, we attempt to further refine locations of other regional seismicity using a variety of methods including body and surface wave polarization, beamforming, surface wave dispersion, and other seismological methods. This project highlights the usefulness of archiving raw datasets (i.e., passive seismic continuous data), so that researchers may apply new algorithms or techniques to test hypotheses not originally or specifically targeted by the original experimental design.
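
A matched-filter detection of the kind described above slides a template waveform over continuous data and flags windows whose normalized cross-correlation exceeds a threshold. The sketch below (with toy data and an assumed threshold) shows the core computation only; production template matching operates on multi-station, bandpassed waveforms.

```python
# Minimal matched-filter sketch: compute the normalized cross-correlation of a
# template at every lag in a continuous trace and flag exceedances.
from math import sqrt

def matched_filter(data, template, threshold=0.8):
    n = len(template)
    t_mean = sum(template) / n
    t0 = [t - t_mean for t in template]
    t_norm = sqrt(sum(t * t for t in t0))
    detections = []
    for lag in range(len(data) - n + 1):
        win = data[lag:lag + n]
        w_mean = sum(win) / n
        w0 = [w - w_mean for w in win]
        w_norm = sqrt(sum(w * w for w in w0))
        if w_norm == 0:
            continue  # flat window, correlation undefined
        cc = sum(a * b for a, b in zip(t0, w0)) / (t_norm * w_norm)
        if cc >= threshold:
            detections.append((lag, cc))
    return detections

template = [0.0, 1.0, -1.0, 0.5]
data = [0.0] * 5 + [0.0, 2.0, -2.0, 1.0] + [0.0] * 5  # buried repeat, 2x amplitude
print(matched_filter(data, template))  # single detection at lag 5, cc ~ 1.0
```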

  18. Effects of volcano topography on seismic broad-band waveforms

    NASA Astrophysics Data System (ADS)

    Neuberg, Jürgen; Pointer, Tim

    2000-10-01

    Volcano seismology often deals with rather shallow seismic sources and seismic stations deployed in their near field. The complex stratigraphy on volcanoes and near-field source effects have a strong impact on the seismic wavefield, complicating the interpretation techniques that are usually employed in earthquake seismology. In addition, as most volcanoes have a pronounced topography, the interference of the seismic wavefield with the stress-free surface results in severe waveform perturbations that affect seismic interpretation methods. In this study we deal predominantly with the surface effects, but take into account the impact of a typical volcano stratigraphy as well as near-field source effects. We derive a correction term for plane seismic waves and a plane free surface such that for smooth topographies the effect of the free surface can be totally removed. Seismo-volcanic sources radiate energy in a broad frequency range with a correspondingly wide range of Fresnel zones. A 2-D boundary element method is employed to study how the size of the Fresnel zone depends on source depth, dominant wavelength and topography, in order to estimate the limits of the plane wave approximation. This approximation remains valid if the dominant wavelength does not exceed twice the source depth. Further aspects of this study concern particle motion analysis to locate point sources and the influence of the stratigraphy on particle motions. Furthermore, the deployment strategy of seismic instruments on volcanoes, as well as the direct interpretation of the broad-band waveforms in terms of pressure fluctuations in the volcanic plumbing system, are discussed.

  19. Study on comparison of special moment frame steel structure (SMF) and base isolation special moment frame steel structure (BI-SMF) in Indonesia

    NASA Astrophysics Data System (ADS)

    Setiawan, Jody; Nakazawa, Shoji

    2017-10-01

    This paper compares the seismic response behavior, seismic performance, and seismic loss functions of a conventional special moment frame steel structure (SMF) and a special moment frame steel structure with base isolation (BI-SMF). The validity of the proposed simplified method for estimating the maximum deformation of the base isolation system using the equivalent linearization method, and the validity of the design shear force of the superstructure, are investigated from the results of nonlinear dynamic response analysis. In recent years, construction of steel office buildings with seismic isolation systems has been proceeding even in Indonesia, where earthquake risk is high. Although a design code for seismically isolated structures has been proposed, there is no actual construction example of a special moment frame steel structure with base isolation. Therefore, in this research, the SMF and BI-SMF buildings are designed to the Indonesian Building Code and assumed to be built in Padang City, Indonesia. The base isolation system uses high-damping rubber bearings. Dynamic eigenvalue analysis and nonlinear dynamic response analysis are carried out to show the dynamic characteristics and seismic performance. In addition, the seismic loss function is obtained from damage state probabilities and repair costs. For the response analysis, simulated ground accelerations that have the phases of recorded seismic waves (El Centro NS, El Centro EW, Kobe NS and Kobe EW) and are adapted to the response spectrum prescribed by the Indonesian design code are used.

  20. Time-resolved seismic tomography at the EGS geothermal reservoir of Soultz-Sous-Forêts (France) during hydraulic stimulations. A comparison between different injection tests

    NASA Astrophysics Data System (ADS)

    Dorbath, C.; Calo, M.; Cornet, F.; Frogneux, M.

    2011-12-01

    One major goal of monitoring the seismicity accompanying hydraulic fracturing of a reservoir is to recover the seismic velocity field in and around the geothermal site. Several studies have shown that 4D (time-dependent) seismic tomographies are very useful for illustrating and studying the temporal variation of seismic velocities conditioned by injected fluids. However, only an appropriate separation of the data into subsets and a reliable tomographic method allow studying representative variations of the seismic velocities during and after the injection periods. We present here new 4D seismic tomographies performed using datasets from stimulation tests at the Enhanced Geothermal System (EGS) site of Soultz-sous-Forêts (Alsace, France). The data were recorded during the stimulation tests of 2000, 2003 and 2004, which involved the wells GPK2, GPK3 and GPK4. For each set of events, the data were subdivided taking into account the injection parameters of the stimulation tests (namely the injected flow rate and the wellhead pressure). The velocity models were obtained using the double-difference tomographic method (Zhang and Thurber, 2003) and further improved with the post-processing WAM technique (Calo' et al., 2009, 2011). This technique proved very powerful because it combines high resolution and reliability of the calculated seismic velocity fields, even with small datasets. In this work we show the complete sequence of the time-lapse tomographies and their variations in time and between different stimulation tests.

  1. Nowcasting Induced Seismicity at the Groningen Gas Field in the Netherlands

    NASA Astrophysics Data System (ADS)

    Luginbuhl, M.; Rundle, J. B.; Turcotte, D. L.

    2017-12-01

    The Groningen natural gas field in the Netherlands has recently been a topic of controversy for many residents in the surrounding area. The gas field provides energy for the majority of the country; however, for the minority of Dutch citizens who live nearby, the seismicity induced by the gas field is a cause for major concern. Since the early 2000s, the region has seen an increase in both the number and magnitude of events, the largest of which was a magnitude 3.6 in 2012. Earthquakes of this size and smaller easily cause infrastructural damage to older houses and farms built with single brick walls. Nowcasting is a new method of statistically classifying seismicity and seismic risk. In this paper, the method is applied to the induced seismicity at the natural gas field in Groningen, Netherlands. Nowcasting utilizes the catalogs of seismicity in these regions. Two earthquake magnitude thresholds are selected, one large and one small, and the method utilizes the number of small earthquakes that occur between pairs of large earthquakes. The cumulative probability distribution of these interevent counts is obtained. The earthquake potential score (EPS) is defined by the number of small earthquakes that have occurred since the last large earthquake; the point where this number falls on the cumulative probability distribution of interevent counts defines the EPS. A major advantage of nowcasting is that it utilizes "natural time", earthquake counts between events, rather than clock time. Thus, it is not necessary to decluster aftershocks, and the results remain applicable if the level of induced seismicity varies in time, as it does in this case. The application of natural time to the accumulation of seismic hazard depends on the applicability of Gutenberg-Richter (GR) scaling: the increasing number of small earthquakes that occur after a large earthquake can be scaled to give the risk of a large earthquake occurring. To illustrate our approach, we utilize the number of small earthquakes in Groningen to nowcast the occurrence of large earthquakes there. The applicability of the scaling is illustrated during the rapid build-up of seismicity between 2004 and 2016. The method can now be used to forecast the expected reduction in seismicity associated with the reduction in gas production.
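
The EPS calculation described above can be sketched directly: count small events between successive large events, form the empirical distribution of those interevent counts, and evaluate it at the count accumulated since the last large event. The magnitude thresholds and toy catalog below are illustrative, not the values used for Groningen.

```python
# Sketch of the nowcasting earthquake potential score (EPS) in "natural time":
# the EPS is the empirical CDF of small-event interevent counts, evaluated at
# the count accumulated since the last large event.
def earthquake_potential_score(magnitudes, m_large=3.0, m_small=1.0):
    """magnitudes: chronological list of event magnitudes (thresholds assumed)."""
    interevent_counts, current = [], 0
    for m in magnitudes:
        if m >= m_large:
            interevent_counts.append(current)
            current = 0
        elif m >= m_small:
            current += 1
    if not interevent_counts:
        return None  # no large events yet: EPS undefined
    # Fraction of past interevent counts <= the count since the last large event.
    return sum(1 for c in interevent_counts if c <= current) / len(interevent_counts)

catalog = [1.2, 1.5, 3.1, 1.1, 1.3, 1.0, 3.4, 1.2, 1.6, 1.9, 1.4]
print(earthquake_potential_score(catalog))  # 4 smalls since last large -> 1.0
```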

  2. A long-term earthquake rate model for the central and eastern United States from smoothed seismicity

    USGS Publications Warehouse

    Moschetti, Morgan P.

    2015-01-01

    I present a long-term earthquake rate model for the central and eastern United States from adaptive smoothed seismicity. By employing pseudoprospective likelihood testing (L-test), I examined the effects of fixed and adaptive smoothing methods and the effects of catalog duration and composition on the ability of the models to forecast the spatial distribution of recent earthquakes. To stabilize the adaptive smoothing method for regions of low seismicity, I introduced minor modifications to the way that the adaptive smoothing distances are calculated. Across all smoothed seismicity models, the use of adaptive smoothing and the use of earthquakes from the recent part of the catalog optimizes the likelihood for tests with M≥2.7 and M≥4.0 earthquake catalogs. The smoothed seismicity models optimized by likelihood testing with M≥2.7 catalogs also produce the highest likelihood values for M≥4.0 likelihood testing, thus substantiating the hypothesis that the locations of moderate-size earthquakes can be forecast by the locations of smaller earthquakes. The likelihood test does not, however, maximize the fraction of earthquakes that are better forecast than a seismicity rate model with uniform rates in all cells. In this regard, fixed smoothing models perform better than adaptive smoothing models. The preferred model of this study is the adaptive smoothed seismicity model, based on its ability to maximize the joint likelihood of predicting the locations of recent small-to-moderate-size earthquakes across eastern North America. The preferred rate model delineates 12 regions where the annual rate of M≥5 earthquakes exceeds 2×10⁻³. Although these seismic regions have been previously recognized, the preferred forecasts are more spatially concentrated than the rates from fixed smoothed seismicity models, with rate increases of up to a factor of 10 near clusters of high seismic activity.
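
The adaptive smoothing idea can be sketched in one dimension: each earthquake gets a Gaussian kernel whose width is its distance to the n-th nearest neighbour, with a floor on that distance playing the role of the stabilizing modification mentioned above. All parameters are illustrative, not those of the preferred model.

```python
# 1-D sketch of adaptive smoothed seismicity: dense clusters are smoothed
# tightly, sparse regions broadly; a floor (d_min) stabilizes low-seismicity
# regions. Parameters and data are illustrative only.
from math import exp, pi, sqrt

def adaptive_rate(epicenters, grid, n_neighbor=2, d_min=1.0):
    widths = []
    for x in epicenters:
        dists = sorted(abs(x - y) for y in epicenters if y != x)
        d = dists[n_neighbor - 1] if len(dists) >= n_neighbor else d_min
        widths.append(max(d, d_min))  # floor on the smoothing distance
    rate = []
    for g in grid:
        r = sum(exp(-(g - x) ** 2 / (2 * w ** 2)) / (w * sqrt(2 * pi))
                for x, w in zip(epicenters, widths))
        rate.append(r)
    return rate

quakes = [0.0, 1.0, 1.5, 2.0, 50.0]           # tight cluster plus one lone event
rates = adaptive_rate(quakes, grid=[1.0, 25.0, 50.0])
print([round(r, 4) for r in rates])           # high in the cluster, low elsewhere
```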

  3. Precise Hypocenter Determination around Palu Koro Fault: a Preliminary Results

    NASA Astrophysics Data System (ADS)

    Fawzy Ismullah, M. Muhammad; Nugraha, Andri Dian; Ramdhan, Mohamad; Wandono

    2017-04-01

    The Sulawesi area is located in a complex tectonic setting. The high seismic activity in central Sulawesi is related to the Palu Koro fault (PKF). In this study, we determined precise hypocenters around the PKF by applying the double-difference method, aiming to investigate the seismicity rate, the geometry of the fault, and the distribution of focal depths around the PKF. We first re-picked P- and S-wave arrival times of the PKF events to determine initial hypocenter locations using the Hypoellipse method with an updated 1-D seismic velocity model. We then relocated the events using the double-difference method. Our preliminary results show that the relocated events lie around the PKF and have smaller residual times than the initial locations. We will enhance the hypocenter locations by updating the arrival times with a waveform cross-correlation method as input for the double-difference relocation.

  4. Post-seismic velocity changes following the 2010 Mw 7.1 Darfield earthquake, New Zealand, revealed by ambient seismic field analysis

    NASA Astrophysics Data System (ADS)

    Heckels, R. EG; Savage, M. K.; Townend, J.

    2018-05-01

    Quantifying seismic velocity changes following large earthquakes can provide insights into fault healing and reloading processes. This study presents temporal velocity changes detected following the 2010 September Mw 7.1 Darfield event in Canterbury, New Zealand. We use continuous waveform data from several temporary seismic networks lying on and surrounding the Greendale Fault, with a maximum interstation distance of 156 km. Nine-component, day-long Green's functions were computed for frequencies between 0.1 and 1.0 Hz for continuous seismic records from immediately after the 2010 September 04 earthquake until 2011 January 10. Using the moving-window cross-spectral method, seismic velocity changes were calculated. Over the study period, an increase in seismic velocity of 0.14 ± 0.04 per cent was determined near the Greendale Fault, providing a new constraint on post-seismic relaxation rates in the region. A depth analysis further showed that velocity changes were confined to the uppermost 5 km of the subsurface. We attribute the observed changes to post-seismic relaxation via crack healing of the Greendale Fault and throughout the surrounding region.

  5. Performance-based methodology for assessing seismic vulnerability and capacity of buildings

    NASA Astrophysics Data System (ADS)

    Shibin, Lin; Lili, Xie; Maosheng, Gong; Ming, Li

    2010-06-01

    This paper presents a performance-based methodology for the assessment of the seismic vulnerability and capacity of buildings. The vulnerability assessment methodology is based on the HAZUS methodology and the improved capacity-demand-diagram method. The spectral displacement (Sd) of performance points on a capacity curve is used to estimate the damage level of a building. The relationship between Sd and peak ground acceleration (PGA) is established, and a new vulnerability function is then expressed in terms of PGA. Furthermore, the expected value of the seismic capacity index (SCev) is provided to estimate the seismic capacity of buildings based on the probability distribution of damage levels and the corresponding seismic capacity index. The results indicate that the proposed vulnerability methodology is able to assess the seismic damage of a large building stock directly and quickly following an earthquake. The SCev provides an effective index to measure the seismic capacity of buildings and illustrates the relationship between seismic capacity and seismic action. The estimated results are compared with damage surveys of the cities of Dujiangyan and Jiangyou after the M8.0 Wenchuan earthquake, revealing that the methodology is acceptable for seismic risk assessment and decision making. The primary reasons for discrepancies between the estimated results and the damage surveys are discussed.
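The expected seismic capacity index described above is a probability-weighted average: damage-state probabilities from the vulnerability function at a given PGA are weighted by a capacity index assigned to each state. A minimal sketch; the probabilities and index values are illustrative placeholders, not the paper's calibrated numbers:

```python
def expected_capacity_index(p_damage, sci):
    """SCev = sum over damage states of P(state) * capacity index."""
    assert abs(sum(p_damage) - 1.0) < 1e-9, "probabilities must sum to 1"
    return sum(p * s for p, s in zip(p_damage, sci))

# Damage states: none, slight, moderate, extensive, complete.
p   = [0.35, 0.30, 0.20, 0.10, 0.05]   # from a vulnerability curve at some PGA
sci = [1.0, 0.8, 0.5, 0.2, 0.0]        # illustrative capacity index per state
scev = expected_capacity_index(p, sci)
```

As the PGA grows, the probability mass shifts toward the heavier damage states and SCev decreases, which is how the index links seismic capacity to seismic action.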

  6. Lattice Boltzmann Simulation of Seismic Mobilization of Residual Oil in Sandstone

    NASA Astrophysics Data System (ADS)

    Guo, R.; Jiang, F.; Deng, W.

    2017-12-01

    Seismic stimulation is a promising technology for enhanced oil recovery. However, current mechanism studies mainly consider single constricted tubes or idealized porous media, and no study has been conducted in real reservoir porous media. We have developed a numerical simulation that uses the lattice Boltzmann method to directly calculate the characteristics of residual oil clusters and quantify seismic mobilization of residual oil in real Berea sandstone at a scale of 400 μm x 400 μm x 400 μm. The residual oil clusters will first be obtained by applying a water-flooding scheme to the oil-saturated sandstone. Then we will apply seismic stimulation to the sandstone by converting the seismic effect into an oscillatory inertial force added to the pore fluids. This oscillatory inertial force mobilizes residual oil by overcoming the capillary force. The response of water and oil to the seismic stimulation will be observed in our simulations. Two seismic oil mobilization mechanisms will be investigated: (1) the passive response of residual oil clusters to the seismic stimulation, and (2) the resonance of oil clusters subjected to low-frequency seismic stimulation. We will then discuss which mechanism should be dominant in seismic-stimulation oil recovery for practical applications.

  7. Magma migration at the onset of the 2012-13 Tolbachik eruption revealed by Seismic Amplitude Ratio Analysis

    NASA Astrophysics Data System (ADS)

    Caudron, Corentin; Taisne, Benoit; Kugaenko, Yulia; Saltykov, Vadim

    2015-12-01

    In contrast to the 1975-76 Tolbachik eruption, the 2012-13 Tolbachik eruption was not preceded by any striking change in seismic activity. By processing the Klyuchevskoy volcano group seismic data with the Seismic Amplitude Ratio Analysis (SARA) method, we gain insights into the dynamics of magma movement prior to this important eruption. A clear seismic migration within the seismic swarm started 20 hours before the reported eruption onset (05:15 UTC, 26 November 2012). This migration proceeded in different phases and ended when eruptive tremor, corresponding to lava flows, was recorded (at 11:00 UTC, 27 November 2012). To obtain a first-order approximation of the magma location, we compare the calculated seismic intensity ratios with theoretical ones. As expected, the observations suggest that the seismicity migrated toward the eruption location. However, we explain the observed pre-eruptive ratios by a vertical migration under the northern slope of Plosky Tolbachik volcano followed by a lateral migration toward the eruptive vents. Another migration is also captured by this technique and coincides with a seismic swarm that started 16-20 km south of Plosky Tolbachik at 20:31 UTC on 28 November and lasted for more than 2 days. This seismic swarm is very similar to the seismicity preceding the 1975-76 Tolbachik eruption and can be considered a possible aborted eruption.
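The core of SARA is that the ratio of amplitudes recorded at two stations cancels the unknown source term, so observed ratios can be compared with ratios predicted for trial source positions. A toy sketch assuming body-wave 1/r spreading with exponential attenuation (the attenuation coefficient and geometry are invented for illustration):

```python
import numpy as np

def predicted_ratio_db(src, sta1, sta2, B=0.03):
    """Predicted amplitude ratio (dB) between two stations for a trial
    source, assuming A ~ exp(-B*r)/r; the source term cancels."""
    r1 = np.linalg.norm(src - sta1)
    r2 = np.linalg.norm(src - sta2)
    return 20 * np.log10((r2 / r1) * np.exp(-B * (r1 - r2)))

src  = np.array([2.0, 3.0, -5.0])      # "true" source (km), used to fake an observation
sta1 = np.array([0.0, 0.0, 0.0])
sta2 = np.array([10.0, 0.0, 0.0])
obs = predicted_ratio_db(src, sta1, sta2)

# Trial positions along a vertical profile: the misfit is smallest at the
# trial point matching the true source depth.
trials = [np.array([2.0, 3.0, z]) for z in (-1.0, -5.0, -9.0)]
misfits = [abs(predicted_ratio_db(p, sta1, sta2) - obs) for p in trials]
best = trials[int(np.argmin(misfits))]
```

With many station pairs, tracking the best-fitting trial position through time yields the migration paths described above.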

  8. The use of multiwavelets for uncertainty estimation in seismic surface wave dispersion.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Poppeliers, Christian

    This report describes a new single-station analysis method to estimate the dispersion and uncertainty of seismic surface waves using the multiwavelet transform. Typically, when estimating the dispersion of a surface wave using only a single seismic station, the seismogram is decomposed into a series of narrow-band realizations using a bank of narrow-band filters. By then enveloping and normalizing the filtered seismograms and identifying the maximum power as a function of frequency, the group velocity can be estimated if the source-receiver distance is known. However, using the filter bank method, there is no robust way to estimate uncertainty. In this report, I introduce a new method of estimating the group velocity that includes an estimate of uncertainty. The method is similar to the conventional filter bank method, but uses a class of functions, called Slepian wavelets, to compute a series of wavelet transforms of the data. Each wavelet transform is mathematically similar to a filter bank; however, the time-frequency tradeoff is optimized. By taking multiple wavelet transforms, I form a population of dispersion estimates from which standard statistical methods can be used to estimate uncertainty. I demonstrate the utility of this new method by applying it to synthetic data as well as ambient-noise surface-wave cross-correlograms recorded by the University of Nevada Seismic Network.

  9. Noise analysis of the seismic system employed in the northern and southern California seismic nets

    USGS Publications Warehouse

    Eaton, J.P.

    1984-01-01

    The seismic networks have been designed and operated to support recording on Develocorders (less than 40 dB dynamic range) and analog magnetic tape (about 50 dB dynamic range). The principal analysis of the records has been based on Develocorder films, and background earth noise levels have been adjusted to be about 1 to 2 mm p-p on the film readers. Since the traces are separated by only 10 to 12 mm on the reader screen, they become hopelessly tangled when signal amplitudes on several adjacent traces exceed 10 to 20 mm p-p. Thus, the background noise level is hardly more than 20 dB below the level of the largest readable signals. The situation is somewhat better on tape playbacks, but the high level of background noise set to accommodate processing from film records effectively limits the range of maximum signal to background earth noise on high-gain channels to a little more than 30 dB. Introduction of the PDP 11/44 seismic data acquisition system has increased the potential dynamic range of recorded network signals to more than 60 dB. To make use of this increased dynamic range we must evaluate the characteristics and performance of the seismic system. In particular, we must determine whether the electronic noise in the system is, or can be made, sufficiently low that background earth noise levels can be lowered significantly to take advantage of the increased dynamic range of the digital recording system. To come to grips with the complex problem of system noise, we have carried out a number of measurements and experiments to evaluate critical components of the system as well as to determine the noise characteristics of the system as a whole.
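The dynamic ranges quoted above follow from the usual amplitude-ratio definition, 20*log10(A_max/A_noise). For example, the ~20 mm maximum readable film deflection against ~2 mm of background noise gives 20 dB, and a 60 dB digital system spans an amplitude factor of 1000:

```python
import math

def dynamic_range_db(max_signal, noise):
    """Dynamic range in decibels: 20*log10 of the amplitude ratio."""
    return 20 * math.log10(max_signal / noise)

film_db = dynamic_range_db(20.0, 2.0)   # film reader figures from the text
factor_60db = 10 ** (60 / 20)           # amplitude ratio implied by 60 dB
```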

  10. Modeling and Evaluation of Geophysical Methods for Monitoring and Tracking CO2 Migration

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Daniels, Jeff

    2012-11-30

    Geological sequestration has been proposed as a viable option for mitigating the vast amount of CO{sub 2} being released into the atmosphere daily. Test sites for CO{sub 2} injection have been appearing across the world to ascertain the feasibility of capturing and sequestering carbon dioxide. A major concern with full-scale implementation is monitoring and verifying the permanence of injected CO{sub 2}. Geophysical methods, an exploration industry standard, are non-invasive imaging techniques that can be implemented to address that concern. Geophysical methods, both seismic and electromagnetic, play a crucial role in monitoring the subsurface pre- and post-injection. Seismic techniques have been the most popular, but electromagnetic methods are gaining interest. The primary goal of this project was to develop a new geophysical tool, a software program called GphyzCO2, to investigate the implementation of geophysical monitoring for detecting injected CO{sub 2} at test sites. The GphyzCO2 software consists of interconnected programs that encompass well logging, seismic, and electromagnetic methods. The software enables users to design and execute 3D surface-to-surface (conventional surface seismic) and borehole-to-borehole (cross-hole seismic and electromagnetic) numerical modeling surveys. The generalized flow of the program begins with building a complex 3D subsurface geological model, assigning properties to the model that mimic a potential CO{sub 2} injection site, numerically forward modeling a geophysical survey, and analyzing the results. A site located in Warren County, Ohio was selected for the full implementation of GphyzCO2. Specific interest was placed on a potential reservoir target, the Mount Simon Sandstone, and its cap rock, the Eau Claire Formation. 
    Analysis of the test site included well log data, physical property measurements (porosity), core sample resistivity measurements, calculated electrical permittivity values, seismic data collection, and seismic interpretation. The data were input into GphyzCO2 to demonstrate the full capabilities of the software. Part of the implementation investigated the limits of using geophysical methods to monitor CO{sub 2} injection sites. The results show that cross-hole EM numerical surveys are limited to borehole separations under 100 meters. Those results were utilized in executing numerical EM surveys containing hypothetical CO{sub 2} injections. The outcome of the forward modeling shows that EM methods can detect the presence of CO{sub 2}.

  11. Improving fault image by determination of optimum seismic survey parameters using ray-based modeling

    NASA Astrophysics Data System (ADS)

    Saffarzadeh, Sadegh; Javaherian, Abdolrahim; Hasani, Hossein; Talebi, Mohammad Ali

    2018-06-01

    In complex structures such as faults, salt domes and reefs, specifying the survey parameters is more challenging and critical owing to the complicated wave-field behavior involved in such structures. In the petroleum industry, detecting faults has become crucial for reservoir assessment because faults can act as traps for hydrocarbons. In this regard, seismic survey modeling is employed to construct a model close to the real structure and obtain realistic synthetic seismic data. Seismic modeling software, a velocity model and parameters pre-determined by conventional methods enable a seismic survey designer to run a shot-by-shot virtual survey operation. A reliable velocity model of the structures can be constructed by integrating 2D seismic data, geological reports and well information. The effects of various survey designs can be investigated by analyzing illumination maps and flower plots, and seismic processing of the synthetic data output can describe the target image obtained with different survey parameters. Therefore, seismic modeling is one of the most economical ways to establish and test the optimum acquisition parameters for obtaining the best image when dealing with complex geological structures. The primary objective of this study is to design a proper 3D seismic survey orientation that images fault zone structures through ray-tracing seismic modeling. The results prove that a seismic survey designer can enhance the image of fault planes in a seismic section by utilizing the proposed modeling and processing approach.

  12. Borehole prototype for seismic high-resolution exploration

    NASA Astrophysics Data System (ADS)

    Giese, Rüdiger; Jaksch, Katrin; Krauß, Felix; Krüger, Kay; Groh, Marco; Jurczyk, Andreas

    2014-05-01

    Target reservoirs for the exploitation of hydrocarbons or of hot water for geothermal energy supply can comprise small layered structures, for instance thin layers or faults. The resolution of 2D and 3D surface seismic methods is often not sufficient to determine and locate these structures. Borehole seismic methods like vertical seismic profiling (VSP) and seismic while drilling (SWD) use either receivers or sources within the borehole. Thus, the distance to the target horizon is reduced and higher-resolution images of the geological structures can be achieved. Yet even these methods are limited in their resolution capabilities with increasing target depth. To localize structures more accurately, methods with a resolution in the range of meters are necessary. The project SPWD (Seismic Prediction While Drilling) aims at the development of a borehole prototype which combines seismic sources and receivers in one device to improve the seismic resolution. Within SPWD such a prototype has been designed, manufactured and tested. The SPWD-wireline prototype is divided into three main parts. The upper section comprises the electronic unit. The middle section includes the upper receiver, the upper clamping unit as well as the source unit and the lower clamping unit. The lower section consists of the lower receiver unit and the hydraulic unit. The total length of the prototype is nearly seven meters and its weight is about 750 kg. For focusing the seismic waves in predefined directions along the borehole axis, the phased-array method is used. The source unit is equipped with four magnetostrictive vibrators. Each can be controlled independently to form a common wavefront in the desired direction of exploration. Source signal frequencies up to 5000 Hz are used, which allows resolutions down to one meter. In May and September 2013, field tests with the SPWD-wireline prototype were carried out at the KTB Deep Crustal Lab in Windischeschenbach (Bavaria). 
    The aim was to prove the pressure-tightness and the functionality of the hydraulic system components of the borehole device. To monitor the prototype, four cameras and several moisture sensors were installed along the source and receiver units close to the extendable coupling stamps, where an infiltration of fluid is most probable. The tests lasted about 48 hours each. It was possible to extend and to retract the coupling stamps of the prototype down to a depth of 2100 m. No infiltration of borehole fluids into the SPWD tool was observed. In preparation for the acoustic calibration measurements in the research and education mine of the TU Bergakademie Freiberg, seismic sources and receivers as well as the recording electronics were installed in the SPWD-wireline prototype at the GFZ. Afterwards, the SPWD borehole device was transported to the GFZ Underground Lab, and preliminary test measurements to characterize the radiation pattern were carried out in the newly drilled vertical borehole in December 2013. Previous measurements with a laboratory borehole prototype had demonstrated a dependency of the radiated seismic energy on the predefined amplification direction, the wave type and the signal frequencies. SPWD is funded by the German Federal Environment Ministry.
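The phased-array focusing mentioned above amounts to firing each vibrator with a position-dependent delay so that the individual wavefronts add up as one plane front in the chosen direction. A sketch for a linear array (element spacing, steering angle, and velocity are illustrative, not the SPWD hardware values):

```python
import numpy as np

def steering_delays(x, theta_deg, c=3000.0):
    """Firing delays (s) for a linear array: element coordinates x (m),
    steering angle theta from broadside, medium velocity c (m/s)."""
    tau = x * np.sin(np.radians(theta_deg)) / c
    return tau - tau.min()      # shift so all delays are non-negative

x = np.arange(4) * 0.5          # four sources, 0.5 m apart (assumed)
tau = steering_delays(x, 30.0)  # delays grow linearly along the array
```

Sweeping the steering angle while repeating the source sweep is what lets one device probe several exploration directions from a single borehole position.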

  13. A novel EBSD-based finite-element wave propagation model for investigating seismic anisotropy: Application to Finero Peridotite, Ivrea-Verbano Zone, Northern Italy

    NASA Astrophysics Data System (ADS)

    Zhong, Xin; Frehner, Marcel; Kunze, Karsten; Zappone, Alba

    2014-10-01

    A novel electron backscatter diffraction (EBSD)-based finite-element (FE) wave propagation simulation is presented and applied to investigate the seismic anisotropy of peridotite samples. The FE model simulates the dynamic propagation of seismic waves along any chosen direction through representative 2D EBSD sections. The numerical model allows separation of the effects of crystallographic preferred orientation (CPO) and shape preferred orientation (SPO). The obtained seismic velocities with respect to specimen orientation are compared with Voigt-Reuss-Hill estimates and with laboratory measurements. The results of these three independent methods show that CPO is the dominant factor controlling seismic anisotropy. Fracture fillings and minor minerals like hornblende only influence the seismic anisotropy if their volume proportion is sufficiently large (up to 23%). The SPO influence is minor compared to the other factors. The presented FE model is discussed with regard to its potential in simulating seismic wave propagation using EBSD data representing natural rock petrofabrics.
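The Voigt-Reuss-Hill estimate used for comparison averages the Voigt (uniform-strain, arithmetic) and Reuss (uniform-stress, harmonic) bounds on the effective modulus. A minimal isotropic two-phase sketch; the volume fractions and shear moduli below are illustrative, not measured values from the samples:

```python
import numpy as np

def vrh(fractions, moduli):
    """Voigt, Reuss and Hill averages of an elastic modulus for a
    multi-phase aggregate given volume fractions."""
    f = np.asarray(fractions, float)
    m = np.asarray(moduli, float)
    voigt = np.sum(f * m)            # arithmetic mean: upper bound
    reuss = 1.0 / np.sum(f / m)      # harmonic mean: lower bound
    return voigt, reuss, 0.5 * (voigt + reuss)

# Hypothetical shear moduli (GPa) for a host mineral plus hornblende.
voigt, reuss, hill = vrh([0.77, 0.23], [78.0, 43.0])
```

The full anisotropic calculation averages the stiffness and compliance tensors rotated over the EBSD orientation data; the scalar version above shows only the bounding-and-averaging logic.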

  14. Rapid Non-Gaussian Uncertainty Quantification of Seismic Velocity Models and Images

    NASA Astrophysics Data System (ADS)

    Ely, G.; Malcolm, A. E.; Poliannikov, O. V.

    2017-12-01

    Conventional seismic imaging typically provides a single estimate of the subsurface without any error bounds. Noise in the observed raw traces as well as the uncertainty of the velocity model directly impact the uncertainty of the final seismic image and its resulting interpretation. We present a Bayesian inference framework to quantify uncertainty in both the velocity model and seismic images, given noise statistics of the observed data. To estimate velocity model uncertainty, we combine the field expansion method, a fast frequency-domain wave equation solver, with the adaptive Metropolis-Hastings algorithm. The speed of the field expansion method and its reduced parameterization allow us to perform the tens or hundreds of thousands of forward solves needed for non-parametric posterior estimation. We then migrate the observed data with the distribution of velocity models to generate uncertainty estimates of the resulting subsurface image. This procedure allows us to create qualitative descriptions of seismic image uncertainty and to put error bounds on quantities of interest such as the dip angle of a subduction slab or the thickness of a stratigraphic layer.
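The Metropolis-Hastings sampling described above can be reduced to a minimal random-walk sketch: here the expensive field-expansion forward solve is replaced by a trivial travel-time relation d = L/v, and the data, prior bounds, and step size are all illustrative:

```python
import numpy as np

rng = np.random.default_rng(1)
L, v_true, sigma = 10.0, 2.5, 0.05                 # path length (km), velocity, noise
d_obs = L / v_true + rng.normal(0, sigma, size=20)  # noisy synthetic travel times

def log_post(v):
    """Gaussian likelihood with a uniform prior on v (stand-in forward model)."""
    if not 1.0 < v < 5.0:
        return -np.inf
    r = d_obs - L / v
    return -0.5 * np.sum((r / sigma) ** 2)

v, lp, samples = 2.0, None, []
lp = log_post(v)
for _ in range(5000):
    prop = v + rng.normal(0, 0.05)                 # random-walk proposal
    lp_prop = log_post(prop)
    if np.log(rng.uniform()) < lp_prop - lp:       # accept/reject step
        v, lp = prop, lp_prop
    samples.append(v)
post_mean = float(np.mean(samples[1000:]))          # discard burn-in
```

The retained samples approximate the velocity posterior; migrating the data with each sampled model is what turns this distribution into image-domain error bounds.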

  15. A bayesian approach for determining velocity and uncertainty estimates from seismic cone penetrometer testing or vertical seismic profiling data

    USGS Publications Warehouse

    Pidlisecky, Adam; Haines, S.S.

    2011-01-01

    Conventional processing methods for seismic cone penetrometer data present several shortcomings, most notably the absence of a robust velocity model uncertainty estimate. We propose a new seismic cone penetrometer testing (SCPT) data-processing approach that employs Bayesian methods to map measured data errors into quantitative estimates of model uncertainty. We first calculate travel-time differences for all permutations of seismic trace pairs. That is, we cross-correlate each trace at each measurement location with every trace at every other measurement location to determine travel-time differences that are not biased by the choice of any particular reference trace and to thoroughly characterize data error. We calculate a forward operator that accounts for the different ray paths for each measurement location, including refraction at layer boundaries. We then use a Bayesian inversion scheme to obtain the most likely slowness (the reciprocal of velocity) and a distribution of probable slowness values for each model layer. The result is a velocity model that is based on correct ray paths, with uncertainty bounds that are based on the data error. © NRC Research Press 2011.
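The all-pairs travel-time differencing can be sketched directly: each trace is cross-correlated with every other trace and the lag of the correlation maximum is kept, so no single reference trace biases the estimates. Synthetic delayed wavelets stand in for SCPT records here:

```python
import numpy as np

def pair_lags(traces, dt):
    """Lag (s) of the cross-correlation maximum for every trace pair.
    With np.correlate's convention, lags[(i, j)] is negative when
    trace j arrives later than trace i."""
    n = len(traces)
    lags = {}
    for i in range(n):
        for j in range(i + 1, n):
            c = np.correlate(traces[i], traces[j], mode="full")
            lags[(i, j)] = (np.argmax(c) - (len(traces[j]) - 1)) * dt
    return lags

# Synthetic Gaussian wavelet arriving 4 and 9 samples later at deeper sensors.
dt = 0.001
t = np.arange(200)
w = np.exp(-0.5 * ((t - 50) / 5.0) ** 2)
traces = [w, np.roll(w, 4), np.roll(w, 9)]
lags = pair_lags(traces, dt)
```

The pairwise differences are internally consistent (the 0-2 lag equals the sum of the 0-1 and 1-2 lags), which is the redundancy the Bayesian inversion exploits to characterize data error.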

  16. An alternative approach for computing seismic response with accidental eccentricity

    NASA Astrophysics Data System (ADS)

    Fan, Xuanhua; Yin, Jiacong; Sun, Shuli; Chen, Pu

    2014-09-01

    Accidental eccentricity is a non-standard assumption for the seismic design of tall buildings. Taking it into consideration requires reanalysis of seismic resistance, which demands either time-consuming computation of the natural vibration of eccentric structures or a static displacement solution obtained by applying an approximated equivalent torsional moment for each eccentric case. This study proposes an alternative modal response spectrum analysis (MRSA) approach to calculate seismic responses with accidental eccentricity. The proposed approach, called Rayleigh-Ritz Projection-MRSA (RRP-MRSA), is developed based on MRSA and two strategies: (a) an RRP method to obtain a fast calculation of approximate modes of eccentric structures; and (b) an approach to assemble the mass matrices of eccentric structures. The efficiency of RRP-MRSA is tested via engineering examples and compared with the standard MRSA (ST-MRSA) and one approximate method, the equivalent torsional moment hybrid MRSA (ETM-MRSA). Numerical results show that RRP-MRSA not only achieves almost the same precision as ST-MRSA and performs much better than ETM-MRSA, but is also more economical. RRP-MRSA can thus take the place of current accidental eccentricity computations in seismic design.

  17. Correlation of Geophysical and Geotechnical Methods for Sediment Mapping in Sungai Batu, Kedah

    NASA Astrophysics Data System (ADS)

    Zakaria, M. T.; Taib, A.; Saidin, M. M.; Saad, R.; Muztaza, N. M.; Masnan, S. S. K.

    2018-04-01

    Exploration geophysics is widely used to map the subsurface characteristics of a region, to understand the underlying rock structures and the spatial distribution of rock units. 2-D resistivity and seismic refraction surveys were conducted in the Sungai Batu locality with the objective of identifying and mapping the sediment deposits, correlated with borehole records. 2-D resistivity data were acquired using an ABEM SAS4000 system with a pole-dipole array and 2.5 m minimum electrode spacing, while for seismic refraction an ABEM MK8 seismograph was used to record the data, with a 5 kg sledgehammer as the seismic source and a geophone interval of 5 m. The inversion model of the 2-D resistivity data shows that resistivity values <100 Ωm are interpreted as a saturated zone, while high resistivity values >500 Ωm mark the hard layer in the study area. The seismic results indicate that velocity values <2000 m/s represent highly weathered soil consisting of clay and sand, while high velocity values >3600 m/s are interpreted as the hard layer in this locality.

  18. Measurement of seismometer orientation using the tangential P-wave receiver function based on harmonic decomposition

    NASA Astrophysics Data System (ADS)

    Lim, Hobin; Kim, YoungHee; Song, Teh-Ru Alex; Shen, Xuzhang

    2018-03-01

    Accurate determination of the seismometer orientation is a prerequisite for seismic studies including, but not limited to, seismic anisotropy. While borehole seismometers on land produce seismic waveform data relatively free of human-induced noise, they may have the drawback of an uncertain orientation. This study calculates a harmonic decomposition of teleseismic receiver functions from the P and PP phases and determines the orientation of a seismometer by minimizing the constant term in a harmonic expansion of the tangential receiver functions in backazimuth near and at 0 s. The method normalizes the effect of seismic sources and determines the orientation of a seismometer without assuming an isotropic medium. Compared to the method of minimizing the amplitudes of a mean of the tangential receiver functions near and at 0 s, the method yields more accurate orientations in cases where the backazimuthal coverage of earthquake sources (as is common for ocean-bottom seismometers) is uneven and incomplete. We apply this method to data from the Korean seismic network (52 broad-band velocity seismometers, 30 of which are borehole sensors) to estimate the sensor orientation in the period 2005-2016. We also track temporal changes in the sensor orientation through changes in the polarity and amplitude of the tangential receiver function. Six borehole stations are confirmed to have experienced a significant orientation change (10°-180°) over the 10 yr period. We demonstrate the usefulness of our method by estimating the orientation of ocean-bottom sensors, which are known to have high noise levels during their relatively short deployment periods.
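A greatly simplified version of the orientation idea can be sketched numerically: a direct P pulse is radially polarized, so after rotating the horizontals by a trial correction, the correct angle minimizes tangential energy near the arrival, and the sign of the stacked radial amplitude resolves the 180° ambiguity (the role the constant harmonic term at 0 s plays in the real receiver-function method, which is not reproduced here). All angles and noise levels are invented:

```python
import numpy as np

rng = np.random.default_rng(0)
theta = np.radians(25.0)                 # true sensor misorientation to recover
alphas = rng.uniform(0, 2 * np.pi, 40)   # radial (source) directions per event

def rot(a):
    return np.array([[np.cos(a), -np.sin(a)], [np.sin(a), np.cos(a)]])

# Unit radial P pulses as recorded by the misoriented sensor, plus noise.
recorded = [rot(theta) @ np.array([np.cos(a), np.sin(a)])
            + rng.normal(0, 0.02, 2) for a in alphas]

def scores(c):
    """Tangential energy and stacked radial amplitude after correction c."""
    tang, radial = 0.0, 0.0
    for a, d in zip(alphas, recorded):
        n, e = rot(-c) @ d
        radial += np.cos(a) * n + np.sin(a) * e
        tang += (-np.sin(a) * n + np.cos(a) * e) ** 2
    return tang, radial

grid = np.radians(np.arange(0.0, 360.0, 0.5))
# Require a positive stacked radial to reject the 180-degree-flipped solution.
best = min(grid, key=lambda c: scores(c)[0] if scores(c)[1] > 0 else np.inf)
est = np.degrees(best)
```

Repeating such an estimate over successive time windows is, in spirit, how temporal orientation changes can be tracked.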

  19. Reverse-time migration for subsurface imaging using single- and multi- frequency components

    NASA Astrophysics Data System (ADS)

    Ha, J.; Kim, Y.; Kim, S.; Chung, W.; Shin, S.; Lee, D.

    2017-12-01

    Reverse-time migration is a seismic data processing method for obtaining accurate subsurface structure images from seismic data. This method has been applied to obtain more precise information on complex geological structures, including steep dips, by considering wave propagation characteristics based on two-way traveltime. Recently, various studies have reported the characteristics of datasets acquired from different types of media. In particular, because real subsurface media comprise various types of structures, seismic data exhibit a variety of responses. Among them, frequency characteristics can be used as an important indicator for analyzing wave propagation in subsurface structures. All frequency components are utilized in conventional reverse-time migration, but analyzing each component is worthwhile because each contains inherent seismic response characteristics. In this study, we propose a reverse-time migration method that utilizes single- and multi-frequency components for subsurface imaging. We performed a spectral decomposition to utilize the characteristics of non-stationary seismic data. We propose two types of imaging conditions, in which the decomposed signals are applied to complex and envelope traces. The SEG/EAGE Overthrust model was used to demonstrate the proposed method, with a first-derivative Gaussian function with a 10 Hz cutoff as the source signature. The results were more accurate and stable when relatively low-frequency components within the effective frequency range were used. By combining the gradients obtained from various frequency components, we confirmed that the results are clearer than those of the conventional method using all frequency components. Further study is required to combine the multi-frequency components effectively.

  20. Seismic Tomography

    NASA Astrophysics Data System (ADS)

    Nowack, Robert L.; Li, Cuiping

    The inversion of seismic travel-time data for radially varying media was initially investigated by Herglotz, Wiechert, and Bateman (the HWB method) in the early part of the 20th century [1]. Tomographic inversions for laterally varying media began in seismology in the 1970s. This included early work by Aki, Christoffersson, and Husebye, who developed an inversion technique for estimating lithospheric structure beneath a seismic array from distant earthquakes (the ACH method) [2]. Also, Alekseev and others in Russia performed early inversions of refraction data for laterally varying upper mantle structure [3]. Aki and Lee [4] developed an inversion technique using travel-time data from local earthquakes.
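The tomographic inversions surveyed above share one linear core: travel times are (approximately) linear in the cell slownesses, t = G s, where G holds the ray length in each cell. A toy two-cell least-squares sketch, with geometry and slownesses invented for illustration:

```python
import numpy as np

# Rows: rays; columns: model cells.  Entries are ray lengths (km) per cell.
G = np.array([[1.0, 1.0],    # ray crossing both cells
              [2.0, 0.0],    # ray confined to cell 1
              [0.0, 2.0]])   # ray confined to cell 2
s_true = np.array([0.20, 0.25])        # slowness in s/km (v = 5 and 4 km/s)
t = G @ s_true                          # synthetic travel times

s_est, *_ = np.linalg.lstsq(G, t, rcond=None)   # least-squares slowness
v_est = 1.0 / s_est                              # back to velocity
```

Real tomography differs mainly in scale and conditioning: millions of rays, regularization, and rays that bend with the velocity model, so G itself is updated iteratively.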

  1. 2D magnetotelluric inversion using reflection seismic images as constraints and application in the COSC project

    NASA Astrophysics Data System (ADS)

    Kalscheuer, Thomas; Yan, Ping; Hedin, Peter; Garcia Juanatey, Maria d. l. A.

    2017-04-01

    We introduce a new constrained 2D magnetotelluric (MT) inversion scheme, in which the local weights of the regularization operator with smoothness constraints are based directly on the envelope attribute of a reflection seismic image. The weights resemble those of a previously published seismic modification of the minimum gradient support method introducing a global stabilization parameter. We measure the directional gradients of the seismic envelope to modify the horizontal and vertical smoothness constraints separately. An appropriate choice of the new stabilization parameter is based on a simple trial-and-error procedure. Our proposed constrained inversion scheme was easily implemented in an existing Gauss-Newton inversion package. From a theoretical perspective, we compare our new constrained inversion to similar constrained inversion methods, which are based on image theory and seismic attributes. Successful application of the proposed inversion scheme to the MT field data of the Collisional Orogeny in the Scandinavian Caledonides (COSC) project using constraints from the envelope attribute of the COSC reflection seismic profile (CSP) helped to reduce the uncertainty of the interpretation of the main décollement. Thus, the new model gave support to the proposed location of a future borehole COSC-2 which is supposed to penetrate the main décollement and the underlying Precambrian basement.

  2. Detecting Seismic Activity with a Covariance Matrix Analysis of Data Recorded on Seismic Arrays

    NASA Astrophysics Data System (ADS)

    Seydoux, L.; Shapiro, N.; de Rosny, J.; Brenguier, F.

    2014-12-01

    Modern seismic networks record the ground motion continuously all around the world with very broadband and high-sensitivity sensors. The aim of our study is to apply statistical array-based approaches to the processing of these records. We use methods drawn mainly from random matrix theory to give a statistical description of seismic wavefields recorded at the Earth's surface. We estimate the array covariance matrix and explore the distribution of its eigenvalues, which contains information about the coherency of the sources that generated the studied wavefields. With this approach, we can distinguish between signals generated by isolated deterministic sources and "random" ambient noise. We design an algorithm that uses the distribution of the array covariance matrix eigenvalues to detect signals corresponding to coherent seismic events. We investigate the detection capacity of our method at different scales and in different frequency ranges by applying it to the records of two networks: (1) the seismic monitoring network operating on the Piton de la Fournaise volcano at La Réunion island, composed of 21 receivers with an aperture of ~15 km, and (2) the transportable component of the USArray, composed of ~400 receivers with ~70 km inter-station spacing.
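The eigenvalue-based detection can be sketched in a few lines: a single coherent source concentrates the array covariance energy in one eigenvalue, while incoherent noise spreads it across the spectrum; an index such as a normalized "spectral width" separates the two regimes. All dimensions and noise levels below are illustrative:

```python
import numpy as np

rng = np.random.default_rng(2)
n_sta, n_win = 12, 200    # stations and time samples per window (illustrative)

def spectral_width(data):
    """Index-weighted mean of the sorted, normalized covariance eigenvalues:
    small when one eigenvalue dominates, large for flat spectra."""
    cov = data @ data.conj().T / data.shape[1]
    lam = np.sort(np.linalg.eigvalsh(cov))[::-1]
    lam = lam / lam.sum()
    return float(np.sum(np.arange(lam.size) * lam))

noise = rng.normal(size=(n_sta, n_win))                  # incoherent wavefield
waveform = rng.normal(size=n_win)                        # one coherent source
coherent = np.outer(np.ones(n_sta), waveform) \
           + 0.1 * rng.normal(size=(n_sta, n_win))       # same signal at all sensors

sw_event = spectral_width(coherent)   # near zero: energy in one eigenvalue
sw_noise = spectral_width(noise)      # large: energy spread over the spectrum
```

Thresholding such an index through time is one simple way to flag windows containing coherent seismic events.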

  3. Diffraction Seismic Imaging of the Chalk Group Reservoir Rocks

    NASA Astrophysics Data System (ADS)

    Montazeri, M.; Fomel, S.; Nielsen, L.

    2016-12-01

    In this study we investigate seismic diffracted waves instead of seismic reflected waves, which are usually much stronger and carry most of the information regarding subsurface structures. The goal of this study is to improve imaging of small subsurface features such as faults and fractures. Moreover, we focus on the Chalk Group, which contains important groundwater resources onshore and oil and gas reservoirs in the Danish sector of the North Sea. Finding optimum seismic velocity models for the Chalk Group and estimating high-quality stacked sections with conventional processing methods are challenging tasks. Here, we try to filter out as much as possible of undesired arrivals before stacking the seismic data. Further, a plane-wave destruction method is applied on the seismic stack in order to dampen the reflection events and thereby enhance the visibility of the diffraction events. After this initial processing, we estimate the optimum migration velocity using diffraction events in order to obtain a better resolution stack. The results from this study demonstrate how diffraction imaging can be used as an additional tool for improving the images of small-scale features in the Chalk Group reservoir, in particular faults and fractures. Moreover, we discuss the potential of applying this approach in future studies focused on such reservoirs.

  4. High-resolution seismic data regularization and wavefield separation

    NASA Astrophysics Data System (ADS)

    Cao, Aimin; Stump, Brian; DeShon, Heather

    2018-04-01

    We present a new algorithm, the non-equispaced fast antileakage Fourier transform (NFALFT), for regularizing irregularly sampled seismic data. Synthetic tests from 1-D to 5-D show that the algorithm may efficiently remove leaked energy in the frequency-wavenumber domain, and that its regularization process is accurate and fast. Taking advantage of the NFALFT algorithm, we suggest a new method (wavefield separation) for detecting the Earth's inner-core shear wave with irregularly distributed seismic arrays or networks. All interfering seismic phases that propagate along the minor arc are removed from the time window around the PKJKP arrival. The NFALFT algorithm is developed for seismic data, but may also be used for processing other irregularly sampled temporal or spatial data.
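
    The core antileakage idea can be sketched in a deliberately simplified 1-D form (the actual NFALFT is fast and multidimensional; the signal, frequencies, and iteration count below are made up): greedily fit and subtract the strongest Fourier component on the irregular samples so that its spectral leakage cannot mask weaker components:

```python
import numpy as np

rng = np.random.default_rng(1)

def antileakage_ft(t, d, freqs, n_iter=300):
    """Greedy 1-D antileakage Fourier estimation on irregular sample times t.

    Each pass estimates the single strongest component, subtracts its
    contribution from the residual, and accumulates its coefficient.
    """
    residual = d.astype(complex).copy()
    coeffs = np.zeros(len(freqs), dtype=complex)
    basis = np.exp(2j * np.pi * np.outer(freqs, t))   # (n_freq, n_samples)
    for _ in range(n_iter):
        c = basis.conj() @ residual / len(t)          # leaky coefficient estimates
        k = np.argmax(np.abs(c))
        coeffs[k] += c[k]
        residual -= c[k] * basis[k]
    return coeffs, residual

# Irregularly sampled sum of two cosines (5 Hz and 12 Hz over a 1 s window).
t = np.sort(rng.uniform(0.0, 1.0, 200))
d = np.cos(2 * np.pi * 5 * t) + 0.5 * np.cos(2 * np.pi * 12 * t)
freqs = np.arange(-32, 33)

coeffs, residual = antileakage_ft(t, d, freqs)
# The two largest positive-frequency coefficients sit at 5 Hz and 12 Hz,
# and the residual energy is strongly reduced.
```

With the leakage-cleaned coefficients in hand, the data can be resynthesized on any regular grid, which is the regularization step.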

  5. Strong earthquakes, novae and cosmic ray environment

    NASA Technical Reports Server (NTRS)

    Yu, Z. D.

    1985-01-01

    Observations about the relationship between seismic activity and astronomical phenomena are discussed. First, investigating the seismic data (magnitude 7.0 and over) with the method of superposed epochs shows that world seismicity evidently increased after the occurrence of novae with apparent magnitude brighter than 2.2. Second, a great many earthquakes of magnitude 7.0 and over occurred in the 13th month after two of the largest ground level solar cosmic ray events (GLEs). Three episodes of high global seismic activity during 1918-1965 can be related to these phenomena, and it is suggested that the times of globally intense seismic activity can be predicted from observations of a large GLE or a bright nova.
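
    The superposed-epoch method itself is simple to sketch: align windows of a time series on a set of key times and average them, so a repeated response stands out above the background. The monthly counts, event times, and injected 13-month lag below are synthetic illustrations, not the study's data:

```python
import numpy as np

rng = np.random.default_rng(2)

def superposed_epochs(series, key_times, before, after):
    """Average the series in windows aligned on each key time (Chree analysis)."""
    windows = [series[k - before: k + after + 1]
               for k in key_times if k - before >= 0 and k + after < len(series)]
    return np.mean(windows, axis=0)

# Synthetic monthly counts of M>=7.0 earthquakes, with an excess injected
# 13 months after each hypothetical GLE (the lag reported in the study).
counts = rng.poisson(10.0, 600).astype(float)
gle_months = [100, 250, 400]
for g in gle_months:
    counts[g + 13] += 15

stack = superposed_epochs(counts, gle_months, before=6, after=18)
peak_lag = int(np.argmax(stack)) - 6   # peak offset relative to the key time
```

Because the three windows are averaged, the aligned excess at +13 months survives while uncorrelated fluctuations are suppressed.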

  6. High-Resolution Analysis of Seismicity Induced at Berlín Geothermal Field, El Salvador

    NASA Astrophysics Data System (ADS)

    Kwiatek, G.; Bulut, F.; Dresen, G. H.; Bohnhoff, M.

    2012-12-01

    We investigate induced microseismic activity monitored at Berlín Geothermal Field, El Salvador, during a hydraulic stimulation. The site was monitored for a period of 17 months using thirteen 3-component seismic stations located in shallow boreholes. Three stimulations were performed in the well TR8A with a maximum injection rate of 160 l/s and a maximum well head pressure of 130 bar. For the entire period of our analysis, the acquisition system recorded 581 events with moment magnitudes ranging between -0.5 and 3.7. The initial seismic catalog provided by the operator was substantially improved: 1) We re-picked P- and S-wave onsets and relocated the seismic events using the double-difference relocation algorithm based on cross-correlation derived differential arrival time data. Forward modeling was performed using a local 1D velocity model instead of a homogeneous full space. 2) We recalculated source parameters using the spectral fitting method and refined the results by applying the spectral ratio method. We investigated the source parameters and the spatial and temporal changes of the seismic activity based on the refined dataset, and studied the correlation between seismic activity and production. The achieved hypocentral precision allowed resolving the spatiotemporal changes in seismic activity down to a scale of a few meters. The application of the spectral ratio method significantly improved the quality of source parameters in a highly attenuating and geologically complex environment. Of special interest is the largest event (Mw 3.7) and its nucleation process. We investigate whether the refined seismic data display any signatures indicating that the largest event was triggered by the shut-in of the well. We found that the seismic activity displays clear spatial and temporal patterns that can readily be related to the amount of water injected into the well TR8A and other reinjection wells in the investigated area.
The migration of seismicity away from the injection point is observed while the injection rate is increasing. The locations of the migrating seismic events are related to the existing fault system, which is independently supported by calculated focal mechanisms. We found that the event migration continues until the shut-in of the well. The largest-magnitude events are observed right after the shut-in, located in undamaged parts of the fault system. Results show that subsequent stimulation episodes require an increased injection rate (or increased well head pressure) to re-activate the seismic activity (Kaiser effect, "crustal memory"). The static stress drop values increase with distance from the injection point, which is interpreted as related to pore pressure perturbations introduced by stimulation of the injection well.

  7. Performance-Based Seismic Design of Steel Frames Utilizing Colliding Bodies Algorithm

    PubMed Central

    Veladi, H.

    2014-01-01

    A pushover analysis method based on the semirigid connection concept is developed, and the colliding bodies optimization algorithm is employed to find the optimum seismic design of frame structures. Two numerical examples from the literature are studied. The results of the new algorithm are compared to those of conventional design methods to show the strengths and weaknesses of the algorithm. PMID:25202717

  8. Performance-based seismic design of steel frames utilizing colliding bodies algorithm.

    PubMed

    Veladi, H

    2014-01-01

    A pushover analysis method based on the semirigid connection concept is developed, and the colliding bodies optimization algorithm is employed to find the optimum seismic design of frame structures. Two numerical examples from the literature are studied. The results of the new algorithm are compared to those of conventional design methods to show the strengths and weaknesses of the algorithm.
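
    A minimal sketch of the colliding bodies optimization (CBO) idea, applied here to a toy sphere function rather than the paper's frame-design problem; the population size, bounds, and decaying coefficient of restitution are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(3)

def sphere(x):
    return np.sum(x * x, axis=-1)

def cbo(fun, dim=2, n_bodies=20, n_iter=150, lo=-5.0, hi=5.0):
    """Simplified colliding bodies optimization.

    The fitter half of the population are "stationary" bodies; the other
    half move toward them and collide. Post-collision velocities follow
    1-D momentum conservation with a coefficient of restitution that
    decays over the iterations (exploration -> exploitation).
    """
    x = rng.uniform(lo, hi, (n_bodies, dim))
    half = n_bodies // 2
    best = np.inf
    for it in range(n_iter):
        f = fun(x)
        best = min(best, f.min())
        order = np.argsort(f)
        x, f = x[order], f[order]
        m = 1.0 / (f + 1e-12)                 # heavier mass for better bodies
        eps = 1.0 - it / n_iter               # coefficient of restitution
        stat, mov = x[:half], x[half:]
        m_s, m_m = m[:half, None], m[half:, None]
        v = stat - mov                        # pre-collision velocity of the moving body
        v_s = (1 + eps) * m_m * v / (m_s + m_m)
        v_m = (m_m - eps * m_s) * v / (m_s + m_m)
        x = np.vstack([stat + rng.random((half, dim)) * v_s,
                       stat + rng.random((half, dim)) * v_m])
    return min(best, fun(x).min())

best_f = cbo(sphere)
```

Because the better bodies are heavier, they are barely perturbed by collisions, while the worse half is repeatedly thrown into the neighborhood of good solutions.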

  9. Rockfall induced seismic signals: case study in Montserrat, Catalonia

    NASA Astrophysics Data System (ADS)

    Vilajosana, I.; Suriñach, E.; Abellán, A.; Khazaradze, G.; Garcia, D.; Llosa, J.

    2008-08-01

    After a rockfall event, a typical post-event survey includes qualitative volume estimation, trajectory mapping and determination of departure zones. However, quantitative measurements are not usually made. Additional quantitative information could be useful in determining the spatial occurrence of rockfall events and could help us quantify their size. Seismic measurements are suitable for detection purposes since they are non-invasive and relatively inexpensive. Moreover, seismic techniques can provide important information on rockfall size and on the location of impacts. On 14 February 2007 the Avalanche Group of the University of Barcelona obtained the seismic data generated by an artificially triggered rockfall event at the Montserrat massif (near Barcelona, Spain), carried out in order to purge a slope. Two 3-component seismic stations were deployed in the area, about 200 m from the explosion point that triggered the rockfall. Seismic signals and video images were obtained simultaneously. The initial volume of the rockfall was estimated at 75 m3 from laser scanner data analysis. After the explosion, dozens of boulders ranging from 10^-4 to 5 m3 in volume impacted the ground at different locations. The blocks fell onto a terrace 120 m below the release zone. The impact generated a small continuous mass movement composed of a mixture of rocks, sand and dust that ran down the slope and impacted the road 60 m below. Time-domain, time-frequency and particle motion analyses of the seismic records, as well as seismic energy estimation, were performed. The results are as follows: (1) a rockfall event generates seismic signals with specific characteristics in the time domain; (2) the seismic signals generated by the mass movement show a time-frequency evolution different from that of other seismogenic sources (e.g. earthquakes, explosions or a single rock impact).
This feature could be used for detection purposes; (3) particle motion plot analysis shows that the procedure for locating the rock impact using two stations is feasible; and (4) the feasibility and validity of seismic methods for the detection of rockfall events, their localization and their size determination are confirmed.

  10. Seismic density and its relationship with strong historical earthquakes around Beijing, China

    NASA Astrophysics Data System (ADS)

    WANG, J.

    2012-12-01

    Beijing is the capital of China. Regional earthquake observation networks have been operated around Beijing (115.0°-119.3°E, 38.5°-41.0°N) since 1966. From 1970 to 2009, a total of 20,281 earthquakes was recorded. The accumulation of these data raised a fundamental question: what are the characteristics and the physical nature of small earthquakes? To answer this question, we must treat the seismic pattern quantitatively. Here we introduce a new concept of seismic density. The method emphasizes the accuracy of the epicenter location, but no correction is made for the focal depth, because its uncertainty is in any case greater than that of the epicenter. On the basis of these instrumental data, seismic patterns were calculated. The results illustrate that seismic density is the main character of the seismic pattern. The temporal distribution of small earthquakes in each seismic density zone is analyzed quantitatively. According to the statistics, mainly two types of seismic density are distinguished. Besides the instrumental data, abundant information on historical earthquakes around Beijing is found in the archives: a total of 15 strong historical earthquakes (M>=6). The earliest one occurred in September 294. A very interesting phenomenon was noticed: the epicenters of accurately located strong historical earthquakes correspond with one of the seismic density types, whose temporal distribution is almost stationary. This correspondence means that small earthquakes still cluster near the epicenters of historical earthquakes, even if those occurred several hundred years ago. The mechanics of the relationship is analyzed. Strong historical earthquakes and the seismic density of small earthquakes are consistent in each case, which together reveal the persistent weakness of the local crustal medium.
We utilized this relationship to improve the locations of strong historical earthquakes that have large errors. The results of this paper deepen our knowledge of the complexity of seismicity and our understanding of the regional stress and the medium of the local crust.

  11. Support Vector Machine Model for Automatic Detection and Classification of Seismic Events

    NASA Astrophysics Data System (ADS)

    Barros, Vesna; Barros, Lucas

    2016-04-01

    The automated processing of multiple seismic signals to detect, localize and classify seismic events is a central tool in both natural hazards monitoring and nuclear treaty verification. However, false detections and missed detections caused by station noise, as well as incorrect classification of arrivals, are still an issue, and events are often unclassified or poorly classified. Machine learning techniques can therefore be used in automatic processing to classify the huge database of seismic recordings and provide more confidence in the final output. Working in the context of the International Monitoring System (IMS) - a global sensor network developed for the Comprehensive Nuclear-Test-Ban Treaty (CTBT) - we propose a fully automatic method for seismic event detection and classification based on a supervised pattern recognition technique called the Support Vector Machine (SVM). According to Kortström et al. (2015), the advantages of using SVM are its ability to handle a large number of features and its effectiveness in high-dimensional spaces. Our objective is to detect seismic events from one IMS seismic station located in an area of high seismicity and mining activity and to classify them as earthquakes or quarry blasts. We aim to create a flexible and easily adjustable SVM method that can be applied to different regions and datasets. Taking this a step further, accurate results for seismic stations could lead to a modification of the model and its parameters to make it applicable to other waveform technologies used to monitor nuclear explosions, such as infrasound and hydroacoustic waveforms. As authorized users, we have direct access to all IMS data and bulletins through a secure signatory account. A set of significant seismic waveforms containing different types of events (e.g. earthquakes, quarry blasts) and noise is being analysed to train the model and learn the typical signal pattern of these events.
Moreover, comparing the performance of the support-vector network to various classical learning algorithms previously used in seismic detection and classification is an essential final step in analyzing the advantages and disadvantages of the model.
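
    To make the SVM idea concrete, here is a minimal linear SVM trained by subgradient descent on the regularized hinge loss. The two "features" and the synthetic earthquake/blast populations are illustrative assumptions, not IMS discriminants, and a production system would use a library solver and kernels:

```python
import numpy as np

rng = np.random.default_rng(4)

# Two hypothetical waveform features (e.g. a spectral-ratio proxy and a
# dominant-frequency proxy) for 200 "earthquakes" (-1) and 200 "blasts" (+1).
n = 200
X = np.vstack([rng.normal([2.0, 1.0], 0.5, (n, 2)),
               rng.normal([4.0, 3.0], 0.5, (n, 2))])
y = np.concatenate([-np.ones(n), np.ones(n)])

def train_linear_svm(X, y, lam=0.01, lr=0.1, epochs=200):
    """Minimize lam*|w|^2 + mean(max(0, 1 - y*(X@w + b))) by subgradient descent."""
    w = np.zeros(X.shape[1])
    b = 0.0
    for _ in range(epochs):
        active = y * (X @ w + b) < 1          # points violating the margin
        grad_w = 2 * lam * w - (y[active, None] * X[active]).sum(axis=0) / len(y)
        grad_b = -y[active].sum() / len(y)
        w -= lr * grad_w
        b -= lr * grad_b
    return w, b

w, b = train_linear_svm(X, y)
accuracy = np.mean(np.sign(X @ w + b) == y)
```

The hinge loss only penalizes points inside or on the wrong side of the margin, which is what gives the SVM its sparse, margin-maximizing character.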

  12. Safety design approach for external events in Japan sodium-cooled fast reactor

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Yamano, H.; Kubo, S.; Tani, A.

    2012-07-01

    This paper describes a safety design approach for external events in the design study of the Japan sodium-cooled fast reactor. An emphasis is the introduction of design extension external conditions (DEECs). In addition to seismic design, other external events such as tsunami, strong wind and abnormal temperature were addressed in this study. From a wide variety of external events consisting of natural hazards and human-induced ones, a screening method based on siting, consequence and frequency was developed to select representative events. Design approaches for these events were categorized on a probabilistic, statistical or deterministic basis. External hazard conditions were considered mainly for DEECs. In the probabilistic approach, the DEECs for earthquake, tsunami and strong wind were defined at 1/10 of the exceedance probability of the external design bases. The other representative DEECs were defined on statistical or deterministic grounds. (authors)

  13. pick_xwell, a program for interactive picking of crosswell seismic and radar data

    USGS Publications Warehouse

    Ellefsen, K.J.

    1999-01-01

    Travel times can be plotted on the computer screen or printed to a file in PostScript format. The program is written in the IDL programming language and is executed, in command-line mode, within the IDL environment. It must be run from an X-window terminal connected to a computer running the Unix operating system. The data must be in the SU format.

  14. 1-D seismic velocity model and hypocenter relocation using double difference method around West Papua region

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sabtaji, Agung, E-mail: sabtaji.agung@gmail.com, E-mail: agung.sabtaji@bmkg.go.id; Indonesia’s Agency for Meteorological, Climatological and Geophysics Region V, Jayapura 1572; Nugraha, Andri Dian, E-mail: nugraha@gf.itb.ac.id

    2015-04-24

    The West Papua region has fairly high seismicity due to its tectonic setting and many inland faults. In addition, the region has unique and complex tectonic conditions, and this situation leads to a high potential for seismic hazard in the region. Precise earthquake hypocenter locations are very important, as they provide society with high-quality earthquake parameter information and constraints on the subsurface structure in this region. We determined a 1-D P-wave velocity model using the BMKG earthquake catalog from April 2009 up to March 2014 around the West Papua region. The obtained 1-D seismic velocity model was then used as input for improving hypocenter locations using the double-difference method. The relocated hypocenters show fairly clearly the pattern of intraslab earthquakes beneath the New Guinea Trench (NGT). The relocated hypocenters related to the inland faults are also observed to be more focused around the faults.

  15. 3-component beamforming analysis of ambient seismic noise field for Love and Rayleigh wave source directions

    NASA Astrophysics Data System (ADS)

    Juretzek, Carina; Hadziioannou, Céline

    2014-05-01

    Our knowledge about the common and distinct origins of Love and Rayleigh waves observed in the microseism band of the ambient seismic noise field is still limited, including our understanding of source locations and source mechanisms. Multi-component array methods are suitable for addressing this issue. In this work we use a 3-component beamforming algorithm to obtain source directions and polarization states of the ambient seismic noise field within the primary and secondary microseism bands recorded at the Gräfenberg array in southern Germany. The method allows us to distinguish between differently polarized waves present in the seismic noise field and estimates Love and Rayleigh wave source directions and their seasonal variations using one year of array data. We find mainly coinciding directions for the strongest acting sources of both wave types in the primary microseism band, and differing source directions in the secondary microseism band.
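
    The plane-wave beamforming step that underlies such array methods can be sketched for a single component and a single frequency as follows; the station geometry, slowness, and noise level are made-up values, and the real 3-component method additionally exploits polarization:

```python
import numpy as np

rng = np.random.default_rng(5)

# Hypothetical array geometry (km) and a plane wave with known horizontal slowness.
xy = rng.uniform(-10.0, 10.0, (8, 2))
f = 0.2                                     # Hz, primary-microseism band
s_true = np.array([-0.2, -0.2])             # s/km; lies on the search grid below

# Narrow-band spectral estimates at each station: plane-wave phase plus noise.
d = np.exp(-2j * np.pi * f * (xy @ s_true))
d += 0.05 * (rng.standard_normal(8) + 1j * rng.standard_normal(8))

# Grid search over slowness: the beam power peaks at the true slowness,
# whose azimuth gives the source direction.
grid = np.arange(-0.5, 0.501, 0.05)
power = np.array([[np.abs(np.exp(2j * np.pi * f * (xy @ np.array([sx, sy]))) @ d) ** 2
                   for sy in grid] for sx in grid])
ix, iy = np.unravel_index(np.argmax(power), power.shape)
s_est = np.array([grid[ix], grid[iy]])
```

Steering with the conjugate phase aligns the station spectra so they add coherently only for the correct slowness vector.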

  16. Improving resolution of crosswell seismic section based on time-frequency analysis

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Luo, H.; Li, Y.

    1994-12-31

    According to signal theory, improving the resolution of a seismic section means extending the high-frequency band of the seismic signal. In a crosswell section, the sonic log can be regarded as a reliable source of high-frequency information for the trace near the borehole. The task, then, is to introduce this high-frequency information into the whole section. However, neither traditional deconvolution algorithms nor some new inversion methods such as BCI (Broad Constraint Inversion) are satisfactory, because of high-frequency noise and the nonuniqueness of inversion results, respectively. To overcome their disadvantages, this paper presents a new algorithm based on Time-Frequency Analysis (TFA) technology, which has increasingly received attention as a useful signal analysis tool. Practical applications show that the new method is a stable scheme that greatly improves the resolution of crosswell seismic sections without decreasing the signal-to-noise ratio (SNR).

  17. Airgun inter-pulse noise field during a seismic survey in an Arctic ultra shallow marine environment.

    PubMed

    Guan, Shane; Vignola, Joseph; Judge, John; Turo, Diego

    2015-12-01

    Offshore oil and gas exploration using seismic airguns generates intense underwater pulses that could cause marine mammal hearing impairment and/or behavioral disturbances. However, few studies have investigated the resulting multipath propagation and reverberation from airgun pulses. This research uses continuous acoustic recordings collected in the Arctic during a low-level open-water shallow marine seismic survey to measure noise levels between airgun pulses. Two methods were used to quantify noise levels during these inter-pulse intervals. The first, based on calculating the root-mean-square sound pressure level in various sub-intervals, is referred to as the increment computation method, and the second, which employs the Hilbert transform to calculate instantaneous acoustic amplitudes, is referred to as the Hilbert transform method. Analyses using both methods yield similar results, showing that the inter-pulse sound field exceeds ambient noise levels by as much as 9 dB during relatively quiet conditions. Inter-pulse noise levels are also related to the source distance, probably due to the highly reverberant conditions of the very shallow water environment. These methods can be used to quantify impacts on the acoustic environment from anthropogenic transient noises (e.g., seismic pulses, impact pile driving, and sonar pings) and to address potential acoustic masking affecting marine mammals.
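
    The two level measures can be sketched on a synthetic tone rather than airgun data. The FFT construction below is the standard way to form the analytic signal whose magnitude a Hilbert-transform envelope computation uses; the comparison with a windowed rms level mirrors the two methods described above:

```python
import numpy as np

def analytic_signal(x):
    """Analytic signal via the FFT: zero the negative frequencies,
    double the positive ones (this is what a Hilbert-transform
    envelope computation does under the hood)."""
    n = len(x)
    spec = np.fft.fft(x)
    h = np.zeros(n)
    h[0] = 1.0
    if n % 2 == 0:
        h[n // 2] = 1.0
        h[1:n // 2] = 2.0
    else:
        h[1:(n + 1) // 2] = 2.0
    return np.fft.ifft(spec * h)

fs = 1000.0
t = np.arange(0, 1, 1 / fs)
x = 3.0 * np.sin(2 * np.pi * 50 * t)       # a 50 Hz tone of peak amplitude 3

env = np.abs(analytic_signal(x))           # instantaneous amplitude (Hilbert method)
rms = np.sqrt(np.mean(x ** 2))             # rms level over the window (increment method)
# For a pure tone the envelope sits at the peak amplitude while the rms is
# peak/sqrt(2), i.e. the two measures differ by about 3 dB.
```

On real inter-pulse intervals the envelope tracks short-lived reverberant energy that a single sub-interval rms would smear out, which is why the two methods are compared.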

  18. ADVANCED WAVEFORM SIMULATION FOR SEISMIC MONITORING EVENTS

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Helmberger, Donald V.; Tromp, Jeroen; Rodgers, Arthur J.

    Earthquake source parameters underpin several aspects of nuclear explosion monitoring: calibration of moment magnitudes (including coda magnitudes) and magnitude and distance amplitude corrections (MDAC); source depths; discrimination by isotropic moment tensor components; and waveform modeling for structure (including waveform tomography). This project seeks to improve methods for, and broaden the applicability of, estimating source parameters from broadband waveforms using the Cut-and-Paste (CAP) methodology. The CAP method uses a library of Green's functions for a one-dimensional (1D, depth-varying) seismic velocity model. The method separates the main arrivals of the regional waveform into 5 windows: Pnl (vertical and radial components), Rayleigh (vertical and radial components) and Love (transverse component). Source parameters are estimated by a grid search over strike, dip, rake and depth, while the seismic moment (or equivalently the moment magnitude, MW) is adjusted to fit the amplitudes. Key to the CAP method is allowing the synthetic seismograms to shift in time relative to the data in order to account for path-propagation errors (delays) in the 1D seismic velocity model used to compute the Green's functions. The CAP method has been shown to improve estimates of source parameters, especially when delay and amplitude biases are calibrated using high signal-to-noise data from moderate earthquakes (CAP+).

  19. Analysis of the seismic signals generated by controlled single-block rockfalls on soft clay shales sediments: the Rioux Bourdoux slope experiment (French Alps).

    NASA Astrophysics Data System (ADS)

    Hibert, Clément; Provost, Floriane; Malet, Jean-Philippe; Bourrier, Franck; Berger, Frédéric; Bornemann, Pierrick; Borgniet, Laurent; Tardif, Pascal; Mermin, Eric

    2016-04-01

    Understanding the dynamics of rockfalls is critical to mitigating the associated hazards, but it is made very difficult by the nature of these events, which are hard to observe directly. Recent advances in seismology allow us to determine the dynamics of the largest landslides on Earth from the very low-frequency seismic waves they generate. However, the vast majority of rockfalls that occur worldwide are too small to generate such low-frequency seismic waves, so these methods cannot be used to reconstruct their dynamics. If seismic sensors are close enough, however, these events generate high-frequency seismic signals. Unfortunately, we cannot yet use these high-frequency seismic records to infer parameters synthesizing the rockfall dynamics, as the source of these waves is not well understood. One of the first steps towards understanding the physical processes involved in the generation of high-frequency seismic waves by rockfalls is to study the link between the dynamics of a single block propagating along a well-known path and the features of the seismic signal it generates. We conducted controlled releases of single limestone blocks in a gully of clay shales (black marls) in the Rioux Bourdoux torrent (French Alps). 28 blocks, with masses ranging from 76 kg to 472 kg, were released. A monitoring network combining high-velocity cameras, a broadband seismometer and an array of 4 high-frequency seismometers was deployed near the release area and along the travel path. The high-velocity cameras allow us to reconstruct the 3D trajectories of the blocks and to estimate their velocities and the positions of the different impacts on the slope surface. These data are compared to the recorded seismic signals. As the distance between the block and the seismic sensors at the time of each impact is known, we can determine the associated seismic signal amplitude corrected for propagation and attenuation effects.
We can further compare the velocity, energy, and momentum of the block at each impact to the true amplitude and energy of the corresponding part of the seismic signal. Finding potential correlations and scaling laws between the dynamics of the source and the high-frequency seismic signal features would constitute an important step towards understanding more complex slope movements that involve multiple blocks or granular flows. This approach may lead to future developments of methods able to determine the dynamics of a large variety of slope movements directly from the seismic signals they generate.

  20. Research on Seismic Wave Attenuation in Gas Hydrates Layer Using Vertical Cable Seismic Data

    NASA Astrophysics Data System (ADS)

    Wang, Xiangchun; Liang, Lunhang; Wu, Zhongliang

    2018-06-01

    Vertical cable seismic (VCS) data are at present the most suitable seismic data for estimating the quality factor (Q) of layers beneath the sea bottom. Here, Q values are estimated from VCS data using a high-precision logarithmic spectral ratio method. The estimated Q values are applied to identify layers containing gas hydrates and free gas. The results show that the Q value is larger in the layer with gas hydrates, and smaller in the layer with free gas, than in layers without gas hydrates or free gas. Additionally, the estimated Q values are used in inverse Q filtering to compensate for the attenuated high-frequency components of the seismic signal. The results show that the dominant frequency of the seismic signal is raised, the frequency band is broadened, and the resolution of the VCS data is effectively improved.
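
    The logarithmic spectral ratio method reduces to fitting a straight line: for two receivers separated by travel time Δt, ln(A2(f)/A1(f)) = ln G − πfΔt/Q, so Q follows from the fitted slope. A synthetic check, with all numbers invented for illustration:

```python
import numpy as np

rng = np.random.default_rng(6)

# Hypothetical amplitude spectra at two receiver depths separated by a
# travel time dt through a layer with true Q = 50.
f = np.linspace(5.0, 60.0, 40)          # Hz
dt = 0.2                                # s
Q_true = 50.0
A1 = np.exp(-0.1 * f)                   # upper-receiver spectrum (source/coupling shape)
A2 = 0.7 * A1 * np.exp(-np.pi * f * dt / Q_true)   # geometric factor * attenuation
A2 *= np.exp(0.01 * rng.standard_normal(f.size))   # mild measurement noise

# The log spectral ratio is linear in frequency; Q comes from the slope.
slope, intercept = np.polyfit(f, np.log(A2 / A1), 1)
Q_est = -np.pi * dt / slope
```

Because the frequency-independent geometric factor only shifts the intercept, the slope-based estimate isolates the attenuation term.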

  1. Crustal wavespeed structure of North Texas and Oklahoma based on ambient noise cross-correlation functions and adjoint tomography

    NASA Astrophysics Data System (ADS)

    Zhu, H.

    2017-12-01

    Recently, seismologists have observed increasing seismicity in North Texas and Oklahoma. Based on seismic observations and other geophysical measurements, some studies have suggested possible links between the increasing seismicity and wastewater injection during unconventional oil and gas exploration. To better monitor seismic events and investigate their mechanisms, we need an accurate 3D crustal wavespeed model for North Texas and Oklahoma. Considering the uneven distribution of earthquakes in this region, seismic tomography with local earthquake records has difficulty achieving good illumination. To overcome this limitation, in this study, ambient noise cross-correlation functions are used to constrain subsurface variations in wavespeed. I use adjoint tomography to iteratively fit frequency-dependent phase differences between observed and predicted band-limited Green's functions. The spectral-element method is used to numerically calculate the band-limited Green's functions, and the adjoint method is used to calculate misfit gradients with respect to the wavespeeds. 25 preconditioned conjugate gradient iterations are used to update the model parameters and minimize the data misfit. Features in the new crustal model M25 correlate with geological units in the study region, including the Llano uplift, the Anadarko basin and the Ouachita orogenic front. In addition, these seismic anomalies correlate with gravity and magnetic observations. The new model can be used to better constrain earthquake source parameters in North Texas and Oklahoma, such as epicenter locations and moment tensor solutions, which are important for investigating potential relations between seismicity and unconventional oil and gas exploration.

  2. EMERALD: Coping with the Explosion of Seismic Data

    NASA Astrophysics Data System (ADS)

    West, J. D.; Fouch, M. J.; Arrowsmith, R.

    2009-12-01

    The geosciences are currently generating an unparalleled quantity of new public broadband seismic data with the establishment of large-scale seismic arrays such as the EarthScope USArray, which are enabling new and transformative scientific discoveries of the structure and dynamics of the Earth’s interior. Much of this explosion of data is a direct result of the formation of the IRIS consortium, which has enabled an unparalleled level of open exchange of seismic instrumentation, data, and methods. The production of these massive volumes of data has generated new and serious data management challenges for the seismological community. A significant challenge is the maintenance and updating of seismic metadata, which includes information such as station location, sensor orientation, instrument response, and clock timing data. This key information changes at unknown intervals, and the changes are not generally communicated to data users who have already downloaded and processed data. Another basic challenge is the ability to handle massive seismic datasets when waveform file volumes exceed the fundamental limitations of a computer’s operating system. A third, long-standing challenge is the difficulty of exchanging seismic processing codes between researchers; each scientist typically develops his or her own unique directory structure and file naming convention, requiring that codes developed by another researcher be rewritten before they can be used. To address these challenges, we are developing EMERALD (Explore, Manage, Edit, Reduce, & Analyze Large Datasets). The overarching goal of the EMERALD project is to enable more efficient and effective use of seismic datasets ranging from just a few hundred to millions of waveforms with a complete database-driven system, leading to higher quality seismic datasets for scientific analysis and enabling faster, more efficient scientific research. 
We will present a preliminary (beta) version of EMERALD, an integrated, extensible, standalone database server system based on the open-source PostgreSQL database engine. The system is designed for fast and easy processing of seismic datasets, and provides the necessary tools to manage very large datasets and all associated metadata. EMERALD provides methods for efficient preprocessing of seismic records; large record sets can be easily and quickly searched, reviewed, revised, reprocessed, and exported. EMERALD can retrieve and store station metadata and alert the user to metadata changes. The system provides many methods for visualizing data, analyzing dataset statistics, and tracking the processing history of individual datasets. EMERALD allows development and sharing of visualization and processing methods using any of 12 programming languages. EMERALD is designed to integrate existing software tools; the system provides wrapper functionality for existing widely-used programs such as GMT, SOD, and TauP. Users can interact with EMERALD via a web browser interface, or they can directly access their data from a variety of database-enabled external tools. Data can be imported and exported from the system in a variety of file formats, or can be directly requested and downloaded from the IRIS DMC from within EMERALD.

  3. Natural or Induced: Identifying Natural and Induced Swarms from Pre-production and Co-production Microseismic Catalogs at the Coso Geothermal Field

    USGS Publications Warehouse

    Schoenball, Martin; Kaven, Joern; Glen, Jonathan M. G.; Davatzes, Nicholas C.

    2015-01-01

    Increased levels of seismicity coinciding with injection of reservoir fluids have prompted interest in methods to distinguish induced from natural seismicity. Discrimination between induced and natural seismicity is especially difficult in areas that have high levels of natural seismicity, such as the geothermal fields at the Salton Sea and Coso, both in California. Both areas show swarm-like sequences that could be related to natural, deep fluid migration as part of the natural hydrothermal system. Therefore, swarms often have spatio-temporal patterns that resemble fluid-induced seismicity, and might possibly share other characteristics. The Coso Geothermal Field and its surroundings is one of the most seismically active areas in California, with a large proportion of its activity occurring as seismic swarms. Here we analyze clustered seismicity in and surrounding the currently produced reservoir, comparing pre-production and co-production periods. We perform a cluster analysis based on the inter-event distance in a space-time-energy domain to identify notable earthquake sequences. For each event j, the closest previous event i is identified and their relationship categorized. If this nearest-neighbor distance is below a threshold based on the local minimum of the bimodal distribution of nearest-neighbor distances, then event j is included in the cluster as a child of this parent event i. If it is above the threshold, event j begins a new cluster. This process identifies subsets of events whose nearest-neighbor distances and relative timing qualify them as a cluster, as well as characterizing the parent-child relationships among events in the cluster.
We apply this method to three different catalogs: (1) a two-year microseismic survey of the Coso geothermal area that was acquired before exploration drilling in the area began; (2) the HYS_catalog_2013 that contains 52,000 double-difference relocated events and covers the years 1981 to 2013; and (3) a catalog of 57,000 events with absolute locations from the local network recorded between 2002 and 2007. Using this method we identify 10 clusters of more than 20 events each in the pre-production survey and more than 200 distinct seismicity clusters that each contain at least 20 and up to more than 1000 earthquakes in the more extensive catalogs. The cluster identification method used yields a hierarchy of links between multiple generations of parent and offspring events. We analyze different topological parameters of this hierarchy to better characterize and thus differentiate natural swarms from induced clustered seismicity and also to identify aftershock sequences of notable mainshocks. We find that the branching characteristic given by the average number of child events per parent event is significantly different for clusters below than for clusters around the produced field.
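
A minimal sketch of the nearest-neighbor assignment described above, using a Zaliapin-style space-time-magnitude metric; the metric parameters (b, d) and the threshold value are illustrative assumptions, not the values used in the study:

```python
import numpy as np

def nearest_neighbor_clusters(t, x, y, mag, b=1.0, d=1.6, eta_thresh=1e-5):
    """For each event j, find the closest previous event i under a
    space-time-magnitude metric (Zaliapin-style); link j to i as a
    child if the distance is below the threshold, else start a cluster."""
    n = len(t)
    parent = np.full(n, -1)                        # -1 marks a cluster root
    for j in range(1, n):
        dt = t[j] - t[:j]                          # time to earlier events
        dr = np.hypot(x[j] - x[:j], y[j] - y[:j])  # epicentral distance
        eta = dt * dr ** d * 10.0 ** (-b * mag[:j])
        i = int(np.argmin(eta))
        if eta[i] < eta_thresh:
            parent[j] = i                          # j is a child of i
    return parent

# toy catalog: a mainshock with two nearby, immediate aftershocks,
# plus a distant, much later event that starts its own cluster
t   = np.array([0.0, 0.01, 0.02, 50.0])   # days
x   = np.array([0.0, 0.1, 0.2, 80.0])     # km
y   = np.array([0.0, 0.0, 0.1, 80.0])
mag = np.array([4.0, 2.0, 2.0, 3.0])
print(nearest_neighbor_clusters(t, x, y, mag))  # [-1  0  0 -1]
```

Events whose parent entry remains -1 start new clusters; chains of parent links define the hierarchy whose branching statistics (children per parent) the study analyzes.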

  4. Recent faulting in western Nevada revealed by multi-scale seismic reflection

    USGS Publications Warehouse

    Frary, Roxanna N.; Louie, John N.; Stephenson, William J.; Odum, Jackson K.; Kell, Annie; Eisses, Amy; Kent, Graham M.; Driscoll, Neal W.; Karlin, Robert; Baskin, Robert L.; Pullammanappallil, Satish; Liberty, Lee M.

    2011-01-01

    The main goal of this study is to compare different reflection methods used to image subsurface structure within different physical environments in western Nevada. Across all the methods employed, the primary aim is fault imaging for structural information toward geothermal exploration and seismic hazard estimation. We use seismic CHIRP (a swept-frequency marine acquisition system), weight drop (an accelerated hammer source), and two different vibroseis systems to characterize fault structure. We focused our efforts on the Reno metropolitan area and the area within and surrounding Pyramid Lake in northern Nevada. These different methods have provided valuable constraints on fault geometry and activity, as well as associated fluid movement. These constraints are critical for evaluating both the potential for large earthquakes in these areas and the possibilities for geothermal exploration near these structures.

  5. Computing the Distribution of Pareto Sums Using Laplace Transformation and Stehfest Inversion

    NASA Astrophysics Data System (ADS)

    Harris, C. K.; Bourne, S. J.

    2017-05-01

    In statistical seismology, the properties of distributions of total seismic moment are important for constraining seismological models, such as the strain partitioning model (Bourne et al. J Geophys Res Solid Earth 119(12): 8991-9015, 2014). This work was motivated by the need to develop appropriate seismological models for the Groningen gas field in the northeastern Netherlands, in order to address the issue of production-induced seismicity. The total seismic moment is the sum of the moments of individual seismic events, which, in common with many other natural processes, are governed by Pareto or "power law" distributions. The maximum possible moment for an induced seismic event can be constrained by geomechanical considerations, but rather poorly, and for Groningen it cannot be reliably inferred from the frequency distribution of moment magnitude pertaining to the catalogue of observed events. In such cases it is usual to work with the simplest form of the Pareto distribution without an upper bound, and we follow the same approach here. In the case of seismicity, the exponent β appearing in the power-law relation is small enough for the variance of the unbounded Pareto distribution to be infinite, which renders standard statistical methods concerning sums of statistical variables, based on the central limit theorem, inapplicable. Determinations of the properties of sums of moderate to large numbers of Pareto-distributed variables with infinite variance have traditionally been addressed using intensive Monte Carlo simulations. This paper presents a novel method for determining the properties of such sums that is accurate, fast, and easily implemented, and is applicable to Pareto-distributed variables for which the power-law exponent β lies within the interval [0, 1].
It is based on shifting the original variables so that a non-zero density is obtained exclusively for non-negative values of the parameter and is identically zero elsewhere, a property that is shared by the sum of an arbitrary number of such variables. The technique involves applying the Laplace transform to the normalized sum (which is simply the product of the Laplace transforms of the densities of the individual variables, with a suitable scaling of the Laplace variable), and then inverting it numerically using the Gaver-Stehfest algorithm. After validating the method using a number of test cases, it was applied to address the distribution of total seismic moment, and the quantiles computed for various numbers of seismic events were compared with those obtained in the literature using Monte Carlo simulation. Excellent agreement was obtained. As an application, the method was applied to the evolution of total seismic moment released by tremors due to gas production in the Groningen gas field in the northeastern Netherlands. The speed, accuracy and ease of implementation of the method allows the development of accurate correlations for constraining statistical seismological models using, for example, the maximum-likelihood method. It should also be of value in other natural processes governed by Pareto distributions with exponent less than unity.
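
The Gaver-Stehfest inversion at the heart of the method is short to implement; the sketch below inverts a known transform pair as a sanity check (N = 12 is a common choice in double precision; this is a generic implementation, not the authors' code):

```python
import math

def stehfest_coeffs(N):
    """Gaver-Stehfest weights V_k for even N (larger N requires
    higher-precision arithmetic due to cancellation)."""
    half = N // 2
    V = []
    for k in range(1, N + 1):
        s = 0.0
        for j in range((k + 1) // 2, min(k, half) + 1):
            s += (j ** half * math.factorial(2 * j)
                  / (math.factorial(half - j) * math.factorial(j)
                     * math.factorial(j - 1) * math.factorial(k - j)
                     * math.factorial(2 * j - k)))
        V.append((-1) ** (k + half) * s)
    return V

def stehfest_invert(F, t, N=12):
    """Numerically invert a Laplace transform F(s) at time t > 0."""
    ln2 = math.log(2.0)
    V = stehfest_coeffs(N)
    return ln2 / t * sum(V[k - 1] * F(k * ln2 / t) for k in range(1, N + 1))

# sanity check on a known pair: L{e^{-t}}(s) = 1/(s+1)
print(stehfest_invert(lambda s: 1.0 / (s + 1.0), 1.0))  # ≈ exp(-1) ≈ 0.3679
```

In the paper's setting, F(s) would be the numerically evaluated Laplace transform of the shifted Pareto density raised to the number of events, with the scaling of the Laplace variable described above.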

  6. Innovative Approaches for Seismic Studies of Mars (Invited)

    NASA Astrophysics Data System (ADS)

    Banerdt, B.

    2010-12-01

    In addition to its intrinsic interest, Mars is particularly well-suited for studying the full range of processes and phenomena related to early terrestrial planet evolution, from initial differentiation to the start of plate tectonics. It is large and complex enough to have undergone most of the processes that affected early Earth but, unlike the Earth, has apparently not undergone extensive plate tectonics or other major reworking that erased the imprint of early events (as evidenced by the presence of cratered surfaces older than 4 Ga). The martian mantle should have Earth-like polymorphic phase transitions and may even support a perovskite layer near the core (depending on the actual core radius), a characteristic that would have major implications for core cooling and mantle convection. Thus even the most basic measurements of planetary structure, such as crustal thickness, core radius and state (solid/liquid), and gross mantle velocity structure would provide invaluable constraints on models of early planetary evolution. Despite this strong scientific motivation (and several failed attempts), Mars remains terra incognita from a seismic standpoint. This is due to an unfortunate convergence of circumstances, prominent among which are our uncertainty in the level of seismic activity and the relatively high cost of landing multiple long-lived spacecraft on Mars to comprise a seismic network for body-wave travel-time analysis; typically four to ten stations are considered necessary for this type of experiment. In this presentation I will address both of these issues. In order to overcome the concern about a possible lack of marsquakes with which to work, it is useful to identify alternative methods for using seismic techniques to probe the interior. Seismology without quakes can be accomplished in a number of ways. 
“Unconventional” sources of seismic energy include meteorites (which strike the surface of Mars at a relatively high rate), artificial projectiles (which can supply up to 10^10 J of kinetic energy), seismic “hum” from meteorological forcing, and tidal deformation from Phobos (with a period around 6 hours). Another means for encouraging a seismic mission to Mars is to promote methods that can derive interior information from a single seismometer. Fortunately, many such methods exist, including source location through P-S times and back-azimuth, receiver functions, identification of later phases (PcP, PKP, etc.), surface wave dispersion, and normal mode analysis (from single large events, stacked events, or background noise). Such methods could enable the first successful seismic investigation of another planet since the Apollo seismometers were turned off almost 35 years ago.

  7. Towards Improved Considerations of Risk in Seismic Design (Plinius Medal Lecture)

    NASA Astrophysics Data System (ADS)

    Sullivan, T. J.

    2012-04-01

    The aftermath of recent earthquakes is a reminder that seismic risk is a very relevant issue for our communities. Implicit within the seismic design standards currently in place around the world is that minimum acceptable levels of seismic risk will be ensured through design in accordance with the codes. All the same, none of the design standards specify what the minimum acceptable level of seismic risk actually is. Instead, a series of deterministic limit states are set which engineers then demonstrate are satisfied for their structure, typically through the use of elastic dynamic analyses adjusted to account for non-linear response using a set of empirical correction factors. From the early nineties the seismic engineering community has begun to recognise numerous fundamental shortcomings with such seismic design procedures in modern codes. Deficiencies include the use of elastic dynamic analysis for the prediction of inelastic force distributions, the assignment of uniform behaviour factors for structural typologies irrespective of the structural proportions and expected deformation demands, and the assumption that hysteretic properties of a structure do not affect the seismic displacement demands, amongst other things. In light of this a number of possibilities have emerged for improved control of risk through seismic design, with several innovative displacement-based seismic design methods now well developed. For a specific seismic design intensity, such methods provide a more rational means of controlling the response of a structure to satisfy performance limit states. While the development of such methodologies does mark a significant step forward for the control of seismic risk, they do not, on their own, identify the seismic risk of a newly designed structure. In the U.S. a rather elaborate performance-based earthquake engineering (PBEE) framework is under development, with the aim of providing seismic loss estimates for new buildings. 
The PBEE framework consists of the following four main analysis stages: (i) probabilistic seismic hazard analysis to give the mean occurrence rate of earthquake events having an intensity greater than a threshold value, (ii) structural analysis to estimate the global structural response, given a certain value of seismic intensity, (iii) damage analysis, in which fragility functions are used to express the probability that a building component exceeds a damage state, as a function of the global structural response, (iv) loss analysis, in which the overall performance is assessed based on the damage state of all components. This final step gives estimates of the mean annual frequency with which various repair cost levels (or other decision variables) are exceeded. The realisation of this framework does suggest that risk-based seismic design is now possible. However, comparing current code approaches with the proposed PBEE framework, it becomes apparent that mainstream consulting engineers would have to go through a massive learning curve in order to apply the new procedures in practice. With this in mind, it is proposed that simplified loss-based seismic design procedures are a logical means of helping the engineering profession transition from what are largely deterministic seismic design procedures in current codes, to more rational risk-based seismic design methodologies. Examples are provided to illustrate the likely benefits of adopting loss-based seismic design approaches in practice.
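
The four analysis stages can be miniaturized into a single numerical chain. In the toy sketch below, the hazard curve, response model, fragility, and repair cost are all invented placeholders (not calibrated to any code or study); the result is the mean annual rate of incurring the normalized repair cost for one component:

```python
import math
import numpy as np

# Toy PBEE chain for a single component; every number is illustrative.
im = np.linspace(0.05, 2.0, 40)            # intensity measure, e.g. Sa (g)
haz_rate = 1e-2 * (im / 0.05) ** -2.5      # stage (i): hazard curve lambda(IM)
drift = 0.02 * im ** 1.2                   # stage (ii): response given IM

def fragility(d, median=0.015, beta=0.4):  # stage (iii): P[damage | drift]
    return 0.5 * (1.0 + math.erf(math.log(d / median) / (beta * math.sqrt(2.0))))

p_damage = np.array([fragility(d) for d in drift])
cost = 1.0                                 # stage (iv): normalized repair cost

occ = -np.gradient(haz_rate, im)           # occurrence density, -d(lambda)/dIM
y = p_damage * cost * occ
annual_loss_rate = np.sum(0.5 * (y[1:] + y[:-1]) * np.diff(im))  # trapezoid rule
print(0.0 < annual_loss_rate < haz_rate[0])  # a small, positive annual rate
```

Repeating the last integration for a range of cost thresholds yields the loss exceedance curve that the framework's final step reports.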

  8. Monitoring Instrument Performance in Regional Broadband Seismic Network Using Ambient Seismic Noise

    NASA Astrophysics Data System (ADS)

    Ye, F.; Lyu, S.; Lin, J.

    2017-12-01

    In the past ten years, the number of seismic stations has increased significantly, and regional seismic networks with advanced technology have gradually been developed all over the world. The resulting broadband data help to improve seismological research. It is important to monitor the performance of broadband instruments in a new network over a long period of time to ensure the accuracy of seismic records. Here, we propose a method that uses ambient noise data in the period range 5-25 s to monitor instrument performance and check data quality in situ. The method is based on an analysis of amplitude and phase index parameters calculated from pairwise cross-correlations among three stations, which provides multiple references for reliable error estimates. Index parameters calculated daily during a two-year observation period are evaluated to identify stations with instrument response errors in near real time. During data processing, initial instrument responses are used in place of the available instrument responses to simulate instrument response errors, which are then used to verify our results. We also examine the feasibility of the method using tailing-noise data from stations selected from USArray in different locations, and analyze possible instrumental errors resulting in time shifts, which are used to verify the method. Additionally, we show an application in which instrument response errors caused by pole-zero variations, when carried into monitoring of temporal variations in crustal properties, produce apparent velocity perturbations larger than the standard deviation. The results indicate that monitoring seismic instrument performance helps eliminate data pollution before analysis begins.
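
A stripped-down version of the pairwise cross-correlation underlying such index parameters: measuring the lag of the correlation peak between two noise records recovers a relative timing error between instruments (a single simplified index, not the amplitude-and-phase parameters of the paper):

```python
import numpy as np

def xcorr_lag(a, b, dt):
    """Return the lag (in seconds) of the cross-correlation maximum
    between two equal-length records; a drift of this lag across days
    flags a timing or instrument problem (a simplified index only)."""
    n = len(a)
    spec = np.fft.rfft(a, 2 * n) * np.conj(np.fft.rfft(b, 2 * n))
    cc = np.fft.irfft(spec)                       # zero-padded correlation
    cc = np.concatenate([cc[-(n - 1):], cc[:n]])  # reorder to lags -(n-1)..n-1
    lags = np.arange(-(n - 1), n)
    return lags[np.argmax(cc)] * dt

# synthetic: station b records the same noise as a, delayed by 5 samples
rng = np.random.default_rng(0)
noise = rng.standard_normal(2000)
a, b = noise[5:], noise[:-5]
lag = xcorr_lag(a, b, dt=0.01)
print(abs(lag))  # recovers the 5-sample (0.05 s) offset
```

Computed daily over station pairs, the stability of this lag (and of the correlation amplitude) is the kind of reference the triple-station comparison exploits.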

  9. Amplitude interpretation and visualization of three-dimensional reflection data

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Enachescu, M.E.

    1994-07-01

    Digital recording and processing of modern three-dimensional surveys allow for relatively good preservation and correct spatial positioning of seismic reflection amplitude. A four-dimensional seismic reflection field matrix R (x,y,t,A), which can be computer visualized (i.e., real-time interactively rendered, edited, and animated), is now available to the interpreter. The amplitude contains encoded geological information indirectly related to lithologies and reservoir properties. The magnitude of the amplitude depends not only on the acoustic impedance contrast across a boundary, but is also strongly affected by the shape of the reflective boundary. This allows the interpreter to image subtle tectonic and structural elements not obvious on time-structure maps. The use of modern workstations allows for appropriate color coding of the total available amplitude range, routine on-screen time/amplitude extraction, and later display of horizon amplitude maps (horizon slices) or complex amplitude-structure spatial visualization. Stratigraphic, structural, tectonic, fluid distribution, and paleogeographic information are commonly obtained by displaying the amplitude variation A = A(x,y,t) associated with a particular reflective surface or seismic interval. As illustrated with several case histories, traditional structural and stratigraphic interpretation combined with a detailed amplitude study generally greatly enhances extraction of subsurface geological information from a reflection data volume. In the context of three-dimensional seismic surveys, the horizon amplitude map (horizon slice), amplitude attachment to structure, and "bright clouds" displays are very powerful tools available to the interpreter.
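
Extracting a horizon slice A(x, y) from a cube R(x, y, t) reduces to sampling each trace at the interpreted horizon time. A minimal sketch (real workflows interpolate between samples and often window or average around the pick):

```python
import numpy as np

def horizon_slice(cube, horizon_t, dt):
    """Amplitude map along an interpreted horizon: for each (x, y) trace,
    take the sample nearest the horizon two-way time."""
    nx, ny, nt = cube.shape
    idx = np.clip(np.round(horizon_t / dt).astype(int), 0, nt - 1)
    ii, jj = np.meshgrid(np.arange(nx), np.arange(ny), indexing='ij')
    return cube[ii, jj, idx]

# toy cube: a reflector whose two-way time dips linearly in x
nx, ny, nt, dt = 4, 3, 50, 0.004
cube = np.zeros((nx, ny, nt))
hor = 0.04 + 0.004 * np.arange(nx)[:, None] * np.ones((1, ny))  # seconds
for i in range(nx):
    for j in range(ny):
        cube[i, j, int(round(hor[i, j] / dt))] = 1.0 + 0.1 * i  # amplitude
amp = horizon_slice(cube, hor, dt)
print(amp[:, 0])  # [1.  1.1 1.2 1.3]
```

The extracted map recovers the lateral amplitude trend independently of the structural dip, which is exactly why horizon slices separate lithologic signal from structure.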

  10. Bringing science from the top of the world to the rest of the world: using video to describe earthquake research in Nepal following the devastating 2015 M7.8 Gorkha earthquake

    NASA Astrophysics Data System (ADS)

    Karplus, M. S.; Barajas, A.; Garibay, L.

    2016-12-01

    In response to the April 25, 2015 M7.8 earthquake on the Main Himalayan Thrust in Nepal, NSF Geosciences funded a rapid seismological response project entitled NAMASTE (Nepal Array Measuring Aftershock Seismicity Trailing Earthquake). This project included the deployment, maintenance, and demobilization of a network of 45 temporary seismic stations from June 2015 to May 2016. During the demobilization of the seismic network, video footage was recorded to tell the story of the NAMASTE team's seismic research in Nepal through short movies. In this presentation, we will describe these movies and discuss our strategies for effectively communicating this research to both academic and general audiences, with the goals of promoting awareness of earthquake hazards and of international research, and inspiring enthusiasm for learning about and participating in scientific research. For example, an initial screening of these videos took place in an Introduction to Geology class at the University of Texas at El Paso to obtain feedback from approximately 100 first-year students with only a basic geology background. The feedback was then used to inform final cuts of the video suitable for a range of audiences, as well as to help guide future videography of field work. The footage is also being cut into a short, three-minute video to be featured on the website of The University of Texas at El Paso, home to several of the NAMASTE team researchers.

  11. Analysis of Wave Fields induced by Offshore Pile Driving

    NASA Astrophysics Data System (ADS)

    Ruhnau, M.; Heitmann, K.; Lippert, T.; Lippert, S.; von Estorff, O.

    2015-12-01

    Impact pile driving is the common technique to install foundations for offshore wind turbines. With each hammer strike the steel pile - often exceeding 6 m in diameter and 80 m in length - radiates energy into the surrounding water and soil, until reaching its targeted penetration depth. Several European authorities introduced limitations regarding hydroacoustic emissions during the construction process to protect marine wildlife. Satisfying these regulations made the development and application of sound mitigation systems (e.g. bubble curtains or insulation screens) inevitable, which are commonly installed within the water column surrounding the pile or even the complete construction site. Last years' advances have led to a point, where the seismic energy tunneling the sound mitigation systems through the soil and radiating back towards the water column gains importance, as it confines the maximum achievable sound mitigation. From an engineering point of view, the challenge of deciding on an effective noise mitigation layout arises, which especially requires a good understanding of the soil-dependent wave field. From a geophysical point of view, the pile acts like a very unique line source, generating a characteristic wave field dominated by inclined wave fronts, diving as well as head waves. Monitoring the seismic arrivals while the pile penetration steadily increases enables to perform quasi-vertical seismic profiling. This work is based on datasets that have been collected within the frame of three comprehensive offshore measurement campaigns during pile driving and demonstrates the potential of seismic arrivals induced by pile driving for further soil characterization.

  12. Improvement of coda phase detectability and reconstruction of global seismic data using frequency-wavenumber methods

    NASA Astrophysics Data System (ADS)

    Schneider, Simon; Thomas, Christine; Dokht, Ramin M. H.; Gu, Yu Jeffrey; Chen, Yunfeng

    2018-02-01

    Due to uneven earthquake source and receiver distributions, our abilities to isolate weak signals from interfering phases and reconstruct missing data are fundamental to improving the resolution of seismic imaging techniques. In this study, we introduce a modified frequency-wavenumber (fk) domain-based approach using a 'Projection Onto Convex Sets' (POCS) algorithm. POCS takes advantage of the sparsity of the dominating energies of phase arrivals in the fk domain, which enables an effective detection and reconstruction of the weak seismic signals. Moreover, our algorithm utilizes the 2-D Fourier transform to perform noise removal, interpolation and weak-phase extraction. To improve the directional resolution of the reconstructed data, we introduce a band-stop 2-D Fourier filter to remove the energy of unwanted, interfering phases in the fk domain, which significantly increases the robustness of the signal of interest. The effectiveness and benefits of this method are clearly demonstrated using both simulated and actual broadband recordings of PP precursors from an array located in Tanzania. When used properly, this method could significantly enhance the resolution of weak crust and mantle seismic phases.
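
The core POCS iteration alternates a sparsity projection in the fk domain with re-insertion of the observed traces. A minimal sketch on a synthetic plane wave with dead traces (the threshold schedule and parameters are illustrative, not those of the modified algorithm in the study):

```python
import numpy as np

def pocs_reconstruct(data, mask, n_iter=50, p0=0.99, p1=0.01):
    """POCS interpolation: alternate a sparsity projection in the fk
    domain (keep coefficients above a decaying threshold) with a
    data-consistency projection (re-insert the observed traces)."""
    rec = data * mask
    for it in range(n_iter):
        spec = np.fft.fft2(rec)
        amp = np.abs(spec)
        thresh = amp.max() * (p0 + (p1 - p0) * it / (n_iter - 1))
        spec[amp < thresh] = 0.0                  # sparsity projection
        rec = np.real(np.fft.ifft2(spec))
        rec = data * mask + rec * (1 - mask)      # keep observed samples
    return rec

# synthetic: one plane wave over 32 traces, roughly half of them dead
nt, nx = 64, 32
tt = np.arange(nt)[:, None]
xx = np.arange(nx)[None, :]
clean = np.sin(2 * np.pi * (4 * tt / nt - 4 * xx / nx))  # integer cycles
rng = np.random.default_rng(1)
mask = (rng.random(nx) > 0.5).astype(float)[None, :]     # 1 = live trace
rec = pocs_reconstruct(clean, mask)
miss = 1 - mask
err = np.linalg.norm((rec - clean) * miss) / np.linalg.norm(clean * miss)
print(err < 0.2)  # missing traces recovered to within 20 per cent
```

The study's band-stop 2-D filter would be applied as an additional mask on `spec` to suppress interfering phase energy before the inverse transform.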

  13. Methods and systems for low frequency seismic and infrasound detection of geo-pressure transition zones

    DOEpatents

    Shook, G. Michael; LeRoy, Samuel D.; Benzing, William M.

    2006-07-18

    Methods for determining the existence and characteristics of a gradational pressurized zone within a subterranean formation are disclosed. One embodiment involves employing an attenuation relationship between a seismic response signal and increasing wavelet wavelength, which relationship may be used to detect a gradational pressurized zone and/or determine characteristics thereof. In another embodiment, a method for analyzing data contained within a response signal for signal characteristics that may change in relation to the distance between an input signal source and the gradational pressurized zone is disclosed. In a further embodiment, the relationship between response signal wavelet frequency and comparative amplitude may be used to estimate an optimal wavelet wavelength or range of wavelengths used for data processing or input signal selection. Systems for seismic exploration and data analysis for practicing the above-mentioned method embodiments are also disclosed.

  14. Discovering new events beyond the catalogue—application of empirical matched field processing to Salton Sea geothermal field seismicity

    DOE PAGES

    Wang, Jingbo; Templeton, Dennise C.; Harris, David B.

    2015-07-30

    Using empirical matched field processing (MFP), we compare 4 yr of continuous seismic data to a set of 195 master templates from within an active geothermal field and identify over 140 per cent more events than were identified using traditional detection and location techniques alone. In managed underground reservoirs, a substantial fraction of seismic events can be excluded from the official catalogue due to an inability to clearly identify seismic-phase onsets. Empirical MFP can improve the effectiveness of current seismic detection and location methodologies by using conventionally located events with higher signal-to-noise ratios as master events to define wavefield templates that could then be used to map normally discarded indistinct seismicity. Since MFP does not require picking, it can be carried out automatically and rapidly once suitable templates are defined. In this application, we extend MFP by constructing local-distance empirical master templates using Southern California Earthquake Data Center archived waveform data of events originating within the Salton Sea Geothermal Field. We compare the empirical templates to continuous seismic data collected between 1 January 2008 and 31 December 2011. The empirical MFP method successfully identifies 6249 additional events, while the original catalogue reported 4352 events. The majority of these new events are lower-magnitude events with magnitudes between M0.2 and M0.8. Here, the increased spatial-temporal resolution of the microseismicity map within the geothermal field illustrates how empirical MFP, when combined with conventional methods, can significantly improve seismic network detection capabilities, which can aid in long-term sustainability and monitoring of managed underground reservoirs.
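
Empirical MFP correlates full array wavefield templates in the frequency domain; as a simplified single-channel analog, a normalized matched filter over continuous data illustrates the template-scanning idea (this is not the MFP algorithm itself):

```python
import numpy as np

def matched_filter(cont, tmpl, thresh=0.8):
    """Slide a waveform template along continuous data; return sample
    offsets where the normalized correlation coefficient exceeds thresh."""
    m = len(tmpl)
    tz = (tmpl - tmpl.mean()) / (tmpl.std() * m)
    cc = np.empty(len(cont) - m + 1)
    for i in range(len(cc)):
        w = cont[i:i + m]
        sd = w.std()
        cc[i] = 0.0 if sd == 0.0 else np.dot(tz, (w - w.mean()) / sd)
    return np.flatnonzero(cc > thresh)

# synthetic: bury two copies of a wavelet in background noise
rng = np.random.default_rng(2)
wavelet = np.sin(2 * np.pi * 5 * np.linspace(0, 1, 100)) * np.hanning(100)
cont = 0.1 * rng.standard_normal(2000)
cont[300:400] += wavelet
cont[1500:1600] += wavelet
det = matched_filter(cont, wavelet)
print(det)  # offsets clustered near 300 and 1500
```

Like MFP, detection requires no phase picking: once templates are defined from well-recorded master events, scanning the continuous archive is fully automatic.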

  15. Seismicity detection around the subducting seamount off Ibaraki in the Japan Trench using dense OBS array data

    NASA Astrophysics Data System (ADS)

    Nakatani, Y.; Mochizuki, K.; Shinohara, M.; Yamada, T.; Hino, R.; Ito, Y.; Murai, Y.; Sato, T.

    2013-12-01

    A subducting seamount with a height of about 3 km was revealed off Ibaraki in the Japan Trench by a seismic survey (Mochizuki et al., 2008). Mochizuki et al. (2008) also interpreted the interplate coupling over the seamount to be weak, because seismicity was low and the slip of the recent large earthquake did not propagate over it. To investigate further, we deployed a dense ocean bottom seismometer (OBS) array around the seamount for about a year. During the observation period, seismicity off Ibaraki was activated by the occurrence of the 2011 Tohoku earthquake. The southern edge of the mainshock rupture area is considered by many source analyses to be located off Ibaraki. Moreover, Kubo et al. (2013) propose that the seamount played an important role in the rupture termination of the largest aftershock. Therefore, in this study, we try to understand the spatiotemporal variation of seismicity around the seamount before and after the Mw 9.0 event as a first step toward elucidating the relationship between the subducting seamount and seismogenic behavior. We used velocity waveforms of 1 Hz long-term OBSs that were densely deployed at station intervals of about 6 km. The sampling rate is 200 Hz and the observation period is from October 16, 2010 to September 19, 2011. Because of the ambient noise and the effects of thick seafloor sediments, it is difficult to apply seismicity-detection methods developed for on-land observational data to OBS data and to handle continuous waveforms automatically. We therefore apply a back-projection method (e.g., Kiser and Ishii, 2012) to the OBS waveform data, which estimates energy-release sources by stacking waveforms. Among the many back-projection methods, we adopt a semblance analysis (e.g., Honda et al., 2008), which can detect feeble waves. First of all, we constructed a 3-D velocity structure model off Ibaraki by compiling the results of marine seismic surveys (e.g., Nakahigashi et al., 2012).
Then, we divided the target area into small subareas and calculated P-wave traveltimes between each station and all subareas using the fast marching method (Rawlinson et al., 2006). After constructing theoretical travel-time tables, we applied a suitable frequency filter to the observed waveforms and estimated seismic energy release by projecting semblance values. As a result of applying our method, we successfully detected magnitude 2-3 earthquakes.
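
The semblance-based back-projection can be sketched as follows: traces are aligned by candidate traveltimes and their coherence measured, so a candidate source near the true location yields semblance close to 1 (the toy geometry and noise level below are assumptions for illustration):

```python
import numpy as np

def semblance(traces, delays, dt, win=11):
    """Semblance of N traces after removing candidate traveltime delays:
    (sum of aligned traces)^2 over N times the sum of squares, smoothed
    over a short window; near 1 where the candidate source is correct."""
    n_sta, n_t = traces.shape
    shifts = np.round(delays / dt).astype(int)
    L = n_t - shifts.max()
    aligned = np.stack([tr[s:s + L] for tr, s in zip(traces, shifts)])
    num = aligned.sum(axis=0) ** 2
    den = n_sta * (aligned ** 2).sum(axis=0)
    k = np.ones(win) / win                       # boxcar analysis window
    return np.convolve(num, k, 'same') / (np.convolve(den, k, 'same') + 1e-12)

# synthetic: a wavelet reaching 3 stations with known moveout
dt = 0.01
t = np.arange(0, 4, dt)
wav = lambda t0: np.exp(-((t - t0) / 0.05) ** 2)
true_delays = np.array([0.30, 0.80, 0.55])       # traveltimes (s)
rng = np.random.default_rng(3)
traces = np.stack([wav(1.0 + d) for d in true_delays])
traces += 0.05 * rng.standard_normal(traces.shape)
good = semblance(traces, true_delays, dt).max()  # correct candidate source
bad = semblance(traces, np.zeros(3), dt).max()   # wrong candidate source
print(good > bad)  # coherent stack wins
```

Scanning candidate subareas with their precomputed travel-time tables and mapping the peak semblance through time is what localizes the energy-release sources.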

  16. Tunnel Detection Using Seismic Methods

    NASA Astrophysics Data System (ADS)

    Miller, R.; Park, C. B.; Xia, J.; Ivanov, J.; Steeples, D. W.; Ryden, N.; Ballard, R. F.; Llopis, J. L.; Anderson, T. S.; Moran, M. L.; Ketcham, S. A.

    2006-05-01

    Surface seismic methods have shown great promise for use in detecting clandestine tunnels in areas where unauthorized movement beneath secure boundaries has been or is a matter of concern for authorities. Unauthorized infiltration beneath national borders and into or out of secure facilities is possible at many sites by tunneling. Developments in acquisition, processing, and analysis techniques using multi-channel seismic imaging have opened the door to a vast number of near-surface applications, including anomaly detection and delineation, specifically tunnels. Body waves show great potential, based on modeling and very preliminary empirical studies that attempt to capitalize on diffracted energy. A primary limitation of all seismic energy is the natural attenuation of high-frequency energy by earth materials and the difficulty in transmitting a high-amplitude source pulse with a broad spectrum above 500 Hz into the earth. Surface waves have shown great potential since the development of multi-channel analysis methods (e.g., MASW). Both shear-wave velocity and backscatter energy from surface waves have been shown through modeling and empirical studies to have great promise in detecting the presence of anomalies, such as tunnels. Success in developing and evaluating various seismic approaches for detecting tunnels relies on investigations at known tunnel locations, in a variety of geologic settings, employing a wide range of seismic methods, and targeting a range of uniquely different tunnel geometries, characteristics, and host lithologies. Body-wave research at the Moffat tunnels in Winter Park, Colorado, provided well-defined diffraction-like events that correlated with the subsurface location of the tunnel complex. Natural voids related to karst have been studied in Kansas, Oklahoma, Alabama, and Florida using shear-wave velocity imaging techniques based on the MASW approach.
Manmade tunnels, culverts, and crawl spaces have been the target of multi-modal analysis in Kansas and California. Clandestine tunnels used for illegal entry into the U.S. from Mexico were studied at two different sites along the southern border of California. All these studies provide the empirical basis for suggesting that surface seismic methods have a significant role to play in tunnel detection, and that methods are under development, and very nearly at hand, that will provide an effective tool for appraising and maintaining perimeter security. As broadband sources, gravity-coupled towed spreads, and automated analysis software continue to advance, so does the feasibility of routinely deploying seismic imaging systems that can be operated by technicians, with interpretation aids for nearly real-time target selection. Key to making these systems commercial is the development of enhanced imaging techniques for geologically noisy areas and highly variable surface terrain.

  17. Seismic Interface Waves in Coastal Waters: A Review

    DTIC Science & Technology

    1980-11-15

    Being at the low-frequency end of classical sonar activity and at the high-frequency end of seismic research, the propagation of infrasonic energy...water areas. Certainly this and other seismic detection methods will never replace the highly-developed sonar techniques but in coastal waters they...for many sonar purposes [5, 85 to 90] shows that very simple bottom models may already be sufficient to make allowance for the influence of the sea

  18. Korea Integrated Seismic System tool(KISStool) for seismic monitoring and data sharing at the local data center

    NASA Astrophysics Data System (ADS)

    Park, J.; Chi, H. C.; Lim, I.; Jeong, B.

    2011-12-01

    The Korea Integrated Seismic System (KISS) is a backbone seismic network that distributes seismic data to different organizations in near-real time in Korea. The association of earthquake monitoring institutes has shared its seismic data through the KISS since 2003. Local data centers operating several remote stations are required to send their free-field seismic data to NEMA (National Emergency Management Agency) under the Korean law on countermeasures against earthquake hazards. An efficient tool is therefore essential for local data centers that want to rapidly detect local seismic intensity and transfer seismic event information, including PGA, PGV, the dominant frequency of the P wave, raw data, and so on, to the national data center. We developed KISStool (Korea Integrated Seismic System tool) for easy and convenient operation of a seismic network at a local data center. KISStool can monitor real-time waveforms by clicking a station icon on the Google map, and real-time variation of PGA, PGV, and other data in a bar-type monitoring section. Using KISStool, any local data center can transfer event information to NEMA, KMA (Korea Meteorological Agency), or other institutes through the KISS using UDP or TCP/IP protocols. KISStool is one of the most efficient ways to monitor and transfer earthquake events at a local data center in Korea. KIGAM will provide KISStool not only to members of the monitoring association but also to local governments.

  19. Using seismic derived lithology parameters for hydrocarbon indication

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Van Riel, P.; Sisk, M.

    1996-08-01

    The last two decades have shown a strong increase in the use of seismic amplitude information for direct hydrocarbon indication. However, working with seismic amplitudes (and seismic attributes) has several drawbacks: tuning effects must be handled; quantitative analysis is difficult because seismic amplitudes are not directly related to lithology; and seismic amplitudes are reflection events, making it unclear whether amplitude changes relate to lithology variations above or below the interface. These drawbacks are overcome by working directly on seismically derived lithology data, lithology being a layer property rather than an interface property. Technology to extract lithology from seismic data has made great strides, and a large range of methods is now available to users, including: (1) bandlimited acoustic impedance (AI) inversion; (2) reconstruction of the low AI frequencies from seismic velocities, from spatial well log interpolation, and using constrained sparse spike inversion techniques; (3) full-bandwidth reconstruction of multiple lithology properties (porosity, sand fraction, density, etc.) in time and depth using inverse modeling. For these technologies to be fully leveraged, accessibility by end users is critical. All these technologies are available as interactive 2D and 3D workstation applications, integrated with seismic interpretation functionality. Using field data examples, we will demonstrate the impact of these different approaches on deriving lithology, and in particular show how accuracy and resolution are increased as more geologic and well information is added.
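A minimal sketch of the textbook relation between reflectivity and acoustic impedance that AI inversion exploits (standard recursive formulas, not the authors' implementation):

```python
def impedance_to_reflectivity(z):
    # Reflection coefficient at each interface: r_i = (Z_{i+1} - Z_i) / (Z_{i+1} + Z_i)
    return [(z[i + 1] - z[i]) / (z[i + 1] + z[i]) for i in range(len(z) - 1)]

def reflectivity_to_impedance(z0, r):
    # Recursive inversion: Z_{i+1} = Z_i * (1 + r_i) / (1 - r_i),
    # starting from a known impedance z0 in the top layer.
    z = [z0]
    for ri in r:
        z.append(z[-1] * (1 + ri) / (1 - ri))
    return z

z = [4.0e6, 5.5e6, 5.0e6, 7.2e6]   # layer impedances, (kg/m^3)*(m/s)
r = impedance_to_reflectivity(z)
z_rec = reflectivity_to_impedance(z[0], r)
```

The round trip is exact for noise-free, full-bandwidth reflectivity; the bandlimited nature of real seismic data is precisely why the low AI frequencies must be reconstructed from velocities or well logs, as the abstract notes.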

  20. Considering potential seismic sources in earthquake hazard assessment for Northern Iran

    NASA Astrophysics Data System (ADS)

    Abdollahzadeh, Gholamreza; Sazjini, Mohammad; Shahaky, Mohsen; Tajrishi, Fatemeh Zahedi; Khanmohammadi, Leila

    2014-07-01

    Located on the Alpine-Himalayan earthquake belt, Iran is one of the seismically active regions of the world. Northern Iran, south of the Caspian Basin, a hazardous subduction zone, is a densely populated and developing area of the country. Historical and instrumental seismicity documents the occurrence of severe earthquakes leading to many deaths and large losses in the region. With the growth of seismological and tectonic data, an updated seismic hazard assessment is a worthwhile input to emergency management programs and long-term development plans in urban and rural areas of this region. In the present study, drawing on the up-to-date information required for seismic hazard assessment, including geological data and the active tectonic setting for thorough investigation of active and potential seismogenic sources, and historical and instrumental events for compiling the earthquake catalogue, a probabilistic seismic hazard assessment is carried out for the region using three recent ground motion prediction equations. The logic tree method is utilized to capture the epistemic uncertainty of the seismic hazard assessment in the delineation of seismic sources and the selection of attenuation relations. The results are compared with a recent code-prescribed seismic hazard practice for the region and are discussed in detail to explore their variation in each branch of the logic tree approach. Also, seismic hazard maps of peak ground acceleration at rock sites for 475- and 2,475-year return periods are provided for the region.
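The logic-tree step combines hazard estimates from alternative branches (here, alternative ground motion prediction equations) as a weighted mean; a minimal sketch with made-up exceedance rates and weights, not the study's values:

```python
# Annual rates of exceeding a given PGA level, one per hypothetical GMPE branch.
branch_rates = [2.1e-3, 1.6e-3, 2.6e-3]
branch_weights = [0.5, 0.3, 0.2]   # logic-tree weights; must sum to 1

assert abs(sum(branch_weights) - 1.0) < 1e-12

# Weighted-mean hazard across branches, and the corresponding return period.
mean_rate = sum(w * r for w, r in zip(branch_weights, branch_rates))
return_period = 1.0 / mean_rate
print(round(return_period, 1))
```

Real logic trees also branch over source-zone delineations and recurrence parameters, and the weighted combination is applied to full hazard curves rather than a single rate.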

  1. MSNoise: a Python Package for Monitoring Seismic Velocity Changes using Ambient Seismic Noise

    NASA Astrophysics Data System (ADS)

    Lecocq, T.; Caudron, C.; Brenguier, F.

    2013-12-01

    Earthquakes occur every day all around the world and are recorded by thousands of seismic stations. In between earthquakes, the stations record "noise". In the last 10 years, the understanding of this noise and its potential uses has increased rapidly. The method, called "seismic interferometry", uses the principle that seismic waves travel between two recorders and are multiply scattered in the medium. By cross-correlating the two records, one gets information on the medium below and between the stations. The cross-correlation function (CCF) is a proxy for the Green's function of the medium. Recent developments of the technique have shown that these CCFs can be used to image the earth at depth (3D seismic tomography) or to study how the medium changes with time. We present MSNoise, a complete software suite to compute relative seismic velocity changes under a seismic network using ambient seismic noise. The whole suite is written in Python, from the monitoring of data archives to the production of high-quality figures. All steps have been optimized to compute only what is necessary and to use 'job'-based processing. We present a validation of the software on a dataset acquired during the UnderVolc[1] project on the Piton de la Fournaise Volcano, La Réunion Island, France, for which precursory relative changes of seismic velocity are visible for three eruptions between 2009 and 2011.
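The core operation, cross-correlating two station records so that the correlation peak carries the inter-station travel time, can be sketched with NumPy (a toy illustration, not MSNoise's actual pipeline):

```python
import numpy as np

rng = np.random.default_rng(0)
fs = 100.0                           # sampling rate (Hz)
noise = rng.standard_normal(2000)    # common ambient noise field

# Station B records the same field delayed by 25 samples (0.25 s),
# mimicking a wave travelling from A to B.
lag_true = 25
rec_a = noise
rec_b = np.roll(noise, lag_true)

# Full cross-correlation; the lag of the peak estimates the travel time.
ccf = np.correlate(rec_b, rec_a, mode="full")
lags = np.arange(-len(rec_a) + 1, len(rec_a))
lag_est = lags[np.argmax(ccf)]
print(lag_est / fs)
```

In practice the CCFs are stacked over days to months of noise, and relative velocity changes are then measured from small stretches or delays of the stacked CCF coda.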

  2. Comment on "How can seismic hazard around the New Madrid seismic zone be similar to that in California?" by Arthur Frankel

    USGS Publications Warehouse

    Wang, Z.; Shi, B.; Kiefer, J.D.

    2005-01-01

    PSHA is the method most widely used to assess seismic hazards for input into various aspects of public and financial policy. For example, PSHA was used by the U.S. Geological Survey to develop the National Seismic Hazard Maps (Frankel et al., 1996, 2002). These maps are the basis for many national, state, and local seismic safety regulations and design standards, such as the NEHRP Recommended Provisions for Seismic Regulations for New Buildings and Other Structures, the International Building Code, and the International Residential Code. Adoption and implementation of these regulations and design standards would have significant impacts on many communities in the New Madrid area, including Memphis, Tennessee and Paducah, Kentucky. Although "mitigating risks to society from earthquakes involves economic and policy issues" (Stein, 2004), seismic hazard assessment is its basis. Seismologists should provide the best information on seismic hazards and communicate it to users and policy makers. There is, however, a lack of effort in communicating the uncertainties in seismic hazard assessment in the central U.S. The use of 10%, 5%, and 2% PE in 50 years causes confusion in communicating seismic hazard assessment. It would be easier to discuss and understand the design ground motions if the true meaning of the ground motion derived from PSHA were presented, i.e., the ground motion with its estimated uncertainty or associated confidence level.
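For reference, the exceedance probabilities discussed here map to return periods through the standard Poisson relation T = -t / ln(1 - PE); a quick check:

```python
import math

def return_period(pe, t_years=50.0):
    # Poisson assumption: PE = 1 - exp(-t / T)  =>  T = -t / ln(1 - PE)
    return -t_years / math.log(1.0 - pe)

for pe in (0.10, 0.05, 0.02):
    print(f"{pe:.0%} in 50 yr -> {return_period(pe):.0f}-yr return period")
```

This reproduces the familiar 475-, 975-, and 2,475-year return periods attached to the 10%, 5%, and 2% in 50-year hazard levels.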

  3. A Discrete Element Method Approach to Progressive Localization of Damage in Granular Rocks and Associated Seismicity

    NASA Astrophysics Data System (ADS)

    Vora, H.; Morgan, J.

    2017-12-01

    Brittle failure in rock under confined biaxial conditions is accompanied by a release of seismic energy, known as acoustic emissions (AE). The objective of our study is to understand the influence of the elastic properties of rock and its stress state on deformation patterns and the associated seismicity in granular rocks. Discrete element modeling is used to simulate biaxial tests on granular rocks of a defined grain size distribution. Acoustic energy and seismic moments are calculated from microfracture events as the rock is taken to failure under different confining pressures. Dimensionless parameters such as the seismic b-value and the fractal parameter for deformation, the D-value, are used to quantify the seismic character and the distribution of damage in the rock. Initial results suggest that confining pressure has the largest control on the distribution of induced microfracturing, while fracture energy and seismic magnitudes are highly sensitive to the elastic properties of the rock. At low confining pressures, localized deformation (low D-values) and high seismic b-values are observed. Deformation at high confining pressures is distributed in nature (high D-values) and exhibits low seismic b-values as shearing becomes the dominant mode of microfracturing. Seismic b-values and fractal D-values obtained from microfracturing exhibit a linear inverse relationship, similar to trends observed in earthquakes. The modes of microfracturing in our simulations of biaxial compression tests show mechanistic similarities to the propagation of fractures and faults in nature.
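The seismic b-value used here can be estimated from a magnitude catalogue with the standard Aki (1965) maximum-likelihood formula; a generic illustration on a synthetic Gutenberg-Richter catalogue, not the authors' data:

```python
import math
import random

def b_value_mle(mags, m_c):
    # Aki (1965): b = log10(e) / (mean(M) - Mc), for magnitudes M >= Mc.
    above = [m for m in mags if m >= m_c]
    return math.log10(math.e) / (sum(above) / len(above) - m_c)

# Synthetic catalogue with true b = 1.0 above completeness Mc = 2.0:
# Gutenberg-Richter implies M - Mc is exponential with rate b * ln(10).
random.seed(42)
b_true, m_c = 1.0, 2.0
mags = [m_c + random.expovariate(b_true * math.log(10)) for _ in range(50000)]
print(b_value_mle(mags, m_c))
```

With 50,000 events the estimate falls within a few thousandths of the true b = 1.0; for the much smaller event counts typical of AE experiments, binning corrections and confidence intervals matter.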

  4. Multicomponent ensemble models to forecast induced seismicity

    NASA Astrophysics Data System (ADS)

    Király-Proag, E.; Gischig, V.; Zechar, J. D.; Wiemer, S.

    2018-01-01

    In recent years, human-induced seismicity has become an increasingly relevant topic due to its economic and social implications. Several models and approaches have been developed to explain the underlying physical processes or to forecast induced seismicity. They range from simple statistical models to coupled numerical models incorporating complex physics. We advocate forecast testing as currently the best method for ascertaining whether or not models are capable of reasonably accounting for the key governing physical processes. Moreover, operational forecast models are of great interest for on-site decision-making in projects entailing induced earthquakes. We previously introduced a standardized framework following the guidelines of the Collaboratory for the Study of Earthquake Predictability, the Induced Seismicity Test Bench, to test, validate, and rank induced seismicity models. In this study, we describe how to construct multicomponent ensemble models based on Bayesian weightings that deliver more accurate forecasts than individual models in the cases of the Basel 2006 and Soultz-sous-Forêts 2004 enhanced geothermal stimulation projects. For this, we examine five calibrated variants of two significantly different model groups: (1) Shapiro and Smoothed Seismicity, based on the seismogenic index, simple modified Omori-law-type seismicity decay, and temporally weighted smoothed seismicity; (2) Hydraulics and Seismicity, based on numerically modelled pore pressure evolution that triggers seismicity using the Mohr-Coulomb failure criterion. We also demonstrate how the individual and ensemble models would perform as part of an operational Adaptive Traffic Light System. Investigating seismicity forecasts based on a range of potential injection scenarios, we use forecast periods of different durations to compute the occurrence probabilities of seismic events M ≥ 3. We show that in the case of the Basel 2006 geothermal stimulation the models forecast hazardous levels of seismicity days before the occurrence of felt events.
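A minimal sketch of the ensemble idea: weight each model by its (normalized) likelihood on past data, combine the forecast rates, and convert to a Poisson probability of at least one M ≥ 3 event. All numbers are illustrative, not the paper's calibration:

```python
import math

# Hypothetical per-model log-likelihoods on a training window and
# forecast rates (expected number of M >= 3 events in the next period).
log_liks = [-12.1, -10.4, -11.0]
rates = [0.8, 0.3, 0.5]

# Bayesian weights proportional to exp(log-likelihood), normalized;
# subtracting the max first avoids numerical underflow.
m = max(log_liks)
w = [math.exp(ll - m) for ll in log_liks]
total = sum(w)
w = [wi / total for wi in w]

# Ensemble forecast rate and Poisson probability of at least one event.
ensemble_rate = sum(wi * ri for wi, ri in zip(w, rates))
p_at_least_one = 1.0 - math.exp(-ensemble_rate)
print(round(p_at_least_one, 3))
```

The paper's ensembles additionally vary which components are combined and how weights evolve as new injection data arrive; this sketch shows only the basic weighting-and-combining step.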

  5. GPS-determined Crustal Deformation of South Korea after the 2011 Tohoku-Oki Earthquake: Straining Heterogeneity and Seismicity

    NASA Astrophysics Data System (ADS)

    Ree, J. H.; Kim, S.; Yoon, H. S.; Choi, B. K.; Park, P. H.

    2017-12-01

    The GPS-determined pre-, co-, and post-seismic crustal deformations of the Korean peninsula associated with the 2011 Tohoku-Oki earthquake (Baek et al., 2012, Terra Nova; Kim et al., 2015, KSCE Journal of Civil Engineering) are all stretching (extensional; horizontal stretching rate larger than horizontal shortening rate). However, focal mechanism solutions of earthquakes indicate that South Korea has been in a compressional regime dominated by strike-slip and reverse-slip faulting. We re-evaluated the velocity field of the GPS data to assess the effect of the Tohoku-Oki earthquake on Korean crustal deformation and seismicity. To calculate the velocity gradient tensor at the GPS sites, we used a gridding method based on least-squares collocation (LSC). The LSC method overcomes shortcomings of segmentation methods, including the triangulation method; for example, in segmentation methods an undesirable, abrupt change in the components of the velocity field occurs at segment boundaries. The LSC method is also known to be more useful for evaluating deformation patterns in intraplate areas with relatively small displacements. Velocity vectors in South Korea, pointing in general toward 113° before the Tohoku-Oki earthquake, instantly changed their direction toward the epicenter (82° on average) during the earthquake, and then gradually returned to their original direction about two years afterward. Our calculation of velocity gradient tensors after the Tohoku-Oki earthquake shows that the stretching and rotating fields are quite heterogeneous and that both stretching and shortening areas exist in South Korea. In particular, after the post-seismic relaxation ceased (i.e., from two years after the earthquake), regions with thicker and thinner crust tend to be shortening and stretching, respectively, and the straining rate is larger in the regions with thinner crust. Although there is no meaningful correlation between seismicity and the crustal straining pattern of South Korea at present, the seismicity tends to be localized along boundaries between areas with opposite vorticity, particularly in the velocity field for the year following the Tohoku-Oki earthquake.
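As a generic illustration of the underlying quantity (not the authors' LSC implementation), a 2D velocity gradient tensor L can be fit to nearby station velocities v ≈ v0 + L·x by least squares, and then split into its strain-rate and vorticity parts:

```python
import numpy as np

rng = np.random.default_rng(1)
L_true = np.array([[0.05, 0.01],
                   [0.00, -0.02]])   # velocity gradient (mm/yr per km)
v0 = np.array([2.0, -1.0])           # rigid translation (mm/yr)

xy = rng.uniform(-100, 100, size=(12, 2))   # station positions (km)
vel = v0 + xy @ L_true.T                    # exact synthetic velocities

# Least-squares fit of v = v0 + L x: 6 unknowns, 24 equations.
G = np.zeros((2 * len(xy), 6))
d = vel.ravel()
for i, (x, y) in enumerate(xy):
    G[2 * i]     = [1, 0, x, y, 0, 0]   # east component
    G[2 * i + 1] = [0, 1, 0, 0, x, y]   # north component
m, *_ = np.linalg.lstsq(G, d, rcond=None)

L_fit = m[2:].reshape(2, 2)
stretching = 0.5 * (L_fit + L_fit.T)   # symmetric strain-rate part
rotation = 0.5 * (L_fit - L_fit.T)     # antisymmetric vorticity part
```

LSC differs from this plain least-squares fit in that it interpolates the velocity field with a spatial covariance model before differentiating, which is what smooths out the segment-boundary artifacts mentioned in the abstract.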

  6. Relationship between the frequency magnitude distribution and the visibility graph in the synthetic seismicity generated by a simple stick-slip system with asperities.

    PubMed

    Telesca, Luciano; Lovallo, Michele; Ramirez-Rojas, Alejandro; Flores-Marquez, Leticia

    2014-01-01

    Using the visibility graph (VG) method, the synthetic seismicity generated by a simple stick-slip system with asperities is analysed. The stick-slip system mimics the interaction between tectonic plates, whose asperities are represented by sandpapers of different granularity. The VG properties of the seismic sequences are related to the typical seismological parameter, the b-value of the Gutenberg-Richter law. A close linear relationship is found between the b-value of the synthetic seismicity and the slope of the least-squares line fitting the k-M plot (the relationship between the magnitude M of each synthetic event and its connectivity degree k); this relationship is also verified for real seismicity.
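In the natural visibility graph, two events (t_a, M_a) and (t_b, M_b) are connected when every intermediate event lies below the straight line joining them; a compact sketch of this generic construction (not the authors' code):

```python
def visibility_edges(times, mags):
    # Natural visibility: (a, b) connected iff every event c between them
    # satisfies M_c < M_b + (M_a - M_b) * (t_b - t_c) / (t_b - t_a).
    n = len(times)
    edges = set()
    for a in range(n):
        for b in range(a + 1, n):
            visible = all(
                mags[c] < mags[b] + (mags[a] - mags[b]) *
                (times[b] - times[c]) / (times[b] - times[a])
                for c in range(a + 1, b)
            )
            if visible:
                edges.add((a, b))
    return edges

times = [0, 1, 2, 3, 4]
mags  = [3.0, 1.2, 2.5, 0.8, 2.0]
edges = visibility_edges(times, mags)
degree = [sum(1 for e in edges if i in e) for i in range(len(times))]
```

The k-M plot of the abstract is then simply each event's magnitude against its degree in this graph; large-magnitude events "see" more neighbours and acquire higher connectivity.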

  7. Imaging near surface mineral targets with ambient seismic noise

    NASA Astrophysics Data System (ADS)

    Dales, P.; Audet, P.; Olivier, G.

    2017-12-01

    To keep up with global metal and mineral demand, new ore deposits have to be discovered on a regular basis. This task is becoming increasingly difficult, since easily accessible deposits have largely been exhausted. The typical procedure for mineral exploration begins with geophysical surveys, followed by a drilling program to investigate potential targets. Since the retrieved drill core samples are one-dimensional observations, the many holes needed to interpolate and interpret potential deposits can lead to very high costs. To reduce the amount of drilling, active seismic imaging is sometimes used as an intermediary; however, the active sources (e.g., large vibrating trucks or explosive shots) are expensive and unsuitable for operation in remote or environmentally sensitive areas. In recent years, passive seismic imaging using ambient noise has emerged as a novel, low-cost, and environmentally friendly approach for exploring the subsurface. This technique dispenses with active seismic sources and instead uses ambient seismic noise such as ocean waves, traffic, or minor earthquakes. Unfortunately, at this point passive surveys are not capable of reaching the resolution required to image the vast majority of the ore bodies being explored. In this presentation, we will show the results of an experiment in which ambient seismic noise recorded on 60 seismic stations was used to image a near-mine target. The target consists of a known ore body that was partially exhausted by mining roughly 100 years ago. The experiment examined whether ambient seismic noise interferometry can be used to image the intact and exhausted ore deposit. A drilling campaign was also conducted near the target, which offers the opportunity to compare the two methods. If the accuracy and resolution of passive seismic imaging can be improved to that of active surveys (and beyond), this method could become an inexpensive intermediary step in the exploration process and result in a large decrease in the amount of drilling required to investigate and identify high-grade ore deposits.

  8. Seismic Fragility Analysis of a Condensate Storage Tank with Age-Related Degradations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Nie, J.; Braverman, J.; Hofmayer, C

    2011-04-01

    The Korea Atomic Energy Research Institute (KAERI) is conducting a five-year research project to develop a realistic seismic risk evaluation system that includes the consideration of aging of structures and components in nuclear power plants (NPPs). The KAERI research project includes three specific areas that are essential to seismic probabilistic risk assessment (PRA): (1) probabilistic seismic hazard analysis, (2) seismic fragility analysis including the effects of aging, and (3) plant seismic risk analysis. Since 2007, Brookhaven National Laboratory (BNL) has collaborated with KAERI under an agreement to support its development of seismic capability evaluation technology for degraded structures and components. The collaborative research effort is intended to continue over a five-year period. The goal of this collaboration is to assist KAERI in developing seismic fragility analysis methods that consider the potential effects of age-related degradation of structures, systems, and components (SSCs). The research results of this multi-year collaboration will be used as input to seismic PRAs. This report describes the research effort performed by BNL for the Year 4 scope of work. It was developed as an update to the Year 3 report, incorporating a major supplement to the Year 3 fragility analysis. In the Year 4 research scope, an additional study was carried out to consider a further degradation scenario in which the three basic degradation scenarios, i.e., degraded tank shell, degraded anchor bolts, and cracked anchorage concrete, are combined with imperfect correlation. A representative operational water level is used for this effort. Building on the same CDFM procedure implemented for the Year 3 tasks, a simulation method using optimum Latin hypercube samples was applied to characterize the deterioration of the fragility capacity as a function of age-related degradation. The results are summarized in Section 5 and Appendices G through I.
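Seismic fragility is conventionally expressed as a lognormal curve, P(failure | a) = Φ(ln(a / A_m) / β), with median capacity A_m and composite logarithmic standard deviation β; a generic sketch with illustrative parameters, not the report's values:

```python
import math

def fragility(a, a_m, beta):
    # Lognormal fragility: standard normal CDF of ln(a / A_m) / beta,
    # evaluated via the error function.
    z = math.log(a / a_m) / beta
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

a_m, beta = 0.9, 0.4   # median capacity 0.9 g, log-std 0.4 (made up)
for a in (0.3, 0.9, 1.5):
    print(f"PGA {a:.1f} g -> P(failure) = {fragility(a, a_m, beta):.3f}")
```

Age-related degradation of the kind studied here effectively shifts A_m downward (and may change β), which is what the Latin hypercube simulation characterizes as a function of degradation level.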

  9. Seismic inversion for incoming sedimentary sequence in the Nankai Trough margin off Kumano Basin, southwest Japan

    NASA Astrophysics Data System (ADS)

    Naito, K.; Park, J.

    2012-12-01

    The Nankai Trough off southwest Japan is one of the best subduction zones in which to study megathrust earthquake mechanisms. Huge earthquakes have recurred there in cycles of 100-150 years, and the next occurrence has become one of the most serious issues in Japan. Therefore, detailed descriptions of the geological structure are urgently needed. The Integrated Ocean Drilling Program (IODP) has investigated this area under the NanTroSEIZE science plan. Seismic reflection, core sampling, and borehole logging surveys have been executed during the NanTroSEIZE expeditions. Core-log-seismic data integration (CLSI) is useful for understanding the Nankai seismogenic zone, and we apply the seismic inversion method to the CLSI. Seismic inversion (acoustic impedance inversion, or AI inversion) is a method to estimate rock physical properties using seismic reflection and logging data: an acoustic impedance volume is inverted from the seismic data together with density and P-wave velocity from several boreholes. We use high-resolution 3D multi-channel seismic (MCS) reflection data obtained during the KR06-02 cruise in 2006 and core sample properties measured by IODP Expeditions 322 and 333. P-wave velocities missing for some core samples are interpolated using the relationship between acoustic impedance and P-wave velocity. We used Hampson-Russell software for the seismic inversion. A 3D porosity model is derived from the 3D acoustic impedance model to characterize the rock physical properties of the incoming sedimentary sequence in the Nankai Trough off the Kumano Basin. The result of our inversion analysis clearly shows the heterogeneity of the sediments: relatively high-porosity sediments in the shallow layer of the Kashinosaki Knoll, and many physical anomaly bands distributed in the volcanic and turbidite sediment layers around the 3D MCS survey area. In this talk, we will show the 3D MCS, acoustic impedance, and porosity data for the incoming sedimentary sequence and discuss their possible implications for Nankai seismogenic behavior.
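The chain from logs to porosity can be sketched in two steps: acoustic impedance from density and P-wave velocity (Z = ρ·Vp), then a calibrated impedance-porosity mapping. The linear coefficients below are purely hypothetical stand-ins for the study's well calibration:

```python
# Acoustic impedance from borehole logs: Z = rho * Vp.
rho = [1.80, 1.95, 2.10]        # bulk density (g/cm^3)
vp  = [1600.0, 1850.0, 2100.0]  # P-wave velocity (m/s)
z = [r * v for r, v in zip(rho, vp)]

# Hypothetical linear impedance-porosity calibration fitted at well
# locations (coefficients are illustrative, not the study's regression).
def porosity(zi, a=-1.0e-4, b=0.95):
    return a * zi + b

print([round(porosity(zi), 3) for zi in z])
```

In the actual workflow the impedance volume comes from seismic inversion constrained by the logs, and the impedance-porosity relation is established empirically from the core and log measurements.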

  10. Storage of fluids and melts at subduction zones detectable by seismic tomography

    NASA Astrophysics Data System (ADS)

    Luehr, B. G.; Koulakov, I.; Rabbel, W.; Brotopuspito, K. S.; Surono, S.

    2015-12-01

    During the last decades, investigations at active continental margins have established the link between the subduction of fluid-saturated oceanic plates and the ascent of these fluids and partial melts, forming a magmatic system that leads to volcanism at the Earth's surface. For this purpose, the geophysical structure of the mantle and crust above the downgoing slab has been imaged. Information is required about the slab, the ascent paths, and the reservoirs of fluids and partial melts in the mantle and the crust up to the volcanoes at the surface. Statistically, the distance between the volcanoes of volcanic arcs and their underlying Wadati-Benioff zone averages approximately 100 kilometers. Notably, this depth range shows pronounced seismicity at most subduction zones. Additionally, mineralogical laboratory investigations have shown that dehydration of the descending plate has a maximum at the temperature and pressure conditions found at around 100 km depth. The ascent of the fluids, the appearance of partial melts, and the distribution of these materials in the crust can be resolved by seismic tomographic methods using records of local natural seismicity. In these methods, such areas correspond to lowered seismic velocities, high Vp/Vs ratios, and increased attenuation of seismic shear waves. The anomalies and their time dependence are controlled by the fluids. The seismic velocity anomalies detected so far range from a few percent to more than 30% reduction. However, to explore plate boundaries, large and complex amphibious experiments are required, in which active and passive seismic investigations should be combined to achieve the best results. The seismic station distribution should cover an area from seaward of the trench to far behind the volcanic chain, to provide, under favorable conditions, information down to 150 km depth. Findings from different subduction zones will be compared and discussed.

  11. Finite-Difference Numerical Simulation of Seismic Gradiometry

    NASA Astrophysics Data System (ADS)

    Aldridge, D. F.; Symons, N. P.; Haney, M. M.

    2006-12-01

    We use the phrase seismic gradiometry to refer to the developing research area involving measurement, modeling, analysis, and interpretation of spatial derivatives (or differences) of a seismic wavefield. In analogy with gradiometric methods used in gravity and magnetic exploration, seismic gradiometry offers the potential for enhancing resolution, and revealing new (or hitherto obscure) information about the subsurface. For example, measurement of pressure and rotation enables the decomposition of recorded seismic data into compressional (P) and shear (S) components. Additionally, a complete observation of the total seismic wavefield at a single receiver (including both rectilinear and rotational motions) offers the possibility of inferring the type, speed, and direction of an incident seismic wave. Spatially extended receiver arrays, conventionally used for such directional and phase speed determinations, may be dispensed with. Seismic wave propagation algorithms based on the explicit, time-domain, finite-difference (FD) numerical method are well-suited for investigating gradiometric effects. We have implemented in our acoustic, elastic, and poroelastic algorithms a point receiver that records the 9 components of the particle velocity gradient tensor. Pressure and particle rotation are obtained by forming particular linear combinations of these tensor components, and integrating with respect to time. All algorithms entail 3D O(2,4) FD solutions of coupled, first-order systems of partial differential equations on uniformly-spaced staggered spatial and temporal grids. Numerical tests with a 1D model composed of homogeneous and isotropic elastic layers show isolation of P, SV, and SH phases recorded in a multiple borehole configuration, even in the case of interfering events. 
Synthetic traces recorded by geophones and rotation receivers in a shallow crosswell geometry with randomly heterogeneous poroelastic models also illustrate clear P (fast and slow) and S separation. Finally, numerical tests of the "point seismic array" concept are oriented toward understanding its potential and limitations. Sandia National Laboratories is a multiprogram science and engineering facility operated by Sandia Corporation, a Lockheed-Martin company, for the United States Department of Energy under contract DE-AC04-94AL85000.
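Given the 9-component particle-velocity gradient tensor described above, the rotation rate follows from its antisymmetric part (half the curl), while the trace gives the dilatation rate carried by the P wave; a small NumPy sketch of that bookkeeping with arbitrary example values:

```python
import numpy as np

# Hypothetical particle-velocity gradient tensor G[i, j] = d v_i / d x_j
# recorded at a single point receiver (units: 1/s).
G = np.array([[ 0.0,  1.2, -0.3],
              [-0.8,  0.0,  0.5],
              [ 0.1, -0.5,  0.0]])

# Rotation rate vector: omega_k = 0.5 * (curl v)_k, from the antisymmetric part.
omega = 0.5 * np.array([G[2, 1] - G[1, 2],
                        G[0, 2] - G[2, 0],
                        G[1, 0] - G[0, 1]])

# Dilatation rate (trace of G) isolates the compressional (P) contribution.
dilatation = np.trace(G)
print(omega, dilatation)
```

These are the "particular linear combinations of tensor components" the abstract mentions; integrating them in time yields particle rotation and (with the moduli) pressure.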

  12. Mineral texture based seismic properties of meta-sedimentary and meta-igneous rocks in the orogenic wedge of the Central Scandinavian Caledonides

    NASA Astrophysics Data System (ADS)

    Almqvist, B. S. G.; Czaplinska, D.; Piazolo, S.

    2015-12-01

    Progress in seismic methods offers the possibility to visualize in ever greater detail the structure and composition of the middle to lower continental crust. Ideally, the seismic parameters, including compressional (Vp) and shear (Vs) wave velocities, anisotropy, and the Vp/Vs ratio, allow the inference of detailed and quantitative information on the deformation conditions, chemical composition, temperature, and the amount and geometry of fluids and melts in the crust. However, such inferences should be calibrated against known mineral and rock physical properties. Seismic properties calculated from the crystallographic preferred orientation (CPO) and laboratory measurements on representative core material allow us to quantify the interpretations from seismic data. The challenge of such calibrations lies in the non-unique interpretation of seismic data. A large catalogue of physical rock properties is therefore useful, with as many constraining geophysical parameters as possible (including anisotropy and the Vp/Vs ratio). We present new CPO data and modelled seismic properties for amphibolite- and greenschist-grade rocks representing the orogenic wedge in the Central Scandinavian Caledonides. Samples were collected from outcrops in the field and from a 2.5-km-long drill core, which penetrated an amphibolite-grade allochthonous unit composed of meta-sedimentary and meta-igneous rocks, as well as mica- and chlorite-rich mylonites. The textural data were acquired using large-area electron backscatter diffraction (EBSD) maps, and the chemical composition of the minerals was obtained by energy-dispersive X-ray spectroscopy (EDS). Based on the texture data, we compare and evaluate some of the existing methods to calculate texture-based seismic properties of rocks. The suite of samples consists of weakly anisotropic rocks such as felsic gneiss and calc-silicates, and more anisotropic amphibolite, metagabbro, and mica-schist. The newly acquired dataset provides a range of seismic properties that improves the compositional and structural characterization of the deformed middle and lower crust.
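One standard ingredient of texture-based seismic property calculations is averaging mineral moduli; a minimal Voigt-Reuss-Hill sketch for an isotropic two-mineral aggregate (rough illustrative moduli, not the paper's EBSD-based single-crystal averaging):

```python
import math

# Volume fractions and (bulk K, shear G) moduli in GPa; density in g/cm^3.
# Rough textbook-style numbers for a quartz + amphibole mix (illustrative).
fracs = [0.6, 0.4]
K = [37.0, 87.0]
G = [44.0, 43.0]
rho = 0.6 * 2.65 + 0.4 * 3.12   # aggregate density

def vrh(f, m):
    # Hill average: arithmetic mean of the Voigt (iso-strain) and
    # Reuss (iso-stress) bounds on the aggregate modulus.
    voigt = sum(fi * mi for fi, mi in zip(f, m))
    reuss = 1.0 / sum(fi / mi for fi, mi in zip(f, m))
    return 0.5 * (voigt + reuss)

K_vrh, G_vrh = vrh(fracs, K), vrh(fracs, G)
# Isotropic velocities (km/s): Vp = sqrt((K + 4G/3)/rho), Vs = sqrt(G/rho).
vp = math.sqrt((K_vrh + 4.0 * G_vrh / 3.0) / rho)
vs = math.sqrt(G_vrh / rho)
print(round(vp, 2), round(vs, 2))
```

The CPO-based methods the authors compare go further by averaging full elastic stiffness tensors over the measured crystal orientations, which is what produces the directional (anisotropic) velocities.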

  13. A one year long continuous record of seismic activity and surface motion at the tongue of Rhonegletscher (Valais, Switzerland)

    NASA Astrophysics Data System (ADS)

    Dalban Canassy, Pierre; Röösli, Claudia; Walter, Fabian; Gabbi, Jeannette

    2014-05-01

    A critical gap in our current understanding of glaciers is how high subglacial water pressure controls the coupling of a glacier to its bed. Processes at the base of a glacier are inherently difficult to investigate because of their remoteness. Investigation of the subglacial environment with passive seismic methods is an innovative, rapidly growing, interdisciplinary, and promising endeavor. In combination with observations of surface motion and basal water pressure, this method is ideally suited to localize and quantify the frictional and fracture processes that occur during periods of rapidly changing subglacial water pressure, with consequent stress redistribution at the ice-bed interface. Here we present the results of, to our knowledge, the first year-long seismic monitoring of an Alpine glacier. Together with records of surface motion and hydrological measurements, we examine whether seasonal changes can be captured by seismic recording. Experiments were carried out from June 2012 to July 2013 on Rhonegletscher (Valais, Switzerland) by means of three three-component seismometers installed close to the tongue in 2-m-deep boreholes. An additional array of eleven sensors installed at the ice surface was also maintained during September 2012 in order to achieve more accurate icequake locations. High seismic emission is observed on Rhonegletscher, with icequakes located close to the surface or in the vicinity of the bedrock. The temporal distribution of seismic activity closely reflects the seasonal evolution of the glacier hydrology, with a dramatic seismic release in early spring. During summer, the released seismic activity is generally driven by the diurnal ice/snow melting cycle. In winter, snow-cover conditions are associated with a reduced seismic release, with nevertheless some unexpected activity possibly related to snowpack metamorphism. Based on icequake locations derived from the data recorded in September, we discuss seasonal changes in the distribution of icequake hypocenters, and possible source mechanisms are proposed.

  14. Multicomponent pre-stack seismic waveform inversion in transversely isotropic media using a non-dominated sorting genetic algorithm

    NASA Astrophysics Data System (ADS)

    Padhi, Amit; Mallick, Subhashis

    2014-03-01

    Inversion of band- and offset-limited single component (P wave) seismic data does not provide robust estimates of subsurface elastic parameters and density. Multicomponent seismic data can, in principle, circumvent this limitation but adds to the complexity of the inversion algorithm because it requires simultaneous optimization of multiple objective functions, one for each data component. In seismology, these multiple objectives are typically handled by constructing a single objective given as a weighted sum of the objectives of individual data components and sometimes with additional regularization terms reflecting their interdependence; which is then followed by a single objective optimization. Multi-objective problems, inclusive of the multicomponent seismic inversion are however non-linear. They have non-unique solutions, known as the Pareto-optimal solutions. Therefore, casting such problems as a single objective optimization provides one out of the entire set of the Pareto-optimal solutions, which in turn, may be biased by the choice of the weights. To handle multiple objectives, it is thus appropriate to treat the objective as a vector and simultaneously optimize each of its components so that the entire Pareto-optimal set of solutions could be estimated. This paper proposes such a novel multi-objective methodology using a non-dominated sorting genetic algorithm for waveform inversion of multicomponent seismic data. The applicability of the method is demonstrated using synthetic data generated from multilayer models based on a real well log. We document that the proposed method can reliably extract subsurface elastic parameters and density from multicomponent seismic data both when the subsurface is considered isotropic and transversely isotropic with a vertical symmetry axis. We also compute approximate uncertainty values in the derived parameters. 
Although we restrict our inversion applications to horizontally stratified models, we outline a practical procedure of extending the method to approximately include local dips for each source-receiver offset pair. Finally, the applicability of the proposed method is not just limited to seismic inversion but it could be used to invert different data types not only requiring multiple objectives but also multiple physics to describe them.
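The non-dominated sorting at the core of such a genetic algorithm can be illustrated with a minimal Python sketch (the two-component misfit values below are hypothetical; this is not the authors' implementation):

```python
def dominates(a, b):
    """True if solution a is at least as good as b in every objective
    and strictly better in at least one (minimization convention)."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def non_dominated_front(objectives):
    """Return indices of the rank-1 (Pareto-optimal) solutions."""
    return [i for i, a in enumerate(objectives)
            if not any(dominates(b, a) for j, b in enumerate(objectives) if j != i)]

# Hypothetical misfits for two data components (e.g. vertical and radial)
misfits = [(1.0, 4.0), (2.0, 2.0), (4.0, 1.0), (3.0, 3.0)]
print(non_dominated_front(misfits))  # [0, 1, 2]: the last model is dominated
```

Note that no single weighted sum is minimized: all mutually non-dominated models survive, which is what lets the algorithm map out the entire Pareto set rather than one weighted compromise.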

  15. Numerical modeling of the 2017 active seismic infrasound balloon experiment

    NASA Astrophysics Data System (ADS)

    Brissaud, Q.; Komjathy, A.; Garcia, R.; Cutts, J. A.; Pauken, M.; Krishnamoorthy, S.; Mimoun, D.; Jackson, J. M.; Lai, V. H.; Kedar, S.; Levillain, E.

    2017-12-01

    We have developed a numerical tool to propagate acoustic and gravity waves in a coupled solid-fluid medium with topography. It is a hybrid between a continuous Galerkin and a discontinuous Galerkin method that accounts for non-linear atmospheric waves, visco-elastic waves and topography. We apply this method to a recent experiment that took place in the Nevada desert to study acoustic waves from seismic events. This experiment, developed by JPL and its partners, aims to demonstrate the viability of a new approach to probing seismically induced acoustic waves from a balloon platform. To the best of our knowledge, this could be the only way, for planetary missions, to perform tomography when one faces challenging surface conditions with high pressure and temperature (e.g. Venus), and thus when it is impossible to use the conventional electronics routinely employed on Earth. To fully demonstrate the effectiveness of such a technique, one should also be able to reconstruct the observed signals from numerical modeling. To model the seismic hammer experiment and the subsequent acoustic wave propagation, we rely on a subsurface seismic model constructed from the seismometer measurements during the 2017 Nevada experiment and an atmospheric model built from meteorological data. The source is treated as a Gaussian point source located at the surface. Comparison between the numerical modeling and the experimental data could help future mission designs and provide great insights into the planet's interior structure.

  16. Earthquake detection through computationally efficient similarity search

    PubMed Central

    Yoon, Clara E.; O’Reilly, Ossian; Bergen, Karianne J.; Beroza, Gregory C.

    2015-01-01

    Seismology is experiencing rapid growth in the quantity of data, which has outpaced the development of processing algorithms. Earthquake detection—identification of seismic events in continuous data—is a fundamental operation for observational seismology. We developed an efficient method to detect earthquakes using waveform similarity that overcomes the disadvantages of existing detection methods. Our method, called Fingerprint And Similarity Thresholding (FAST), can analyze a week of continuous seismic waveform data in less than 2 hours, or 140 times faster than autocorrelation. FAST adapts a data mining algorithm, originally designed to identify similar audio clips within large databases; it first creates compact “fingerprints” of waveforms by extracting key discriminative features, then groups similar fingerprints together within a database to facilitate fast, scalable search for similar fingerprint pairs, and finally generates a list of earthquake detections. FAST detected most (21 of 24) cataloged earthquakes and 68 uncataloged earthquakes in 1 week of continuous data from a station located near the Calaveras Fault in central California, achieving detection performance comparable to that of autocorrelation, with some additional false detections. FAST is expected to realize its full potential when applied to extremely long duration data sets over a distributed network of seismic stations. The widespread application of FAST has the potential to aid in the discovery of unexpected seismic signals, improve seismic monitoring, and promote a greater understanding of a variety of earthquake processes. PMID:26665176
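The fingerprint-and-hash idea can be sketched in a few lines (a minimal illustration in the spirit of FAST, not the published implementation; the spectral fingerprint and the synthetic windows are hypothetical):

```python
import numpy as np

def fingerprint(window, n_bits=16):
    """Compact binary fingerprint: which low-frequency spectral
    amplitudes exceed their median (scale-invariant by construction)."""
    spec = np.abs(np.fft.rfft(window))[:n_bits]
    return tuple((spec > np.median(spec)).astype(int))

def find_similar_pairs(windows):
    """Group identical fingerprints via a hash table, avoiding the
    all-pairs comparison that makes autocorrelation expensive."""
    buckets = {}
    for i, w in enumerate(windows):
        buckets.setdefault(fingerprint(w), []).append(i)
    return [b for b in buckets.values() if len(b) > 1]

rng = np.random.default_rng(0)
template = np.sin(np.linspace(0, 20, 64))
windows = [template, template * 1.1, rng.standard_normal(64)]
print(find_similar_pairs(windows))  # windows 0 and 1 land in the same bucket
```

Because lookup is by hash bucket rather than by pairwise comparison, the cost grows roughly linearly with the number of windows, which is the source of FAST's speed advantage over autocorrelation.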

  17. Seismic interferometry by crosscorrelation and by multidimensional deconvolution: a systematic comparison

    NASA Astrophysics Data System (ADS)

    Wapenaar, Kees; van der Neut, Joost; Ruigrok, Elmer; Draganov, Deyan; Hunziker, Jürg; Slob, Evert; Thorbecke, Jan; Snieder, Roel

    2011-06-01

    Seismic interferometry, also known as Green's function retrieval by crosscorrelation, has a wide range of applications, ranging from surface-wave tomography using ambient noise, to creating virtual sources for improved reflection seismology. Despite its successful applications, the crosscorrelation approach also has its limitations. The main underlying assumptions are that the medium is lossless and that the wavefield is equipartitioned. These assumptions are in practice often violated: the medium of interest is often illuminated from one side only, the sources may be irregularly distributed, and losses may be significant. These limitations may partly be overcome by reformulating seismic interferometry as a multidimensional deconvolution (MDD) process. We present a systematic analysis of seismic interferometry by crosscorrelation and by MDD. We show that for the non-ideal situations mentioned above, the correlation function is proportional to a Green's function with a blurred source. The source blurring is quantified by a so-called interferometric point-spread function which, like the correlation function, can be derived from the observed data (i.e. without the need to know the sources and the medium). The source of the Green's function obtained by the correlation method can be deblurred by deconvolving the correlation function for the point-spread function. This is the essence of seismic interferometry by MDD. We illustrate the crosscorrelation and MDD methods for controlled-source and passive-data applications with numerical examples and discuss the advantages and limitations of both methods.
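The deblurring step can be illustrated in one dimension: the correlation function equals the Green's function convolved with a point-spread function, which a regularized (water-level) spectral division removes. A toy sketch, not the authors' multidimensional implementation:

```python
import numpy as np

def waterlevel_deconvolve(corr, psf, eps=1e-3):
    """Remove the point-spread function from the correlation function
    by regularized (water-level) spectral division."""
    C, P = np.fft.rfft(corr), np.fft.rfft(psf)
    denom = np.abs(P) ** 2 + eps * np.max(np.abs(P) ** 2)
    return np.fft.irfft(np.conj(P) * C / denom, n=len(corr))

n = 64
green = np.zeros(n); green[10] = 1.0                        # "true" Green's function: a spike
psf = np.zeros(n); psf[0], psf[1], psf[-1] = 1.0, 0.5, 0.5  # interferometric point-spread function
corr = np.fft.irfft(np.fft.rfft(green) * np.fft.rfft(psf), n=n)  # blurred correlation result
recovered = waterlevel_deconvolve(corr, psf)
print(int(np.argmax(recovered)))  # 10: the spike position is restored
```

The water level `eps` stabilizes the division where the point-spread spectrum is small, which is the one-dimensional analogue of regularizing the MDD inversion.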

  18. Automatic Processing and Interpretation of Long Records of Endogenous Micro-Seismicity: the Case of the Super-Sauze Soft-Rock Landslide.

    NASA Astrophysics Data System (ADS)

    Provost, F.; Malet, J. P.; Hibert, C.; Doubre, C.

    2017-12-01

    The Super-Sauze landslide is a clay-rich landslide located in the Southern French Alps. The landslide exhibits a complex pattern of deformation: a large number of rockfalls are observed in the 100 m high main scarp, while the deformation of the upper part of the accumulated material is mainly controlled by shearing of the material along stable in-situ crests. Several fissures are locally observed. The shallowest layer of the accumulated material tends to behave in a brittle manner but may undergo fluidization and/or rapid acceleration. Previous studies have demonstrated the presence of rich endogenous micro-seismicity associated with the deformation of the landslide. However, the lack of long-term seismic records and suitable processing chains has prevented a full interpretation of the links between the external forcings, the deformation and the recorded seismic signals. Since 2013, two permanent seismic arrays have been installed in the upper part of the landslide. We here present the methodology adopted to process this dataset. The processing chain consists of a set of automated methods for robust detection, classification and location of the recorded seismicity. Thousands of events are detected and automatically classified. The classification method is based on the description of the signal through attributes (e.g. waveform and spectral-content properties). These attributes are used as inputs to classify the signal with a Random Forest machine-learning algorithm into four classes: endogenous micro-quakes, rockfalls, regional earthquakes and natural/anthropogenic noise. The endogenous landslide sources (i.e. micro-quakes and rockfalls) are then located. The location method is adapted to the type of event. The micro-quakes are located with a 3D velocity model derived from a seismic tomography campaign and an optimization of the first-arrival picking based on the inter-trace correlation of the P-wave arrivals. 
The rockfalls are located by optimizing the inter-trace correlation of the whole signal. We analyze the temporal relationships of the endogenous seismic events with rainfall and landslide displacements. Sub-families of landslide micro-quakes are also identified and an interpretation of their source mechanism is proposed from their signal properties and spatial location.
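The attribute-based Random Forest classification step can be sketched with scikit-learn (a minimal illustration on synthetic data; the attribute names, class centroids and feature values are hypothetical, not the study's actual training set):

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

# Hypothetical waveform attributes: [duration (s), dominant frequency (Hz), kurtosis]
rng = np.random.default_rng(1)
classes = {"micro-quake": (2, 15, 8), "rockfall": (20, 5, 3),
           "earthquake": (60, 2, 5), "noise": (5, 30, 1)}
X, y = [], []
for label, center in classes.items():
    X.extend(rng.normal(center, 0.5, size=(50, 3)))  # 50 synthetic events per class
    y.extend([label] * 50)

clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)
print(clf.predict([[2.1, 14.8, 7.9]])[0])  # classified as "micro-quake"
```

In practice many more attributes are used and the classes overlap far more than in this separable toy example; the forest's out-of-bag error would then be the natural quality check.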

  19. Detailed investigation of Long-Period activity at Campi Flegrei by Convolutive Independent Component Analysis

    NASA Astrophysics Data System (ADS)

    Capuano, P.; De Lauro, E.; De Martino, S.; Falanga, M.

    2016-04-01

    This work is devoted to the analysis of seismic signals continuously recorded at Campi Flegrei Caldera (Italy) during the entire year 2006. The radiation pattern associated with the Long-Period energy release is investigated. We adopt an innovative Independent Component Analysis algorithm for convolutive seismic series, adapted and improved to give automatic procedures for detecting seismic events often buried in the high-level ambient noise. The extracted waveforms, characterized by an improved signal-to-noise ratio, allow the recognition of Long-Period precursors, showing that the seismic activity accompanying the mini-uplift crisis of 2006, which climaxed in the three days from 26-28 October, had already started at the beginning of October and lasted until mid-November. A more complete seismic catalog is thus provided, which can be used to properly quantify the seismic energy release. To better ground our results, we first check the robustness of the method by comparing it with other blind source separation methods based on higher-order statistics; secondly, we reconstruct the radiation patterns of the extracted Long-Period events in order to link the individuated signals directly to the sources. We take advantage of Convolutive Independent Component Analysis, which provides basic signals along the three directions of motion, so that a direct polarization analysis can be performed without further filtering procedures. We show that the extracted signals are mainly composed of P waves with radial polarization pointing to the seismic source of the main LP swarm, i.e. a small area in the Solfatara, also in the case of the small events that both precede and follow the main activity. From a dynamical point of view, they can be described by two degrees of freedom, indicating a low level of complexity associated with vibrations from a superficial hydrothermal system. 
Our results allow us to move towards a full description of the complexity of the source, which can be used, by means of the small-intensity precursors, for hazard-model development and forecast-model testing, showing an illustrative example of the applicability of the CICA method to regions with low seismicity in high ambient noise.
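The blind-source-separation step can be illustrated with the instantaneous-mixing FastICA of scikit-learn, a simpler relative of the convolutive algorithm used here (the two sources and the mixing matrix below are synthetic):

```python
import numpy as np
from sklearn.decomposition import FastICA

t = np.linspace(0, 1, 2000)
s1 = np.sign(np.sin(7 * 2 * np.pi * t))   # impulsive "event-like" source
s2 = np.sin(3 * 2 * np.pi * t)            # harmonic "tremor-like" source
S = np.c_[s1, s2]
A = np.array([[1.0, 0.6], [0.4, 1.0]])    # instantaneous mixing at two sensors
X = S @ A.T

rec = FastICA(n_components=2, random_state=0).fit_transform(X)
# each recovered component matches one source up to sign and scale
corr = np.abs(np.corrcoef(rec.T, S.T))[:2, 2:]
print((corr.max(axis=1) > 0.99).all())  # True
```

A convolutive algorithm generalizes this by unmixing in the frequency domain, filter by filter, which is what allows the separated components to retain the polarization information exploited in the abstract.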

  20. Rigorous Approach in Investigation of Seismic Structure and Source Characteristics in Northeast Asia: Hierarchical and Trans-dimensional Bayesian Inversion

    NASA Astrophysics Data System (ADS)

    Mustac, M.; Kim, S.; Tkalcic, H.; Rhie, J.; Chen, Y.; Ford, S. R.; Sebastian, N.

    2015-12-01

    Conventional approaches to inverse problems suffer from non-linearity and non-uniqueness in estimations of seismic structures and source properties. Estimated results and associated uncertainties are often biased by applied regularizations and additional constraints, which are commonly introduced to solve such problems. Bayesian methods, however, provide statistically meaningful estimations of models and their uncertainties constrained by data information. In addition, hierarchical and trans-dimensional (trans-D) techniques are inherently implemented in the Bayesian framework to account for the error statistics and model parameterizations involved and, in turn, allow more rigorous estimation of both. Here, we apply Bayesian methods throughout the entire inference process to estimate seismic structures and source properties in Northeast Asia, including eastern China, the Korean peninsula, and the Japanese islands. Ambient noise analysis is first performed to obtain a base three-dimensional (3-D) heterogeneity model using continuous broadband waveforms from more than 300 stations. For the tomography of surface wave group and phase velocities in the 5-70 s band, we adopt a hierarchical and trans-D Bayesian inversion method using Voronoi partitioning. The 3-D heterogeneity model is further improved by joint inversion of teleseismic receiver functions and dispersion data using a newly developed high-efficiency Bayesian technique. The obtained model is subsequently used to prepare 3-D structural Green's functions for the source characterization. A hierarchical Bayesian method for point-source inversion using regional complete waveform data is applied to selected events from the region. The seismic structure and source characteristics, with rigorously estimated uncertainties from these Bayesian methods, provide enhanced monitoring and discrimination of seismic events in Northeast Asia.

  1. Can high seismic b-values be explained solely by poorly applied methodology?

    NASA Astrophysics Data System (ADS)

    Roberts, Nick; Bell, Andrew; Main, Ian

    2015-04-01

    The b-value of the Gutenberg-Richter distribution quantifies the relative proportion of large to small magnitude earthquakes in a catalogue, in turn related to the population of fault rupture areas and the average slip or stress drop. Accordingly the b-value is an important parameter to consider when evaluating seismic catalogues as it has the potential to provide insight into the temporal or spatial evolution of the system, such as fracture development or changes in the local stress regime. The b-value for tectonic seismicity is commonly found to be close to 1, whereas much higher b-values are frequently reported for volcanic and induced seismicity. Understanding these differences is important for understanding the processes controlling earthquake occurrence in different settings. However, it is possible that anomalously high b-values could arise from small sample sizes, under-estimated completeness magnitudes, or other poorly applied methodologies. Therefore, it is important to establish a rigorous workflow for analyzing these datasets. Here we examine the frequency-magnitude distributions of volcanic earthquake catalogues in order to determine the significance of apparently high b-values. We first derive a workflow for computing the completeness magnitude of a seismic catalogue, using synthetic catalogues of varying shape, size, and known b-value. We find the best approach involves a combination of three methods: 'Maximum Curvature', 'b-value stability', and the 'Goodness-of-Fit test'. To calculate a reliable b-value with an error ≤0.25, the maximum curvature method is preferred for a 'sharp-peaked' discrete distribution. For a catalogue with a broader peak the b-value stability method is the most reliable with the Goodness-of-Fit test being an acceptable backup if the b-value stability method fails. We apply this workflow to earthquake catalogues from El Hierro (2011-2013) and Mt Etna (1999-2013) volcanoes. 
    In general, we find the b-value to be equal to or slightly greater than 1; however, reliably high b-values are reported in both catalogues. We argue that many of the almost axiomatically 'high' b-values reported in the literature for volcanic and induced seismicity may be attributable to biases introduced by the methods of inference used and/or the relatively small sample sizes often available. This new methodology, although focused on volcanic catalogues, is applicable to all seismic catalogues.
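The maximum-likelihood b-value (Aki's estimator with Utsu's binning correction) and the 'Maximum Curvature' completeness estimate can be sketched as follows (a minimal illustration on a synthetic catalogue with known b = 1, not the authors' full workflow):

```python
import numpy as np

def aki_b_value(mags, mc, dm=0.1):
    """Maximum-likelihood b-value for magnitudes binned at dm:
    b = log10(e) / (mean(M) - (Mc - dm/2))."""
    m = np.asarray(mags)
    m = m[m > mc - dm / 2]          # keep events in complete bins only
    return np.log10(np.e) / (m.mean() - (mc - dm / 2))

def max_curvature_mc(mags, dm=0.1):
    """Completeness magnitude taken as the most populated magnitude bin."""
    bins = np.round(np.asarray(mags) / dm).astype(int)
    vals, counts = np.unique(bins, return_counts=True)
    return vals[np.argmax(counts)] * dm

# Synthetic Gutenberg-Richter catalogue: M - 0.95 is exponential with b = 1
rng = np.random.default_rng(42)
mags = np.round(rng.exponential(1 / np.log(10), 5000) + 0.95, 1)
mc = max_curvature_mc(mags)
b = aki_b_value(mags, mc)
print(round(mc, 1), round(b, 2))  # Mc = 1.0, b close to the true value of 1
```

Rerunning this with only a few tens of events, or with an underestimated Mc, readily produces spuriously high or low b-values, which is exactly the bias the workflow in the abstract is designed to detect.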

  2. A seismic analysis for masonry constructions: The different schematization methods of masonry walls

    NASA Astrophysics Data System (ADS)

    Olivito, Renato. S.; Codispoti, Rosamaria; Scuro, Carmelo

    2017-11-01

    Seismic analysis of masonry structures is usually performed with structural calculation software based on the equivalent-frame method or the macro-element method. In these approaches, the masonry walls are divided into vertical elements (piers) and horizontal elements (so-called spandrel elements), interconnected by rigid nodes. The aim of this work is to make a critical comparison between different schematization methods of masonry walls, underlining the structural importance of the spandrel elements. In order to implement the methods, two different structural calculation programs were used and an existing masonry building was examined.

  3. Reconstructing the Seismic Wavefield using Curvelets and Distributed Acoustic Sensing

    NASA Astrophysics Data System (ADS)

    Muir, J. B.; Zhan, Z.

    2017-12-01

    Distributed Acoustic Sensing (DAS) offers an opportunity to produce cost-effective and uniquely dense images of the surface seismic wavefield. DAS also produces extremely large data volumes that require innovative methods of data reduction and seismic parameter inversion to handle efficiently. We leverage DAS and the super-Nyquist sampling enabled by compressed sensing of the wavefield in the curvelet domain to produce accurate images of the horizontal velocity within a target region, using only short (1-10 minute) records of either active seismic sources or ambient seismic signals. Once the wavefield has been fully described, modern "tomographic" techniques, such as Helmholtz tomography or wavefield gradiometry, can be employed to determine seismic parameters of interest such as phase velocity. An additional practical benefit of employing a wavefield reconstruction step is that multiple heterogeneous forms of instrumentation can be naturally combined; in this study we therefore also explore the addition of three-component nodal seismic data into the reconstructed wavefield. We illustrate these techniques using both synthetic examples and data taken from the Brady Geothermal Field in Nevada during the PoroTomo (U. Wisconsin Madison) experiment of 2016.

  4. An Integrated Approach for the Large-Scale Simulation of Sedimentary Basins to Study Seismic Wave Amplification

    NASA Astrophysics Data System (ADS)

    Poursartip, B.

    2015-12-01

    Seismic hazard assessment to predict the behavior of infrastructure subjected to earthquakes relies on numerical ground-motion simulation, because analytical solutions for seismic waves are limited to only a few simple geometries. Recent advances in numerical methods and computer architectures make it ever more practical to reliably and quickly obtain the near-surface response to seismic events. The key motivation stems from the need to assess the performance of sensitive components of the civil infrastructure (nuclear power plants, bridges, lifelines, etc.) when subjected to realistic seismic-event scenarios. We discuss an integrated approach that deploys best-practice tools for simulating seismic events in arbitrarily heterogeneous formations, while also accounting for topography. Specifically, we describe an explicit forward wave solver based on a hybrid formulation that couples a single-field formulation for the computational domain with an unsplit mixed-field formulation for the Perfectly-Matched-Layers (PMLs and/or M-PMLs) used to limit the computational domain. Because of the material heterogeneity and the contrasting discretization needs it imposes, an adaptive time solver is adopted: we use a Runge-Kutta-Fehlberg time-marching scheme that optimally adjusts the time step so that the local truncation error stays below a predefined tolerance. We use spectral elements for spatial discretization, and the Domain Reduction Method in combination with a double-couple source representation to allow for the efficient prescription of the input seismic motion. Of particular interest to this development is the study of the effects idealized topographic features have on the surface motion when compared against motion results that are based on a flat-surface assumption. 
We discuss the components of the integrated approach we followed, and report the results of parametric studies in two and three dimensions, for various idealized topographic features, which show motion amplification that depends, as expected, on the relation between the topographic feature's characteristics and the dominant wavelength. Lastly, we report results involving three-dimensional simulations.
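The adaptive time-stepping idea can be illustrated with the simpler embedded Euler/Heun pair; the principle (step-size control from a local truncation-error estimate) is the same as in the Runge-Kutta-Fehlberg scheme mentioned above, but the coefficients, tolerances and test equation here are purely illustrative:

```python
import math

def adaptive_step_integrate(f, y0, t0, t1, tol=1e-8):
    """Adaptive time marching: accept a step only when the embedded
    error estimate (Heun minus Euler) stays below tol, and rescale
    the step size accordingly."""
    t, y, h = t0, y0, (t1 - t0) / 100
    while t < t1:
        h = min(h, t1 - t)
        k1 = f(t, y)
        k2 = f(t + h, y + h * k1)
        y_low = y + h * k1               # first-order (Euler) solution
        y_high = y + h * (k1 + k2) / 2   # second-order (Heun) solution
        err = abs(y_high - y_low)        # local truncation-error estimate
        if err <= tol or h < 1e-12:
            t, y = t + h, y_high         # accept the step
        # grow or shrink the step toward the error target
        h *= min(2.0, max(0.1, 0.9 * (tol / (err + 1e-300)) ** 0.5))
    return y

# Test problem y' = -y: the exact solution at t = 1 is e^{-1}
y_end = adaptive_step_integrate(lambda t, y: -y, 1.0, 0.0, 1.0)
print(abs(y_end - math.exp(-1)) < 1e-4)  # True
```

In a heterogeneous seismic mesh the same mechanism lets the solver take large steps where the solution is smooth and small steps where fine elements or sharp wavefronts demand them.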

  5. Zephyr: Open-source Parallel Seismic Waveform Inversion in an Integrated Python-based Framework

    NASA Astrophysics Data System (ADS)

    Smithyman, B. R.; Pratt, R. G.; Hadden, S. M.

    2015-12-01

    Seismic Full-Waveform Inversion (FWI) is an advanced method to reconstruct wave properties of materials in the Earth from a series of seismic measurements. These methods have been developed by researchers since the late 1980s, and now see significant interest from the seismic exploration industry. As researchers move towards implementing advanced numerical modelling (e.g., 3D, multi-component, anisotropic and visco-elastic physics), it is desirable to use a modular approach, minimizing the effort of developing a new set of tools for each new numerical problem. SimPEG (http://simpeg.xyz) is an open source project aimed at constructing a general framework to enable geophysical inversion in various domains. In this abstract we describe Zephyr (https://github.com/bsmithyman/zephyr), a related research project focused on parallel FWI in the seismic context. The software is built on top of Python, Numpy and IPython, which enables very flexible testing and implementation of new features. Zephyr is an open source project, and is released freely to enable reproducible research. We currently implement a parallel, distributed seismic forward modelling approach that solves the 2.5D (two-and-one-half dimensional) viscoacoustic Helmholtz equation at a range of modelling frequencies, generating forward solutions for a given source behaviour and gradient solutions for a given set of observed data. Solutions are computed in a distributed manner on a set of heterogeneous workers. The researcher's frontend computer may be separated from the worker cluster by a network link, enabling full support for computation on remote clusters from individual workstations or laptops. The present codebase introduces a numerical discretization equivalent to that used by FULLWV, a well-known seismic FWI research codebase. This makes it straightforward to compare results from Zephyr directly with FULLWV. 
The flexibility introduced by the use of a Python programming environment makes extension of the codebase with new methods much more straightforward. This enables comparison and integration of new efforts with existing results.

  6. Time-marching multi-grid seismic tomography

    NASA Astrophysics Data System (ADS)

    Tong, P.; Yang, D.; Liu, Q.

    2016-12-01

    From classic ray-based traveltime tomography to state-of-the-art full waveform inversion, seismic inverse problems are nonlinear, and a good starting model is therefore essential to prevent the objective function from converging toward local minima. With a focus on building high-accuracy starting models, we propose the time-marching multi-grid seismic tomography method in this study. The new seismic tomography scheme consists of a temporal time-marching approach and a spatial multi-grid strategy. We first divide the recording period of the seismic data into a series of time windows. Sequentially, the subsurface properties in each time window are iteratively updated starting from the final model of the previous time window. There are at least two advantages of the time-marching approach: (1) the information in the seismic data of previous time windows is exploited to build the starting models of later time windows; (2) seismic data of later time windows can provide extra information to refine the subsurface images. Within each time window, we use a multi-grid method to decompose the scale of the inverse problem. Specifically, the unknowns of the inverse problem are first sampled on a coarse mesh to capture the macro-scale structure of the subsurface. Because of the low dimensionality, it is much easier to reach the global minimum on a coarse mesh. After that, finer meshes are introduced to recover the micro-scale properties. That is to say, the subsurface model is iteratively updated on multiple grids in every time window. We expect that high-accuracy starting models will thereby be generated for the second and later time windows. We will test this time-marching multi-grid method using our newly developed eikonal-based traveltime tomography software package tomoQuake. Real application results for the 2016 Kumamoto earthquake (Mw 7.0) region in Japan will be demonstrated.
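The coarse-to-fine strategy can be illustrated on a toy 1-D model-fitting problem (a hypothetical setup, not tomoQuake): solve on a coarse grid first, then prolong that solution by interpolation as the starting model on each finer grid.

```python
import numpy as np

def hat_matrix(x, nodes):
    """Piecewise-linear (hat function) basis evaluated at points x."""
    A = np.zeros((len(x), len(nodes)))
    for j in range(len(nodes)):
        e = np.zeros(len(nodes)); e[j] = 1.0
        A[:, j] = np.interp(x, nodes, e)
    return A

def multigrid_fit(x, y, sizes):
    """Fit on successively finer grids; each coarse solution, prolonged
    by interpolation, serves as the starting model for the next grid."""
    model, nodes = None, None
    for n in sizes:
        new_nodes = np.linspace(0, 1, n)
        start = (np.interp(new_nodes, nodes, model)
                 if model is not None else np.zeros(n))
        A = hat_matrix(x, new_nodes)
        # least-squares update relative to the prolonged coarse model
        update, *_ = np.linalg.lstsq(A, y - A @ start, rcond=None)
        model, nodes = start + update, new_nodes
    return nodes, model

rng = np.random.default_rng(0)
x = rng.uniform(0, 1, 300)
y = np.sin(2 * np.pi * x)                     # "observed" data
nodes, model = multigrid_fit(x, y, sizes=[5, 17, 65])
print(float(np.max(np.abs(model - np.sin(2 * np.pi * nodes)))))  # small misfit
```

The coarse 5-node stage pins down the macro-scale structure cheaply; the 65-node stage only refines micro-scale detail, mirroring the grid hierarchy used within each time window of the proposed scheme.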

  7. Putting the slab back: First steps of creating a synthetic seismic section of subducted lithosphere

    NASA Astrophysics Data System (ADS)

    Zertani, S.; John, T.; Tilmann, F. J.; Leiss, B.; Labrousse, L.; Andersen, T. B.

    2016-12-01

    Imaging subducted lithosphere is a difficult task, usually tackled with geophysical methods. To date, the most promising method is receiver function imaging (RF), which concentrates on first-order conversions from P- to S-waves at boundaries (e.g. lithological and structural) with contrasting seismic velocities. The resolution is high for the upper parts of the subducting material. However, at greater depths (40-80 km) the visualization of the subducted slab becomes increasingly blurry, until the slab can no longer be distinguished from the Earth's mantle, rendering visualization impossible. This blurry zone is thought to result from advancing eclogitization of the subducting slab. However, it is not well understood how micro- to macro-scale structures related to progressive eclogitization affect RF signals. The island of Holsnoy in the Bergen Arcs of western Norway represents a partially eclogitized, formerly subducted block of lower crust and serves as an analogue to the aforementioned blurry zone in RF images. This eclogitization can be observed in static fluid-induced eclogitization patches or fingers, but is mainly present in localized shear zones of variable size (mm to 100s of meters). We mapped the area to gain a better understanding of the geometries of such shear zones, which could possibly function as seismic reflectors. Further, we calculated seismic velocities from thermodynamic modelling on the basis of XRF whole-rock analysis and compared these results to velocities calculated from a combination of thin-section information, EMPA and physical mineral properties (Voigt-Reuss-Hill averaging). Both methods yield consistent results for P- and S-wave velocities of eclogites and granulites from Holsnoy. In combination with X-ray measurements that identify the microtextures of characteristic samples, so as to incorporate seismic anisotropy caused by e.g. foliation or lineation, these seismic velocities are used as input for seismic models to reconstruct the progressive eclogitization of a subducting slab as seen in many RF images (i.e. the blurry zone).
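For a single elastic modulus, the Voigt-Reuss-Hill averaging mentioned above reduces to the arithmetic (uniform-strain) and harmonic (uniform-stress) means of the mineral moduli, weighted by volume fraction, and their average. A minimal sketch; the two-mineral aggregate and its bulk moduli below are illustrative values, not measurements from this study:

```python
def voigt_reuss_hill(moduli, fractions):
    """Voigt (uniform strain) and Reuss (uniform stress) bounds on an
    aggregate modulus, plus their Hill average."""
    voigt = sum(f * m for m, f in zip(moduli, fractions))
    reuss = 1.0 / sum(f / m for m, f in zip(moduli, fractions))
    return voigt, reuss, (voigt + reuss) / 2

# Illustrative eclogite-like aggregate: 60% garnet (K ~ 171 GPa),
# 40% omphacite (K ~ 126 GPa)
v, r, hill = voigt_reuss_hill([171.0, 126.0], [0.6, 0.4])
print(round(v, 1), round(r, 1), round(hill, 1))  # 153.0 149.6 151.3
```

The Hill value would then feed into a velocity estimate such as Vp = sqrt((K + 4G/3)/rho) once the shear modulus and density of the aggregate are averaged the same way.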

  8. The Investigation of a Sinkhole Area in Germany by Near-Surface Active Seismic Tomography

    NASA Astrophysics Data System (ADS)

    Tschache, S.; Becker, D.; Wadas, S. H.; Polom, U.; Krawczyk, C. M.

    2017-12-01

    In November 2010, a 30 m wide and 17 m deep sinkhole occurred in a residential area of Schmalkalden, Germany, which fortunately did not harm humans, but led to damage of buildings and property. Subsequent geoscientific investigations showed that the collapse was naturally caused by the subrosion of sulfates at a depth of about 80 m. In 2012, an early warning system was established, including 3C borehole geophones deployed at 50 m depth around the backfilled sinkhole. During the acquisition of two shallow 2D shear-wave seismic profiles, the signals generated by a micro-vibrator at the surface were additionally recorded by the four borehole geophones of the early warning system and a VSP probe in a fifth borehole. The travel-time analysis of the direct arrivals enhanced the understanding of wave propagation in the area. Seismic velocity anomalies were detected and related to structural seismic images of the 2D profiles. Because of the promising first results, the experiment was further extended by distributing vibration points throughout the whole area around the sinkhole. This time, micro-vibrators for P- and S-wave generation were used. The signals were recorded by the borehole geophones and by temporarily installed seismometers at surface positions close to the boreholes. The travel times and signal attenuations are evaluated to detect potentially unstable zones. Furthermore, array analyses are performed. The first results reveal features in the active tomography datasets consistent with structures observed in the 2D seismic images. The advantages of the presented method are the low effort and good repeatability due to the permanently installed borehole geophones. It has the potential to determine P-wave and S-wave velocities in 3D, and it supports the interpretation of established investigation methods such as 2D surface seismics and VSP. 
In our further research we propose to evaluate the suitability of the method for the time lapse monitoring of changes in the seismic wave propagation, which could be related to subrosion processes.

  9. Multi-hole seismic modeling in 3-D space and cross-hole seismic tomography analysis for boulder detection

    NASA Astrophysics Data System (ADS)

    Cheng, Fei; Liu, Jiangping; Wang, Jing; Zong, Yuquan; Yu, Mingyu

    2016-11-01

    A boulder stone, a common geological feature in south China, is the remnant of a granite body that has been unevenly weathered. Undetected boulders can adversely impact the schedule and safety of subway construction when the tunnel boring machine (TBM) method is used. Therefore, boulder detection has always been a key issue that must be solved before construction. Cross-hole seismic tomography is a high-resolution technique capable of boulder detection; however, the method can only solve for velocity in a 2-D slice between two wells, and the size and central position of the boulder are generally difficult to obtain accurately. In this paper, the authors conduct a multi-hole wave-field simulation and characteristic analysis of a boulder model based on 3-D elastic-wave staggered-grid finite-difference theory, as well as a 2-D imaging analysis based on first-arrival travel times. The results indicate that (1) full wave-field records can be obtained from multi-hole seismic wave simulations; the simulation results describe the seismic wave propagation pattern around cross-hole high-velocity spherical geological bodies in detail and can serve as a basis for wave-field analysis; and (2) when a cross-hole seismic section cuts through the boulder, the proposed method provides satisfactory cross-hole tomography results; however, when the section is positioned close to the boulder, such a high-velocity object in 3-D space affects the surrounding wave field: the received diffracted wave interferes with the primary wave and, in consequence, the picked first-arrival travel time is not derived from the profile, which results in a false appearance of high-velocity geological features. Finally, the results of the 2-D analysis in the 3-D modeling space are compared with a physical model test with respect to the effect of a high-velocity body on seismic tomographic measurements.

  10. CALIBRATION OF SEISMIC ATTRIBUTES FOR RESERVOIR CHARACTERIZATION

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wayne D. Pennington; Horacio Acevedo; Aaron Green

    2002-10-01

    The project, "Calibration of Seismic Attributes for Reservoir Characterization," is now complete. Our original proposed scope of work included detailed analysis of seismic and other data from two to three hydrocarbon fields; we have analyzed data from four fields at this level of detail, two additional fields in less detail, and one other 2D seismic line used for experimentation. We also included time-lapse seismic data with ocean-bottom cable recordings in addition to the originally proposed static field data. A large number of publications and presentations have resulted from this work, including several that are in the final stages of preparation or printing; one of these is a chapter on "Reservoir Geophysics" for the new Petroleum Engineering Handbook from the Society of Petroleum Engineers. Major results from this project include a new approach to evaluating seismic attributes in time-lapse monitoring studies, evaluation of pitfalls in the use of point-based measurements and facies classifications, novel applications of inversion results, improved methods of tying seismic data to the wellbore, and a comparison of methods used to detect pressure compartments. Some of the data sets used are in the public domain, allowing other investigators to test our techniques or to improve upon them using the same data. From the public-domain Stratton data set we have demonstrated that an apparent correlation between attributes derived along "phantom" horizons is an artifact of isopach changes; only if the interpreter understands that the interpretation is based on this correlation with bed thickening or thinning can reliable interpretations of channel horizons and facies be made.
From the public-domain Boonsville data set we developed techniques to use conventional seismic attributes, including seismic facies generated under various neural network procedures, to subdivide regional facies determined from logs into productive and non-productive subfacies, and we developed a method involving cross-correlation of seismic waveforms to provide a reliable map of the various facies present in the area. The Wamsutter data set led to the use of unconventional attributes, including lateral incoherence and horizon-dependent impedance variations, to indicate regions of former sand bars and current high pressure, respectively, and to the evaluation of various upscaling routines. The Teal South data set has provided a surprising set of results, leading us to develop a pressure-dependent velocity relationship and to conclude that nearby reservoirs are undergoing a pressure drop in response to the production of the main reservoir, implying that oil is being lost through their spill points, never to be produced. Additional results were found using the public-domain Waha and Woresham-Bayer data set, and some tests of technologies were made using 2D seismic lines from Michigan and the western Pacific Ocean.

  11. Calibration of Seismic Attributes for Reservoir Characterization

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wayne D. Pennington

    2002-09-29

    The project, "Calibration of Seismic Attributes for Reservoir Characterization," is now complete. Our original proposed scope of work included detailed analysis of seismic and other data from two to three hydrocarbon fields; we have analyzed data from four fields at this level of detail, two additional fields in less detail, and one other 2D seismic line used for experimentation. We also included time-lapse seismic data with ocean-bottom cable recordings in addition to the originally proposed static field data. A large number of publications and presentations have resulted from this work, including several that are in the final stages of preparation or printing; one of these is a chapter on "Reservoir Geophysics" for the new Petroleum Engineering Handbook from the Society of Petroleum Engineers. Major results from this project include a new approach to evaluating seismic attributes in time-lapse monitoring studies, evaluation of pitfalls in the use of point-based measurements and facies classifications, novel applications of inversion results, improved methods of tying seismic data to the wellbore, and a comparison of methods used to detect pressure compartments. Some of the data sets used are in the public domain, allowing other investigators to test our techniques or to improve upon them using the same data. From the public-domain Stratton data set we have demonstrated that an apparent correlation between attributes derived along "phantom" horizons is an artifact of isopach changes; only if the interpreter understands that the interpretation is based on this correlation with bed thickening or thinning can reliable interpretations of channel horizons and facies be made.
From the public-domain Boonsville data set we developed techniques to use conventional seismic attributes, including seismic facies generated under various neural network procedures, to subdivide regional facies determined from logs into productive and non-productive subfacies, and we developed a method involving cross-correlation of seismic waveforms to provide a reliable map of the various facies present in the area. The Wamsutter data set led to the use of unconventional attributes, including lateral incoherence and horizon-dependent impedance variations, to indicate regions of former sand bars and current high pressure, respectively, and to the evaluation of various upscaling routines. The Teal South data set has provided a surprising set of results, leading us to develop a pressure-dependent velocity relationship and to conclude that nearby reservoirs are undergoing a pressure drop in response to the production of the main reservoir, implying that oil is being lost through their spill points, never to be produced. Additional results were found using the public-domain Waha and Woresham-Bayer data set, and some tests of technologies were made using 2D seismic lines from Michigan and the western Pacific Ocean.

  12. Systems for low frequency seismic and infrasound detection of geo-pressure transition zones

    DOEpatents

    Shook, G. Michael; LeRoy, Samuel D.; Benzing, William M.

    2007-10-16

    Methods for determining the existence and characteristics of a gradational pressurized zone within a subterranean formation are disclosed. One embodiment involves employing an attenuation relationship between a seismic response signal and increasing wavelet wavelength, which relationship may be used to detect a gradational pressurized zone and/or determine characteristics thereof. In another embodiment, a method for analyzing data contained within a response signal for signal characteristics that may change in relation to the distance between an input signal source and the gradational pressurized zone is disclosed. In a further embodiment, the relationship between response signal wavelet frequency and comparative amplitude may be used to estimate an optimal wavelet wavelength or range of wavelengths used for data processing or input signal selection. Systems for seismic exploration and data analysis for practicing the above-mentioned method embodiments are also disclosed.

  13. Recent faulting in western Nevada revealed by multi-scale seismic reflection

    USGS Publications Warehouse

    Frary, R.N.; Louie, J.N.; Stephenson, W.J.; Odum, J.K.; Kell, A.; Eisses, A.; Kent, G.M.; Driscoll, N.W.; Karlin, R.; Baskin, R.L.; Pullammanappallil, S.; Liberty, L.M.

    2011-01-01

    The main goal of this study is to compare different reflection methods used to image subsurface structure in different physical environments in western Nevada. With all of the methods employed, the primary aim is fault imaging, providing structural information for geothermal exploration and seismic hazard estimation. We use seismic CHIRP (a swept-frequency marine acquisition system), a weight drop (an accelerated hammer source), and two different vibroseis systems to characterize fault structure. We focused our efforts on the Reno metropolitan area and the area within and surrounding Pyramid Lake in northern Nevada. These different methods have provided valuable constraints on fault geometry and activity, as well as associated fluid movement, which are critical for evaluating the potential for large earthquakes in these areas and the geothermal exploration possibilities near these structures. © 2011 Society of Exploration Geophysicists.

  14. An alternative approach to probabilistic seismic hazard analysis in the Aegean region using Monte Carlo simulation

    NASA Astrophysics Data System (ADS)

    Weatherill, Graeme; Burton, Paul W.

    2010-09-01

    The Aegean is the most seismically active and tectonically complex region in Europe. Damaging earthquakes have occurred here throughout recorded history, often resulting in considerable loss of life. The Monte Carlo method of probabilistic seismic hazard analysis (PSHA) is used to determine the level of ground motion likely to be exceeded in a given time period. Multiple random simulations of seismicity are generated to calculate, directly, the ground motion for a given site. Within the seismic hazard analysis we explore the impact of different seismic source models, incorporating both uniform zones and distributed seismicity. A new, simplified, seismic source model, derived from seismotectonic interpretation, is presented for the Aegean region. This is combined into the epistemic uncertainty analysis alongside existing source models for the region, and models derived by a K-means cluster analysis approach. Seismic source models derived using the K-means approach offer a degree of objectivity and reproducibility into the otherwise subjective approach of delineating seismic sources using expert judgment. Similar review and analysis is undertaken for the selection of peak ground acceleration (PGA) attenuation models, incorporating into the epistemic analysis Greek-specific models, European models and a Next Generation Attenuation model. Hazard maps for PGA on a "rock" site with a 10% probability of being exceeded in 50 years are produced and different source and attenuation models are compared. These indicate that Greek-specific attenuation models, with their smaller aleatory variability terms, produce lower PGA hazard, whilst recent European models and Next Generation Attenuation (NGA) model produce similar results. The Monte Carlo method is extended further to assimilate epistemic uncertainty into the hazard calculation, thus integrating across several appropriate source and PGA attenuation models. 
Site condition and fault-type are also integrated into the hazard mapping calculations. These hazard maps are in general agreement with previous maps for the Aegean, recognising the highest hazard in the Ionian Islands, Gulf of Corinth and Hellenic Arc. Peak Ground Accelerations for some sites in these regions reach as high as 500-600 cm s -2 using European/NGA attenuation models, and 400-500 cm s -2 using Greek attenuation models.
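    The Monte Carlo approach described above, simulating many synthetic catalogues and reading the hazard level directly from the simulated ground motions, can be sketched as follows (the Gutenberg-Richter parameters, the toy attenuation relation and its scatter are all illustrative assumptions, not the models or values used in the study):

```python
import numpy as np

rng = np.random.default_rng(42)

# Hypothetical source parameters (illustrative only)
a_rate = 5.0                 # mean annual number of M >= Mmin events
Mmin, Mmax, b_value = 4.5, 7.5, 1.0
n_years = 20_000             # length of the simulated catalogue

def sample_magnitudes(n):
    """Draw from a doubly truncated Gutenberg-Richter distribution."""
    beta = b_value * np.log(10.0)
    u = rng.random(n)
    return Mmin - np.log(1.0 - u * (1.0 - np.exp(-beta * (Mmax - Mmin)))) / beta

def pga_cm_s2(M, r_km):
    """Toy ground-motion relation with log-normal scatter (not a published GMPE)."""
    ln_pga = 0.5 + 1.2 * M - 1.3 * np.log(r_km + 10.0)
    return np.exp(ln_pga + rng.normal(0.0, 0.5, size=np.size(M)))

# Simulate the catalogue: Poisson event count per year, random distances 5-100 km
annual_max = np.zeros(n_years)
for i, n in enumerate(rng.poisson(a_rate, n_years)):
    if n == 0:
        continue
    annual_max[i] = pga_cm_s2(sample_magnitudes(n), rng.uniform(5.0, 100.0, n)).max()

# 10% exceedance probability in 50 years ~ 0.21% annual probability (475-yr return)
p_annual = 1.0 - (1.0 - 0.10) ** (1.0 / 50.0)
pga_475 = np.quantile(annual_max, 1.0 - p_annual)
```

    Epistemic uncertainty enters naturally in this framework: each simulated catalogue (or year) can draw its source model and attenuation relation at random from the alternative models, which is the integration across models that the abstract describes.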

  15. Toward 2D Seismic Wavefield Monitoring: Seismic Gradiometry for Long-Period Seismogram and Short-Period Seismogram Envelope applied to the Hi-net Array

    NASA Astrophysics Data System (ADS)

    Maeda, T.; Nishida, K.; Takagi, R.; Obara, K.

    2015-12-01

    The high-sensitivity seismograph network of Japan (Hi-net), operated by the National Research Institute for Earth Science and Disaster Prevention (NIED), has about 800 stations with an average separation of 20 km. Because this station separation is shorter than the wavelength, long-period seismic wave propagation can be observed as a 2D wavefield. Short-period waves, in contrast, are quite incoherent between stations; their envelope shapes, however, are similar at neighboring stations, so seismic wave energy propagation may be extracted by seismogram envelope analysis. We attempted to characterize the seismic waveform at long periods, and its envelope at short periods, as a 2D wavefield by applying seismic gradiometry. We applied seismic gradiometry to a synthetic long-period (20-50 s) dataset prepared by numerical simulation in a realistic 3D medium at the Hi-net station layout. Wave amplitude and its spatial derivatives are estimated using data at nearby stations. The slowness vector, the radiation pattern, and the geometrical spreading are extracted from the estimated velocity, displacement, and spatial derivatives. At short periods (shorter than 1 s), the seismogram envelope shows temporal and spatial broadening through scattering by medium heterogeneity, so the envelope shape is expected to be coherent among nearby stations. Based on this idea, we applied the same method to the time-integrated seismogram envelope to estimate its spatial derivatives. In synthetic tests we succeeded in estimating the slowness vector from the seismogram envelope, without using phase information, as well as from long-period waveforms. Our preliminary results show that seismic gradiometry suits the Hi-net for extracting wave propagation characteristics at both long and short periods. The method is appealing because it estimates waves on a homogeneous grid, allowing seismic waves to be monitored as a wavefield.
By applying seismic gradiometry to the Hi-net, it is promising to obtain phase velocity variations from direct waves and to track wave packets originating from scattering in the coda.
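    The two gradiometry steps, a local least-squares fit for the wave amplitude and its spatial derivatives, followed by a slowness estimate from the ratio of spatial to temporal derivatives, can be sketched for a synthetic plane wave (the station layout, slowness and wavelet below are assumptions for illustration, not Hi-net data):

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic plane wave u(x, y, t) = f(t - px*x - py*y) crossing a small array
px_true, py_true = 0.25, -0.15                   # slowness components (s/km)
dt = 0.01
t = np.arange(0.0, 10.0, dt)
f = lambda tau: np.exp(-((tau - 5.0) ** 2) / 2.0)   # smooth wavelet (sigma = 1 s)

xy = rng.uniform(-0.5, 0.5, size=(12, 2))        # station coords (km) near grid point
u = np.array([f(t - px_true * x - py_true * y) for x, y in xy])

# Step 1: per-sample least-squares plane fit u_i ~ u0 + x_i*du/dx + y_i*du/dy
G = np.column_stack([np.ones(len(xy)), xy])      # design matrix [1, x, y]
(u0, ux, uy), *_ = np.linalg.lstsq(G, u, rcond=None)

# Step 2: for a plane wave, slowness = -(spatial derivative) / (time derivative)
ut = np.gradient(u0, dt)
mask = np.abs(ut) > 0.2 * np.abs(ut).max()       # avoid dividing near nodes of u_t
px_est = np.median(-ux[mask] / ut[mask])
py_est = np.median(-uy[mask] / ut[mask])
```

    The same machinery applied to a smooth, positive envelope (instead of the waveform) is what allows the slowness estimate without phase information, as described in the abstract.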

  16. Precisely relocated seismicity using 3-D seismic velocity model by double-difference tomography method and orogenic processes in central and southern Taiwan

    NASA Astrophysics Data System (ADS)

    Nagai, S.; Wu, Y.; Suppe, J.; Hirata, N.

    2009-12-01

    The island of Taiwan lies in an ongoing arc-continent collision zone between the Philippine Sea Plate and the Eurasian Plate. Numerous geophysical and geological studies have been carried out in and around Taiwan to develop models that explain the tectonic processes of the region. The active, young tectonics and the associated high seismicity provide a unique opportunity to explore and understand the processes related to the arc-continent collision. Nagai et al. [2009] imaged eastward-dipping, alternating high- and low-velocity bodies at depths of 5 to 25 km from the western side of the Central Mountain Range to the eastern part of Taiwan, by double-difference tomography [Zhang and Thurber, 2003] using three temporary seismic networks together with the Central Weather Bureau Seismic Network (CWBSN). The three temporary networks comprise the aftershock observation following the 1999 Chi-Chi, Taiwan, earthquake and two dense linear arrays, one across central Taiwan in 2001 and another across southern Taiwan in 2005. We proposed a new orogenic model, the 'Upper Crustal Stacking Model', inferred from our tomographic images. To resolve the detailed seismic structure further, we continue relocating earthquakes more precisely in central and southern Taiwan, using the three-dimensional velocity model [Nagai et al., 2009] and P- and S-wave arrival times from both the CWBSN and the three temporary networks. We use the double-difference tomography method to improve relative and absolute location accuracy simultaneously. The relocated seismicity is concentrated along, and limited to, parts of the boundaries between low- and high-velocity bodies. In particular, earthquakes beneath the eastern Central Range triggered by the 1999 Chi-Chi earthquake delineate subsurface structural boundaries when compared with profiles of the estimated seismic velocity.
The relocated catalog and 3-D seismic velocity model provide constraints for reconstructing the orogenic model of Taiwan. We present the relocated seismicity with P- and S-wave velocity profiles, focal mechanisms [e.g. Wu et al., 2008], and their spatio-temporal variation in central and southern Taiwan, and discuss tectonic processes in Taiwan.

  17. Coherent Waves in Seismic Researches

    NASA Astrophysics Data System (ADS)

    Emanov, A.; Seleznev, V. S.

    2013-05-01

    The development of digital processing algorithms that pick useful events from seismic wave fields, in order to study the subsurface and other objects, is the basis for new seismic techniques. The present paper builds on a fundamental property of seismic wave fields: coherence. The authors extend the classification of coherence types in observed wave fields and devise a technique for selecting coherent components from an observed wave field. Time coherence and space coherence are widely known; here the concept of "parameter coherence" is added. The parameter with respect to which a wave field is coherent can vary widely, because a wave field is a multivariate process described by a set of parameters; coherence essentially means a linear relationship within the wave field as a function of that parameter. In confined spaces, in buildings, and in stratified media, time-coherent standing waves are formed. In prospecting seismology, with observation systems using multiple overlap, head waves are coherent along the line of parallel correlation, i.e. with respect to one coordinate on the generalized plane of the observation system. For detailed prospecting seismology, algorithms based on this single-coordinate coherence have been developed that convert seismic records into head-wave time sections containing neither reflected nor other wave types. The conversion to a time section can be performed on any specified observation base. Within the recording area of head waves, the energy of head waves is stacked relative to noise according to the multiplicity of the observation system; conversion on a base smaller than the wave-tracking area incurs a loss in signal-to-noise ratio relative to the maximum the observation system can provide.
The construction of head-wave time sections and dynamic plots has been developed as the basis of automatic processing, similar to the CDP procedure in the reflected-wave method. Using these head-wave conversion algorithms, studies of refracting boundaries in Siberia have been carried out. Beyond refraction surveys proper, applying the head-wave conversion to reflected-wave seismograms yields information about refracting horizons in the upper part of the section in addition to the reflecting horizons. The recovery of the coherent components of the wave field also underlies engineering seismology at its current level of accuracy and detail. In seismic microzoning, the resonance frequencies of the upper part of the section are determined with this method, and maps of oscillation amplification, together with their accuracy, are constructed for each frequency. The same method makes it possible to study the standing-wave field in buildings and structures with high accuracy and detail, enabling diagnostics of their physical state from the set of natural frequencies and mode shapes of self-oscillation. The standing-wave method thus permits the seismic stability of a structure to be assessed at a new level of accuracy.
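    The microzoning step, determining resonance frequencies of the upper section from the coherent (standing-wave) part of the record, amounts in its simplest form to finding stable spectral peaks across many time windows. A minimal sketch (the 3.2 Hz resonance, sampling rate and noise level are assumed values, not data from the study):

```python
import numpy as np

rng = np.random.default_rng(2)

# Synthetic record: a resonance buried in noise; read the peak from the
# amplitude spectrum averaged over many windows.
fs, nwin, wlen = 100.0, 50, 512      # sampling rate (Hz), window count, window length
f0 = 3.2                             # assumed resonance frequency (Hz)
t = np.arange(wlen) / fs

spec = np.zeros(wlen // 2 + 1)
for _ in range(nwin):
    x = np.sin(2 * np.pi * f0 * t + rng.uniform(0, 2 * np.pi)) \
        + rng.standard_normal(wlen)  # resonance with random phase + white noise
    spec += np.abs(np.fft.rfft(x))   # stack amplitude spectra across windows

freqs = np.fft.rfftfreq(wlen, 1.0 / fs)
f_res = freqs[np.argmax(spec[1:]) + 1]   # skip the DC bin
```

    Averaging over windows is what separates the coherent standing-wave component from incoherent noise, in the spirit of the coherent-component recovery the paper describes.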

  18. A Comprehensive Seismic Characterization of the Cove Fort-Sulphurdale Geothermal Site, Utah

    NASA Astrophysics Data System (ADS)

    Zhang, H.; Li, J.; Zhang, X.; Liu, Y.; Kuleli, H. S.; Toksoz, M. N.

    2012-12-01

    The Cove Fort-Sulphurdale geothermal area is located in the transition zone between the extensional Basin and Range Province to the west and the uplifted Colorado Plateau to the east. The region around the geothermal site has the highest heat flow values of over 260 mWm-2 in Utah. To better understand the structure around the geothermal site, the MIT group deployed 10 seismic stations for a period of one year from August 2010. The local seismic network detected over 500 local earthquakes, from which ~200 events located within the network were selected for further analysis. Our seismic analysis is focused on three aspects: seismic velocity and attenuation tomography, seismic event focal mechanism analysis, and seismic shear wave splitting analysis. First P- and S-wave arrivals are picked manually and then the waveform cross-correlation technique is applied to obtain more accurate differential times between event pairs observed on common stations. The double-difference tomography method of Zhang and Thurber (2003) is used to simultaneously determine Vp and Vs models and seismic event locations. For the attenuation tomography, we first calculate t* values from spectrum fitting and then invert them to get Q models based on known velocity models and seismic event locations. Due to the limited station coverage and relatively low signal to noise ratio, many seismic waveforms do not have clear first P arrival polarities and as a result the conventional focal mechanism determination method relying on the polarity information is not applicable. Therefore, we used the full waveform matching method of Li et al. (2010) to determine event focal mechanisms. For the shear wave splitting analysis, we used the cross-correlation method to determine the delay times between fast and slow shear waves and the polarization angles of fast shear waves. 
The delay times are then used to image the anisotropy percentage distribution in three dimensions with the shear wave splitting tomography method of Zhang et al. (2007). Over the study region, velocity is generally lower and attenuation higher in the western part. Correspondingly, the anisotropy there is also stronger, indicating that fractures may be more developed in the western part. The average polarization directions of the fast shear waves at each station mostly point NNE. Focal mechanism analysis of selected events shows that the normal-faulting events strike in the NNE direction, and the strike-slip events strike either parallel to the NNE-trending faults or along their conjugates. Assuming the maximum horizontal stress (SHmax) is parallel to the strike of the normal-faulting events and bisects the two fault planes of the strike-slip events, the inverted source mechanisms suggest an NNE-oriented maximum horizontal stress regime. The area is under E-W tensional stress, so the maximum compressional stress should in general lie in the N-S or NNE direction. The combination of shear wave splitting and focal mechanism analysis suggests that the faults and fractures in this region are aligned in the NNE direction.
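    The cross-correlation delay estimate underlying the splitting analysis can be sketched on synthetic fast and slow shear waves (the wavelet, sampling rate and the 0.06 s split below are illustrative assumptions, not Cove Fort data):

```python
import numpy as np

def xcorr_delay(a, b, dt):
    """Delay of b relative to a (seconds), from the peak of the cross-correlation."""
    n = len(a)
    cc = np.correlate(b, a, mode="full")
    lag = np.argmax(cc) - (n - 1)
    return lag * dt

# Synthetic "fast" and "slow" shear waves: same wavelet, slow arrives 0.06 s later
dt = 0.005
t = np.arange(0.0, 2.0, dt)
wavelet = lambda t0: np.sin(2 * np.pi * 4 * (t - t0)) * np.exp(-((t - t0) ** 2) / 0.02)
fast = wavelet(0.80)
slow = 0.7 * wavelet(0.86)           # delayed and attenuated copy

delay = xcorr_delay(fast, slow, dt)
```

    In practice the horizontals are first rotated to the fast/slow frame (e.g. by grid search over polarization angle) before the delay is measured; the sketch shows only the delay-estimation step.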

  19. Post-blasting seismicity in Rudna copper mine, Poland - source parameters analysis.

    NASA Astrophysics Data System (ADS)

    Caputa, Alicja; Rudziński, Łukasz; Talaga, Adam

    2017-04-01

    A major hazard in Polish copper mines is high seismicity and the associated rockbursts. Many methods are used to reduce the seismic hazard; among the most effective is preventive blasting in potentially hazardous mining panels. Such blasting is expected to provoke small to moderate tremors (up to M2.0) and thereby release accumulated stress in the rock mass. This work presents an analysis of post-blasting events in the Rudna copper mine, Poland. Using full moment tensor (MT) inversion and seismic spectral analysis, we try to identify characteristic features of post-blasting seismic sources. Source parameters estimated for post-blasting events are compared with those of non-provoked mining events that occurred in the vicinity of the provoked sources. Our studies show that the focal mechanisms of events occurring after blasts have similar MT decompositions, namely a relatively strong isotropic component compared with that of non-provoked events. The source parameters obtained from spectral analysis also show that provoked seismicity has distinctive source physics; among other indicators, this is visible in the S-to-P wave energy ratio, which is higher for non-provoked events. The comparison of all our results reveals three possible groups of sources: (a) events occurring just after blasts, (b) events occurring from 5 min to 24 h after blasts, and (c) non-provoked seismicity (more than 24 h after blasting). Acknowledgements: This work was supported within statutory activities No. 3841/E-41/S/2016 of the Ministry of Science and Higher Education of Poland.
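    The isotropic component the abstract compares comes from the standard isotropic/deviatoric split of the moment tensor. A minimal sketch (the example tensors are hypothetical, not Rudna solutions, and the full ISO/DC/CLVD decomposition used in practice has more terms):

```python
import numpy as np

def iso_percentage(M):
    """Isotropic share of a moment tensor, from the iso/deviatoric split.
    Simplified measure: |tr(M)/3| vs. the largest deviatoric eigenvalue."""
    m_iso = abs(np.trace(M) / 3.0)
    dev = M - np.trace(M) / 3.0 * np.eye(3)
    m_dev = np.max(np.abs(np.linalg.eigvalsh(dev)))
    return 100.0 * m_iso / (m_iso + m_dev)

# Hypothetical post-blast source: strong volumetric (explosive-like) component
M_blast = np.diag([1.0, 1.0, 1.0]) + 0.3 * np.diag([1.0, 0.0, -1.0])
# Hypothetical pure-shear (double-couple) source: zero trace, no isotropic part
M_shear = np.array([[0.0, 1.0, 0.0],
                    [1.0, 0.0, 0.0],
                    [0.0, 0.0, 0.0]])
```

    Applied to real MT solutions, this kind of measure is what separates the blast-provoked population (large isotropic share) from ordinary shear-dominated mining tremors.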

  20. Bayesian Estimation of the Spatially Varying Completeness Magnitude of Earthquake Catalogs

    NASA Astrophysics Data System (ADS)

    Mignan, A.; Werner, M.; Wiemer, S.; Chen, C.; Wu, Y.

    2010-12-01

    Assessing the completeness magnitude Mc of earthquake catalogs is an essential prerequisite for any seismicity analysis. We employ a simple model to compute Mc in space, based on proximity to the seismic stations of a network. We show that a relationship of the form Mc_pred(d) = a d^b + c, with d the distance to the 5th nearest seismic station, fits the observations well. We then propose a new Mc mapping approach, the Bayesian Magnitude of Completeness (BMC) method, based on a two-step procedure: (1) a spatial resolution optimization to minimize spatial heterogeneities and uncertainties in Mc estimates, and (2) a Bayesian approach that merges prior information about Mc, based on the proximity to seismic stations, with locally observed values weighted by their respective uncertainties. This new methodology eliminates most weaknesses associated with current Mc mapping procedures: the radius that defines which earthquakes to include in the local magnitude distribution is chosen according to an objective criterion, and there are no gaps in the spatial estimation of Mc. The method solely requires the coordinates of the seismic stations. Here, we investigate the Taiwan Central Weather Bureau (CWB) earthquake catalog by computing an Mc map for the period 1994-2010.
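    Both ingredients of the BMC method can be sketched compactly: fitting Mc_pred(d) = a d^b + c to observations, and the inverse-variance Bayesian merge of prior and local estimates. The coefficients and synthetic data below are assumptions for illustration, not the Taiwan values:

```python
import numpy as np

# Synthetic Mc-vs-distance observations around assumed coefficients
rng = np.random.default_rng(1)
a_true, b_true, c_true = 0.3, 0.5, 1.0
d_obs = rng.uniform(2.0, 150.0, 200)        # distance to 5th nearest station (km)
mc_obs = a_true * d_obs**b_true + c_true + rng.normal(0.0, 0.1, d_obs.size)

# Numpy-only fit of Mc_pred(d) = a*d**b + c: grid-search the exponent b,
# solving for (a, c) linearly at each candidate b
best = (np.inf, None)
for b in np.linspace(0.1, 1.5, 141):
    G = np.column_stack([d_obs**b, np.ones_like(d_obs)])
    (a, c), *_ = np.linalg.lstsq(G, mc_obs, rcond=None)
    misfit = np.sum((G @ np.array([a, c]) - mc_obs) ** 2)
    if misfit < best[0]:
        best = (misfit, (a, b, c))
a_fit, b_fit, c_fit = best[1]

def bmc_posterior(mc_prior, sigma_prior, mc_local, sigma_local):
    """The "B" in BMC: merge the distance-based prior with a locally observed
    Mc, each weighted by its inverse variance (Gaussian product)."""
    w1, w2 = 1.0 / sigma_prior**2, 1.0 / sigma_local**2
    return (w1 * mc_prior + w2 * mc_local) / (w1 + w2)
```

    The posterior reduces to the prior where no local data exist, which is how the method avoids gaps in the Mc map.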

  1. Quantifying the similarity of seismic polarizations

    NASA Astrophysics Data System (ADS)

    Jones, Joshua P.; Eaton, David W.; Caffagni, Enrico

    2016-02-01

    Assessing the similarities of seismic attributes can help identify tremor, low signal-to-noise (S/N) signals and converted or reflected phases, in addition to diagnosing site noise and sensor misalignment in arrays. Polarization analysis is a widely accepted method for studying the orientation and directional characteristics of seismic phases via computed attributes, but similarity is ordinarily discussed using qualitative comparisons with reference values or known seismic sources. Here we introduce a technique for quantitative polarization similarity that uses weighted histograms computed in short, overlapping time windows, drawing on methods adapted from the image processing and computer vision literature. Our method accounts for ambiguity in azimuth and incidence angle and variations in S/N ratio. Measuring polarization similarity allows easy identification of site noise and sensor misalignment and can help identify coherent noise and emergent or low S/N phase arrivals. Dissimilar azimuths during phase arrivals indicate misaligned horizontal components, dissimilar incidence angles during phase arrivals indicate misaligned vertical components and dissimilar linear polarization may indicate a secondary noise source. Using records of the Mw = 8.3 Sea of Okhotsk earthquake, from Canadian National Seismic Network broad-band sensors in British Columbia and Yukon Territory, Canada, and a vertical borehole array at Hoadley gas field, central Alberta, Canada, we demonstrate that our method is robust to station spacing. Discrete wavelet analysis extends polarization similarity to the time-frequency domain in a straightforward way. Time-frequency polarization similarities of borehole data suggest that a coherent noise source may have persisted above 8 Hz several months after peak resource extraction from a `flowback' type hydraulic fracture.
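    The core idea, weighted polarization histograms compared across windows, with the 180-degree azimuth ambiguity folded out, can be sketched on synthetic two-component windows (polarization angles, noise level and bin width below are illustrative assumptions, not values from the study):

```python
import numpy as np

def azimuth_histogram(e, n, bins=36):
    """Weighted azimuth histogram for one window: per-sample back-azimuth,
    weighted by horizontal amplitude, folded to 0-180 deg to handle the
    180-degree polarization ambiguity."""
    az = np.degrees(np.arctan2(e, n)) % 180.0
    w = np.hypot(e, n)
    h, _ = np.histogram(az, bins=bins, range=(0.0, 180.0), weights=w)
    s = h.sum()
    return h / s if s > 0 else h

def histogram_intersection(h1, h2):
    """Similarity in [0, 1]; 1 for identical normalized histograms."""
    return np.minimum(h1, h2).sum()

# Two windows with similar linear polarization, and one rotated by 60 degrees
rng = np.random.default_rng(3)
t = np.linspace(0.0, 1.0, 500)
sig = np.sin(2 * np.pi * 5 * t)

def window(theta_deg, noise=0.05):
    th = np.radians(theta_deg)
    return (np.sin(th) * sig + noise * rng.standard_normal(t.size),   # east
            np.cos(th) * sig + noise * rng.standard_normal(t.size))   # north

hA = azimuth_histogram(*window(30.0))
hB = azimuth_histogram(*window(32.0))
hC = azimuth_histogram(*window(90.0))
```

    Here hA and hB intersect strongly while hA and hC do not, which is the kind of contrast that flags misaligned components or a secondary noise source when it appears systematically across a station pair.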

  2. Tsunami hazard assessment and monitoring for the Black Sea area

    NASA Astrophysics Data System (ADS)

    Partheniu, Raluca; Ionescu, Constantin; Constantin, Angela; Moldovan, Iren; Diaconescu, Mihail; Marmureanu, Alexandru; Radulian, Mircea; Toader, Victorin

    2016-04-01

    NIEP has recently advanced its research on tsunamis in the Black Sea. As part of the routine earthquake and tsunami monitoring activity, the first tsunami early-warning system in the Black Sea was implemented in 2013 and has been active since. To monitor the seismic activity of the Black Sea, NIEP uses a total of 114 real-time stations and 2 seismic arrays, 18 of the stations being located in the Dobrogea area, in the vicinity of the Romanian Black Sea shoreline. In addition, a data exchange with the countries surrounding the Black Sea provides real-time data from 17 stations in Bulgaria, Turkey, Georgia and Ukraine, improving the capability of the Romanian Seismic Network to monitor and more accurately locate earthquakes in the Black Sea area. For tsunami monitoring and warning, 6 sea-level monitoring stations, 1 infrasound barometer, 3 offshore marine buoys and 7 GPS/GNSS stations are installed at different locations along and near the Romanian shoreline. In the framework of the ASTARTE project, several objectives regarding seismic hazard and tsunami wave height assessment for the Black Sea were accomplished. The seismic hazard estimation was based on statistical studies of the seismic sources and their characteristics, compiled from different seismic catalogues. Two probabilistic methods were used to evaluate the seismic hazard: the Cornell method, based on the Gutenberg-Richter distribution parameters, and the Gumbel method, based on extreme-value statistics. The results give the maximum possible magnitudes and their recurrence periods for each seismic source. Using the Tsunami Analysis Tool (TAT) software, a set of tsunami modelling scenarios was generated for the Shabla area, the seismic source that could most affect the Romanian shore.
These simulations are organized in a database in order to establish the maximum tsunami waves that could be generated and the minimum magnitudes that could trigger tsunamis in this area. Particularities of the Shabla source include past observed magnitudes > 7 and a recurrence period of 175 years. Other important objectives of NIEP are to continue monitoring the seismic activity of the Black Sea, to extend the database of tsunami simulations for this area, to estimate near-real-time fault plane solutions for the warning system, and to add new seismic, GPS/GNSS and sea-level monitoring equipment to the existing network. Acknowledgements: This work was partially supported by the FP7-ENV2013 6.4-3 "Assessment, Strategy And Risk Reduction for Tsunamis in Europe" (ASTARTE) Project 603839/2013 and PNII, Capacity Module III ASTARTE RO Project 268/2014. This work was partially supported by the "Global Tsunami Informal Monitoring Service - 2" (GTIMS2) Project, JRC/IPR/2015/G.2/2006/NC 260286, Ref. Ares (2015)1440256 - 01.04.2015.
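    The Cornell-style hazard input relies on the Gutenberg-Richter law log10 N(>=m) = a - b*m (N in events per year), whose inverse gives the mean recurrence period of a given magnitude. A minimal sketch with assumed coefficients (not the Shabla catalogue fit):

```python
# Gutenberg-Richter recurrence sketch; a_gr and b_gr are hypothetical values
a_gr, b_gr = 4.2, 0.9

def recurrence_period_years(m):
    """Mean recurrence period (years) of events with magnitude >= m."""
    return 10.0 ** (b_gr * m - a_gr)
```

    For example, with these assumed coefficients the recurrence period of M >= 7 events is on the order of a century, the same kind of figure the abstract quotes (175 years) for the Shabla source.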

  3. A performance goal-based seismic design philosophy for waste repository facilities

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hossain, Q.A.

    1994-12-31

    A performance goal-based seismic design philosophy, compatible with DOE's present natural phenomena hazards mitigation and "graded approach" philosophy, has been proposed for high level nuclear waste repository facilities. The rationale, evolution, and the desirable features of this method have been described. Why and how the method should and can be applied to the design of a repository facility are also discussed.

  4. Qualitative and quantitative comparison of geostatistical techniques of porosity prediction from the seismic and logging data: a case study from the Blackfoot Field, Alberta, Canada

    NASA Astrophysics Data System (ADS)

    Maurya, S. P.; Singh, K. H.; Singh, N. P.

    2018-05-01

    In the present study, three recently developed geostatistical methods, single-attribute analysis, multi-attribute analysis and a probabilistic neural network algorithm, have been used to predict porosity in the inter-well region of the Blackfoot field, Alberta, Canada. These techniques make use of seismic attributes generated by model-based inversion and colored-inversion techniques. The principal objective of the study is to find a suitable combination of seismic inversion and geostatistical techniques to predict porosity and to identify prospective zones in the 3D seismic volume. The porosity estimated from these geostatistical approaches is corroborated with the well-log porosity. The results suggest that all three implemented geostatistical methods are efficient and reliable for predicting porosity, but the multi-attribute and probabilistic neural network analyses provide more accurate and higher-resolution porosity sections. A low-impedance (6000-8000 m/s g/cc) and high-porosity (> 15%) zone is interpreted from the inverted impedance and porosity sections, respectively, between 1060 and 1075 ms and is characterized as a reservoir. The qualitative and quantitative results demonstrate that, of all the employed geostatistical methods, the probabilistic neural network along with model-based inversion is the most efficient for predicting porosity in the inter-well region.
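In its simplest linear form, the multi-attribute analysis referred to above regresses well-log porosity on several seismic attributes and applies the fitted weights between wells. A minimal sketch with synthetic stand-in data; all attribute values and weights are invented.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-ins for three seismic attributes sampled at well locations
# (e.g. inverted impedance, amplitude envelope, instantaneous frequency).
n = 200
attrs = rng.normal(size=(n, 3))
true_w = np.array([-0.04, 0.01, 0.005])          # invented "true" weights
porosity = 0.15 + attrs @ true_w + rng.normal(scale=0.002, size=n)

# Multi-attribute linear regression: intercept + weights by least squares
A = np.column_stack([np.ones(n), attrs])
w, *_ = np.linalg.lstsq(A, porosity, rcond=None)
predicted = A @ w
```

A probabilistic neural network replaces the linear operator with a kernel-weighted average of training targets, which is what gives the more accurate, higher-resolution sections reported in the abstract.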

  5. Computation of dynamic seismic responses to viscous fluid of digitized three-dimensional Berea sandstones with a coupled finite-difference method.

    PubMed

    Zhang, Yang; Toksöz, M Nafi

    2012-08-01

    The seismic response of saturated porous rocks is studied numerically using microtomographic images of three-dimensional digitized Berea sandstones. A stress-strain calculation is employed to compute the velocities and attenuations of rock samples whose sizes are much smaller than the seismic wavelength of interest. To compensate for the contributions of small cracks lost in the imaging process to the total velocity and attenuation, a hybrid method is developed to recover the crack distribution, in which the differential effective medium theory, the Kuster-Toksöz model, and a modified squirt-flow model are utilized in a two-step Monte Carlo inversion. In the inversion, the velocities of P- and S-waves measured for the dry and water-saturated cases, and the measured attenuation of P-waves for different fluids are used. By using such a hybrid method, both the velocities of saturated porous rocks and the attenuations are predicted accurately when compared to laboratory data. The hybrid method is a practical way to model numerically the seismic properties of saturated porous rocks until very high resolution digital data are available. Cracks lost in the imaging process are critical for accurately predicting velocities and attenuations of saturated porous rocks.

  6. Fast kinematic ray tracing of first- and later-arriving global seismic phases

    NASA Astrophysics Data System (ADS)

    Bijwaard, Harmen; Spakman, Wim

    1999-11-01

    We have developed a ray tracing algorithm that traces first- and later-arriving global seismic phases precisely (traveltime errors of the order of 0.1 s), and with great computational efficiency (15 rays s-1). To achieve this, we have extended and adapted two existing ray tracing techniques: a graph method and a perturbation method. The two resulting algorithms are able to trace (critically) refracted, (multiply) reflected, some diffracted (Pdiff), and (multiply) converted seismic phases in a 3-D spherical geometry, thus including the largest part of seismic phases that are commonly observed on seismograms. We have tested and compared the two methods in 2-D and 3-D Cartesian and spherical models, for which both algorithms have yielded precise paths and traveltimes. These tests indicate that only the perturbation method is computationally efficient enough to perform 3-D ray tracing on global data sets of several million phases. To demonstrate its potential for non-linear tomography, we have applied the ray perturbation algorithm to a data set of 7.6 million P and pP phases used by Bijwaard et al. (1998) for linearized tomography. This showed that the expected heterogeneity within the Earth's mantle leads to significant non-linear effects on traveltimes for 10 per cent of the applied phases.
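The graph (shortest-path) method mentioned above treats grid nodes as a network whose edge weights are traveltimes, and finds first arrivals with Dijkstra's algorithm. A minimal 2-D sketch; the eight-neighbour stencil and averaged slowness are simplifying assumptions, not the authors' exact scheme.

```python
import heapq

def traveltimes(vel, src, h=1.0):
    """First-arrival traveltimes on a 2-D velocity grid via Dijkstra's
    algorithm, the essence of graph-based (shortest-path) ray tracing."""
    ny, nx = len(vel), len(vel[0])
    t = [[float("inf")] * nx for _ in range(ny)]
    t[src[0]][src[1]] = 0.0
    pq = [(0.0, src)]
    nbrs = [(-1, 0), (1, 0), (0, -1), (0, 1),
            (-1, -1), (-1, 1), (1, -1), (1, 1)]
    while pq:
        tt, (i, j) = heapq.heappop(pq)
        if tt > t[i][j]:
            continue                      # stale queue entry
        for di, dj in nbrs:
            ni, nj = i + di, j + dj
            if 0 <= ni < ny and 0 <= nj < nx:
                dist = h * (2 ** 0.5 if di and dj else 1.0)
                slow = 0.5 * (1.0 / vel[i][j] + 1.0 / vel[ni][nj])
                cand = tt + dist * slow   # edge time = length x mean slowness
                if cand < t[ni][nj]:
                    t[ni][nj] = cand
                    heapq.heappush(pq, (cand, (ni, nj)))
    return t
```

The perturbation method favoured in the abstract instead bends a reference ray toward stationarity, which is what makes data sets of millions of phases tractable.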

  7. Reply

    NASA Astrophysics Data System (ADS)

    Wang, Zhenming; Shi, Baoping; Kiefer, John D.; Woolery, Edward W.

    2004-06-01

    Musson's comments on our article, ``Communicating with uncertainty: A critical issue with probabilistic seismic hazard analysis'' are an example of myths and misunderstandings. We did not say that probabilistic seismic hazard analysis (PSHA) is a bad method, but we did say that it has some limitations that have significant implications. Our response to these comments follows. There is no consensus on exactly how to select seismological parameters and to assign weights in PSHA. This was one of the conclusions reached by a senior seismic hazard analysis committee [SSHAC, 1997] that included C. A. Cornell, founder of the PSHA methodology. The SSHAC report was reviewed by a panel of the National Research Council and was well accepted by seismologists and engineers. As an example of the lack of consensus, Toro and Silva [2001] produced seismic hazard maps for the central United States region that are quite different from those produced by Frankel et al. [2002] because they used different input seismological parameters and weights (see Table 1). We disagree with Musson's conclusion that ``because a method may be applied badly on one occasion does not mean the method itself is bad.'' We do not say that the method is poor, but rather that those who use PSHA need to document their inputs and communicate them fully to the users. It seems that Musson is trying to create myth by suggesting his own methods should be used.

  8. Integral Analysis of Seismic Refraction and Ambient Vibration Survey for Subsurface Profile Evaluation

    NASA Astrophysics Data System (ADS)

    Hazreek, Z. A. M.; Kamarudin, A. F.; Rosli, S.; Fauziah, A.; Akmal, M. A. K.; Aziman, M.; Azhar, A. T. S.; Ashraf, M. I. M.; Shaylinda, M. Z. N.; Rais, Y.; Ishak, M. F.; Alel, M. N. A.

    2018-04-01

    Geotechnical site investigation, also known as subsurface profile evaluation, is the process of determining subsurface layer characteristics that are ultimately used in the design and construction phases. Traditionally, site investigation has been performed by drilling, which suffers from several limitations related to cost, time, data coverage and sustainability. To overcome those problems, this study adopted surface techniques, the seismic refraction and ambient vibration methods, for subsurface profile depth evaluation. Seismic refraction data acquisition and processing were performed using ABEM Terraloc equipment and OPTIM software, respectively, while ambient vibration data acquisition and processing were performed using CityShark II and Lennartz equipment and GEOPSY software. It was found that the studied area consists of two layers, representing overburden and bedrock geomaterials, based on the analyzed p-wave velocities (vp = 300 – 2500 m/s and vp > 2500 m/s) and natural frequencies (Fo = 3.37 – 3.90 Hz). Further analysis found that both methods show good similarity in terms of depth and thickness, with accuracies of 60 – 97%. Consequently, this study has demonstrated that the seismic refraction and ambient vibration methods are applicable to estimating subsurface profile depth and thickness. Moreover, the surface techniques adopted in this study, which are considered non-destructive, are able to complement the conventional drilling method in terms of cost, time, data coverage and environmental sustainability.
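The two-layer interpretation above follows the standard refraction formulas relating the intercept time of the refracted branch to refractor depth. A minimal sketch using the textbook expressions (not the OPTIM workflow itself):

```python
import math

def refractor_depth(v1, v2, t_i):
    """Depth to a flat refractor from the intercept time t_i of the
    refracted arrival: h = t_i * v1 * v2 / (2 * sqrt(v2^2 - v1^2))."""
    return t_i * v1 * v2 / (2.0 * math.sqrt(v2 ** 2 - v1 ** 2))

def crossover_distance(v1, v2, h):
    """Offset at which the refracted arrival overtakes the direct wave:
    x_cr = 2 * h * sqrt((v2 + v1) / (v2 - v1))."""
    return 2.0 * h * math.sqrt((v2 + v1) / (v2 - v1))
```

With the velocity ranges quoted in the abstract (overburden below 2500 m/s over faster bedrock), these formulas give the depth estimates that are then cross-checked against the ambient-vibration natural frequency.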

  9. Locating scatterers while drilling using seismic noise due to tunnel boring machine

    NASA Astrophysics Data System (ADS)

    Harmankaya, U.; Kaslilar, A.; Wapenaar, K.; Draganov, D.

    2018-05-01

    Unexpected geological structures can cause safety and economic risks during underground excavation. Therefore, predicting possible geological threats while drilling a tunnel is important for operational safety and for preventing expensive standstills. Subsurface information for tunneling is provided by exploratory wells and by surface geological and geophysical investigations, which are limited by location and resolution, respectively. For detailed information about the structures ahead of the tunnel face, geophysical methods are applied during the tunnel-drilling activity. We present a method inspired by seismic interferometry and ambient-noise correlation that can be used for detecting scatterers, such as boulders and cavities, ahead of a tunnel while drilling. A similar method has been proposed for active-source seismic data and validated using laboratory and field data. Here, we propose to utilize the seismic noise generated by a Tunnel Boring Machine (TBM), and recorded at the surface. We explain our method using data from finite-difference modelling of noise-source wave propagation in a medium where scatterers are present. Using the modelled noise records, we apply cross-correlation to obtain correlation gathers. After isolating the scattered arrivals in these gathers, we cross-correlate again and invert the correlated traveltimes to locate the scatterers. We show the potential of the method for locating scatterers while drilling using noise records due to the TBM.
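The correlation step at the heart of the method can be illustrated with a 1-D toy in which a delayed copy of continuous noise stands in for a scattered arrival; cross-correlation recovers the differential traveltime. All numbers are invented.

```python
import numpy as np

rng = np.random.default_rng(1)
fs = 100.0                              # sampling rate, Hz
n = 4000
noise = rng.normal(size=n)              # stand-in for continuous TBM noise

# Receiver B records the same noise 0.5 s later than receiver A
# (e.g. via a longer scattered path).
lag = int(0.5 * fs)
rec_a = noise
rec_b = np.concatenate([np.zeros(lag), noise[:-lag]])

# The cross-correlation peaks at the differential traveltime
xc = np.correlate(rec_b, rec_a, mode="full")
dt = (np.argmax(xc) - (n - 1)) / fs
```

In the actual method these correlation lags, gathered over many receivers, are what is inverted for the scatterer positions.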

  10. Seismic belt in the upper plane of the double seismic zone extending in the along-arc direction at depths of 70-100km beneath NE Japan, and its relation with the dehydration embrittlement hypothesis

    NASA Astrophysics Data System (ADS)

    Kita, S.; Okada, T.; Nakajima, J.; Matsuzawa, T.; Hasegawa, A.

    2006-12-01

    1. Introduction Dehydration embrittlement, or the CO2-bearing devolatilization embrittlement hypothesis, has been proposed as a possible cause of intraslab earthquakes in several studies [e.g., Peacock, 2001; Kirby et al., 1996; Meade and Jeanloz, 1991]. Precise location of intraslab seismicity is needed to discuss its cause. Recently, a very dense nationwide seismic network (Hi-net) has been constructed by NIED in Japan. In this study, we relocate microearthquakes more precisely using data obtained by this dense seismic network to detect the characteristic distribution of seismicity within the Pacific slab beneath Hokkaido and Tohoku, NE Japan. 2. Data and method In the present study, we relocated events at depths of 20-300 km for the period from January 2002 to August 2005 from the JMA earthquake catalog. Hypocenter locations and arrival-time data in the JMA catalog were used as the initial hypocenters and data for the relocations. We applied the double-difference hypocenter location method (DDLM) of Waldhauser and Ellsworth (2000) to the arrival-time data of the events. We also checked the spatial distribution of the focal mechanisms of the events in the seismic belts and in the surrounding upper seismic plane, using focal mechanism solutions determined by Igarashi et al. (2001). 3. Results and discussion 1) Earthquakes occur in the area between the upper and lower seismic planes (interplane earthquakes), and their focal mechanisms tend to be of the down-dip compressional (DC) type, like those of upper-plane events. 2) We found a seismic "belt" parallel to the iso-depth contour of the plate interface beneath the forearc area at depths of 80-100 km. The location of the seismic belt seems to correspond to a phase boundary, from jadeite lawsonite blueschist (H2O content: 5.4 wt%) to lawsonite amphibole eclogite (3.0 wt%) (Hacker et al., 2003), associated with a dehydration reaction. 
3) The location of the deeper limit of seismicity of the upper seismic plane in the slab crust also seems to correspond to another phase boundary (the jadeite lawsonite blueschist to lawsonite amphibole eclogite transition (Hacker et al., 2003)) with a dehydration reaction. 4) Events of the upper seismic plane mainly have down-dip compression focal mechanisms, but several events have normal-fault (NF) type mechanisms, whose spatial distribution seems to correspond to these phase boundaries. These NF events might be induced by the tensional stress field caused by the volume reduction due to the dehydration reactions [Kirby et al., 1996; Igarashi et al., 2001].

  11. High frequency seismic signal generated by landslides on complex topographies: from point source to spatially distributed sources

    NASA Astrophysics Data System (ADS)

    Mangeney, A.; Kuehnert, J.; Capdeville, Y.; Durand, V.; Stutzmann, E.; Kone, E. H.; Sethi, S.

    2017-12-01

    During their flow along the topography, landslides generate seismic waves in a wide frequency range. These so-called landquakes can be recorded at very large distances (a few hundred km for large landslides). The recorded signals depend on the landslide seismic source and on the seismic wave propagation. If the wave propagation is well understood, the seismic signals can be inverted for the seismic source and thus used to obtain information on the landslide properties and dynamics. Analysis and modeling of long-period seismic signals (10-150 s) have helped in this way to discriminate between different landslide scenarios and to constrain rheological parameters (e.g. Favreau et al., 2010). This was possible because topography poorly affects wave propagation at these long periods, so the landslide seismic source can be approximated as a point source. In the near field and at higher frequencies (> 1 Hz), the spatial extent of the source has to be taken into account, and the influence of the topography on the recorded seismic signal should be quantified in order to extract information on the landslide properties and dynamics. The characteristic signature of distributed sources and varying topographies is studied as a function of frequency and recording distance. The time-dependent spatial distribution of the forces applied to the ground by the landslide is obtained using granular-flow numerical modeling on the 3D topography. The generated seismic waves are simulated using the spectral element method. The simulated seismic signal is compared to observed seismic data from rockfalls at the Dolomieu Crater of Piton de la Fournaise (La Réunion). Favreau, P., Mangeney, A., Lucas, A., Crosta, G., and Bouchut, F. (2010). Numerical modeling of landquakes. Geophysical Research Letters, 37(15):1-5.

  12. Integration of P- and SH-wave high-resolution seismic reflection and micro-gravity techniques to improve interpretation of shallow subsurface structure: New Madrid seismic zone

    USGS Publications Warehouse

    Bexfield, C.E.; McBride, J.H.; Pugin, Andre J.M.; Ravat, D.; Biswas, S.; Nelson, W.J.; Larson, T.H.; Sargent, S.L.; Fillerup, M.A.; Tingey, B.E.; Wald, L.; Northcott, M.L.; South, J.V.; Okure, M.S.; Chandler, M.R.

    2006-01-01

    Shallow high-resolution seismic reflection surveys have traditionally been restricted to either compressional (P) or horizontally polarized shear (SH) waves in order to produce 2-D images of subsurface structure. The northernmost Mississippi embayment and coincident New Madrid seismic zone (NMSZ) provide an ideal laboratory to study the experimental use of integrating P- and SH-wave seismic profiles, integrated, where practicable, with micro-gravity data. In this area, the relation between "deeper" deformation of Paleozoic bedrock associated with the formation of the Reelfoot rift and NMSZ seismicity and "shallower" deformation of overlying sediments has remained elusive, but could be revealed using integrated P- and SH-wave reflection. Surface expressions of deformation are almost non-existent in this region, which makes seismic reflection surveying the only means of detecting structures that are possibly pertinent to seismic hazard assessment. Since P- and SH-waves respond differently to the rock and fluid properties and travel at dissimilar speeds, the resulting seismic profiles provide complementary views of the subsurface based on different levels of resolution and imaging capability. P-wave profiles acquired in southwestern Illinois and western Kentucky (USA) detect faulting of deep, Paleozoic bedrock and Cretaceous reflectors while coincident SH-wave surveys show that this deformation propagates higher into overlying Tertiary and Quaternary strata. Forward modeling of micro-gravity data acquired along one of the seismic profiles further supports an interpretation of faulting of bedrock and Cretaceous strata. The integration of the two seismic and the micro-gravity methods therefore increases the scope for investigating the relation between the older and younger deformation in an area of critical seismic hazard. © 2006 Elsevier B.V. All rights reserved.

  13. An Application of Reassigned Time-Frequency Representations for Seismic Noise/Signal Decomposition

    NASA Astrophysics Data System (ADS)

    Mousavi, S. M.; Langston, C. A.

    2016-12-01

    Seismic data recorded by surface arrays are often strongly contaminated by unwanted noise, and this background noise makes the detection of small-magnitude events difficult. An automatic method for seismic noise/signal decomposition is presented based upon an enhanced time-frequency representation. Synchrosqueezing is a time-frequency reassignment method aimed at sharpening the time-frequency picture; noise can be distinguished from the signal and suppressed more easily in this reassigned domain. The threshold level is estimated using a general cross-validation approach that does not rely on any prior knowledge about the noise level. The efficiency of thresholding has been improved by adding a pre-processing step based on higher-order statistics and a post-processing step based on adaptive hard thresholding. In doing so, both the accuracy and the speed of the denoising have been improved compared to our previous algorithms (Mousavi and Langston, 2016a, 2016b; Mousavi et al., 2016). The proposed algorithm can either suppress the noise (white or colored) and keep the signal, or suppress the signal and keep the noise; hence, it can be used both in normal denoising applications and in ambient-noise studies. Application of the proposed method to synthetic and real seismic data shows its effectiveness for denoising/de-signaling of local microseismic and ocean-bottom seismic data. References: Mousavi, S.M., C. A. Langston, and S. P. Horton (2016), Automatic Microseismic Denoising and Onset Detection Using the Synchrosqueezed-Continuous Wavelet Transform, Geophysics, 81, V341-V355, doi: 10.1190/GEO2015-0598.1. Mousavi, S.M., and C. A. Langston (2016a), Hybrid Seismic Denoising Using Higher-Order Statistics and Improved Wavelet Block Thresholding, Bull. Seismol. Soc. Am., 106, doi: 10.1785/0120150345. Mousavi, S.M., and C.A. Langston (2016b), Adaptive noise estimation and suppression for improving microseismic event detection, Journal of Applied Geophysics, doi: 10.1016/j.jappgeo.2016.06.008.
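The thresholding idea above can be sketched with a plain STFT standing in for the synchrosqueezed transform; the per-frequency median-absolute-deviation noise estimate and the 3-sigma hard threshold are illustrative choices, not the authors' exact algorithm.

```python
import numpy as np
from scipy.signal import stft, istft

def tf_denoise(trace, fs, nperseg=128, k=3.0):
    """Hard-threshold a time-frequency representation: zero every
    coefficient below k times a per-frequency MAD noise estimate."""
    f, seg_t, Z = stft(trace, fs=fs, nperseg=nperseg)
    sigma = np.median(np.abs(Z), axis=1, keepdims=True) / 0.6745
    Z = np.where(np.abs(Z) >= k * sigma, Z, 0.0)   # kill sub-threshold bins
    _, clean = istft(Z, fs=fs, nperseg=nperseg)
    return clean[: len(trace)]

# A transient 20 Hz burst buried in white noise (invented test signal)
fs = 100.0
rng = np.random.default_rng(0)
t = np.arange(800) / fs
signal = np.where((t > 3.0) & (t < 3.5), np.sin(2 * np.pi * 20 * t), 0.0)
noisy = signal + rng.normal(scale=0.3, size=t.size)
denoised = tf_denoise(noisy, fs)
```

Swapping the keep/zero logic ("keep the signal" vs "keep the noise") gives the de-signaling mode the abstract describes for ambient-noise studies.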

  14. Seismo-volcano source localization with triaxial broad-band seismic array

    NASA Astrophysics Data System (ADS)

    Inza, L. A.; Mars, J. I.; Métaxian, J. P.; O'Brien, G. S.; Macedo, O.

    2011-10-01

    Seismo-volcano source localization is essential to improve our understanding of eruptive dynamics and of magmatic systems. The lack of clear seismic wave phases prohibits the use of classical location methods. Seismic antennas composed of one-component (1C) seismometers provide a good estimate of the backazimuth of the wavefield; the depth, on the other hand, is difficult or impossible to determine. As in classical seismology, the use of three-component (3C) seismometers is now common in volcano studies. To determine the source location parameters (backazimuth and depth), we extend the 1C seismic antenna approach to 3C. This paper discusses a high-resolution location method using a 3C array survey (3C-MUSIC algorithm) with data from two seismic antennas installed on an andesitic volcano in Peru (Ubinas volcano). One of the main scientific questions related to the eruptive process of Ubinas volcano is the relationship between the magmatic explosions and long-period (LP) swarms. After introducing the 3C array theory, we evaluate the robustness of the location method on a full-wavefield 3-D synthetic data set generated using a digital elevation model of Ubinas volcano and a homogeneous velocity model. The results show that the backazimuth determined using the 3C array has a smaller error than that from a 1C array, and only the 3C method allows recovery of the source depths. Finally, we applied the 3C approach to two seismic events recorded in 2009. Intersecting the estimated backazimuths and incidence angles, we find sources located 1000 ± 660 m and 3000 ± 730 m below the bottom of the active crater for the explosion and the LP event, respectively. Therefore, extending 1C arrays to 3C arrays in volcano monitoring allows a more accurate determination of the source epicentre and, now, an estimate of the depth.
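The MUSIC machinery behind the 3C-MUSIC location can be illustrated in its simplest single-component form, for a uniform linear array; this is a toy stand-in for the triaxial antenna geometry of the paper, with all array and noise parameters invented.

```python
import numpy as np

def music_spectrum(X, n_sources, angles_deg, spacing=0.5):
    """1-D MUSIC pseudospectrum for a uniform linear array.
    X: (n_sensors, n_snapshots) narrowband data; spacing in wavelengths."""
    n_sensors = X.shape[0]
    R = X @ X.conj().T / X.shape[1]              # sample covariance
    _, V = np.linalg.eigh(R)                     # eigenvectors (ascending)
    En = V[:, : n_sensors - n_sources]           # noise subspace
    idx = np.arange(n_sensors)
    p = np.empty(len(angles_deg))
    for i, th in enumerate(angles_deg):
        a = np.exp(2j * np.pi * spacing * idx * np.sin(np.deg2rad(th)))
        p[i] = 1.0 / np.real(a.conj() @ En @ En.conj().T @ a)
    return p

# One plane wave arriving from 20 degrees, mild noise
rng = np.random.default_rng(2)
n_sensors, n_snap = 8, 200
steer = np.exp(2j * np.pi * 0.5 * np.arange(n_sensors)
               * np.sin(np.deg2rad(20.0)))
s = rng.normal(size=n_snap) + 1j * rng.normal(size=n_snap)
X = np.outer(steer, s) + 0.05 * (rng.normal(size=(n_sensors, n_snap))
                                 + 1j * rng.normal(size=(n_sensors, n_snap)))
angles = np.arange(-90.0, 90.5, 0.5)
spec = music_spectrum(X, 1, angles)
```

The pseudospectrum peaks where the steering vector is orthogonal to the noise subspace; the 3C extension stacks the three components into a larger steering vector, which is what adds the incidence-angle (depth) sensitivity.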

  15. Characteristics of Induced and Tectonic Seismicity in Oklahoma Based on High-precision Earthquake Relocations and Focal mechanisms

    NASA Astrophysics Data System (ADS)

    Aziz Zanjani, F.; Lin, G.

    2016-12-01

    Seismic activity in Oklahoma has greatly increased since 2013, when the number of wastewater disposal wells associated with oil and gas production was significantly increased in the area. An M5.8 earthquake at about 5 km depth struck near Pawnee, Oklahoma on September 3, 2016, and is postulated to be related to the anthropogenic activity in Oklahoma. In this study, we investigate the seismic characteristics of Oklahoma using high-precision earthquake relocations and focal mechanisms. We acquire the seismic data recorded between January 2013 and October 2016 by the local and regional (within 200 km of the Pawnee mainshock) seismic stations from the Incorporated Research Institutions for Seismology (IRIS). We relocate all the earthquakes by applying the source-specific station term method and a differential-time relocation method based on waveform cross-correlation data. The high-precision earthquake relocation catalog is then used to perform full-waveform modeling. We use Muller's reflection method for Green's function construction and the mtinvers program for moment tensor inversion. The sensitivity of the solution to the station and component distribution is evaluated by jackknife resampling. These earthquake relocation and focal mechanism results will help constrain the fault orientation and the earthquake rupture length. In order to examine the static Coulomb stress change due to the 2016 Pawnee earthquake, we utilize the Coulomb 3 software in the vicinity of the mainshock and compare the aftershock pattern with the calculated stress variation. The stress change in the study area can be translated into the probability of seismic failure on other parts of the designated fault.
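The Coulomb calculation follows the standard static failure-stress relation ΔCFS = Δτ + μ′Δσn, with positive Δσn meaning unclamping; the effective friction coefficient below is an illustrative assumption, not a value from the study.

```python
def coulomb_stress_change(d_shear, d_normal, mu_eff=0.4):
    """Static Coulomb failure stress change on a receiver fault (MPa).
    d_shear:  shear stress change resolved in the slip direction
              (positive = toward failure).
    d_normal: normal stress change (positive = unclamping).
    mu_eff:   effective friction coefficient (illustrative value)."""
    return d_shear + mu_eff * d_normal
```

Regions of positive ΔCFS on neighbouring fault segments are those brought closer to failure, which is what the aftershock pattern is compared against.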

  16. Elastic Reverse Time Migration (RTM) From Surface Topography

    NASA Astrophysics Data System (ADS)

    Akram, Naveed; Chen, Xiaofei

    2017-04-01

    Seismic migration is a data-processing technique that constructs subsurface images by projecting seismic data recorded at the surface back to their origins. Among the numerous migration methods, Reverse Time Migration (RTM) is considered a robust, standard imaging technology in the present-day exploration industry as well as in academic research because of its superior performance compared to traditional migration methods. Although RTM is computationally intensive and time consuming, it can efficiently handle complex geology, steeply dipping reflectors and strong lateral velocity variation all together. RTM takes data recorded at the surface as a boundary condition and propagates the data backwards in time until the imaging condition is met; it can use the same modeling algorithm that is used for forward modeling. Classical seismic exploration theory assumes a flat surface, which is almost never true in practice for land data, so irregular surface topography has to be considered in the simulation of seismic wave propagation, which is not always a straightforward undertaking. In this study, the curved-grid finite-difference method (CG-FDM) is adapted to model elastic seismic wave propagation in order to investigate the effect of surface topography on RTM results and to explore its advantages and limitations with synthetic-data experiments, using the Foothill model with topography as the true model. We focus on elastic rather than acoustic wave propagation because the Earth actually behaves as an elastic body. Our results strongly emphasize that irregular surface topography must be considered when modeling seismic wave propagation to obtain better subsurface images, especially in mountainous scenarios, and suggest that practitioners properly handle the geometry of data acquired on irregular topographic surfaces in their imaging algorithms.
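The imaging condition mentioned above is most commonly a zero-lag cross-correlation of the forward-propagated source wavefield with the back-propagated receiver wavefield; the source-illumination normalization below is a common variant, not necessarily the authors' choice, and the propagation itself is assumed done elsewhere.

```python
import numpy as np

def rtm_image(src_wf, rcv_wf, eps=1e-12):
    """Zero-lag cross-correlation imaging condition of RTM:
        I(x) = sum_t S(x, t) * R(x, t) / (sum_t S(x, t)^2 + eps)
    where S is the forward-propagated source wavefield and R the
    back-propagated receiver wavefield, both shaped (nt, nz, nx)."""
    return np.sum(src_wf * rcv_wf, axis=0) / (np.sum(src_wf ** 2, axis=0) + eps)
```

The image is large wherever the two wavefields coincide in time and space, i.e. at reflectors; with topography, both wavefields must be simulated on the curved grid so that this coincidence is not mispositioned.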

  17. Elastic Reverse Time Migration (RTM) From Surface Topography

    NASA Astrophysics Data System (ADS)

    Naveed, A.; Chen, X.

    2016-12-01

    Seismic migration is a data-processing technique that constructs subsurface images by projecting seismic data recorded at the surface back to their origins. Among the numerous migration methods, Reverse Time Migration (RTM) is considered a robust, standard imaging technology in the present-day exploration industry as well as in academic research because of its superior performance compared to traditional migration methods. Although RTM is computationally intensive and time consuming, it can efficiently handle complex geology, steeply dipping reflectors and strong lateral velocity variation all together. RTM takes data recorded at the surface as a boundary condition and propagates the data backwards in time until the imaging condition is met; it can use the same modeling algorithm that is used for forward modeling. Classical seismic exploration theory assumes a flat surface, which is almost never true in practice for land data, so irregular surface topography has to be considered in the simulation of seismic wave propagation, which is not always a straightforward undertaking. In this study, the curved-grid finite-difference method (CG-FDM) is adapted to model elastic seismic wave propagation in order to investigate the effect of surface topography on RTM results and to explore its advantages and limitations with synthetic-data experiments, using the Foothill model with topography as the true model. We focus on elastic rather than acoustic wave propagation because the Earth actually behaves as an elastic body. Our results strongly emphasize that irregular surface topography must be considered when modeling seismic wave propagation to obtain better subsurface images, especially in mountainous scenarios, and suggest that practitioners properly handle the geometry of data acquired on irregular topographic surfaces in their imaging algorithms.

  18. Seismoelectric Effects based on Spectral-Element Method for Subsurface Fluid Characterization

    NASA Astrophysics Data System (ADS)

    Morency, C.

    2017-12-01

    Present approaches to subsurface imaging rely predominantly on seismic techniques, which alone do not capture fluid properties and related mechanisms. On the other hand, electromagnetic (EM) measurements add constraints on the fluid phase through electrical conductivity and permeability, but EM signals alone do not offer information on the solid structural properties. In recent years, there have been many efforts to combine seismic and EM data for exploration geophysics. The most popular approach is based on joint inversion of seismic and EM data as decoupled phenomena, missing the coupled nature of seismic and EM phenomena such as seismoelectric effects. Seismoelectric effects are related to pore-fluid movements with respect to the solid grains. By analyzing coupled poroelastic seismic and EM signals, one can capture pore-scale behavior and access both structural and fluid properties. Here, we model the seismoelectric response by solving the governing equations derived by Pride and Garambois (1994), which correspond to Biot's poroelastic wave equations and Maxwell's electromagnetic wave equations coupled electrokinetically. We will show that these coupled wave equations can be numerically implemented by taking advantage of viscoelastic-electromagnetic mathematical equivalences. These equations will be solved using a spectral-element method (SEM). The SEM, in contrast to finite-element methods (FEM), uses high-degree Lagrange polynomials; not only does this allow the technique to handle complex geometries similarly to FEM, but it also retains exponential convergence and accuracy due to the use of high-degree polynomials. Finally, we will discuss how this is a first step toward fully coupled seismic-EM inversion to improve subsurface fluid characterization. This work was performed under the auspices of the U.S. Department of Energy by Lawrence Livermore National Laboratory under Contract DE-AC52-07NA27344.

  19. A seismic reflection velocity study of a Mississippian mud-mound in the Illinois basin

    NASA Astrophysics Data System (ADS)

    Ranaweera, Chamila Kumari

    Two mud-mounds have been reported in the Ullin limestone near, but not in, the Aden oil field in Hamilton County, Illinois. One mud-mound is in the Broughton oil field of Hamilton County 25 miles to the south of Aden. The second mud-mound is in the Johnsonville oil field in Wayne County 20 miles to the north of Aden. Seismic reflection profiles were shot in 2012 adjacent to the Aden oil field to evaluate the oil prospects and to investigate the possibility of detecting Mississippian mud-mounds near the Aden field. A feature on one of the seismic profiles was interpreted to be a mud-mound or carbonate buildup. A well drilled at the location of this interpreted structure provided digital geophysical logs and geological logs used to refine the interpretation of the seismic profiles. Geological data from the new well at Aden, in the form of drill cuttings, have been used to essentially confirm the existence of a mud-mound in the Ullin limestone at a depth of 4300 feet. Geophysical well logs from the new well near Aden were used to create 1-D computer models and synthetic seismograms for comparison to the seismic data. The reflection seismic method is widely used to aid interpreting subsurface geology. Processing seismic data is an important step in the method as a properly processed seismic section can give a better image of the subsurface geology whereas a poorly processed section could mislead the interpretation. Seismic reflections will be more accurately depicted with careful determination of seismic velocities and by carefully choosing the processing steps and parameters. Various data processing steps have been applied and parameters refined to produce improved stacked seismic records. The resulting seismic records from the Aden field area indicate a seismic response similar to what is expected from a carbonate mud-mound. 
One-dimensional synthetic seismograms were created using the available sonic and density logs from the well drilled near the Aden seismic lines. The 1-D synthetics were used by Cory Cantrell of Royal Drilling and Producing Company to identify various reflections on the seismic records. The seismic data were compared with the modeled synthetic seismograms to identify what appears to be a carbonate mud-mound within the Aden study area. No mud-mounds have previously been found in the Aden oil field. Average and interval velocities obtained from the geophysical logs of the wells drilled in the Aden area were compared with the same type of well velocities from the known Broughton mud-mound area to assess the significance of velocity variations related to the unknown mud-mound in the Aden study area. The results of the velocity study show similar trends in the wells from both areas, with velocities higher at the bottom of the wells. A further comparison examined the variation between root-mean-square velocities calculated from the sonic log of the Aden well and the stacking velocities obtained from the seismic data adjacent to the well.
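The well-tie workflow described above (logs to impedance to reflectivity, convolved with a wavelet) can be sketched as follows; the Ricker wavelet and the layer values in the test are illustrative, not the Aden log values.

```python
import numpy as np

def ricker(f, dt, nt=101):
    """Zero-phase Ricker wavelet of peak frequency f (Hz)."""
    t = (np.arange(nt) - nt // 2) * dt
    a = (np.pi * f * t) ** 2
    return (1.0 - 2.0 * a) * np.exp(-a)

def synthetic_seismogram(velocity, density, dt, f=30.0):
    """1-D synthetic from sonic/density logs: acoustic impedance ->
    normal-incidence reflection coefficients -> convolution with a
    Ricker wavelet."""
    z = velocity * density                           # acoustic impedance
    rc = (z[1:] - z[:-1]) / (z[1:] + z[:-1])         # reflectivity series
    return np.convolve(rc, ricker(f, dt), mode="same")
```

Comparing such a synthetic against the processed stack at the well is what ties specific reflections to the logged formations.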

  20. Seismic and Gravity Data Help Constrain the Stratigraphic and Tectonic History of Offshore New Harbor, Ross Sea, Antarctica

    NASA Astrophysics Data System (ADS)

    Speece, M. A.; Pekar, S. F.; Wilson, G. S.; Sunwall, D. A.; Tinto, K. J.

    2010-12-01

    The ANDRILL (ANtarctic geological DRILLing) Program’s Offshore New Harbor (ONH) Project successfully conducted multi-channel seismic and gravity surveys in 2008 to investigate the stratigraphic and tectonic history of westernmost Southern McMurdo Sound, Ross Sea, Antarctica, during the Greenhouse World (Eocene) into the start of the Icehouse World (Oligocene). Approximately 48 km of multi-channel seismic reflection data were collected on a sea-ice platform east of New Harbor. The seismic survey used and improved upon methods employed successfully by ANDRILL’s surveys in Southern McMurdo Sound (2005) and in Mackay Sea Valley (2007). These methods include using an air gun and a snow streamer of gimbaled geophones. Upgrades in the ONH project’s field equipment substantially increased the rate at which seismic data could be acquired in a sea-ice environment compared to all previous surveys. In addition to the seismic survey, gravity data were collected from the sea ice in New Harbor with the aim of defining basin structural controls. Both the seismic and gravity data indicate thick sediment accumulation above the hanging wall of a major range front fault. This clearly identified fault could be the postulated master fault of the Transantarctic Mountains. An approximately 5 km thick sequence of sediments is present east of the CIROS-1 drill hole. CIROS-1 was drilled adjacent to the range front fault and recovered 702 m of sediments that cross the Eocene/Oligocene boundary. The new geophysical data indicate that substantial sediment core below the Eocene/Oligocene boundary could be recovered to the east of CIROS-1 during future drilling. Inshore of the range front fault, the data show fault-bounded half grabens with sediment fill thickening eastward against localized normal faults. Modeling of the gravity data, which extend farther inland than the seismic profiles, suggests that over 1 km of sediments could be present locally offshore Taylor Valley. 
Future drilling of offshore Taylor Valley could help to constrain the East Antarctic Ice Sheet’s contributions to glacial-interglacial cyclicity in southern McMurdo Sound as far back as the middle Miocene. Unfortunately, the 2008 ONH seismic profiles do not extend far enough up Taylor Valley or Ferrar Fjord to fully define drilling targets. As a result, valley parallel seismic profiles are proposed to extend our seismic interpretations inland and substantiate the gravity models.

  1. Slope Stability Analysis In Seismic Areas Of The Northern Apennines (Italy)

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lo Presti, D.; Fontana, T.; Marchetti, D.

    2008-07-08

    Several research works have been published on slope stability in northern Tuscany (central Italy), particularly in the seismic areas of Garfagnana and Lunigiana (Lucca and Massa-Carrara districts), aimed at analysing slope stability under static and dynamic conditions and mapping the landslide hazard. In addition, in situ and laboratory investigations are available for the study area, thanks to the activities undertaken by the Tuscany Seismic Survey. Based on this wealth of information, the co-seismic stability of a few idealized slope profiles has been analysed by means of the limit equilibrium method (LEM, pseudo-static) and Newmark sliding-block analysis (pseudo-dynamic). The analysis results gave indications about the most appropriate seismic coefficient to be used in pseudo-static analysis after establishing an allowable permanent displacement. Such indications are commented on in the light of the Italian and European prescriptions for seismic stability analysis with the pseudo-static approach. The stability conditions obtained from these analyses could be used to define microzonation criteria for the study area.
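The Newmark sliding-block analysis mentioned above can be illustrated with a minimal rigid-block sketch: the block slips whenever the ground acceleration exceeds the yield acceleration, and the slip velocity is integrated to a permanent displacement. The yield acceleration, time step, and input pulse below are hypothetical; this is not the authors' implementation.

```python
import numpy as np

def newmark_displacement(accel, dt, a_yield):
    """Rigid sliding-block (Newmark) permanent displacement.
    accel: ground acceleration series (m/s^2); a_yield: yield acceleration (m/s^2)."""
    v = 0.0          # sliding (relative) velocity
    d = 0.0          # accumulated permanent displacement
    for a in accel:
        if v > 0.0 or a > a_yield:
            v += (a - a_yield) * dt    # block accelerates relative to the slope
            if v < 0.0:
                v = 0.0                # sliding stops; no reverse slip
        d += v * dt
    return d

# Single pulse exceeding yield: slip accumulates during and shortly after the pulse
dt = 0.01
t = np.arange(0.0, 2.0, dt)
a = np.where(t < 0.5, 3.0, 0.0)        # 3 m/s^2 pulse lasting 0.5 s
disp = newmark_displacement(a, dt, a_yield=1.0)
```

Repeating this for a suite of yield accelerations gives the displacement-vs-seismic-coefficient curves used to pick a pseudo-static coefficient for an allowable displacement.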

  2. Online monitoring of seismic damage in water distribution systems

    NASA Astrophysics Data System (ADS)

    Liang, Jianwen; Xiao, Di; Zhao, Xinhua; Zhang, Hongwei

    2004-07-01

    Water distribution systems can be damaged by earthquakes, and the damage cannot easily be located, especially immediately after the event. Earthquake experience shows that accurate and quick location of seismic damage is critical to the emergency response of water distribution systems. This paper develops a methodology to locate seismic damage (multiple breaks) in a water distribution system by monitoring water pressure online at a limited number of positions. For online monitoring, supervisory control and data acquisition (SCADA) technology is well suited. A neural network-based inverse analysis method is constructed for locating the seismic damage based on the variation of water pressure. The neural network is trained using analytically simulated data from the water distribution system and validated using a set of data never used in the training. The methodology is found to provide an effective and practical way to locate seismic damage in a water distribution system accurately and quickly.
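As a simplified stand-in for the trained inverse neural network, the idea of locating a break from pressure variations can be sketched by matching measured pressure changes against simulated break signatures. The signature table and node names below are hypothetical.

```python
import numpy as np

# Hypothetical pressure-drop "signatures": each row is the simulated pressure
# change at 4 monitoring points when a break occurs at that candidate location.
signatures = np.array([
    [-0.8, -0.3, -0.1, -0.05],   # break near node A
    [-0.2, -0.9, -0.4, -0.1],    # break near node B
    [-0.1, -0.3, -0.7, -0.6],    # break near node C
])
labels = ["A", "B", "C"]

def locate_break(measured):
    """Pick the candidate whose simulated signature best matches the measured
    pressure changes (a nearest-neighbour stand-in for the paper's trained
    inverse neural network)."""
    dist = np.linalg.norm(signatures - measured, axis=1)
    return labels[int(np.argmin(dist))]

# Noisy observation resembling a break near node B
obs = np.array([-0.25, -0.85, -0.35, -0.15])
located = locate_break(obs)
```

A trained network generalizes between candidate locations rather than picking from a fixed table, but the inverse mapping from pressure pattern to break location is the same idea.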

  3. Combining mineral physics with seismic observations: What can we deduce about the thermochemical structure of the Earth's deep interior?

    NASA Astrophysics Data System (ADS)

    Cobden, L. J.

    2017-12-01

    Mineral physics provides the essential link between seismic observations of the Earth's interior, and laboratory (or computer-simulated) measurements of rock properties. In this presentation I will outline the procedure for quantitative conversion from thermochemical structure to seismic structure (and vice versa) using the latest datasets from seismology and mineralogy. I will show examples of how this method can allow us to infer major chemical and dynamic properties of the deep mantle. I will also indicate where uncertainties and limitations in the data require us to exercise caution, in order not to "over-interpret" seismic observations. Understanding and modelling these uncertainties serves as a useful guide for mineralogists to ascertain which mineral parameters are most useful in seismic interpretation, and enables seismologists to optimise their data assembly and inversions for quantitative interpretations.

  4. Detection and Identification of Small Seismic Events Following the 3 September 2017 UNT Around North Korean Nuclear Test Site

    NASA Astrophysics Data System (ADS)

    Kim, W. Y.; Richards, P. G.

    2017-12-01

    At least four small seismic events were detected around the North Korean nuclear test site following the underground nuclear test (UNT) of 3 September 2017. The magnitudes of these shocks range from 2.6 to 3.5. Based on their proximity to the September 3 UNT, these shocks may be considered aftershocks of the UNT. We assess the best method to classify these small events based on spectral amplitude ratios of regional P and S waves. None of these shocks is classified as explosion-like on the basis of P/S spectral amplitude ratios. We examine additional possible small seismic events around the North Korean test site using seismic data from stations in southern Korea and northeastern China, including IMS seismic arrays, GSN stations, and regional network stations.
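A P/S spectral amplitude ratio of the kind used for this classification can be sketched as follows; the window indices, frequency band, and synthetic trace are hypothetical, not the authors' processing parameters.

```python
import numpy as np

def ps_spectral_ratio(trace, dt, p_win, s_win, band=(2.0, 8.0)):
    """Mean P/S spectral amplitude ratio in a frequency band.
    p_win, s_win: (start, end) sample indices of the P and S windows."""
    def band_amp(x):
        spec = np.abs(np.fft.rfft(x * np.hanning(len(x))))
        f = np.fft.rfftfreq(len(x), dt)
        sel = (f >= band[0]) & (f <= band[1])
        return spec[sel].mean()
    return band_amp(trace[p_win[0]:p_win[1]]) / band_amp(trace[s_win[0]:s_win[1]])

# Synthetic trace: strong "P" burst, weaker "S" burst, both at 5 Hz
dt = 0.01
t = np.arange(0, 10, dt)
trace = np.zeros_like(t)
trace[100:200] = 2.0 * np.sin(2 * np.pi * 5 * t[100:200])   # P window
trace[500:600] = 0.5 * np.sin(2 * np.pi * 5 * t[500:600])   # S window
ratio = ps_spectral_ratio(trace, dt, (100, 200), (500, 600))
```

High P/S ratios are characteristic of explosions, low ratios of earthquakes; the discriminant is usually evaluated across several frequency bands and stations.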

  5. Relationship between seismic status of Earth and relative position of bodies in sun-earth-moon system

    NASA Astrophysics Data System (ADS)

    Kulanin, N. V.

    1985-03-01

    The time spectrum of variations in seismicity is quite broad: there are seismic seasons as well as multiannual variations. The range of characteristic times of variation from days to about one year is studied here, and seismic activity is examined as a function of the position of the Moon relative to the Earth and the direction toward the Sun. The occurrence times of strong earthquakes (over 5.8 on the Richter scale) between 1968 and June 1980 are plotted in coordinates relating them to the relative positions of the three bodies in the Sun-Earth-Moon system. Methods of mathematical statistics applied to the resulting points indicate with at least 99% probability that the distribution is not random. A periodicity of 413 days in the Earth's seismic state is observed.

  6. Incorporating induced seismicity in the 2014 United States National Seismic Hazard Model: results of the 2014 workshop and sensitivity studies

    USGS Publications Warehouse

    Petersen, Mark D.; Mueller, Charles S.; Moschetti, Morgan P.; Hoover, Susan M.; Rubinstein, Justin L.; Llenos, Andrea L.; Michael, Andrew J.; Ellsworth, William L.; McGarr, Arthur F.; Holland, Austin A.; Anderson, John G.

    2015-01-01

    The U.S. Geological Survey National Seismic Hazard Model for the conterminous United States was updated in 2014 to account for new methods, input models, and data necessary for assessing the seismic ground shaking hazard from natural (tectonic) earthquakes. The U.S. Geological Survey National Seismic Hazard Model project uses probabilistic seismic hazard analysis to quantify the rate of exceedance for earthquake ground shaking (ground motion). For the 2014 National Seismic Hazard Model assessment, the seismic hazard from potentially induced earthquakes was intentionally not considered because we had not determined how to properly treat these earthquakes for the seismic hazard analysis. The phrases “potentially induced” and “induced” are used interchangeably in this report; however, it is acknowledged that this classification is based on circumstantial evidence and scientific judgment. For the 2014 National Seismic Hazard Model update, the potentially induced earthquakes were removed from the NSHM’s earthquake catalog, and the documentation states that we would consider alternative models for including induced seismicity in a future version of the National Seismic Hazard Model. As part of the process of incorporating induced seismicity into the seismic hazard model, we evaluate the sensitivity of the seismic hazard from induced seismicity to five parts of the hazard model: (1) the earthquake catalog, (2) earthquake rates, (3) earthquake locations, (4) earthquake Mmax (maximum magnitude), and (5) earthquake ground motions. We describe alternative input models for each of the five parts that represent differences in scientific opinions on induced seismicity characteristics. In this report, however, we do not weight these input models to come up with a preferred final model. Instead, we present a sensitivity study showing uniform seismic hazard maps obtained by applying the alternative input models for induced seismicity. 
The final model will be released after further consideration of the reliability and scientific acceptability of each alternative input model. Forecasting the seismic hazard from induced earthquakes is fundamentally different from forecasting the seismic hazard for natural, tectonic earthquakes. This is because the spatio-temporal patterns of induced earthquakes are reliant on economic forces and public policy decisions regarding extraction and injection of fluids. As such, the rates of induced earthquakes are inherently variable and nonstationary. Therefore, we only make maps based on an annual rate of exceedance rather than the 50-year rates calculated for previous U.S. Geological Survey hazard maps.
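The difference between an annual rate of exceedance and a 50-year exceedance probability follows directly from the Poisson model; a minimal sketch, with rates chosen for illustration only:

```python
import math

def exceedance_probability(annual_rate, years):
    """Poisson probability of at least one exceedance in `years`:
    P = 1 - exp(-rate * T)."""
    return 1.0 - math.exp(-annual_rate * years)

# A hypothetical 1%-per-year hazard, quoted annually vs. over the
# conventional 50-year window used in previous hazard maps
p1  = exceedance_probability(0.01, 1)
p50 = exceedance_probability(0.01, 50)
```

For nonstationary induced seismicity the annual rate itself changes from year to year, which is why an annual quotation is preferred over a 50-year aggregate.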

  7. Time-Independent Annual Seismic Rates, Based on Faults and Smoothed Seismicity, Computed for Seismic Hazard Assessment in Italy

    NASA Astrophysics Data System (ADS)

    Murru, M.; Falcone, G.; Taroni, M.; Console, R.

    2017-12-01

    In 2015 the Italian Department of Civil Protection started a project for upgrading the official Italian seismic hazard map (MPS04), inviting the Italian scientific community to participate in a joint effort for its realization. We participated by providing spatially variable, time-independent (Poisson) long-term annual occurrence rates of seismic events over the entire Italian territory, considering cells of 0.1°x0.1° from M4.5 up to M8.1 in magnitude bins of 0.1 units. Our final model was composed of two different models, merged into one ensemble model with equal weights: the first was realized by a smoothed-seismicity approach, the second using the seismogenic faults. The spatially smoothed seismicity was obtained using the smoothing method introduced by Frankel (1995) applied to the historical and instrumental seismicity. In this approach we adopted a tapered Gutenberg-Richter relation with a b-value fixed to 1 and a corner magnitude estimated from the largest events in the catalogs. For each seismogenic fault provided by the Database of Individual Seismogenic Sources (DISS), we computed the annual rate (for each 0.1°x0.1° cell) in magnitude bins of 0.1 units, assuming that the seismic moments of the earthquakes generated by each fault are distributed according to the same tapered Gutenberg-Richter relation as in the smoothed-seismicity model. The annual rate for the final model was determined as follows: if a cell falls within one of the seismic sources, we merge, with equal weight, the rate determined from the seismic moments of the earthquakes generated by the fault and the rate from the smoothed-seismicity model; if instead the cell falls outside any seismic source, we take the rate obtained from the spatially smoothed seismicity. Here we present the final results of our study, to be used for the new Italian seismic hazard map.
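The tapered Gutenberg-Richter binning described above can be sketched as follows. The productivity, corner magnitude, and moment-magnitude relation (Hanks-Kanamori) below are illustrative assumptions, not the project's calibrated values.

```python
import numpy as np

def moment(mag):
    """Seismic moment in N·m from moment magnitude (Hanks & Kanamori)."""
    return 10.0 ** (1.5 * mag + 9.05)

def tapered_gr_rates(n_min, m_min, m_corner, b, mags):
    """Annual rate per magnitude bin from a tapered Gutenberg-Richter
    cumulative rate N(M) = n_min * (M0_min/M0)^beta * exp((M0_min - M0)/M0_c)."""
    beta = 2.0 * b / 3.0                         # b-value in moment space
    m0_min, m0_c = moment(m_min), moment(m_corner)
    def cum(mag):
        m0 = moment(mag)
        return n_min * (m0_min / m0) ** beta * np.exp((m0_min - m0) / m0_c)
    edges = np.append(mags, mags[-1] + 0.1)      # close the last 0.1-unit bin
    return cum(edges[:-1]) - cum(edges[1:])

bins = np.arange(4.5, 8.05, 0.1)                 # bin lower edges, M4.5-M8.0
rates = tapered_gr_rates(n_min=10.0, m_min=4.5, m_corner=7.5, b=1.0, mags=bins)
```

Differencing the cumulative rate at successive bin edges gives the incremental annual rate per 0.1-magnitude bin, as used for each cell.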

  8. Prediction of subsurface fracture in mining zone of Papua using passive seismic tomography based on Fresnel zone

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Setiadi, Herlan; Nurhandoko, Bagus Endar B.; Wely, Woen

    Fracture prediction in a block cave of an underground mine is very important for monitoring fracture structures that can be harmful to mining activities. Many methods can be used to obtain such information, such as TDR (Time Domain Reflectometry) and open-hole measurements; both have limitations in measurement range. Passive seismic tomography is a subsurface imaging method with advantages in terms of measurement coverage, cost, and richness of rock-physics information. This passive seismic tomography study uses the Fresnel zone to model the wavepath by means of a frequency parameter; the Fresnel-zone approach was developed by Nurhandoko in 2000. The result of this study is a tomogram of P- and S-wave velocities from which the position of fractures can be predicted. The study also used the summation of wavefronts to obtain the position and time of seismic event occurrence. Fresnel-zone tomography and the wavefront summation together can predict the location of geological structures in the mine area.
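For orientation, the radius of the first Fresnel zone, which controls the width of the wavepaths used in this style of tomography, can be estimated with a standard exploration-geophysics approximation. This is a generic formula, not the specific Fresnel-zone construction of Nurhandoko; the velocity, frequency, and depth are hypothetical.

```python
import math

def fresnel_radius(velocity, frequency, depth):
    """Approximate first Fresnel zone radius for a zero-offset reflection:
    r ~ sqrt(lambda * z / 2), with wavelength lambda = v / f."""
    wavelength = velocity / frequency
    return math.sqrt(wavelength * depth / 2.0)

# Example: 4000 m/s rock, 40 Hz dominant frequency, 1000 m propagation depth
r = fresnel_radius(velocity=4000.0, frequency=40.0, depth=1000.0)
```

Lower frequencies widen the Fresnel zone, which is why frequency enters the wavepath modelling explicitly.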

  9. A Predictive Model of Daily Seismic Activity Induced by Mining, Developed with Data Mining Methods

    NASA Astrophysics Data System (ADS)

    Jakubowski, Jacek

    2014-12-01

    The article presents the development and evaluation of a predictive classification model of daily seismic energy emissions induced by longwall mining in sector XVI of the Piast coal mine in Poland. The model uses data on tremor energy, basic characteristics of the longwall face, and mined output in this sector over the period from July 1987 to March 2011. The predicted binary variable is the occurrence of a daily sum of tremor seismic energies in a longwall greater than or equal to the threshold value of 10^5 J. Three data mining analytical methods were applied: logistic regression, neural networks, and stochastic gradient boosted trees. The boosted trees model was chosen as the best for the purposes of the prediction. The validation sample results showed its good predictive capability, taking the complex nature of the phenomenon into account. This may indicate the model's suitability for sequential, short-term prediction of mining-induced seismic activity.

  10. A modified symplectic PRK scheme for seismic wave modeling

    NASA Astrophysics Data System (ADS)

    Liu, Shaolin; Yang, Dinghui; Ma, Jian

    2017-02-01

    A new scheme for the temporal discretization of the seismic wave equation is constructed based on symplectic geometric theory and a modified strategy. The ordinary differential equation in terms of time, which is obtained after spatial discretization via the spectral-element method, is transformed into a Hamiltonian system. A symplectic partitioned Runge-Kutta (PRK) scheme is used to solve the Hamiltonian system. A term related to the multiplication of the spatial discretization operator with the seismic wave velocity vector is added into the symplectic PRK scheme to create a modified symplectic PRK scheme. The symplectic coefficients of the new scheme are determined via Taylor series expansion. The positive coefficients of the scheme indicate that its long-term computational capability is more powerful than that of conventional symplectic schemes. An exhaustive theoretical analysis reveals that the new scheme is highly stable and has low numerical dispersion. The results of three numerical experiments demonstrate the high efficiency of this method for seismic wave modeling.
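The flavour of symplectic time stepping can be illustrated with the simplest symplectic partitioned scheme (Störmer-Verlet) applied to a single oscillator standing in for one mode of the semi-discretized wave equation; this is not the authors' modified PRK scheme or its coefficients, and the frequency and step size are hypothetical.

```python
import numpy as np

def verlet(u0, p0, omega2, dt, nsteps):
    """Störmer-Verlet (the simplest symplectic partitioned scheme) for the
    Hamiltonian oscillator u'' = -omega^2 * u: half kick, drift, half kick."""
    u, p = u0, p0
    for _ in range(nsteps):
        p -= 0.5 * dt * omega2 * u     # half kick
        u += dt * p                    # drift
        p -= 0.5 * dt * omega2 * u     # half kick
    return u, p

omega2 = (2 * np.pi) ** 2              # 1 Hz oscillator
u, p = verlet(1.0, 0.0, omega2, dt=0.001, nsteps=10000)   # integrate 10 s
energy = 0.5 * p**2 + 0.5 * omega2 * u**2                 # should stay ~constant
```

The near-conservation of energy over long integrations is the property that makes symplectic schemes attractive for long seismic wavefield simulations, and the PRK modification in the paper builds on this same structure.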

  11. CORSSA: The Community Online Resource for Statistical Seismicity Analysis

    USGS Publications Warehouse

    Michael, Andrew J.; Wiemer, Stefan

    2010-01-01

    Statistical seismology is the application of rigorous statistical methods to earthquake science with the goal of improving our knowledge of how the earth works. Within statistical seismology there is a strong emphasis on the analysis of seismicity data in order to improve our scientific understanding of earthquakes and to improve the evaluation and testing of earthquake forecasts, earthquake early warning, and seismic hazards assessments. Given the societal importance of these applications, statistical seismology must be done well. Unfortunately, a lack of educational resources and available software tools make it difficult for students and new practitioners to learn about this discipline. The goal of the Community Online Resource for Statistical Seismicity Analysis (CORSSA) is to promote excellence in statistical seismology by providing the knowledge and resources necessary to understand and implement the best practices, so that the reader can apply these methods to their own research. This introduction describes the motivation for and vision of CORSSA. It also describes its structure and contents.

  12. Cavity Detection and Delineation Research. Report 2. Seismic Methodology: Medford Cave Site, Florida.

    DTIC Science & Technology

    1983-06-01

    energy. A distance of 50 ft was maintained between source and detector for one test and 25 ft for the other tests. Since the seismic unit was capable...during the tests. After a recording was made, the seismic source and geophone were each moved 5 ft, thus maintaining the 50- or 25-ft source-to-detector...produced by cavities; therefore, detection using this technique was not achieved. The sensitivity of the uphole refraction method to the presence of

  13. Reply to “Comment on ‘Near-surface location, geometry, and velocities of the Santa Monica fault zone, Los Angeles, California’ by R. D. Catchings, G. Gandhok, M. R. Goldman, D. Okaya, M. J. Rymer, and G. W. Bawden” by T. L. Pratt and J. F. Dolan

    USGS Publications Warehouse

    Catchings, Rufus D.; Rymer, Michael J.; Goldman, Mark R.; Bawden, Gerald W.

    2010-01-01

    In a comment on our 2008 paper (Catchings, Gandhok, et al., 2008) on the Santa Monica fault in Los Angeles, California, Pratt and Dolan (2010) (herein referred to as P&D) cite numerous objections to our work, inferring that our study is flawed. However, as shown in our reply, their objections contradict their own published works, published works of others, and proven seismic methodologies. Rather than responding to each repeated invalid objection, we address their objections by topic in the subsequent sections. In Catchings, Gandhok, et al. (2008), we presented high-resolution seismic-reflection images that showed two near-surface faults in the upper 50 m beneath the grounds of the Wadsworth Veterans Administration Hospital (WVAH). Although P&D suggest we effectively duplicated their seismic acquisition, our survey was not a duplication of their efforts. Rather, we conducted a seismic-imaging survey over a similar profile as Pratt et al. (1998) but used a different data acquisition system and different data processing methods to evaluate methods of seismically imaging blind faults in the wake of the 17 January 1994 M 6.7 Northridge earthquake. We used an acquisition method that provides both tomographic seismic velocities and reflection images. Our combined-data approach allowed for shallower imaging (∼2.5 m minimum) than the ∼20-m minimum of Pratt et al. (1998), clearer images of the fault zone, and more accurate depth determinations (rather than time images). In processing the reflection images, we used prestack depth migration, which is generally accepted as the only proper method for imaging subsurface structures with strong lateral velocity variations (Versteeg, 1993), a condition shown to exist at the WVAH site. We correlated our reflection images with refraction tomography images, borehole lithology and velocity data, Interferometric Synthetic Aperture Radar images, and changes in groundwater depths. 
Except for some minor differences, our seismic-reflection images coincide with previously published seismic-reflection images by Dolan and Pratt (1997) and Pratt et al. (1998), and a paleoseismic study by Dolan et al. (2000). Principal differences among our interpretations and those of Pratt et al. (1998) relate to the upper 20 m and the south side of the fault, which Pratt et al. (1998) did not clearly image. In contrast, our seismic images included structures on both sides of the fault zone from about 2.5 m depth to about 100 m depth at WVAH, allowing us to interpret more details.

  14. Hybrid sparse blind deconvolution: an implementation of SOOT algorithm to real data

    NASA Astrophysics Data System (ADS)

    Pakmanesh, Parvaneh; Goudarzi, Alireza; Kourki, Meisam

    2018-06-01

    Extracting information from seismic data depends on deconvolution as an important processing step; it provides the reflectivity series by signal compression, which is obtained by removing the wavelet effects from the traces. Recently, blind deconvolution has provided reliable performance for sparse signal recovery. In this study, two deconvolution methods have been applied to seismic data; their combination provides a robust spiking deconvolution approach. This hybrid deconvolution chains the sparse deconvolution (MM algorithm) and the Smoothed One-Over-Two (SOOT) algorithm. The MM algorithm is based on the minimization of a cost function defined by the l1 and l2 norms. After applying the two algorithms to the seismic data, the SOOT algorithm provided well-compressed data with higher resolution than the MM algorithm alone. For real data, the SOOT algorithm requires initial values, such as the wavelet coefficients and reflectivity series, which can be obtained from the MM algorithm. The computational cost of the hybrid method is high, which must be taken into account when it is implemented on post-stack or pre-stack seismic data from structurally complex regions.
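The sparse spiking-deconvolution idea can be sketched with a generic l1 solver (ISTA, iterative shrinkage-thresholding); this is an illustrative stand-in, not the paper's MM or SOOT algorithms, and the wavelet and regularization weight are hypothetical.

```python
import numpy as np

def soft(x, t):
    """Soft-thresholding, the proximal operator of the l1 norm."""
    return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

def ista_deconv(trace, wavelet, lam=0.05, niter=500):
    """Spiking deconvolution: min 0.5*||y - W x||^2 + lam*||x||_1 via ISTA,
    with W a circular convolution matrix built from the wavelet."""
    n = len(trace)
    W = np.array([np.roll(np.pad(wavelet, (0, n - len(wavelet))), k)
                  for k in range(n)]).T           # columns = shifted wavelets
    L = np.linalg.norm(W, 2) ** 2                 # Lipschitz constant of gradient
    x = np.zeros(n)
    for _ in range(niter):
        x = soft(x + W.T @ (trace - W @ x) / L, lam / L)
    return x

# Two isolated spikes blurred by a short wavelet
wav = np.array([0.2, 1.0, 0.2])
r = np.zeros(100)
r[20], r[60] = 1.0, -0.8
y = np.convolve(r, wav, mode="full")[:100]        # noiseless blurred trace
est = ista_deconv(y, wav)
```

The recovered series concentrates energy back into the two spike positions, which is the "signal compression" the abstract refers to.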

  15. Seismic risk assessment for road in Indonesia

    NASA Astrophysics Data System (ADS)

    Toyfur, Mona Foralisa; Pribadi, Krishna S.

    2016-05-01

    Road networks in Indonesia consist of 446,000 km of national, provincial, and local roads as well as toll highways. Indonesia is one of the countries exposed to various natural hazards, such as earthquakes, floods, and landslides. Within the Indonesian archipelago, several global tectonic plates interact, including the Indo-Australian, Pacific, and Eurasian plates, resulting in a complex geological setting characterized by seismically active faults and subduction zones and a chain of more than one hundred active volcanoes. Roads in Indonesia are vital infrastructure needed for the movement of people and goods, supporting community life and economic activities, including regional economic development. Road damage and losses due to earthquakes have not been studied widely, even though road disruption causes enormous economic damage. The aim of this research is to develop a method to analyse the risk posed by seismic hazard to roads. The seismic risk level of a road segment is defined using an earthquake risk index, adopting the Earthquake Disaster Risk Index model developed by Davidson (1997). Using this method, road segments' risk levels can be defined and compared, and a road risk map can be developed as a tool for prioritizing risk mitigation programs for road networks in Indonesia.

  16. Gas chimney detection based on improving the performance of combined multilayer perceptron and support vector classifier

    NASA Astrophysics Data System (ADS)

    Hashemi, H.; Tax, D. M. J.; Duin, R. P. W.; Javaherian, A.; de Groot, P.

    2008-11-01

    Seismic object detection is a relatively new field in which 3-D bodies are visualized and spatial relationships between objects of different origins are studied in order to extract geologic information. In this paper, we propose a method for finding an optimal classifier with the help of a statistical feature ranking technique and combining different classifiers. The method, which has general applicability, is demonstrated here on a gas chimney detection problem. First, we evaluate a set of input seismic attributes extracted at locations labeled by a human expert using regularized discriminant analysis (RDA). In order to find the RDA score for each seismic attribute, forward and backward search strategies are used. Subsequently, two non-linear classifiers, a multilayer perceptron (MLP) and a support vector classifier (SVC), are run on the ranked seismic attributes. Finally, to capitalize on the intrinsic differences between both classifiers, the MLP and SVC results are combined using logical rules of maximum, minimum and mean. The proposed method optimizes the ranked feature space size and yields the lowest classification error in the final combined result. We will show that the logical minimum reveals gas chimneys that exhibit both the softness of MLP and the resolution of SVC classifiers.
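The logical combination rules are straightforward to express; the posterior maps below are hypothetical values for illustration only.

```python
import numpy as np

def combine(p_mlp, p_svc, rule="min"):
    """Combine two classifiers' chimney-probability maps with a logical rule."""
    rules = {"min": np.minimum,
             "max": np.maximum,
             "mean": lambda a, b: 0.5 * (a + b)}
    return rules[rule](p_mlp, p_svc)

# Hypothetical posterior maps on a 2x3 grid of seismic samples
p_mlp = np.array([[0.9, 0.2, 0.7], [0.4, 0.8, 0.1]])
p_svc = np.array([[0.8, 0.6, 0.3], [0.5, 0.9, 0.2]])

# "min" is the conservative rule: a sample counts as chimney only if
# BOTH classifiers assign it a high probability
chimney_mask = combine(p_mlp, p_svc, "min") > 0.5
```

The minimum rule suppresses detections supported by only one classifier, which matches the paper's observation that it yields the cleanest combined result.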

  17. Application of Visual Attention in Seismic Attribute Analysis

    NASA Astrophysics Data System (ADS)

    He, M.; Gu, H.; Wang, F.

    2016-12-01

    It has been shown that seismic attributes can be used to predict reservoir properties. Combining multiple attributes with geological statistics, data mining, and artificial intelligence has further promoted the development of seismic attribute analysis. However, the existing methods tend to have multiple solutions and insufficient generalization ability, which is mainly due to the complex relationship between seismic data and geological information, and partly to the methods applied. Visual attention is a mechanism model of the human visual system that can concentrate on a few significant visual objects rapidly, even in a cluttered scene; the model exhibits good target detection and recognition ability. In our study, the targets to be predicted are treated as visual objects, and an object representation based on well data is constructed in the attribute dimensions. Then, in the same attribute space, this representation serves as a criterion to search for potential targets away from the wells. The method does not need to predict properties by building a complicated relation between attributes and reservoir properties, but refers instead to the previously determined standard. It therefore has good generalization ability, and the problem of multiple solutions can be mitigated by defining a similarity threshold.

  18. Refinements to the method of epicentral location based on surface waves from ambient seismic noise: introducing Love waves

    USGS Publications Warehouse

    Levshin, Anatoli L.; Barmin, Mikhail P.; Moschetti, Morgan P.; Mendoza, Carlos; Ritzwoller, Michael H.

    2012-01-01

    The purpose of this study is to develop and test a modification to a previous method of regional seismic event location based on Empirical Green’s Functions (EGFs) produced from ambient seismic noise. Elastic EGFs between pairs of seismic stations are determined by cross-correlating long ambient noise time-series recorded at the two stations. The EGFs principally contain Rayleigh- and Love-wave energy on the vertical and transverse components, respectively, and we utilize these signals between about 5 and 12 s period. The previous method, based exclusively on Rayleigh waves, may yield biased epicentral locations for certain event types with hypocentral depths between 2 and 5 km. Here we present theoretical arguments that show how Love waves can be introduced to reduce or potentially eliminate the bias. We also present applications of Rayleigh- and Love-wave EGFs to locate 10 reference events in the western United States. The separate Rayleigh and Love epicentral locations and the joint locations using a combination of the two waves agree to within 1 km distance, on average, but confidence ellipses are smallest when both types of waves are used.
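The core idea, that cross-correlating long noise records at two stations retrieves an inter-station travel time, can be sketched with an idealized single plane-wave noise source. Real ambient-noise processing involves much longer records, spectral whitening, and stacking; the record length and delay below are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)
n, delay = 2000, 25            # record length and inter-station delay (samples)

# A diffuse noise wavefield recorded at station A, and at station B
# `delay` samples later (an idealized stand-in for real ambient noise)
src = rng.standard_normal(n)
rec_a = src
rec_b = np.roll(src, delay)

# Cross-correlate the two records: the peak lag approximates the travel time
# of the Empirical Green's Function between the stations
xcorr = np.correlate(rec_b, rec_a, mode="full")
lags = np.arange(-n + 1, n)
peak_lag = int(lags[np.argmax(xcorr)])
```

With real data the correlation is built up over months of noise, and its causal and acausal sides carry the Rayleigh- and Love-wave EGFs used for the event relocations.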

  19. Object Classification Based on Analysis of Spectral Characteristics of Seismic Signal Envelopes

    NASA Astrophysics Data System (ADS)

    Morozov, Yu. V.; Spektor, A. A.

    2017-11-01

    A method for classifying moving objects having a seismic effect on the ground surface is proposed which is based on statistical analysis of the envelopes of received signals. The values of the components of the amplitude spectrum of the envelopes obtained applying Hilbert and Fourier transforms are used as classification criteria. Examples illustrating the statistical properties of spectra and the operation of the seismic classifier are given for an ensemble of objects of four classes (person, group of people, large animal, vehicle). It is shown that the computational procedures for processing seismic signals are quite simple and can therefore be used in real-time systems with modest requirements for computational resources.
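The envelope extraction via the Hilbert transform, followed by a Fourier spectrum of the envelope, can be sketched as follows; the modulation and carrier frequencies are hypothetical stand-ins for a footstep-like seismic signal.

```python
import numpy as np

def envelope(x):
    """Signal envelope via the analytic signal (FFT-based Hilbert transform):
    zero the negative frequencies, double the positive ones."""
    n = len(x)
    X = np.fft.fft(x)
    h = np.zeros(n)
    h[0] = 1.0
    h[1:(n + 1) // 2] = 2.0
    if n % 2 == 0:
        h[n // 2] = 1.0
    return np.abs(np.fft.ifft(X * h))

# Amplitude-modulated carrier: the envelope recovers the slow modulation,
# whose amplitude spectrum then serves as the classification feature vector
fs = 1000.0
t = np.arange(0, 1, 1 / fs)
mod = 1.0 + 0.5 * np.sin(2 * np.pi * 3 * t)      # 3 Hz modulation
sig = mod * np.sin(2 * np.pi * 80 * t)           # 80 Hz carrier
env = envelope(sig)
env_spec = np.abs(np.fft.rfft(env - env.mean())) # envelope spectrum features
```

The spectrum of the envelope, rather than of the raw signal, captures the slow rhythm (footsteps, gait, engine cadence) that separates the four object classes.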

  20. Optical seismic sensor systems and methods

    DOEpatents

    Beal, A. Craig; Cummings, Malcolm E.; Zavriyev, Anton; Christensen, Caleb A.; Lee, Keun

    2015-12-08

    Disclosed is an optical seismic sensor system for measuring seismic events in a geological formation, including a surface unit for generating and processing an optical signal, and a sensor device optically connected to the surface unit for receiving the optical signal over an optical conduit. The sensor device includes at least one sensor head for sensing a seismic disturbance from at least one direction during a deployment of the sensor device within a borehole of the geological formation. The sensor head includes a frame and a reference mass attached to the frame via at least one flexure, such that movement of the reference mass relative to the frame is constrained to a single predetermined path.

  1. Refining locations of the 2005 Mukacheve, West Ukraine, earthquakes based on similarity of their waveforms

    NASA Astrophysics Data System (ADS)

    Gnyp, Andriy

    2009-06-01

    Based on the results of applying correlation analysis to records of the 2005 Mukacheve group of recurrent events and their subsequent relocation relative to the reference event of 7 July 2005, it is concluded that all the events most likely occurred on the same rupture plane. Station terms have been estimated for seismic stations of the Transcarpathians, accounting for variation of seismic velocities beneath their locations as compared with the travel time tables used in the study. From a methodological standpoint, the potential and usefulness of correlation analysis of seismic records for more detailed studies of seismic processes, tectonics, and geodynamics of the Carpathian region have been demonstrated.

  2. Automatic recognition of seismic intensity based on RS and GIS: a case study in Wenchuan Ms8.0 earthquake of China.

    PubMed

    Zhang, Qiuwen; Zhang, Yan; Yang, Xiaohong; Su, Bin

    2014-01-01

    In recent years, earthquakes have occurred frequently all over the world, causing heavy casualties and economic losses. Obtaining a seismic intensity map promptly is essential for assessing the distribution of the disaster and supporting rapid earthquake relief. Compared with traditional methods of drawing seismic intensity maps, which require extensive field investigation in the earthquake area or depend heavily on empirical formulas, spatial information technologies such as Remote Sensing (RS) and Geographical Information Systems (GIS) provide a fast and economical way to recognize seismic intensity automatically. With the integrated application of RS and GIS, this paper proposes an RS/GIS-based approach for automatic recognition of seismic intensity, in which RS is used to retrieve and extract information on damage caused by the earthquake, and GIS is applied to manage and display the seismic intensity data. A case study of the Wenchuan Ms8.0 earthquake in China shows that information on seismic intensity can be extracted automatically from remotely sensed images soon after an earthquake occurs, and that the Digital Intensity Model (DIM) can be used to visually query and display the distribution of seismic intensity.

  3. Seismic Sources for the Territory of Georgia

    NASA Astrophysics Data System (ADS)

    Tsereteli, N. S.; Varazanashvili, O.

    2011-12-01

    The southern Caucasus is an earthquake-prone region where devastating earthquakes have repeatedly caused significant loss of lives, infrastructure, and buildings. The high geodynamic activity of the region, expressed in both seismic and aseismic deformation, is conditioned by the still-ongoing convergence of lithospheric plates and the northward propagation of the Afro-Arabian continental block at a rate of several cm/year. The geometry of tectonic deformation in the region is largely determined by the wedge-shaped rigid Arabian block indented into the relatively mobile Middle East-Caucasus region. Georgia is a partner in the ongoing regional project EMME, whose main objective is the uniform calculation of earthquake hazard to high standards. One approach used in the project is probabilistic seismic hazard assessment (PSHA), for which the first required input is the definition of seismic source zones. Seismic sources can be either faults or area sources. Seismoactive structures of Georgia are identified mainly on the basis of the correlation between neotectonic structures of the region and earthquakes. Modern PSHA software imposes strict requirements on fault geometry; as our knowledge of active-fault geometry is not sufficient, area sources were used. Seismic sources are defined as zones characterized by more or less uniform seismicity. Knowledge of the processes occurring deep in the Earth is poor because direct measurement is difficult; from this point of view, the reliable data obtained from earthquake fault-plane solutions are invaluable for understanding the current tectonic behavior of the investigated area. There are two methods of identifying seismic sources. The first is the seismotectonic approach, based on the identification of extensive homogeneous seismic sources (SS) with the definition of the probability of occurrence of the maximum earthquake Mmax.
In the second method, the identification of seismic sources is obtained on the basis of structural geology, seismicity parameters, and seismotectonics. We used this latter approach. To achieve this it was necessary to solve the following problems: to calculate the parameters of seismotectonic deformation; to reveal regularities in the character of earthquake fault-plane solutions; and to use the obtained regularities to develop principles for establishing borders between the various hierarchical and scale levels of seismic deformation fields and to give their geological interpretation. Three-dimensional matching of active faults, with their real geometrical dimensions, against earthquake sources has been investigated. Finally, each zone has been defined by the following parameters: geometry, magnitude-frequency parameters, maximum magnitude, and depth distribution, as well as modern dynamical characteristics widely used for complex processes.

  4. Seismic modeling of complex stratified reservoirs

    NASA Astrophysics Data System (ADS)

    Lai, Hung-Liang

    Turbidite reservoirs in deep-water depositional systems, such as the oil fields in the offshore Gulf of Mexico and North Sea, are becoming an important exploration target in the petroleum industry. Accurate seismic reservoir characterization, however, is complicated by the heterogeneity of the sand and shale distribution and by the lack of resolution when imaging thin channel deposits. Amplitude variation with offset (AVO) is an important technique widely applied to locate hydrocarbons, but in application to turbidite reservoirs these problems can produce inaccurate estimates of seismic reflection amplitudes and hence misleading interpretations. Therefore, an efficient, accurate, and robust method of modeling seismic responses for such complex reservoirs is necessary to reduce exploration risk. A fast and accurate approach to generating synthetic seismograms for such reservoir models combines wavefront-construction ray tracing with composite reflection coefficients in a hybrid modeling algorithm. The wavefront-construction approach is a modern, fast implementation of ray tracing that I have extended to model quasi-shear wave propagation in anisotropic media. Composite reflection coefficients, computed using propagator-matrix methods, provide the exact seismic reflection amplitude for a stratified reservoir model. This is a distinct improvement over conventional AVO analysis, which is based on a model with only two homogeneous half-spaces. I combine the two methods to compute synthetic seismograms for test models of turbidite reservoirs in the Ursa field, Gulf of Mexico, validating the new results against exact calculations using the discrete wavenumber method. The new method can also be used to generate synthetic seismograms for laterally heterogeneous, complex stratified reservoir models. The results show an important frequency dependence that may be useful for exploration.
Because turbidite channel systems often display complex vertical and lateral heterogeneity that is difficult to measure directly, stochastic modeling is often used to predict the range of possible seismic responses. Although binary models containing mixtures of sands and shales have been proposed in previous work, log measurements show that these are not good representations of real seismic properties. Therefore, I develop a new, more realistic approach for generating stochastic turbidite models (STMs) from a combination of geological interpretation and well-log measurements. Calculations of the composite reflection coefficient and synthetic seismograms predict direct hydrocarbon indicators associated with such turbidite sequences. The STMs provide important insights for predicting the seismic responses of complex turbidite reservoirs. The modeled AVO responses predict the presence of gas saturation in the sand beds; for example, as the source frequency increases, the uncertainty in the AVO responses for brine and gas sands predicts the possibility of false interpretation in AVO analysis.
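
The conventional two-half-space AVO analysis that the composite-coefficient method improves upon is commonly computed with the two-term Shuey/Aki-Richards approximation. A hedged sketch (the interface properties below are illustrative shale-over-gas-sand values, not numbers from the Ursa field study):

```python
import numpy as np

def shuey_two_term(vp1, vs1, rho1, vp2, vs2, rho2, theta_deg):
    """Two-term Shuey/Aki-Richards P-wave reflection coefficient R(theta)
    for a single interface between two homogeneous half-spaces."""
    vp, vs, rho = (vp1 + vp2) / 2, (vs1 + vs2) / 2, (rho1 + rho2) / 2
    dvp, dvs, drho = vp2 - vp1, vs2 - vs1, rho2 - rho1
    intercept = 0.5 * (dvp / vp + drho / rho)
    gradient = 0.5 * dvp / vp - 2 * (vs / vp) ** 2 * (drho / rho + 2 * dvs / vs)
    theta = np.radians(theta_deg)
    return intercept + gradient * np.sin(theta) ** 2

# Hypothetical shale (top) over gas sand (bottom); Vp, Vs in m/s, rho in g/cc
angles = np.arange(0, 31, 5)
r = shuey_two_term(2740, 1394, 2.37, 2310, 1480, 2.10, angles)
```

For these values the reflection coefficient is negative at normal incidence and grows more negative with angle, the classic "Class III" gas-sand behavior; a composite coefficient for a stack of thin layers would instead be frequency dependent.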

  5. The Shock and Vibration Digest. Volume 14, Number 11

    DTIC Science & Technology

    1982-11-01

    …cooled reactor ( HTGR ) core under seismic excitation has been developed (N82-18644). The computer program can be used to predict the behavior (in French) of the HTGR core under seismic excitation. Key Words: Computer programs, Modal analysis, Beams, Undamped structures. A computation method is… Dale and Cohen [22] extended the method of McMunn and Plunkett [20] to continuous systems

  6. Improving Magnitude Detection Thresholds Using Multi-Station Multi-Event, and Multi-Phase Methods

    DTIC Science & Technology

    2008-07-31

    …applied to different tectonic settings and for what percentage of the seismicity. 111 million correlations were performed on Lg-waves for the events in… Acknowledgments: We'd like to thank the operators of the Chinese Digital Seismograph Network, the U.S. Geological Survey, and…

  7. The integration of elastic wave properties and machine learning for the distribution of petrophysical properties in reservoir modeling

    NASA Astrophysics Data System (ADS)

    Ratnam, T. C.; Ghosh, D. P.; Negash, B. M.

    2018-05-01

    Conventional reservoir modeling employs variograms to predict the spatial distribution of petrophysical properties. This study aims to improve property distribution by incorporating elastic wave properties. In this study, elastic wave properties obtained from seismic inversion are used as input for an artificial neural network to predict neutron porosity in between well locations. The method employed in this study is supervised learning based on available well logs. This method converts every seismic trace into a pseudo-well log, hence reducing the uncertainty between well locations. By incorporating the seismic response, the reliance on geostatistical methods such as variograms for the distribution of petrophysical properties is reduced drastically. The results of the artificial neural network show good correlation with the neutron porosity log which gives confidence for spatial prediction in areas where well logs are not available.
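
The supervised-learning step can be sketched as a small neural-network regression: inverted elastic attributes in, neutron porosity out. The network below is a generic one-hidden-layer MLP trained on synthetic data, not the authors' trained model; the choice of attributes (acoustic impedance, Vp/Vs) and the linear rock-physics trend used to generate the data are assumptions for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-ins for seismically inverted attributes -> neutron porosity
n = 500
impedance = rng.uniform(5.0, 9.0, n)      # (km/s)*(g/cc), illustrative range
vpvs = rng.uniform(1.6, 2.2, n)
porosity = 0.6 - 0.05 * impedance + 0.05 * (vpvs - 1.9) + rng.normal(0, 0.005, n)

X = np.column_stack([impedance, vpvs])
X = (X - X.mean(0)) / X.std(0)            # standardize inputs
y = porosity

# One-hidden-layer MLP trained by full-batch gradient descent
W1 = rng.normal(0, 0.5, (2, 8)); b1 = np.zeros(8)
W2 = rng.normal(0, 0.5, (8, 1)); b2 = np.zeros(1)
lr = 0.05
for _ in range(2000):
    h = np.tanh(X @ W1 + b1)              # hidden layer
    pred = (h @ W2 + b2).ravel()
    err = pred - y
    # Backpropagation of the mean-squared-error gradient
    g2 = h.T @ err[:, None] / n
    gb2 = err.mean()
    dh = (err[:, None] @ W2.T) * (1 - h ** 2)
    g1 = X.T @ dh / n
    gb1 = dh.mean(0)
    W2 -= lr * g2; b2 -= lr * gb2
    W1 -= lr * g1; b1 -= lr * gb1

rmse = np.sqrt(np.mean((pred - y) ** 2))
```

Applied trace by trace, such a network converts a seismic volume into a pseudo-porosity volume, which is the sense in which it replaces variogram-based interpolation between wells.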

  8. Shallow Reflection Method for Water-Filled Void Detection and Characterization

    NASA Astrophysics Data System (ADS)

    Zahari, M. N. H.; Madun, A.; Dahlan, S. H.; Joret, A.; Hazreek, Z. A. M.; Mohammad, A. H.; Izzaty, R. A.

    2018-04-01

    Shallow investigation is crucial for characterizing the subsurface voids commonly encountered in civil engineering, and one technique commonly used is the seismic-reflection method. An assessment of the effectiveness of such an approach is critical to determine whether the quality of the works meets the prescribed requirements. Conventional quality testing suffers limitations, including restricted coverage (in both area and depth) and problems with resolution. Traditionally, quality-assurance measurements use laboratory and in-situ invasive and destructive tests; geophysical approaches, which are typically non-invasive and non-destructive, offer a cost-effective way to improve detection. Seismic reflection in particular has proved useful for assessing void characteristics. This paper evaluates the application of the shallow seismic-reflection method to characterizing the properties of a water-filled void at 0.34 m depth, specifically its detection and characterization using 2-dimensional tomography.

  9. Noise reduction in long‐period seismograms by way of array summing

    USGS Publications Warehouse

    Ringler, Adam; Wilson, David; Storm, Tyler; Marshall, Benjamin T.; Hutt, Charles R.; Holland, Austin

    2016-01-01

    Long‐period (>100  s period) seismic data can often be dominated by instrumental noise as well as local site noise. When multiple collocated sensors are installed at a single site, it is possible to improve the overall station noise levels by applying stacking methods to their traces. We look at the noise reduction in long‐period seismic data by applying the time–frequency phase‐weighted stacking method of Schimmel and Gallart (2007) as well as the phase‐weighted stacking (PWS) method of Schimmel and Paulssen (1997) to four collocated broadband sensors installed in the quiet Albuquerque Seismological Laboratory underground vault. We show that such stacking methods can improve vertical noise levels by as much as 10 dB over the mean background noise levels at 400 s period, suggesting that greater improvements could be achieved with an array involving multiple sensors. We also apply this method to reduce local incoherent noise on horizontal seismic records of the 2 March 2016 Mw 7.8 Sumatra earthquake, where the incoherent noise levels at very long periods are similar in amplitude to the earthquake signal. To maximize the coherency, we apply the PWS method to horizontal data where relative azimuths between collocated sensors are estimated and compared with a simpler linear stack with no azimuthal rotation. Such methods could help reduce noise levels at various seismic stations where multiple high‐quality sensors have been deployed. Such small arrays may also provide a solution to improving long‐period noise levels at Global Seismographic Network stations.
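
The phase-weighted stack (PWS) of Schimmel and Paulssen (1997) weights the linear stack by the coherence of the traces' instantaneous phases, so incoherent (instrumental or local) noise is suppressed while the common signal survives. A minimal sketch (the synthetic four-sensor data are illustrative, and the sharpness exponent nu = 2 is the usual default rather than a value taken from this study):

```python
import numpy as np
from scipy.signal import hilbert

def phase_weighted_stack(traces, nu=2):
    """Schimmel & Paulssen (1997) PWS of traces with shape (n_traces, n_samples)."""
    analytic = hilbert(traces, axis=1)
    phasors = analytic / np.abs(analytic)           # unit phasors exp(i*phi_k(t))
    coherence = np.abs(phasors.mean(axis=0)) ** nu  # ~1 where phases align, ~0 otherwise
    return traces.mean(axis=0) * coherence          # coherence-weighted linear stack

# Four collocated sensors: common long-period signal plus incoherent noise
rng = np.random.default_rng(1)
t = np.linspace(0, 400, 4000)
signal = np.sin(2 * np.pi * t / 100)                # 100 s period wave
traces = signal + 0.5 * rng.normal(size=(4, t.size))
pws = phase_weighted_stack(traces)
linear = traces.mean(axis=0)
```

Because the weight is bounded by one, the PWS is never noisier than the linear stack in amplitude; its trade-off is a mild, nonlinear distortion of the signal where coherence dips, which is why the paper compares it against the time-frequency variant and plain linear stacking.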

  10. Co-seismic landslide topographic analysis based on multi-temporal DEM-A case study of the Wenchuan earthquake.

    PubMed

    Ren, Zhikun; Zhang, Zhuqi; Dai, Fuchu; Yin, Jinhui; Zhang, Huiping

    2013-01-01

    Hillslope instability is thought to be one of the most important factors in landslide susceptibility. In this study, we apply geomorphic analysis using multi-temporal DEM data, together with shaking-intensity analysis, to evaluate the topographic characteristics of the landslide areas. Geomorphologic analysis methods such as roughness and slope aspect are as useful as slope analysis. The analyses indicate that most of the co-seismic landslides occurred in regions with roughness >1.2, hillslope >30°, and slope aspect between 90° and 270°. The intersection of the regions given by these three methods is more accurate than the result of any single topographic analysis method. The ground-motion data indicate that the co-seismic landslides mainly occurred on the hanging-wall side of the Longmen Shan thrust belt, within the up-down and horizontal peak ground acceleration (PGA) contours of 150 and 200 gal, respectively. Comparison of pre- and post-earthquake DEM data indicates that areas of medium roughness and slope increased, while the roughest and steepest regions decreased after the Wenchuan earthquake; slope aspect, however, hardly changed. Our results indicate that co-seismic landslides mainly occurred in specific regions of high roughness on steep, south-facing slopes under strong ground motion. Co-seismic landslides significantly modified the local topography, especially hillslope and roughness: the roughest relief and steepest slopes were significantly smoothed, while areas of medium relief and slope became rougher and steeper, respectively.

  11. Detecting Seismic Events Using a Supervised Hidden Markov Model

    NASA Astrophysics Data System (ADS)

    Burks, L.; Forrest, R.; Ray, J.; Young, C.

    2017-12-01

    We explore the use of supervised hidden Markov models (HMMs) to detect seismic events in streaming seismogram data. Current methods for seismic event detection include simple triggering algorithms, such as STA/LTA and the Z-statistic, which can lead to large numbers of false positives that must be investigated by an analyst. The hypothesis of this study is that more advanced detection methods, such as HMMs, may decrease false positives while maintaining accuracy similar to current methods. We train a binary HMM classifier using 2 weeks of 3-component waveform data from the International Monitoring System (IMS) that was carefully reviewed by an expert analyst to pick all seismic events. Using an ensemble of simple, discrete features, such as the triggering of STA/LTA, the HMM predicts the time at which the transition from noise to signal occurs. Compared to the STA/LTA detection algorithm, the HMM detects more true events, but the false positive rate remains unacceptably high. Future work to decrease the false positive rate may include using continuous features, a Gaussian HMM, and multi-class HMMs to distinguish between types of seismic waves (e.g., P-waves and S-waves). Acknowledgement: Sandia National Laboratories is a multi-mission laboratory managed and operated by National Technology and Engineering Solutions of Sandia, LLC, a wholly owned subsidiary of Honeywell International, Inc., for the U.S. Department of Energy's National Nuclear Security Administration under contract DE-NA-0003525. SAND No: SAND2017-8154 A
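
The STA/LTA trigger used here as a baseline (and as an HMM input feature) compares a short-term average of signal energy to a long-term average and declares a detection when the ratio exceeds a threshold. A minimal sliding-window sketch (the window lengths and threshold are typical choices, not values from the study):

```python
import numpy as np

def sta_lta(trace, fs, sta_win=1.0, lta_win=10.0):
    """Sliding-window STA/LTA ratio of signal energy (squared amplitude)."""
    cf = trace ** 2                                   # characteristic function
    nsta, nlta = int(sta_win * fs), int(lta_win * fs)
    csum = np.concatenate([[0.0], np.cumsum(cf)])
    sta = (csum[nsta:] - csum[:-nsta]) / nsta         # mean over trailing short window
    lta = (csum[nlta:] - csum[:-nlta]) / nlta         # mean over trailing long window
    ratio = np.zeros(len(trace))
    # At sample k >= nlta-1 both trailing windows end at k:
    ratio[nlta - 1:] = sta[nlta - nsta:] / (lta + 1e-12)
    return ratio

# Gaussian noise with a 5 Hz event starting at t = 30 s
fs = 100.0
rng = np.random.default_rng(2)
trace = rng.normal(0.0, 1.0, 6000)
trace[3000:3400] += 8.0 * np.sin(2 * np.pi * 5 * np.arange(400) / fs)
ratio = sta_lta(trace, fs)
trigger_on = np.argmax(ratio > 4.0) / fs              # first sample above threshold
```

Binarizing this ratio against one or more thresholds is one way to build the discrete observation sequence that a supervised HMM would consume.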

  12. Blocky inversion of multichannel elastic impedance for elastic parameters

    NASA Astrophysics Data System (ADS)

    Mozayan, Davoud Karami; Gholami, Ali; Siahkoohi, Hamid Reza

    2018-04-01

    Petrophysical description of reservoirs requires proper knowledge of elastic parameters such as the P- and S-wave velocities (Vp and Vs) and density (ρ), which can be retrieved from pre-stack seismic data using the concept of elastic impedance (EI). We propose an inversion algorithm that recovers elastic parameters from pre-stack seismic data in two sequential steps. In the first step, using the multichannel blind seismic inversion method (recently exploited for recovering acoustic impedance from post-stack seismic data), high-resolution blocky EI models are obtained directly from partial angle-stacks. Using an efficient total-variation (TV) regularization, each angle-stack is inverted independently in multichannel form without prior knowledge of the corresponding wavelet. The second step inverts the resulting EI models for the elastic parameters. Mathematically, under some assumptions, the EIs are linearly related to the elastic parameters in the logarithm domain, so a linear weighted least-squares inversion is employed for this step. The accuracy of the elastic-impedance concept in predicting reflection coefficients at low and high angles of incidence is compared with that of the exact Zoeppritz elastic impedance, and the role of low-frequency content in the problem is discussed. The performance of the proposed inversion method is tested on synthetic 2D data sets obtained from the Marmousi model and on 2D field data sets. The results confirm the efficiency and accuracy of the proposed method for inversion of pre-stack seismic data.
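
Total-variation regularization is what makes the recovered models "blocky": it penalizes the L1 norm of the vertical derivative, favoring piecewise-constant profiles with sharp layer boundaries. A self-contained 1-D sketch using ADMM on the denoising form of the problem (the full method additionally inverts through an unknown wavelet and works multichannel; the profile and parameters below are illustrative):

```python
import numpy as np

def tv_denoise(y, lam=5.0, rho=2.0, n_iter=200):
    """ADMM solver for min_x 0.5*||x - y||^2 + lam*||Dx||_1 (1-D total variation)."""
    n = len(y)
    D = np.diff(np.eye(n), axis=0)              # first-difference operator
    A = np.eye(n) + rho * D.T @ D               # matrix of the quadratic subproblem
    z = np.zeros(n - 1); u = np.zeros(n - 1)    # split variable and scaled dual
    for _ in range(n_iter):
        x = np.linalg.solve(A, y + rho * D.T @ (z - u))
        Dx = D @ x
        # Soft-thresholding enforces sparsity of the differences (blockiness)
        z = np.sign(Dx + u) * np.maximum(np.abs(Dx + u) - lam / rho, 0.0)
        u += Dx - z
    return x

# Blocky "elastic impedance" profile observed through additive noise
rng = np.random.default_rng(3)
true = np.concatenate([np.full(60, 1.0), np.full(80, 3.0), np.full(60, 2.0)])
obs = true + 0.3 * rng.normal(size=true.size)
est = tv_denoise(obs)
```

The recovered profile is near-piecewise-constant with the two layer boundaries preserved, which is the qualitative behavior the blocky EI inversion relies on.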

  13. Seismic Performance of Columns with Grouted Couplers in Idaho Accelerated Bridge Construction Applications

    DOT National Transportation Integrated Search

    2016-10-16

    In Accelerated Bridge Construction (ABC) methods, one way to connect prefabricated columns is by using grouted steel bar couplers. As of October 2016, in the U.S., only Utah DOT allows the use of grouted couplers in plastic hinge locations in seismic ...

  14. 76 FR 20325 - Takes of Marine Mammals Incidental to Specified Activities; Marine Geophysical Survey in the...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2011-04-12

    ... stock(s) for subsistence uses (where relevant). The authorization must set forth the permissible methods..., with research funding from the U.S. National Science Foundation (NSF), plans to conduct the seismic... Seismic Research Funded by the...

  15. Windrum: a program for monitoring seismic signals in real time

    NASA Astrophysics Data System (ADS)

    Giudicepietro, Flora

    2017-04-01

    Windrum is a program devoted to monitoring, in real time, seismic signals arriving from remote stations. Since 2000, the Osservatorio Vesuviano (INGV) has used the first version of Windrum to monitor the seismic activity of Mt. Vesuvius, Campi Flegrei, Ischia, and Stromboli volcano. The program has also been used at the Observatory of Bukittinggi (Indonesia), at the offices of the Italian National Civil Protection, at the COA in Stromboli, and at the Civil Protection Center of the municipality of Pozzuoli (Napoli, Italy). In addition, the Osservatorio Vesuviano regularly uses Windrum in educational events such as the Festival of Science in Genova (Italy), FuturoRemoto, and other events organized by Città della Scienza in Naples (Italy). The program displays the seismic trace of one station on a monitor, using short packets of data (typically 1 or 2 seconds) received through an Internet protocol. The data packets are in Trace_buffer format, a native protocol of the Earthworm seismic system that is widely used for data transmission over the Internet. Windrum allows the user to visualize 24 hours of signals and to zoom into selected windows of data, in order to estimate the duration magnitude (Md) of an earthquake interactively, and to generate graphic images for the web. Moreover, Windrum can exchange Internet messages with other copies of the same program to synchronize actions, such as zooming into the same window of data or marking the beginning of an earthquake on all active monitors simultaneously. Originally, in 2000, Windrum was developed in VB6. I have now developed a new version in VB.net, which overcomes the obsolescence problems that were appearing. The new version supports decoding of the binary packets received by socket in a more flexible way, allowing the generation of graphic images in different formats. In addition, the new version allows a more flexible layout configuration, suitable for use on large screens at high resolution.
Over the past 17 years, the use of Windrum for visual analysis of the seismic signals of Vesuvius, Campi Flegrei, Ischia, and Stromboli has lowered the detection threshold for events, allowing detailed analysis of the seismogram in near real time.
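
The interactive duration-magnitude estimate mentioned above is typically an empirical function of signal duration. One common form is the Lee et al. (1972) relation used in HYPO71; the coefficients below are that generic calibration, not the one actually used at the Osservatorio Vesuviano, which has its own network-specific values.

```python
import math

def duration_magnitude(duration_s, distance_km=0.0,
                       a=2.00, b=0.0035, c=-0.87):
    """Empirical duration magnitude Md = c + a*log10(tau) + b*Delta,
    with tau the signal duration (s) and Delta the epicentral distance (km).
    Default coefficients follow Lee et al. (1972); real networks recalibrate."""
    return c + a * math.log10(duration_s) + b * distance_km

md = duration_magnitude(60.0, distance_km=10.0)   # 60 s coda measured 10 km away
```

The analyst's zoom-and-pick workflow in Windrum amounts to reading off the duration tau between event onset and the return to background noise, then applying such a formula.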

  16. The assessment of seismic hazard for Gori, (Georgia) and preliminary studies of seismic microzonation

    NASA Astrophysics Data System (ADS)

    Gogoladze, Z.; Moscatelli, M.; Giallini, S.; Avalle, A.; Gventsadze, A.; Kvavadze, N.; Tsereteli, N.

    2016-12-01

    Seismic risk is a crucial issue for the South Caucasus, which is the main gateway between Asia and Europe. The goal of this work is to propose new methods and criteria for defining an overall approach to assessing and mitigating seismic risk in Georgia. In this regard, seismic microzonation is a highly useful tool for seismic risk assessment in land management, for the design of buildings and structures, and for emergency planning. Seismic microzonation is the assessment of local seismic hazard (the component of seismicity resulting from specific local characteristics that cause local amplification and soil instability) through the identification of zones with seismically homogeneous behavior. This paper presents the results of a preliminary seismic microzonation study of Gori, Georgia. Gori is located in the Shida Kartli region, on both sides of the Liakhvi and Mtkvari rivers, with an area of about 135 km² around the Gori fortress. Gori lies in the Achara-Trialeti fold-thrust belt, which is tectonically unstable. Half of all earthquakes in the Gori area with magnitude M≥3.5 have happened along this fault zone, and on the basis of damage caused by previous earthquakes, this territory shows the highest level of risk (the maximum value of direct losses) in the central part of the town. The level-1 seismic microzonation map of Gori was produced using: 1) already available data (i.e., topographic maps and borehole data); 2) results of new geological surveys; and 3) geophysical measurements (i.e., MASW and noise measurements processed with the HVSR technique). Our preliminary results highlight the presence of both stable zones susceptible to local amplification and unstable zones susceptible to geological instability. Our results are directed toward establishing a set of actions aimed at risk mitigation before the initial onset of an emergency, and at management of the emergency once a seismic event has occurred.
The products obtained will contain the basic elements of an integrated system aimed at reducing risk and improving the overall safety of people and infrastructure in Georgia.
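
The ambient-noise HVSR (Nakamura) technique cited in point 3 estimates a site's fundamental resonance frequency from the ratio of horizontal to vertical noise spectra. A minimal sketch (the window length, the geometric-mean combination of the two horizontals, and the synthetic 2 Hz resonance are assumptions for illustration, not parameters from the Gori survey):

```python
import numpy as np

def hvsr(north, east, vertical, fs, nseg=1024):
    """Horizontal-to-vertical spectral ratio from three-component ambient noise,
    averaging Hann-windowed FFT amplitude spectra over non-overlapping segments."""
    def mean_spectrum(x):
        nwin = len(x) // nseg
        segs = x[:nwin * nseg].reshape(nwin, nseg) * np.hanning(nseg)
        return np.abs(np.fft.rfft(segs, axis=1)).mean(axis=0)
    h = np.sqrt(mean_spectrum(north) * mean_spectrum(east))  # geometric mean
    v = mean_spectrum(vertical)
    freqs = np.fft.rfftfreq(nseg, 1 / fs)
    return freqs, h / (v + 1e-12)

# Synthetic noise with a horizontal-only 2 Hz site resonance
rng = np.random.default_rng(4)
fs, n = 100.0, 60000
t = np.arange(n) / fs
resonance = 3 * np.sin(2 * np.pi * 2.0 * t + rng.uniform(0, 2 * np.pi))
north = rng.normal(size=n) + resonance
east = rng.normal(size=n) + resonance
vertical = rng.normal(size=n)
freqs, ratio = hvsr(north, east, vertical, fs)
f0 = freqs[np.argmax(ratio)]
```

The frequency of the H/V peak maps the soft-sediment resonance; in a microzonation study the peak frequency and amplitude measured at many points help delimit zones of homogeneous site response.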

  17. Excitation of seismic waves by a tornado

    NASA Astrophysics Data System (ADS)

    Valovcin, A.; Tanimoto, T.; Twardzik, C.

    2016-12-01

    Tornadoes are among the most common natural disasters to occur in the United States. Various methods are currently used in tornado forecasting, including surface weather stations, weather balloons, and satellite and Doppler radar. These methods can detect possible locations of tornadoes and funnel clouds, but knowing when a tornado has touched down still relies strongly on reports from spotters. Studying tornadoes seismically offers an opportunity to know when a tornado has touched down without requiring an eyewitness report. With the installation of EarthScope's Transportable Array (TA), an increasing number of tornadoes have come within close range of seismometers. We have identified seismic signals corresponding to three tornadoes that occurred in 2011 in the central US, recorded by the TA station closest to each tornado track. For each tornado, the amplitude of the seismic signal increases when the storm is in contact with the ground and persists until the tornado lifts off some time later; this occurs at both high and low frequencies. In this study we model the seismic signal generated by a tornado at low frequencies (below 0.1 Hz). We begin by modeling the signal from the Joplin tornado, an EF5-rated tornado that struck Missouri on May 22, 2011. By approximating the tornado as a vertical force, we model the generated signal as the tornado moves along its track and changes in strength. Modeling the seismic waveform generated by a tornado helps us better understand the seismic-excitation process, and could also provide a way to compare tornadoes quantitatively. Additional tornadoes to model include the Calumet-El Reno-Piedmont-Guthrie (CEPG) and Chickasha-Blanchard-Newcastle (CBN) tornadoes, both of which occurred on May 24, 2011 in Oklahoma.

  18. Improvement forecasting of volcanic activity by applying a Kalman filter to the SSEM signal. The case of the El Hierro Island eruption (October 2011)

    NASA Astrophysics Data System (ADS)

    Garcia, A.; Berrocoso, M.; Marrero, J. M.; Ortiz, R.

    2012-04-01

    The FFM (Failure Forecast Method) was developed after the eruption of Mount St. Helens and has been repeatedly applied to forecast eruptions and, more recently, to predict seismic activity in active volcanic areas. The underwater eruption at El Hierro Island was monitored from three months before it started (October 10, 2011). This yielded a large catalogue of seismic events (over 11,000) and continuously recorded seismic signals covering the entire period. Since the beginning of the seismo-volcanic crisis (July 2011), the FFM has been applied to the SSEM signal of the seismic records. Mainly because El Hierro is a very small island, the SSEM is noisy (traffic and oceanic noise). A Kalman filter has been used to improve the signal-to-noise ratio; its coefficients are adjusted by an inversion process based on the forecasting errors of the preceding twenty days. The application of this filter has significantly improved the reliability of the forecasts. Analysis of the results shows that, before the start of the eruption, 90% of the forecasts were obtained with errors of less than 10 minutes and with more than 24 hours of advance warning. Notably, the method predicts the events of greater magnitude and, especially, the beginning of each swarm of seismic events. Once the eruption started, the efficiency of the forecast dropped to 50%, with a dispersion of more than one hour. This is probably due to decreased detectability caused by saturation of some of the seismic stations and by a decrease in the average magnitude. Nevertheless, the events of magnitude greater than 4 were predicted with an error of less than 20 minutes.
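
The FFM extrapolates an accelerating precursor to its singularity: for the commonly assumed exponent α = 2, the inverse of the precursory rate decays linearly in time, and the forecast failure time is where a fitted line reaches zero. A minimal sketch on noise-free synthetic data (the SSEM-specific processing and the Kalman pre-filtering described above are not reproduced here):

```python
import numpy as np

def ffm_failure_time(times, rates):
    """Failure Forecast Method with alpha = 2: the inverse rate 1/R decays
    linearly in time, so extrapolate a least-squares line to 1/R = 0."""
    inv = 1.0 / np.asarray(rates, dtype=float)
    slope, intercept = np.polyfit(times, inv, 1)
    return -intercept / slope            # time at which the fitted line hits zero

# Synthetic accelerating precursor: R(t) = 1 / (k * (t_f - t)), failing at t_f = 100
t_f, k = 100.0, 0.05
t = np.linspace(0, 90, 50)
rates = 1.0 / (k * (t_f - t))
t_pred = ffm_failure_time(t, rates)
```

With real SSEM data the inverse-rate series is noisy, which is exactly why the paper filters it first: the least-squares extrapolation is very sensitive to scatter near the end of the window.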

  19. Assessing Gas-Hydrate Prospects on the North Slope of Alaska - Theoretical Considerations

    USGS Publications Warehouse

    Lee, Myung W.; Collett, Timothy S.; Agena, Warren F.

    2008-01-01

    Gas-hydrate resource assessment on the Alaska North Slope using 3-D and 2-D seismic data involved six important steps: (1) determining the top and base of the gas-hydrate stability zone; (2) 'tying' well-log information to seismic data through synthetic seismograms; (3) differentiating ice from gas hydrate in the permafrost interval; (4) developing an acoustic model for the reservoir and seal; (5) developing a method to estimate gas-hydrate saturation and thickness from seismic attributes; and (6) assessing the potential gas-hydrate prospects from seismic data based on potential migration pathways, source, reservoir quality, and other relevant geological information. This report describes the first five steps in detail using well logs and provides the theoretical background for resource assessments carried out by the U.S. Geological Survey. Measured and predicted P-wave velocities enabled us to tie synthetic seismograms to the seismic data, and the gas-hydrate stability zone calculated from subsurface wellbore temperature data enabled us to focus our effort on the most promising depth intervals in the seismic data. A typical reservoir in this area is characterized by a P-wave velocity of 1.88 km/s, a porosity of 42 percent, and a clay volume content of 5 percent, whereas the seal sediments encasing the reservoir are characterized by a P-wave velocity of 2.2 km/s, a porosity of 32 percent, and a clay volume content of 20 percent. Because the impedance of a reservoir without gas hydrate is less than that of the seal, a complex amplitude variation with gas-hydrate saturation is predicted, namely polarity change, amplitude blanking, and high seismic amplitude (a bright spot). This amplitude variation with gas-hydrate saturation is the physical basis for the method used to quantify the resource potential of gas hydrates in this assessment.
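
The polarity-change prediction above follows from the normal-incidence reflection coefficient R = (Z2 - Z1) / (Z2 + Z1) at the seal/reservoir interface: as gas-hydrate saturation stiffens the reservoir, its impedance Z2 rises from below the seal impedance Z1 to above it, flipping the sign of R. The report gives velocities but not densities, so the sketch below fills that gap with Gardner's empirical relation as a stand-in; the velocity sweep is illustrative, not the study's saturation model.

```python
import numpy as np

def gardner_density(vp_ms):
    """Gardner's empirical relation rho = 0.31 * Vp^0.25 (Vp in m/s, rho in g/cc);
    an assumed stand-in, since the report's densities are not quoted here."""
    return 0.31 * vp_ms ** 0.25

def normal_incidence_r(vp_seal, vp_res):
    z1 = vp_seal * gardner_density(vp_seal)   # seal impedance
    z2 = vp_res * gardner_density(vp_res)     # reservoir impedance
    return (z2 - z1) / (z2 + z1)

seal_vp = 2200.0                              # m/s, from the report
# Sweep reservoir Vp from the water-saturated 1880 m/s upward as hydrate
# saturation increases (the upper bound is illustrative)
res_vp = np.linspace(1880, 3000, 5)
r = normal_incidence_r(seal_vp, res_vp)
```

The sign of R changes where the reservoir impedance crosses the seal impedance; near that crossover the reflection is dim (blanking), and well above it the interface produces a bright spot, matching the three amplitude regimes named in the abstract.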

  20. Measuring the seismic velocity in the top 15 km of Earth's inner core

    NASA Astrophysics Data System (ADS)

    Godwin, Harriet; Waszek, Lauren; Deuss, Arwen

    2018-01-01

    We present seismic observations of the uppermost layer of the inner core. This was formed most recently, thus its seismic features are related to current solidification processes. Previous studies have only constrained the east-west hemispherical seismic velocity structure in the Earth's inner core at depths greater than 15 km below the inner core boundary. The properties of shallower structure have not yet been determined, because the seismic waves PKIKP and PKiKP used for differential travel time analysis arrive close together and start to interfere. Here, we present a method to make differential travel time measurements for waves that turn in the top 15 km of the inner core, and measure the corresponding seismic velocity anomalies. We achieve this by generating synthetic seismograms to model the overlapping signals of the inner core phase PKIKP and the inner core boundary phase PKiKP. We then use a waveform comparison to attribute different parts of the signal to each phase. By measuring the same parts of the signal in both observed and synthetic data, we are able to calculate differential travel time residuals. We apply our method to data with ray paths which traverse the Pacific hemisphere boundary. We generate a velocity model for this region, finding lower velocity for deeper, more easterly ray paths. Forward modelling suggests that this region contains either a high velocity upper layer, or variation in the location of the hemisphere boundary with depth and/or latitude. Our study presents the first direct seismic observation of the uppermost 15 km of the inner core, opening new possibilities for further investigating the inner core boundary region.

  1. Seismogenic zones and attenuation laws for probabilistic seismic hazard assessment in low deformation area

    NASA Astrophysics Data System (ADS)

    Le Goff, Boris

    This study addresses objective, reproducible approaches to Probabilistic Seismic Hazard Analysis (PSHA), rather than the subjective methodologies that are currently used. It focuses particularly on the definition of the seismic sources, through seismotectonic zoning, and on the determination of historical earthquake locations. An important step in Probabilistic Seismic Hazard Analysis consists of defining the seismic source model. Such a model expresses the association of seismicity characteristics with the tectonically active geological structures evidenced by seismotectonic studies. Given that most faults in low-seismicity regions are not sufficiently well characterized, source models are generally defined as areal zones, delimited by finite boundary polygons, within which the seismicity and the geological features are deemed homogeneous (e.g., focal depth, seismicity rate). Besides the lack of data (short period of instrumental seismicity), such a method creates several problems for regions with low seismic activity: 1) resulting hazard maps are highly sensitive to the location of zone boundaries, while these boundaries are set by expert decision; 2) the zoning cannot represent variability or structural complexity in seismic parameters; 3) the seismicity rate is distributed throughout the zone, and the location of the determinant information used for its calculation is lost. We investigate an alternative approach to seismotectonic zoning, with three main objectives: 1) obtaining a reproducible method that 2) preserves the information on the sources and extent of the uncertainties, so that they can be propagated (through Ground Motion Prediction Equations onto the hazard maps), and that 3) redefines the seismic source concept to reflect our knowledge of seismogenic structures and clustering. To this end, Bayesian methods are favored.
First, a generative model with two zones, differentiated by two different surface activity rates, was developed, creating synthetic catalogs drawn from a Poisson occurrence model, a truncated Gutenberg-Richter magnitude-frequency relationship, and a uniform spatial distribution. Inference on this model makes it possible to assess the minimum number of data, nmin, required in an earthquake catalog to recover the activity rates of both zones and the limit between them with some level of accuracy. In this Bayesian model, the earthquake locations are essential; consequently, these data have to be obtained as accurately as possible. The main difficulty is reducing the location uncertainty of historical earthquakes. We propose to use the method of Bakun and Wentworth (1997) to re-estimate the epicentral regions of these events. This method uses the intensity data points directly, rather than isoseismal lines drawn by experts. The significant advantage of directly using individual intensity observations is that the procedures are explicit and hence the results are reproducible. The method provides an estimate of the epicentral region with confidence levels appropriate to the number of intensity data points used. As an example, we applied this methodology to the 1909 Benavente event, because of its controversial location and the particular shape of its isoseismal lines. A new location for the 1909 Benavente event is presented in this study, and its epicentral region is expressed with confidence levels related to the number of intensity data points. This epicentral region is improved by the development of a new intensity-distance attenuation law appropriate for mainland Portugal. This law is the first for mainland Portugal developed as a function of magnitude (M) rather than the subjective epicentral intensity.
From the logarithmic regression of each event, we define the functional form of the attenuation law and obtain: I = -1.9438 ln(D) + 4.1 Mw - 9.5763, for 4.4 ≤ Mw ≤ 6.2. Using this attenuation law, we reached a magnitude estimate for the 1909 Benavente event that is in good agreement with the instrumental one. The epicentral region estimate was also improved, with tighter confidence-level contours and a minimum of rms[MI] closer to the epicenter estimate of Karnik (1969). Finally, this two-zone model will serve as a reference in comparisons with other models that will incorporate other available data. Nevertheless, future improvements are needed to obtain a seismotectonic zoning. We emphasize that such an approach is reproducible once priors and data sets are chosen. Indeed, the objective is to incorporate expert opinions as priors and avoid relying on expert decisions: the products will be directly the result of the inference, when only one model is considered, or the result of a combination of models in the Bayesian sense.
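The attenuation law above is straightforward to evaluate. A short sketch with the constants taken from the abstract; the epicentral distance D is assumed here to be in kilometres, which the abstract does not state:

```python
import math

def intensity(distance_km, mw):
    """Predicted macroseismic intensity from the attenuation law quoted
    in the abstract: I = -1.9438 ln(D) + 4.1 Mw - 9.5763."""
    if not 4.4 <= mw <= 6.2:
        raise ValueError("law calibrated for 4.4 <= Mw <= 6.2")
    return -1.9438 * math.log(distance_km) + 4.1 * mw - 9.5763

# Intensity decays logarithmically with distance for a fixed magnitude:
# each factor of 10 in distance lowers I by 1.9438 * ln(10) units
print(intensity(10.0, 6.0))
print(intensity(100.0, 6.0))
```

Inverting the same relation for Mw, given observed intensities and distances, is the basis of the magnitude estimate reported for the 1909 Benavente event.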

  2. Seismic hazards in Thailand: a compilation and updated probabilistic analysis

    NASA Astrophysics Data System (ADS)

    Pailoplee, Santi; Charusiri, Punya

    2016-06-01

    A probabilistic seismic hazard analysis (PSHA) for Thailand was performed and compared to those of previous works. This PSHA was based upon (1) the most up-to-date paleoseismological data (slip rates), (2) the seismic source zones, (3) the seismicity parameters (a and b values), and (4) the strong ground-motion attenuation models suggested as suitable for Thailand. For the PSHA mapping, both the ground shaking and probability of exceedance (POE) were analyzed and mapped using various methods of presentation. In addition, site-specific PSHAs were demonstrated for ten major provinces within Thailand. For instance, a 2 and 10% POE in the next 50 years of 0.1-0.4 g and 0.1-0.2 g ground shaking, respectively, was found for western Thailand, defining this area as the most earthquake-prone region evaluated in Thailand. In a comparison among the ten selected provinces, the Kanchanaburi and Tak provinces had comparatively high seismic hazards, and therefore effective mitigation plans for these areas should be made. Although Bangkok was defined as a low seismic hazard area in this PSHA, a further study of seismic wave amplification due to the soft soil beneath Bangkok is required.
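The 2% and 10% in-50-years exceedance levels map to return periods under the time-independent (Poisson) occurrence model that underlies standard PSHA; a short sketch:

```python
import math

def return_period(poe, t_years):
    """Return period (years) implied by a probability of exceedance
    over t_years, under Poisson (time-independent) occurrence:
    POE = 1 - exp(-t / T), so T = -t / ln(1 - POE)."""
    return -t_years / math.log(1.0 - poe)

# The two hazard levels mapped in the study
print(round(return_period(0.02, 50)))   # 2% in 50 yr  -> 2475 yr
print(round(return_period(0.10, 50)))   # 10% in 50 yr -> 475 yr
```

These are the familiar ~2475-year and ~475-year ground motions of building-code hazard maps.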

  3. Seismic Structure of Perth Basin (Australia) and surroundings from Passive Seismic Deployments

    NASA Astrophysics Data System (ADS)

    Issa, N.; Saygin, E.; Lumley, D. E.; Hoskin, T. E.

    2016-12-01

    We image the subsurface structure of the Perth Basin, Western Australia, and surroundings using ambient seismic noise data from 14 seismic stations recently deployed by the University of Western Australia (UWA) and other available permanent stations from the Geoscience Australia seismic network and the Australian Seismometers in Schools program. Each of the 14 UWA seismic stations comprises a broadband sensor and a high-fidelity 3-component 10 Hz geophone, recording in tandem at 250 Hz and 1000 Hz. The other stations used in this study are equipped with short-period and broadband sensors. In addition, one shallow borehole station is operated with eight 3-component geophones at depths between 2 and 44 m. The network is deployed to characterize natural seismicity in the basin and to identify any microseismic activity across the Darling Fault Zone (DFZ), which bounds the basin to the east. The DFZ stretches approximately 1000 km north-south in Western Australia and is one of the longest fault zones on Earth with a limited number of detected earthquakes. We use seismic noise cross- and auto-correlation methods to map seismic velocity perturbations across the basin and the transition from the DFZ to the basin. Retrieved Green's functions are stable and show clearly dispersed waveforms. Travel times of the surface-wave Green's functions from noise cross-correlations are inverted with a two-step probabilistic framework to map the absolute shear-wave velocities as a function of depth. The single-station auto-correlations of the seismic noise yield the P-wave reflectivity under each station, marking the major discontinuities. The resulting images show the shear velocity perturbations across the region. We also quantify the variation of ambient seismic noise at different depths in the near surface using the geophones in the shallow borehole array.
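The noise cross-correlation step can be illustrated with a toy example: a shared random wavefield recorded at two stations with a fixed inter-station delay produces a correlation peak at that delay, which is the travel time carried by the empirical Green's function. The delay, record length, and lag window below are arbitrary illustration values:

```python
import numpy as np

rng = np.random.default_rng(0)
noise = rng.standard_normal(20000)   # shared ambient wavefield

delay = 25                 # inter-station travel time in samples (assumed)
sta_a = noise[delay:]      # station A records the passing field first
sta_b = noise[:-delay]     # station B sees the same field `delay` later

# Cross-correlate over +/- 100 lags; the peak emerges at the travel time
lags = np.arange(-100, 101)
xcorr = np.array([np.dot(sta_b[100:-100], np.roll(sta_a, k)[100:-100])
                  for k in lags])

print(int(lags[np.argmax(xcorr)]))  # recovers the 25-sample delay
```

In practice long records are stacked over months so that the weak coherent signal emerges from incoherent noise; the lag of the correlation peak plays the role of the surface-wave travel time used in the tomographic inversion.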

  4. Ultrasonic laboratory measurements of the seismic velocity changes due to CO2 injection

    NASA Astrophysics Data System (ADS)

    Park, K. G.; Choi, H.; Park, Y. C.; Hwang, S.

    2009-04-01

    Monitoring the behavior and movement of carbon dioxide (CO2) in the subsurface is quite important in the sequestration of CO2 in geological formations, because such information provides a basis for demonstrating the safety of CO2 sequestration. Several recent applications in commercial and pilot-scale projects show that 4D surface or borehole seismic methods are among the most promising techniques for this purpose. However, information interpreted from seismic velocity changes can be quite subjective and qualitative without petrophysical characterization of the effect of CO2 saturation on those changes, since seismic wave velocity depends on various factors such as mineralogical composition, hydrogeological factors, and in-situ conditions. In this respect, we have developed an ultrasonic laboratory measurement system and have carried out measurements on a porous sandstone sample to characterize the effects of CO2 injection on seismic velocity and amplitude. Measurements are made with ultrasonic piezoelectric transducers mounted on both ends of a cylindrical core sample under various pressure, temperature, and saturation conditions. According to our fundamental experiments, injected CO2 decreases seismic velocity and amplitude. We identified that the velocity decreases by about 6% or more until the sample is fully saturated with CO2, but that the attenuation of seismic amplitude is more drastic than the velocity decrease. We also identified that Vs/Vp, or the elastic moduli, are more sensitive to CO2 saturation. This means that changes in seismic amplitude and elastic moduli can be alternative target anomalies for seismic monitoring of CO2 sequestration. We thus expect that further research can establish more quantitative petrophysical relationships between changes in seismic attributes and CO2 concentration, providing a basic relation for the quantitative assessment of CO2 sequestration.
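The sensitivity of Vs/Vp to CO2 saturation is commonly rationalized by Gassmann fluid substitution: the pore fluid lowers the saturated bulk modulus but leaves the shear modulus unchanged, so Vp drops while Vs barely moves. A sketch with assumed frame moduli and fluid properties; none of these numbers are from the study, so the magnitudes differ from the ~6% the authors measured:

```python
import math

def gassmann_k_sat(k_dry, k_min, k_fl, phi):
    """Saturated bulk modulus (GPa) from Gassmann's equation."""
    num = (1.0 - k_dry / k_min) ** 2
    den = phi / k_fl + (1.0 - phi) / k_min - k_dry / k_min ** 2
    return k_dry + num / den

def bulk_density(phi, rho_fl, rho_grain=2.65):
    """Bulk density (g/cc) from porosity and fluid/grain densities."""
    return (1.0 - phi) * rho_grain + phi * rho_fl

def velocities(k_sat, mu, rho):
    """(Vp, Vs) in m/s from moduli in GPa and density in g/cc."""
    vp = math.sqrt((k_sat + 4.0 * mu / 3.0) * 1e9 / (rho * 1000.0))
    vs = math.sqrt(mu * 1e9 / (rho * 1000.0))
    return vp, vs

phi, k_min, k_dry, mu = 0.3, 36.0, 5.0, 4.0   # assumed sandstone frame

# Brine (K = 2.25 GPa, 1.0 g/cc) vs. CO2 (K = 0.08 GPa, 0.7 g/cc), assumed
vp_b, vs_b = velocities(gassmann_k_sat(k_dry, k_min, 2.25, phi), mu,
                        bulk_density(phi, 1.0))
vp_c, vs_c = velocities(gassmann_k_sat(k_dry, k_min, 0.08, phi), mu,
                        bulk_density(phi, 0.7))

print(vp_b, vp_c)                 # Vp falls on CO2 substitution
print(vs_b / vp_b, vs_c / vp_c)   # Vs/Vp rises, hence its sensitivity
```
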

  5. Levee evaluation using MASW: Preliminary findings from the Citrus Lakefront Levee, New Orleans, Louisiana

    USGS Publications Warehouse

    Lane, John W.; Ivanov, Julian M.; Day-Lewis, Frederick D.; Clemens, Drew; Patev, Robert; Miller, Richard D.

    2008-01-01

    The utility of the multi-channel analysis of surface waves (MASW) seismic method for non-invasive assessment of earthen levees was evaluated for a section of the Citrus Lakefront Levee, New Orleans, Louisiana. This test was conducted after the New Orleans area levee system had been stressed by Hurricane Katrina in 2005. The MASW data were acquired in a seismically noisy, urban environment using an accelerated weight-drop seismic source and a towed seismic land streamer. Much of the seismic data were contaminated with higher-order mode guided waves, requiring application of muting filtering techniques to improve interpretability of the dispersion curves. Comparison of shear-wave velocity sections with boring logs suggests the existence of four distinct horizontal layers within and beneath the levee: (1) the levee core, (2) the levee basal layer of fat clay, (3) a sub-levee layer of silty sand, and (4) underlying Pleistocene deposits of sandy lean clay. Along the surveyed section of levee, lateral variations in shear-wave velocity are interpreted as changes in material rigidity, suggestive of construction or geologic heterogeneity, or possibly, that dynamic processes (such as differential settlement) are affecting discrete levee areas. The results of this study suggest that the MASW method is a geophysical tool with significant potential for non-invasive characterization of vertical and horizontal variations in levee material shear strength. Additional work, however, is needed to fully understand and address the complex seismic wave propagation in levee structures.

  6. Time-dependent seismic tomography

    USGS Publications Warehouse

    Julian, B.R.; Foulger, G.R.

    2010-01-01

    Of methods for measuring temporal changes in seismic-wave speeds in the Earth, seismic tomography is among those that offer the highest spatial resolution. 3-D tomographic methods are commonly applied in this context by inverting seismic wave arrival time data sets from different epochs independently and assuming that differences in the derived structures represent real temporal variations. This assumption is dangerous because the results of independent inversions would differ even if the structure in the Earth did not change, due to observational errors and differences in the seismic ray distributions. The latter effect may be especially severe when data sets include earthquake swarms or aftershock sequences, and may produce the appearance of correlation between structural changes and seismicity when the wave speeds are actually temporally invariant. A better approach, which makes it possible to assess what changes are truly required by the data, is to invert multiple data sets simultaneously, minimizing the difference between models for different epochs as well as the rms arrival-time residuals. This problem leads, in the case of two epochs, to a system of normal equations whose order is twice as great as for a single epoch. The direct solution of this system would require twice as much memory and four times as much computational effort as would independent inversions. We present an algorithm, tomo4d, that takes advantage of the structure and sparseness of the system to obtain the solution with essentially no more effort than independent inversions require.
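The simultaneous two-epoch inversion described above amounts to augmenting the least-squares system with a damping term on the inter-epoch model difference. A dense toy version follows (random ray matrices and an assumed damping weight; the actual tomo4d algorithm exploits the sparsity and block structure that this sketch ignores):

```python
import numpy as np

rng = np.random.default_rng(1)

n_rays, n_cells = 60, 10
true_m = rng.standard_normal(n_cells)
G1 = rng.standard_normal((n_rays, n_cells))   # epoch-1 ray coverage
G2 = rng.standard_normal((n_rays, n_cells))   # epoch-2 coverage differs

# The Earth is identical at both epochs; only noise and rays differ
d1 = G1 @ true_m + 0.1 * rng.standard_normal(n_rays)
d2 = G2 @ true_m + 0.1 * rng.standard_normal(n_rays)

lam = 30.0  # damping on the inter-epoch model difference (assumed value)
Z = np.zeros((n_rays, n_cells))
A = np.block([[G1, Z],
              [Z, G2],
              [lam * np.eye(n_cells), -lam * np.eye(n_cells)]])
b = np.concatenate([d1, d2, np.zeros(n_cells)])

m_joint = np.linalg.lstsq(A, b, rcond=None)[0]
m1, m2 = m_joint[:n_cells], m_joint[n_cells:]

# Independent single-epoch inversions, for comparison
m1_ind = np.linalg.lstsq(G1, d1, rcond=None)[0]
m2_ind = np.linalg.lstsq(G2, d2, rcond=None)[0]

# The joint inversion reports far less spurious "temporal change"
print(np.abs(m1 - m2).max(), np.abs(m1_ind - m2_ind).max())
```

Because the true structure is static here, any apparent change is an artifact of noise and ray geometry; the coupled system suppresses it while still honoring both data sets.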

  7. Thermal Alteration of Pyrite to Pyrrhotite During Earthquakes: New Evidence of Seismic Slip in the Rock Record

    NASA Astrophysics Data System (ADS)

    Yang, Tao; Dekkers, Mark J.; Chen, Jianye

    2018-02-01

    Seismic slip zones convey important information on earthquake energy dissipation and rupture processes. However, geological records of earthquakes along exhumed faults remain scarce. They can be traced with a variety of methods that establish the frictional heating of seismic slip, although each has certain assets and disadvantages. Here we describe a mineral magnetic method to identify seismic slip, along with its peak temperature, through examination of magnetic mineral assemblages within a fault zone in deep-sea sediments cored from the Japan Trench—one of the most seismically active regions around Japan—during the Integrated Ocean Drilling Program Expedition 343, the Japan Trench Fast Drilling Project. Fault zone sediments and adjacent host sediments were analyzed mineral magnetically, supplemented by scanning electron microscope observations with associated energy dispersive X-ray spectroscopy analyses. The presence of the magnetic mineral pyrrhotite appears to be restricted to three fault zones occurring at 697, 720, and 801 m below sea floor in the frontal prism sediments, while it is absent in the adjacent host sediments. Elevated temperatures and coseismic hot fluids resulting from frictional heating during earthquake rupture induced partial reaction of preexisting pyrite to pyrrhotite. The presence of pyrrhotite, in combination with pyrite-to-pyrrhotite reaction kinetics, constrains the peak temperature to between 640 and 800°C. The integrated mineral-magnetic, microscopic, and kinetic approach adopted here is a useful tool to identify seismic slip along faults without frictional melt and to establish the associated maximum temperature.

  8. Target-oriented retrieval of subsurface wave fields - Pushing the resolution limits in seismic imaging

    NASA Astrophysics Data System (ADS)

    Vasconcelos, Ivan; Ozmen, Neslihan; van der Neut, Joost; Cui, Tianci

    2017-04-01

    Travelling wide-bandwidth seismic waves have long been used as a primary tool in exploration seismology because they can probe the subsurface over large distances while retaining relatively high spatial resolution. The well-known Born resolution limit often seems to be the lower bound on spatial imaging resolution in real-life examples. In practice, data acquisition cost, time constraints, and other factors can worsen the resolution achieved by wavefield imaging. Could we obtain images whose resolution beats the Born limit? Would it be practical to achieve, and what are we missing today? In this talk, we will cover aspects of linear and nonlinear seismic imaging to understand the elements that play a role in obtaining "super-resolved" seismic images. New redatuming techniques, such as the Marchenko method, enable the retrieval of subsurface fields that include multiple scattering interactions while requiring relatively little knowledge of model parameters. Together with new concepts in imaging, such as Target-Enclosing Extended Images, these redatuming methods enable new targeted imaging frameworks. We will make a case as to why target-oriented approaches to reconstructing subsurface-domain wavefields from surface data may help in increasing the resolving power of seismic imaging and in pushing the limits on parameter estimation. We will illustrate this using a field data example. Finally, we will draw connections between seismic and other imaging modalities, and discuss how this framework could be put to use in other applications.

  9. Earth physicist describes US nuclear test monitoring system

    NASA Astrophysics Data System (ADS)

    1986-01-01

    U.S. capabilities to monitor underground nuclear weapons tests in the USSR were examined. American methods used in monitoring underground nuclear tests are enumerated. The U.S. technical means of monitoring Soviet nuclear weapons testing, and whether it is possible to conduct tests that could not be detected by these means, are examined. The worldwide seismic station network in 55 countries available to the U.S. for seismic detection and measurement of underground nuclear explosions is outlined, as are the systems of seismic research observatories in 15 countries and seismic array stations in 12 countries, including the advanced computerized data-processing capabilities of these facilities. The level of capability of the U.S. seismic system for monitoring nuclear tests, and other, nonseismic means of monitoring, such as hydroacoustics and the recording of effects in the atmosphere, ionosphere, and the Earth's magnetic field, are discussed.

  10. Characterizing a large shear-zone with seismic and magnetotelluric methods: The case of the Dead Sea Transform

    USGS Publications Warehouse

    Maercklin, N.; Bedrosian, P.A.; Haberland, C.; Ritter, O.; Ryberg, T.; Weber, M.; Weckmann, U.

    2005-01-01

    Seismic tomography, imaging of seismic scatterers, and magnetotelluric soundings reveal a sharp lithologic contrast along a ~10 km long segment of the Arava Fault (AF), a prominent fault of the southern Dead Sea Transform (DST) in the Middle East. Low seismic velocities and resistivities occur on its western side and higher values east of it, and the boundary between the two units coincides partly with a seismic scattering image. At 1-4 km depth the boundary is offset to the east of the AF surface trace, suggesting that at least two fault strands exist, and that slip occurred on multiple strands throughout the margin's history. A westward fault jump, possibly associated with straightening of a fault bend, explains both our observations and the narrow fault zone observed by others. Copyright 2005 by the American Geophysical Union.

  11. Some Probabilistic and Statistical Properties of the Seismic Regime of Zemmouri (Algeria) Seismoactive Zone

    NASA Astrophysics Data System (ADS)

    Baddari, Kamel; Bellalem, Fouzi; Baddari, Ibtihel; Makdeche, Said

    2016-10-01

    Statistical tests have been used to fit the Zemmouri seismic data with a distribution function. The Pareto law has been used, and the probabilities of various expected earthquakes were computed. A mathematical expression giving the quantiles was established. The limiting law of extreme values confirmed the accuracy of the adjustment method. Using the moment magnitude scale, a probabilistic model was built to predict the occurrence of strong earthquakes. The seismic structure has been characterized by the slope of the recurrence plot γ, fractal dimension D, concentration parameter Ksr, and Hurst exponents Hr and Ht. The values of D, γ, Ksr, Hr, and Ht diminished many months before the principal seismic shock (M = 6.9) of the studied seismoactive zone occurred. Three stages of deformation of the geophysical medium are manifested in the variation of the coefficient G% of the clustering of minor seismic events.
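Fitting a Pareto law of the kind used here is typically done by maximum likelihood; a minimal sketch on synthetic data, where the tail index, threshold, and sample size are arbitrary illustration values:

```python
import numpy as np

# Maximum-likelihood (Hill) estimate of a Pareto tail index: for samples
# x >= xmin from a Pareto I distribution, alpha = n / sum(ln(x / xmin)).
rng = np.random.default_rng(7)
xmin, alpha_true = 1.0, 1.5
x = (rng.pareto(alpha_true, size=5000) + 1.0) * xmin  # Pareto I samples

alpha_hat = len(x) / np.log(x / xmin).sum()
print(alpha_hat)   # close to the true index of 1.5
```

Exceedance probabilities and quantiles of expected earthquakes then follow in closed form from the fitted index.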

  12. The impact of lake level variation on seismicity around XianNvShan fault in the Three Gorge area

    NASA Astrophysics Data System (ADS)

    Liao, W.; Li, J.; Zhang, L.

    2017-12-01

    Since the impounding of the Three Gorges Project in 2003, more than 10,000 earthquakes have been recorded by the digital telemetry seismic network. Most of them occurred around the GaoQiao fault and the northern segment of the XianNvShan fault. In March 2014, M4.3 and M4.7 earthquakes occurred in the northern segment of the XianNvShan fault. In order to study the relationship between the seismicity around the XianNvShan fault and the lake level variation, we deployed 5 temporary seismic stations in this area from 2015 to 2016. More than 3000 earthquakes recorded during the temporary seismic monitoring were located by hypocenter determination and by waveform cross-correlation with the double-difference method. The depth of most earthquakes is from 5 to 7 km, but it is obvious that the variation in depth is related to the fluctuation of the water level.

  13. Dual Roadside Seismic Sensor for Moving Road Vehicle Detection and Characterization

    PubMed Central

    Wang, Hua; Quan, Wei; Wang, Yinhai; Miller, Gregory R.

    2014-01-01

    This paper presents a method for using a dual roadside seismic sensor, installed on a road shoulder, to detect moving vehicles on the roadway. Recorded seismic signals are split into fixed time intervals. In each interval, the time delay of arrival (TDOA) is estimated using a generalized cross-correlation approach with phase transform (GCC-PHAT). Various kinds of vehicle characterization information, including vehicle speed, axle spacing, axle detection, and moving direction, can also be extracted from the collected seismic signals, as demonstrated in this paper. The error in both vehicle speed and axle spacing detected by this approach has been shown to be less than 20% through field tests conducted on an urban street in Seattle. Compared to most existing sensors, this new dual seismic sensor design is cost-effective, easy to install, and effective in gathering information for various traffic management applications. PMID:24526304
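The TDOA step can be sketched directly from the description above: whiten the cross-spectrum to unit magnitude (the phase transform) and pick the peak of the inverse transform. The signal, delay, and sampling rate below are arbitrary test values, not data from the paper:

```python
import numpy as np

def gcc_phat(sig, ref, fs=1.0):
    """TDOA of `sig` relative to `ref` via GCC-PHAT: the cross-spectrum
    is whitened to unit magnitude before the inverse transform."""
    n = len(sig) + len(ref)
    X = np.fft.rfft(sig, n=n) * np.conj(np.fft.rfft(ref, n=n))
    X /= np.abs(X) + 1e-12            # phase transform (PHAT) weighting
    cc = np.fft.irfft(X, n=n)
    shift = int(np.argmax(np.abs(cc)))
    if shift > n // 2:                # map wrap-around to negative lags
        shift -= n
    return shift / fs

rng = np.random.default_rng(3)
s = rng.standard_normal(4096)
delayed = np.roll(s, 37)   # the second geophone sees it 37 samples later

print(gcc_phat(delayed, s))  # recovers the 37-sample delay
```

With two sensors a known distance apart along the travel direction, vehicle speed then follows as sensor spacing divided by the measured delay.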

  14. Thin-Layering Effect On Estimating Seismic Attenuation In Methane Hydrate-Bearing Sediments

    NASA Astrophysics Data System (ADS)

    Lee, K.; Matsushima, J.

    2012-12-01

    Seismic attenuation is one of the important parameters providing information for both the detection and quantitative assessment of gas hydrates. We estimated seismic attenuation (1/Q) from surface seismic data acquired at the Nankai Trough, Japan. We adapt the Q-versus-offset (QVO) method to calculate robust and continuous interval attenuation from CMP gathers. We observe high attenuation in methane hydrate-bearing sediments above the BSR region. However, some negative 1/Q values also appear, implying that the amplitude of high-frequency components increases with depth. Such results may be due to the tuning effect. Here, we carried out numerical tests to see how thin-layering affects seismic attenuation estimates. The results show that tuning considerably influences the attenuation results, causing lower 1/Q values (lower attenuation) and negative 1/Q values.
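The interval attenuation estimates discussed above rest on the spectral-ratio principle: amplitudes decay as exp(-pi f t / Q), so the log spectral ratio between two traveltimes is linear in frequency with slope -pi dt / Q. A noise-free sketch (the frequency band, traveltimes, and Q are arbitrary; QVO itself adds offset-dependent corrections not modelled here):

```python
import numpy as np

freqs = np.linspace(5.0, 60.0, 50)    # usable band in Hz (assumed)
q_true, dt = 40.0, 0.8                # interval Q and traveltime (assumed)

# Amplitude spectra after 0.2 s and after 0.2 + dt s of propagation
spec_shallow = np.exp(-np.pi * freqs * 0.2 / q_true)
spec_deep = np.exp(-np.pi * freqs * (0.2 + dt) / q_true)

slope = np.polyfit(freqs, np.log(spec_deep / spec_shallow), 1)[0]
q_est = -np.pi * dt / slope
print(q_est)   # recovers the interval Q of 40
```

A tuning-boosted high-frequency amplitude at depth flips the sign of the fitted slope, which is exactly how the negative 1/Q values described in the abstract arise.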

  15. Analysis of seismic signals related to rockfalls in the Dolomieu crater, Piton de la Fournaise, La Réunion

    NASA Astrophysics Data System (ADS)

    Durand, Virginie; Mangeney, Anne; Lebouteiller, Pauline; Hibert, Clément; Ovpf Team

    2015-04-01

    The seismic and photogrammetric networks of the Piton de la Fournaise volcano (La Réunion Island), maintained by the OVPF, are well suited to the study of seismic signals generated by rockfalls. In this work, we focus on the signals generated by rockfalls occurring in the Dolomieu crater. The aim of this study is to understand the link between rockfall and volcanic activity. One key question is whether the number and characteristics of rockfalls can provide a precursor to the occurrence of an eruption. Another aim of this work is to determine whether there is a link between rockfall activity and precipitation, temperature changes, and seismic activity. For this, we analyze the rockfall activity preceding the June 2014 eruption. To detect the events, we use a method based on the kurtosis function that picks the beginning of the signals. We then localize the events using the arrival times of the waves and a propagation model computed with the Fast Marching Method. Finally, we calculate the seismic energy generated by these rockfalls. We thus obtain a catalog of events that we can exploit to determine the characteristics and temporal evolution of rockfall activity in the Dolomieu crater. A power law is observed between the seismic energy and the duration of rockfalls, making it possible to calculate the rockfall volume from the ratio between seismic and potential energy. From previous studies of the Piton de la Fournaise volcano, we can infer that rockfall activity in the crater is correlated with eruptions: rockfall activity seems to begin before the eruption. We compare the spatio-temporal changes in rockfall characteristics to the volcanic, seismic, and rain activity. We show in particular that the rockfall size seems to differ depending on whether the intrusion of magma reaches the surface, providing potential precursors to the occurrence of an eruption.
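The volume-from-energy step reduces to a one-line relation once a seismic-to-potential energy ratio is adopted. All numbers below (density, efficiency, drop height) are hypothetical placeholders; the study calibrates the ratio empirically from its catalog:

```python
# Sketch: rockfall volume from the seismic-to-potential energy ratio,
# assuming efficiency * (rho * g * V * h) = E_seismic.
RHO = 2700.0   # rock density, kg/m^3 (assumed)
G = 9.81       # gravitational acceleration, m/s^2

def volume_from_energy(e_seismic, drop_height, efficiency=1e-4):
    """Rockfall volume (m^3) implied by a radiated seismic energy (J),
    a drop height (m), and a seismic efficiency (dimensionless)."""
    return e_seismic / (efficiency * RHO * G * drop_height)

# A 1 MJ seismic event with a ~50 m drop implies a volume on the order
# of a few thousand cubic metres under these assumptions
print(volume_from_energy(1.0e6, 50.0))
```
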

  16. Epicenter Location of Regional Seismic Events Using Love Wave and Rayleigh Wave Ambient Seismic Noise Green's Functions

    NASA Astrophysics Data System (ADS)

    Levshin, A. L.; Barmin, M. P.; Moschetti, M. P.; Mendoza, C.; Ritzwoller, M. H.

    2011-12-01

    We describe a novel method to locate regional seismic events based on exploiting Empirical Green's Functions (EGFs) produced from ambient seismic noise. Elastic EGFs between pairs of seismic stations are determined by cross-correlating long time-series of ambient noise recorded at the two stations. The EGFs principally contain Rayleigh waves on the vertical-vertical cross-correlations and Love waves on the transverse-transverse cross-correlations. Earlier work (Barmin et al., "Epicentral location based on Rayleigh wave empirical Green's functions from ambient seismic noise", Geophys. J. Int., 2011) showed that group time delays observed on Rayleigh wave EGFs can be exploited to locate moderate-sized earthquakes to within about 1 km using USArray Transportable Array (TA) stations. The principal advantage of the method is that the ambient noise EGFs are affected by lateral variations in structure similarly to the earthquake signals, so the location is largely unbiased by 3-D structure. However, locations based on Rayleigh waves alone may be biased by more than 1 km if the earthquake depth is unknown but lies between 2 km and 7 km. This presentation is motivated by the fact that group time delays for Love waves are much less affected by earthquake depth than those for Rayleigh waves; thus, exploiting Love wave EGFs may reduce location bias caused by uncertainty in event depth. The advantage of Love waves for locating seismic events, however, is mitigated by the fact that Love wave EGFs have a smaller SNR than Rayleigh waves. Here, we test the use of Love and Rayleigh wave EGFs between 5- and 15-sec period to locate seismic events based on the USArray TA in the western US. We focus on locating aftershocks of the 2008 M 6.0 Wells earthquake, mining blasts in Wyoming and Montana, and small earthquakes near Norman, OK and Dallas, TX, some of which may be triggered by hydrofracking or injection wells.
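The location scheme can be caricatured as a grid search over candidate epicenters minimizing origin-time-free group-delay residuals. This toy uses a constant group velocity, whereas the whole point of the EGF approach is that empirical delays absorb 3-D structure; the station coordinates, velocity, and source are invented:

```python
import numpy as np

# Station coordinates in km (assumed) and a uniform group velocity
stations = np.array([[0.0, 50.0], [40.0, -10.0], [-30.0, -30.0], [60.0, 40.0]])
v_group = 3.0                       # km/s, assumed
true_src = np.array([12.0, 7.0])

obs_t = np.linalg.norm(stations - true_src, axis=1) / v_group

xs = np.arange(-50.0, 50.5, 0.5)    # 0.5 km search grid
ys = np.arange(-50.0, 50.5, 0.5)
best, best_misfit = None, np.inf
for x in xs:
    for y in ys:
        t = np.linalg.norm(stations - np.array([x, y]), axis=1) / v_group
        # Origin time is unknown: compare demeaned delays only
        misfit = np.sum((t - t.mean() - (obs_t - obs_t.mean())) ** 2)
        if misfit < best_misfit:
            best, best_misfit = (x, y), misfit

print(best)   # the grid point at the true epicenter wins
```
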

  17. The Search for Fluid Injection-induced Seismicity in California Oilfields

    NASA Astrophysics Data System (ADS)

    Layland-Bachmann, C. E.; Brodsky, E. E.; Foxall, W.; Goebel, T.; Jordan, P. D.

    2017-12-01

    During recent years, earthquakes associated with human activity have become a matter of heightened public concern. Wastewater injection is a major concern, as seismic events with magnitudes larger than M5.5 have been linked to this practice. Much of the research in the United States is focused on the mid-continental regions, where low rates of naturally occurring seismicity and high-volume injection activities facilitate identification of potentially induced seismic events by statistical correlation. However, available industry data are often limited in these regions, which limits our ability to connect specific human activities to earthquakes. Specifically, many previous studies have focused primarily on injection activity in single wells, ignoring the interconnectivity of production and injection in a reservoir. The situation in California differs from the central U.S. in two ways: (1) a rich dataset of oilfield activity is publicly available from state agencies, which enables a more in-depth investigation of the human forcing; and (2) the identification of potentially anthropogenically induced earthquakes is complex as a result of high tectonic activity. Here we address both differences. We utilize a public database of hydrologically connected reservoirs to assess whether there are any statistically significant correlations between the net injected volumes, reservoir pressures, and injection depths and the earthquake locations and frequencies of occurrence. We introduce a framework of physical and empirical models and statistical techniques to identify potentially induced seismic events. While the aim is to apply the methods statewide, we first apply them in the Southern San Joaquin Valley. Although we find an anomalously high earthquake rate in Southern Kern County oilfields, consistent with previous studies, we do not find a simple, straightforward correlation.
To successfully study induced seismicity we need a seismic catalog that is complete and consistent down to small magnitudes. During this study, we found some important seismic coverage gaps in critical oilfields in the Central Valley that need to be addressed in order to provide societally relevant assessments.

  18. Teleseismic Array Studies of Earth's Core-Mantle Boundary

    NASA Astrophysics Data System (ADS)

    Alexandrakis, Catherine

    2011-12-01

    The core-mantle boundary (CMB) is an inaccessible and complex region, knowledge of which is vital to our understanding of many Earth processes. Above it is the heterogeneous lower mantle. Below the boundary is the outer core, composed of liquid iron, nickel, and some lighter elements. Elucidating how these two distinct layers interact may enable researchers to better understand the geodynamo, global tectonics, and overall Earth history. One parameter that can be used to study structure and limit potential chemical compositions is seismic-wave velocity. Current global velocity models have significant uncertainties in the 200 km above and below the CMB. In this thesis, these regions are studied using three methods. The upper outer core is studied using two seismic array methods. First, a modified vespa, or slant-stack, method is applied to seismic observations at broadband seismic arrays and at large, dense groups of broadband seismic stations dubbed 'virtual' arrays. Observations of core-refracted teleseismic waves, such as SmKS, are used to extract relative arrival times. As in previous studies, lower-mantle heterogeneities influence the extracted arrival times, giving significant scatter. To remove raypath effects, a new method was developed, called Empirical Transfer Functions (ETFs). When applied to SmKS waves, this method effectively isolates arrival-time perturbations caused by outer-core velocities. By removing raypath effects, the signals can be stacked, further reducing scatter. The results of this work were published as a new 1-D outer-core model, called AE09. This model describes a well-mixed outer core. Two array methods are used to detect lower-mantle heterogeneities, in particular Ultra-Low Velocity Zones (ULVZs). The ETF method and beamforming are used to isolate a weak P-wave that diffracts along the CMB.
While neither the ETF method nor beam forming could adequately image the low-amplitude phase, beam forms of two events indicate precursors to the SKS and SKKS phase, which may be ULVZ indicators. Finally, cross-correlated observed and modelled beams indicate a tendency towards a ULVZ-like lower mantle in the study region.
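    The slant-stack (vespa) processing described in this record amounts to delaying each array trace by a trial slowness times its distance offset and summing; energy maxima in the resulting vespagram reveal the slowness and arrival time of coherent phases such as SmKS. The sketch below is illustrative only, not the thesis code: the slowness units (s/deg), the choice of the array-mean reference distance, and the use of integer-sample shifts are all assumptions.

```python
import numpy as np

def slant_stack(traces, dists, times, slownesses, ref_dist=None):
    """Delay-and-sum (vespa) stack of array traces over trial slownesses.

    traces     : (n_sta, n_t) array of waveforms sampled at `times`
    dists      : (n_sta,) epicentral distances in degrees
    slownesses : trial horizontal slownesses in s/deg (assumed units)
    Returns a (n_slow, n_t) vespagram; its maximum indicates the
    slowness and time of the most coherent arrival across the array.
    """
    if ref_dist is None:
        ref_dist = dists.mean()  # assumed reference: array centre
    dt = times[1] - times[0]
    vespa = np.zeros((len(slownesses), traces.shape[1]))
    for i, p in enumerate(slownesses):
        for trace, d in zip(traces, dists):
            # shift each trace back by its predicted moveout, then sum
            shift = int(round(p * (d - ref_dist) / dt))
            vespa[i] += np.roll(trace, -shift)
    return vespa / len(dists)
```

    When the trial slowness matches the true moveout of a phase, the delayed traces align and stack constructively; at other slownesses the energy smears out, which is what makes the vespagram usable for phase identification.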

  19. Signal restoration through deconvolution applied to deep mantle seismic probes

    NASA Astrophysics Data System (ADS)

    Stefan, W.; Garnero, E.; Renaut, R. A.

    2006-12-01

    We present a method of signal restoration to improve the signal-to-noise ratio, sharpen seismic arrival onsets, and act as an empirical source deconvolution of specific seismic arrivals. An observed time-series g is modelled as the convolution of a simpler time-series f with an invariant point spread function (PSF) h that attempts to account for the earthquake source process. The method is applied to the shear-wave time window containing SKS and S, where using a Gaussian PSF produces more impulsive, narrower signals in the wave train. The resulting restored time-series facilitates more accurate and objective relative traveltime estimation of the individual seismic arrivals. We demonstrate the accuracy of the restoration method on synthetic seismograms generated by the reflectivity method. Clean and sharp reconstructions are obtained with real data, even for signals with relatively high noise content. Reconstructed signals are simpler, more impulsive, and narrower, which highlights details of arrivals that are not readily apparent in raw waveforms. In particular, phases nearly coincident in time can be separately identified after processing. This is demonstrated for two seismic wave pairs used to probe deep-mantle and core-mantle boundary structure: (1) the Sab and Scd arrivals, which travel above and within, respectively, a 200-300-km-thick layer of higher-than-average shear-wave velocity at the base of the mantle, observable in the 88-92 deg epicentral distance range; and (2) SKS and SPdiffKS, core waves of which the latter includes short arcs of P-wave diffraction, nearly identical in timing near 108-110 deg in distance. A Java/Matlab algorithm was developed for the signal restoration and can be downloaded from the authors' web page, along with example data and synthetic seismograms.
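    The g = f * h restoration described above can be sketched as a regularized spectral division: divide the observed spectrum by the Gaussian PSF spectrum while damping frequencies where the PSF has little energy. This water-level style stabilization is a stand-in assumption; the published Java/Matlab algorithm may use a different regularization, and the `eps` damping level here is arbitrary.

```python
import numpy as np

def gaussian_psf(n, dt, width):
    """Discrete Gaussian point spread function, centred, unit area."""
    t = (np.arange(n) - n // 2) * dt
    h = np.exp(-0.5 * (t / width) ** 2)
    return h / h.sum()

def deconvolve(g, h, eps=1e-2):
    """Regularized deconvolution of g = f * h for the simpler series f.

    The denominator adds eps * max|H|^2 so the spectral division stays
    stable where H is small (a crude water-level regularization).
    """
    # ifftshift moves the centred PSF to sample 0, making it zero-phase
    G = np.fft.fft(g)
    H = np.fft.fft(np.fft.ifftshift(h))
    denom = H * np.conj(H) + eps * np.max(np.abs(H)) ** 2
    f = np.fft.ifft(G * np.conj(H) / denom)
    return f.real
```

    Smaller `eps` sharpens the restored arrivals but amplifies noise; larger values trade resolution for stability, mirroring the usual deconvolution trade-off the abstract alludes to.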

  20. Use of expert judgment elicitation to estimate seismic vulnerability of selected building types

    USGS Publications Warehouse

    Jaiswal, K.S.; Aspinall, W.; Perkins, D.; Wald, D.; Porter, K.A.

    2012-01-01

    Pooling engineering input on earthquake building vulnerability through an expert judgment elicitation process requires careful deliberation. This article provides an overview of expert judgment procedures, including the Delphi approach and the Cooke performance-based method, for estimating the seismic vulnerability of a building category.
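    The performance-based pooling the article discusses can be illustrated with a weighted linear opinion pool. Note the simplification: in Cooke's classical model the weights are derived from each expert's calibration and information scores on seed questions, whereas here they are simply supplied as inputs; the two-damage-state layout in the usage example is likewise hypothetical.

```python
import numpy as np

def linear_pool(expert_probs, weights):
    """Weighted linear opinion pool of expert probability assessments.

    expert_probs : (n_experts, n_outcomes) array; each row is one
        expert's probability distribution over outcomes (e.g. damage
        states for a building type) and should sum to 1.
    weights : (n_experts,) non-negative performance weights. In Cooke's
        method these would come from seed-question scoring; here they
        are taken as given and normalized.
    """
    w = np.asarray(weights, float)
    w = w / w.sum()
    return w @ np.asarray(expert_probs, float)
```

    For example, pooling two experts who assign [0.8, 0.2] and [0.2, 0.8] to two damage states with weights 3 and 1 yields [0.65, 0.35]: the better-performing expert dominates, but the pooled result still sums to one.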
