NASA Astrophysics Data System (ADS)
Koval, Viacheslav
The seismic design provisions of the CSA-S6 Canadian Highway Bridge Design Code and the AASHTO LRFD Seismic Bridge Design Specifications have been developed primarily based on historical earthquake events that have occurred along the west coast of North America. For the design of seismic isolation systems, these codes include simplified analysis and design methods. The appropriateness and range of application of these methods are investigated through extensive parametric nonlinear time history analyses in this thesis. It was found that existing design guidelines need to be adjusted to better capture the expected nonlinear response of isolated bridges. For isolated bridges located in eastern North America, new damping coefficients are proposed. The applicability limits of the code-based simplified methods have been redefined to ensure that the modified method leads to conservative results and that a wider range of seismically isolated bridges can be covered by this method. The possibility of further improving current simplified code methods was also examined. By transforming the quantity of allocated energy into a displacement contribution, an idealized analytical solution is proposed as a new simplified design method. This method realistically reflects the effects of ground-motion and system design parameters, including the effects of a drifted oscillation center. The proposed method is therefore more appropriate than existing simplified methods and is applicable to isolation systems exhibiting a wider range of properties. A multi-level-hazard performance matrix has been adopted by different seismic provisions worldwide and will be incorporated into the new edition of the Canadian CSA-S6-14 Bridge Design Code. However, the combined effect and optimal use of isolation and supplemental damping devices in bridges have not yet been fully exploited to achieve enhanced performance under different levels of seismic hazard. A novel Dual-Level Seismic Protection (DLSP) concept is proposed and developed in this thesis which makes it possible to achieve optimum seismic performance with combined isolation and supplemental damping devices in bridges. This concept is shown to represent an attractive design approach for both the upgrade of existing seismically deficient bridges and the design of new isolated bridges.
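For context, the simplified (single-mode, equivalent-linear) methods referred to above estimate the isolator displacement from a damped design spectrum. A schematic form, with notation assumed here rather than quoted from either code, is

\[
d_{\mathrm{isol}} \;\approx\; \frac{g\,T_{\mathrm{eff}}^{2}}{4\pi^{2}}\cdot\frac{S_a\!\left(T_{\mathrm{eff}},\,5\%\right)}{B\!\left(\xi_{\mathrm{eff}}\right)},
\]

where \(T_{\mathrm{eff}}\) and \(\xi_{\mathrm{eff}}\) are the effective period and damping of the isolated bridge and \(B\) is the damping coefficient that reduces the 5%-damped spectral ordinate; it is this coefficient that the thesis re-derives for eastern North American ground motions. The exact coefficients and limits differ between CSA-S6 and AASHTO.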
Seismic retrofit guidelines for Utah highway bridges.
DOT National Transportation Integrated Search
2009-05-01
Much of Utah's population dwells in a seismically active region, and many of the bridges connecting transportation lifelines predate the rigorous seismic design standards that have been developed in the past 10-20 years. Seismic retrofitting method...
Fast principal component analysis for stacking seismic data
NASA Astrophysics Data System (ADS)
Wu, Juan; Bai, Min
2018-04-01
Stacking seismic data plays an indispensable role in many steps of the seismic data processing and imaging workflow. Optimal stacking of seismic data can help mitigate seismic noise and enhance the principal components to a great extent. Traditional average-based seismic stacking methods cannot obtain optimal performance when the ambient noise is extremely strong. We propose a principal component analysis (PCA) algorithm for stacking seismic data that is insensitive to the noise level. Considering the computational bottleneck of the classic PCA algorithm in processing massive seismic data, we propose an efficient PCA algorithm to make the proposed method readily applicable for industrial applications. Two numerically designed examples and one real seismic dataset are used to demonstrate the performance of the presented method.
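As a rough illustration of the idea (not the authors' fast algorithm), a PCA stack of an NMO-corrected gather can be sketched with a plain SVD; the array shapes and the function name below are assumptions made for the example.

```python
import numpy as np

def pca_stack(gather):
    """Rank-1 (PCA/SVD) stack of an NMO-corrected gather.

    gather : 2-D array, shape (n_traces, n_samples).
    Returns a single stacked trace of length n_samples.
    Plain SVD is used here; this is a sketch of the idea, not the
    authors' computationally efficient PCA algorithm.
    """
    d = gather - gather.mean(axis=1, keepdims=True)   # remove per-trace DC offset
    u, s, vt = np.linalg.svd(d, full_matrices=False)  # d = u @ diag(s) @ vt
    # First principal component = dominant coherent waveform across traces;
    # averaging its rank-1 reconstruction over traces gives the PCA stack.
    return (u[:, 0] * s[0]).mean() * vt[0]

# usage (illustrative): stacked = pca_stack(cmp_gather)
```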
Seismic design verification of LMFBR structures
DOE Office of Scientific and Technical Information (OSTI.GOV)
Not Available
1977-07-01
The report provides an assessment of the seismic design verification procedures currently used for nuclear power plant structures, a comparison of dynamic test methods available, and conclusions and recommendations for future LMFBR structures.
Performance-Based Seismic Design of Steel Frames Utilizing Colliding Bodies Algorithm
Veladi, H.
2014-01-01
A pushover analysis method based on the semirigid connection concept is developed and the colliding bodies optimization algorithm is employed to find the optimum seismic design of frame structures. Two numerical examples from the literature are studied. The results of the new algorithm are compared with conventional design methods to show the strengths and weaknesses of the algorithm. PMID:25202717
Towards Improved Considerations of Risk in Seismic Design (Plinius Medal Lecture)
NASA Astrophysics Data System (ADS)
Sullivan, T. J.
2012-04-01
The aftermath of recent earthquakes is a reminder that seismic risk is a very relevant issue for our communities. Implicit within the seismic design standards currently in place around the world is that minimum acceptable levels of seismic risk will be ensured through design in accordance with the codes. All the same, none of the design standards specify what the minimum acceptable level of seismic risk actually is. Instead, a series of deterministic limit states are set which engineers then demonstrate are satisfied for their structure, typically through the use of elastic dynamic analyses adjusted to account for non-linear response using a set of empirical correction factors. Since the early nineties the seismic engineering community has recognised numerous fundamental shortcomings with such seismic design procedures in modern codes. Deficiencies include the use of elastic dynamic analysis for the prediction of inelastic force distributions, the assignment of uniform behaviour factors for structural typologies irrespective of the structural proportions and expected deformation demands, and the assumption that hysteretic properties of a structure do not affect the seismic displacement demands, amongst other things. In light of this a number of possibilities have emerged for improved control of risk through seismic design, with several innovative displacement-based seismic design methods now well developed. For a specific seismic design intensity, such methods provide a more rational means of controlling the response of a structure to satisfy performance limit states. While the development of such methodologies marks a significant step forward for the control of seismic risk, these methods do not, on their own, identify the seismic risk of a newly designed structure. In the U.S. a rather elaborate performance-based earthquake engineering (PBEE) framework is under development, with the aim of providing seismic loss estimates for new buildings. The PBEE framework consists of the following four main analysis stages: (i) probabilistic seismic hazard analysis to give the mean occurrence rate of earthquake events having an intensity greater than a threshold value, (ii) structural analysis to estimate the global structural response, given a certain value of seismic intensity, (iii) damage analysis, in which fragility functions are used to express the probability that a building component exceeds a damage state, as a function of the global structural response, and (iv) loss analysis, in which the overall performance is assessed based on the damage state of all components. This final step gives estimates of the mean annual frequency with which various repair cost levels (or other decision variables) are exceeded. The realisation of this framework does suggest that risk-based seismic design is now possible. However, comparing current code approaches with the proposed PBEE framework, it becomes apparent that mainstream consulting engineers would have to go through a massive learning curve in order to apply the new procedures in practice. With this in mind, it is proposed that simplified loss-based seismic design procedures are a logical means of helping the engineering profession transition from what are largely deterministic seismic design procedures in current codes, to more rational risk-based seismic design methodologies. Examples are provided to illustrate the likely benefits of adopting loss-based seismic design approaches in practice.
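The four stages listed above are often summarized by the PEER framing equation; the notation below follows common usage (IM = intensity measure, EDP = engineering demand parameter, DM = damage measure, DV = decision variable) rather than the lecture itself.

\[
\lambda(DV) \;=\; \iiint G\!\left(DV \mid DM\right)\,\bigl|\mathrm{d}G\!\left(DM \mid EDP\right)\bigr|\,\bigl|\mathrm{d}G\!\left(EDP \mid IM\right)\bigr|\,\bigl|\mathrm{d}\lambda(IM)\bigr|,
\]

where \(\lambda(\cdot)\) denotes a mean annual rate of exceedance and \(G(\cdot\mid\cdot)\) a conditional complementary distribution, so each analysis stage supplies one factor of the integrand.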
A performance goal-based seismic design philosophy for waste repository facilities
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hossain, Q.A.
1994-12-31
A performance goal-based seismic design philosophy, compatible with DOE's present natural phenomena hazards mitigation and "graded approach" philosophy, has been proposed for high level nuclear waste repository facilities. The rationale, evolution, and the desirable features of this method have been described. Why and how the method should and can be applied to the design of a repository facility are also discussed.
Poor boy 3D seismic effort yields South Central Kentucky discovery
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hanratty, M.
1996-11-04
Clinton County, Ky., is on the eastern flank of the Cincinnati arch and the western edge of the Appalachian basin and the Pine Mountain overthrust. Clinton County has long been known for high volume fractured carbonate wells. The discovery of these fractured reservoirs, unfortunately, has historically been serendipitous. The author currently uses 2D seismic and satellite imagery to design 3D high resolution seismic shoots. This method has proven to be the most efficient and is the core of his program. The paper describes exploration methods, seismic acquisition, the well data base, and seismic interpretation.
LANL seismic screening method for existing buildings
DOE Office of Scientific and Technical Information (OSTI.GOV)
Dickson, S.L.; Feller, K.C.; Fritz de la Orta, G.O.
1997-01-01
The purpose of the Los Alamos National Laboratory (LANL) Seismic Screening Method is to provide a comprehensive, rational, and inexpensive method for evaluating the relative seismic integrity of a large building inventory using substantial life-safety as the minimum goal. The substantial life-safety goal is deemed to be satisfied if the extent of structural damage or nonstructural component damage does not pose a significant risk to human life. The screening is limited to Performance Category (PC) -0, -1, and -2 buildings and structures. Because of their higher performance objectives, PC-3 and PC-4 buildings automatically fail the LANL Seismic Screening Method and will be subject to a more detailed seismic analysis. The Laboratory has also designated that PC-0, PC-1, and PC-2 unreinforced masonry bearing wall and masonry infill shear wall buildings fail the LANL Seismic Screening Method because of their historically poor seismic performance or complex behavior. These building types are also recommended for a more detailed seismic analysis. The results of the LANL Seismic Screening Method are expressed in terms of separate scores for potential configuration or physical hazards (Phase One) and calculated capacity/demand ratios (Phase Two). This two-phase method allows the user to quickly identify buildings that have adequate seismic characteristics and structural capacity and screen them out from further evaluation. The resulting scores also provide a ranking of those buildings found to be inadequate. Thus, buildings not passing the screening can be rationally prioritized for further evaluation. For the purpose of complying with Executive Order 12941, the buildings failing the LANL Seismic Screening Method are deemed to have seismic deficiencies, and cost estimates for mitigation must be prepared. Mitigation techniques and cost-estimate guidelines are not included in the LANL Seismic Screening Method.
Improving fault image by determination of optimum seismic survey parameters using ray-based modeling
NASA Astrophysics Data System (ADS)
Saffarzadeh, Sadegh; Javaherian, Abdolrahim; Hasani, Hossein; Talebi, Mohammad Ali
2018-06-01
In complex structures such as faults, salt domes and reefs, specifying the survey parameters is more challenging and critical owing to the complicated wave-field behavior involved in such structures. In the petroleum industry, detecting faults has become crucial for assessing reservoir potential because faults can act as traps for hydrocarbons. In this regard, seismic survey modeling is employed to construct a model close to the real structure and obtain very realistic synthetic seismic data. Seismic modeling software, the velocity model and parameters pre-determined by conventional methods enable a seismic survey designer to run a shot-by-shot virtual survey operation. A reliable velocity model of structures can be constructed by integrating the 2D seismic data, geological reports and well information. The effects of various survey designs can be investigated by the analysis of illumination maps and flower plots. Also, seismic processing of the synthetic data output can describe the target image using different survey parameters. Therefore, seismic modeling is one of the most economical ways to establish and test the optimum acquisition parameters to obtain the best image when dealing with complex geological structures. The primary objective of this study is to design a proper 3D seismic survey orientation to image fault zone structures through ray-tracing seismic modeling. The results prove that a seismic survey designer can enhance the image of fault planes in a seismic section by utilizing the proposed modeling and processing approach.
NASA Astrophysics Data System (ADS)
Setiawan, Jody; Nakazawa, Shoji
2017-10-01
This paper compares the seismic response behavior, seismic performance and seismic loss function of a conventional special moment frame steel structure (SMF) and a special moment frame steel structure with base isolation (BI-SMF). The validity of the proposed simplified method for estimating the maximum deformation of the base isolation system by the equivalent linearization method, and the validity of the design shear force of the superstructure, are investigated from the results of nonlinear dynamic response analysis. In recent years, the construction of steel office buildings with seismic isolation systems has been proceeding even in Indonesia, where the risk of earthquakes is high. Although a design code for seismic isolation structures has been proposed, there is no actual construction example of a special moment frame steel structure with base isolation. Therefore, in this research, the SMF and BI-SMF buildings are designed according to the Indonesian Building Code and are assumed to be built in Padang City, Indonesia. The base isolation system uses high damping rubber bearings. Dynamic eigenvalue analysis and nonlinear dynamic response analysis are carried out to show the dynamic characteristics and seismic performance. In addition, the seismic loss function is obtained from damage state probabilities and repair costs. For the response analysis, simulated ground accelerations that have the phases of recorded seismic waves (El Centro NS, El Centro EW, Kobe NS and Kobe EW) and are adapted to the response spectrum prescribed by the Indonesian design code are used.
3-D Characterization of Seismic Properties at the Smart Weapons Test Range, YPG
2001-10-01
confidence limits around each interpolated value. Ground truth was accomplished through cross-hole seismic measurements and borehole logs. Surface wave... seismic method, as well as estimating the optimal orientation and spacing of the seismic array. A variety of sources and receivers was evaluated...location within the array is partially related to at least two seismic lines. Either through good fortune or foresight by the designers of the SWTR site
Seismic Hazard Analysis — Quo vadis?
NASA Astrophysics Data System (ADS)
Klügel, Jens-Uwe
2008-05-01
The paper is dedicated to the review of methods of seismic hazard analysis currently in use, analyzing the strengths and weaknesses of different approaches. The review is performed from the perspective of a user of the results of seismic hazard analysis for different applications, such as the design of critical and general (non-critical) civil infrastructures and technical and financial risk analysis. A set of criteria is developed for and applied to an objective assessment of the capabilities of different analysis methods. It is demonstrated that traditional probabilistic seismic hazard analysis (PSHA) methods have significant deficiencies, thus limiting their practical applications. These deficiencies have their roots in the use of inadequate probabilistic models and insufficient understanding of modern concepts of risk analysis, as revealed in some recent large scale studies. These deficiencies result in an inability to correctly treat dependencies between physical parameters and, finally, in an incorrect treatment of uncertainties. As a consequence, results of PSHA studies have been found to be unrealistic in comparison with empirical information from the real world. The attempt to compensate for these problems by a systematic use of expert elicitation has, so far, not resulted in any improvement of the situation. It is also shown that scenario-earthquakes developed by disaggregation from the results of a traditional PSHA may not be conservative with respect to energy conservation and should not be used for the design of critical infrastructures without validation. Because the assessment of technical as well as financial risks associated with potential earthquake damage requires a risk analysis, current practice is based on a probabilistic approach, with its unsolved deficiencies. Traditional deterministic or scenario-based seismic hazard analysis methods provide a reliable and in general robust design basis for applications such as the design of critical infrastructures, especially with systematic sensitivity analyses based on validated phenomenological models. Deterministic seismic hazard analysis incorporates uncertainties in the safety factors. These factors are derived from experience as well as from expert judgment. Deterministic methods associated with high safety factors may lead to overly conservative results, especially if applied to generally short-lived civil structures. Scenarios used in deterministic seismic hazard analysis have a clear physical basis. They are related to seismic sources discovered by geological, geomorphologic, geodetic and seismological investigations or derived from historical references. Scenario-based methods can be expanded for risk analysis applications with an extended data analysis providing the frequency of seismic events. Such an extension provides a better informed risk model that is suitable for risk-informed decision making.
Research on Influencing Factors and Generalized Power of Synthetic Artificial Seismic Wave
NASA Astrophysics Data System (ADS)
Jiang, Yanpei
2018-05-01
In this paper, following the trigonometric series method, the author adopts different envelope functions and the acceleration design spectrum in the Seismic Code For Urban Bridge Design to simulate seismic acceleration time histories that meet engineering accuracy requirements by modifying and iterating the initial wave. Spectral analysis is carried out to find the distribution law of the changing frequencies of the energy of the seismic time history and to determine the main factors that affect the acceleration amplitude spectrum and the energy spectrum density. The generalized power formula of the seismic time history is derived from the discrete energy integral formula, and the author studies the changing characteristics of the generalized power of the seismic time history under different envelope functions. Examples are analyzed to illustrate that generalized power can measure the seismic performance of bridges.
DOE Office of Scientific and Technical Information (OSTI.GOV)
None
Critical infrastructures around the world are at constant risk from earthquakes. Most of these critical structures are designed using archaic seismic simulation methods built on early digital computers from the 1970s. Idaho National Laboratory's Seismic Research Group is working to modernize these simulation methods through computational research and large-scale laboratory experiments.
A proposal for seismic evaluation index of mid-rise existing RC buildings in Afghanistan
NASA Astrophysics Data System (ADS)
Naqi, Ahmad; Saito, Taiki
2017-10-01
Mid-rise RC buildings have been gradually rising in Kabul and across Afghanistan since 2001 due to the rapid increase in population. To protect the safety of residents, the Afghan Structure Code was issued in 2012, but buildings constructed before 2012 fail to conform to the code requirements. In Japan, new rules and laws for the seismic design of buildings were issued in 1981, and severe earthquake damage was observed in buildings designed before 1981. Hence, the Standard for Seismic Evaluation of RC Buildings published in 1977 has been widely used in Japan to evaluate the seismic capacity of existing buildings designed before 1981. A similar problem currently exists in Afghanistan; therefore, this research examined the seismic capacity of six RC buildings built before 2012 in Kabul by applying the seismic screening procedure of the Japanese standard. Among the three screening procedures of differing rigor, the least detailed one, the first level of screening, is applied. The study finds an average seismic index (IS-average = 0.21) for the target buildings. The results were then compared with those of more accurate seismic evaluation procedures, the Capacity Spectrum Method (CSM) and Time History Analysis (THA). The CSM and THA results show poor seismic performance of the target buildings, which are unable to satisfy the safety design limit (1/100) on the maximum story drift. The target buildings are then improved by installing RC shear walls. The seismic indices of these retrofitted buildings were recalculated and the maximum story drifts were analyzed by CSM and THA. Comparing the seismic indices with the CSM and THA results, it is found that buildings with a seismic index larger than 0.4 (IS-average = 0.4) are able to satisfy the safety design limit. Finally, to screen and minimize earthquake damage to existing buildings, a judgement seismic index (IS-Judgment = 0.5) for the first level of screening is proposed.
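For readers unfamiliar with the Japanese standard, the seismic index referred to above is commonly written as

\[
I_S = E_0 \times S_D \times T,
\]

where \(E_0\) is the basic structural performance index (combining strength and ductility, estimated in the first-level screening from column and wall cross-sectional areas), \(S_D\) is the irregularity index and \(T\) is the time (deterioration) index. The specific sub-indices assumed for the Kabul buildings are not repeated here.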
Picking vs Waveform based detection and location methods for induced seismicity monitoring
NASA Astrophysics Data System (ADS)
Grigoli, Francesco; Boese, Maren; Scarabello, Luca; Diehl, Tobias; Weber, Bernd; Wiemer, Stefan; Clinton, John F.
2017-04-01
Microseismic monitoring is a common operation in various industrial activities related to geo-resources, such as oil and gas and mining operations or geothermal energy exploitation. In microseismic monitoring we generally deal with large datasets from dense monitoring networks that require robust automated analysis procedures. The seismic sequences being monitored are often characterized by very many events with short inter-event times that can even produce overlapping seismic signatures. In these situations, traditional approaches that identify seismic events using dense seismic networks based on detections, phase identification and event association can fail, leading to missed detections and/or reduced location resolution. In recent years, to improve the quality of automated catalogues, various waveform-based methods for the detection and location of microseismicity have been proposed. These methods exploit the coherence of the waveforms recorded at different stations and do not require any automated picking procedure. Although this family of methods has been applied to different induced seismicity datasets, an extensive comparison with sophisticated pick-based detection and location methods is still lacking. We aim here to perform a systematic comparison in terms of performance using the waveform-based method LOKI and the pick-based detection and location methods (SCAUTOLOC and SCANLOC) implemented within the SeisComP3 software package. SCANLOC is a new detection and location method specifically designed for seismic monitoring at local scale. Although it has recently been applied, an extensive test with induced seismicity datasets has not yet been performed. This method is based on a cluster search algorithm to associate detections to one or many potential earthquake sources. On the other hand, SCAUTOLOC is a more "conventional" method and is the basic tool for seismic event detection and location in SeisComP3. This approach was specifically designed for regional and teleseismic applications, thus its performance with microseismic data might be limited. We analyze the performance of the three methodologies for a synthetic dataset with realistic noise conditions as well as for the first hour of continuous waveform data, including the Ml 3.5 St. Gallen earthquake, recorded by a microseismic network deployed in the area. We finally compare the results obtained with all three methods against a manually revised catalogue.
Wang, Z.; Shi, B.; Kiefer, J.D.
2005-01-01
PSHA is the method most commonly used to assess seismic hazards for input into various aspects of public and financial policy. For example, PSHA was used by the U.S. Geological Survey to develop the National Seismic Hazard Maps (Frankel et al., 1996, 2002). These maps are the basis for many national, state, and local seismic safety regulations and design standards, such as the NEHRP Recommended Provisions for Seismic Regulations for New Buildings and Other Structures, the International Building Code, and the International Residential Code. Adoption and implementation of these regulations and design standards would have significant impacts on many communities in the New Madrid area, including Memphis, Tennessee and Paducah, Kentucky. Although "mitigating risks to society from earthquakes involves economic and policy issues" (Stein, 2004), seismic hazard assessment is the basis. Seismologists should provide the best information on seismic hazards and communicate it to users and policy makers. There is a lack of effort in communicating the uncertainties in seismic hazard assessment in the central U.S., however. Use of 10%, 5%, and 2% probability of exceedance (PE) in 50 years causes confusion in communicating seismic hazard assessment. It would be easier to discuss and understand the design ground motions if the true meaning of the ground motion derived from PSHA were presented, i.e., the ground motion with the estimated uncertainty or the associated confidence level.
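For reference, under the usual Poisson occurrence assumption a probability of exceedance \(P\) in an exposure time \(t\) corresponds to a mean return period

\[
T_R = \frac{-t}{\ln(1-P)},
\]

so 10%, 5% and 2% PE in 50 years correspond to return periods of roughly 475, 975 and 2475 years, respectively, which is one way to make the quoted hazard levels more tangible to users and policy makers.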
State of art of seismic design and seismic hazard analysis for oil and gas pipeline system
NASA Astrophysics Data System (ADS)
Liu, Aiwen; Chen, Kun; Wu, Jian
2010-06-01
The purpose of this paper is to adopt the uniform confidence method in both water pipeline design and oil-gas pipeline design. Based on the importance of the pipeline and the consequences of its failure, oil and gas pipelines can be classified into three pipe classes, with exceedance probabilities over 50 years of 2%, 5% and 10%, respectively. Performance-based design requires more information about ground motion, which should be obtained by evaluating seismic safety for the pipeline engineering site. Different from a city's water pipeline network, a long-distance oil and gas pipeline system is a spatially, linearly distributed system. For uniform confidence in seismic safety, a long-distance oil and gas pipeline formed of pump stations and different-class pipe segments should be considered as a whole system when analyzing seismic risk. Considering the uncertainty of earthquake magnitude, design-basis fault displacements corresponding to the different pipeline classes are proposed to improve deterministic seismic hazard analysis (DSHA). A new empirical relationship between the maximum fault displacement and the surface-wave magnitude is obtained with supplemented earthquake data from East Asia. The estimation of fault displacement for a refined-oil pipeline in the Wenchuan MS 8.0 earthquake is introduced as an example in this paper.
Final Report: Seismic Hazard Assessment at the PGDP
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wang, Zhinmeng
2007-06-01
Selecting a level of seismic hazard at the Paducah Gaseous Diffusion Plant for policy considerations and engineering design is not an easy task because it not only depends on seismic hazard, but also on seismic risk and other related environmental, social, and economic issues. Seismic hazard is the main focus. There is no question that there are seismic hazards at the Paducah Gaseous Diffusion Plant because of its proximity to several known seismic zones, particularly the New Madrid Seismic Zone. The issues in estimating seismic hazard are (1) the methods being used and (2) difficulty in characterizing the uncertainties of seismic sources, earthquake occurrence frequencies, and ground-motion attenuation relationships. This report summarizes how input data were derived, which methodologies were used, and what the hazard estimates at the Paducah Gaseous Diffusion Plant are.
NASA Astrophysics Data System (ADS)
Bai, Wen; Dai, Junwu; Zhou, Huimeng; Yang, Yongqiang; Ning, Xiaoqing
2017-10-01
Porcelain electrical equipment (PEE), such as current transformers, is critical to power supply systems, but its seismic performance during past earthquakes has not been satisfactory. This paper studies the seismic performance of two typical types of PEE and proposes a damping method for PEE based on multiple tuned mass dampers (MTMD). An MTMD damping device involving three mass units, named a triple tuned mass damper (TTMD), is designed and manufactured. Through shake table tests and finite element analysis, the dynamic characteristics of the PEE are studied and the effectiveness of the MTMD damping method is verified. The adverse influence of redundant MTMD mass on damping efficiency is studied and relevant equations are derived. MTMD robustness is verified by adjusting the TTMD control frequencies. The damping effectiveness of TTMD when the peak ground acceleration (PGA) far exceeds the design value is studied. Both shake table tests and finite element analysis indicate that MTMD is effective and robust in attenuating PEE seismic responses. TTMD remains effective when the PGA far exceeds the design value and when control deviations are considered.
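For orientation, the classical single tuned-mass-damper (Den Hartog) tuning rules, which an MTMD design generalizes rather than reproduces, read

\[
\frac{f_d}{f_s} = \frac{1}{1+\mu}, \qquad \zeta_{d,\mathrm{opt}} = \sqrt{\frac{3\mu}{8\,(1+\mu)^{3}}},
\]

where \(\mu\) is the ratio of damper mass to the modal mass of the protected equipment, \(f_d/f_s\) is the ratio of damper to structure frequency and \(\zeta_{d,\mathrm{opt}}\) is the optimal damper damping ratio. An MTMD such as the TTMD distributes several smaller units over a band of frequencies around \(f_s\), which is what gives it robustness to detuning.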
An alternative approach for computing seismic response with accidental eccentricity
NASA Astrophysics Data System (ADS)
Fan, Xuanhua; Yin, Jiacong; Sun, Shuli; Chen, Pu
2014-09-01
Accidental eccentricity is a non-standard assumption in the seismic design of tall buildings. Taking it into consideration requires reanalysis of seismic resistance, which requires either time-consuming computation of the natural vibration of eccentric structures or finding a static displacement solution by applying an approximated equivalent torsional moment for each eccentric case. This study proposes an alternative modal response spectrum analysis (MRSA) approach to calculate seismic responses with accidental eccentricity. The proposed approach, called the Rayleigh Ritz Projection-MRSA (RRP-MRSA), is developed based on MRSA and two strategies: (a) an RRP method to obtain a fast calculation of approximate modes of eccentric structures; and (b) an approach to assemble the mass matrices of eccentric structures. The efficiency of RRP-MRSA is tested via engineering examples and compared with the standard MRSA (ST-MRSA) and one approximate method, i.e., the equivalent torsional moment hybrid MRSA (ETM-MRSA). Numerical results show that RRP-MRSA not only achieves almost the same precision as ST-MRSA and much better precision than ETM-MRSA, but is also more economical. Thus, RRP-MRSA can take the place of current accidental eccentricity computations in seismic design.
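A minimal sketch of the Rayleigh-Ritz projection idea assumed to underlie RRP-MRSA is given below: the modes of the nominal (concentric) structure serve as a reduced basis, so each eccentric case only requires a small projected eigenproblem. The matrix names and helper function are illustrative, not the authors' implementation.

```python
import numpy as np
from scipy.linalg import eigh

def rrp_modes(K_ecc, M_ecc, Phi):
    """Approximate modes of an eccentric structure by Rayleigh-Ritz projection.

    K_ecc, M_ecc : (n, n) stiffness and mass matrices of the eccentric case
    Phi          : (n, m) nominal (concentric) mode shapes used as the Ritz basis
    Returns approximate circular frequencies (rad/s) and mode shapes.
    """
    K_r = Phi.T @ K_ecc @ Phi      # reduced (m x m) stiffness
    M_r = Phi.T @ M_ecc @ Phi      # reduced (m x m) mass
    lam, Q = eigh(K_r, M_r)        # small generalized eigenproblem
    return np.sqrt(lam), Phi @ Q   # back-project Ritz vectors to full size

# In an accidental-eccentricity sweep, mainly the mass matrix changes from
# case to case, so the expensive full eigensolution is done once for the
# nominal model and the projection above is reused for every eccentric case.
```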
Application of USNRC NUREG/CR-6661 and draft DG-1108 to evolutionary and advanced reactor designs
DOE Office of Scientific and Technical Information (OSTI.GOV)
Chang 'Apollo', Chen
2006-07-01
For the seismic design of evolutionary and advanced nuclear reactor power plants, there are definite financial advantages in the application of USNRC NUREG/CR-6661 and draft Regulatory Guide DG-1108. NUREG/CR-6661, 'Benchmark Program for the Evaluation of Methods to Analyze Non-Classically Damped Coupled Systems', was prepared by Brookhaven National Laboratory (BNL) for the USNRC, and draft Regulatory Guide DG-1108 is the proposed revision to the current Regulatory Guide (RG) 1.92, Revision 1, 'Combining Modal Responses and Spatial Components in Seismic Response Analysis'. The draft Regulatory Guide DG-1108 is available at http://members.cox.net/apolloconsulting, which also provides a link to the USNRC ADAMS site to search for NUREG/CR-6661 in text file or image file. The draft Regulatory Guide DG-1108 removes unnecessary conservatism in the modal combinations for closely spaced modes in seismic response spectrum analysis. Its application will be very helpful in coupled seismic analysis of structures and heavy equipment to reduce seismic responses and in piping system seismic design. In the NUREG/CR-6661 benchmark program, which investigated coupled seismic analysis of structures and equipment or piping systems with different damping values, three of the four participants applied the complex mode solution method to handle different damping values for structures, equipment, and piping systems. The fourth participant applied the classical normal mode method with equivalent weighted damping values to handle differences in structural, equipment, and piping system damping values. Coupled analysis will reduce the equipment responses when the equipment or piping system and the structure are in or close to resonance. However, this reduction in responses occurs only if the realistic DG-1108 modal response combination method is applied, because closely spaced modes will be produced when the structure and equipment or piping systems are in or close to resonance. Otherwise, the conservatism in the current Regulatory Guide 1.92, Revision 1, will overshadow the advantage of coupled analysis. All four participants applied the realistic modal combination method of DG-1108. Consequently, more realistic and reduced responses were obtained. (authors)
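The modal combination rules at issue belong to the double-sum (CQC-type) family, which in generic form reads

\[
R = \sqrt{\sum_i \sum_j \rho_{ij}\,R_i R_j},
\]

where \(R_i\) are peak modal responses and \(\rho_{ij}\) is a correlation coefficient that approaches 1 for closely spaced modes and 0 for well-separated ones (in which case the SRSS rule is recovered). This is a generic statement of the rule family, not a reproduction of the specific correlation coefficient prescribed by DG-1108.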
Estimation of the behavior factor of existing RC-MRF buildings
NASA Astrophysics Data System (ADS)
Vona, Marco; Mastroberti, Monica
2018-01-01
In recent years, several research groups have studied a new generation of analysis methods for seismic response assessment of existing buildings. Nevertheless, many important developments are still needed in order to define more reliable and effective assessment procedures. Moreover, regarding existing buildings, it should be highlighted that due to the low knowledge level, the linear elastic analysis is the only analysis method allowed. The same codes (such as NTC2008, EC8) consider the linear dynamic analysis with behavior factor as the reference method for the evaluation of seismic demand. This type of analysis is based on a linear-elastic structural model subject to a design spectrum, obtained by reducing the elastic spectrum through a behavior factor. The behavior factor (reduction factor or q factor in some codes) is used to reduce the elastic spectrum ordinate or the forces obtained from a linear analysis in order to take into account the non-linear structural capacities. The behavior factors should be defined based on several parameters that influence the seismic nonlinear capacity, such as mechanical materials characteristics, structural system, irregularity and design procedures. In practical applications, there is still an evident lack of detailed rules and accurate behavior factor values adequate for existing buildings. In this work, some investigations of the seismic capacity of the main existing RC-MRF building types have been carried out. In order to make a correct evaluation of the seismic force demand, actual behavior factor values coherent with force based seismic safety assessment procedure have been proposed and compared with the values reported in the Italian seismic code, NTC08.
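Schematically, and ignoring code-specific lower bounds and corner-period adjustments, the force-based procedure described above evaluates the design spectral ordinate as

\[
S_d(T) \;\approx\; \frac{S_e(T)}{q},
\]

where \(S_e(T)\) is the elastic spectral acceleration and \(q\) is the behavior factor; the work summarized here asks which value of \(q\) is actually coherent with the nonlinear capacity of existing RC-MRF buildings, rather than with the code values intended for new designs.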
Phase-Shifted Based Numerical Method for Modeling Frequency-Dependent Effects on Seismic Reflections
NASA Astrophysics Data System (ADS)
Chen, Xuehua; Qi, Yingkai; He, Xilei; He, Zhenhua; Chen, Hui
2016-08-01
Significant velocity dispersion and attenuation have often been observed when seismic waves propagate in fluid-saturated porous rocks. Both the magnitude and the variation of the velocity dispersion and attenuation are frequency-dependent and related closely to the physical properties of the fluid-saturated porous rocks. To explore the effects of frequency-dependent dispersion and attenuation on the seismic responses, in this work, we present a numerical method for seismic data modeling based on the diffusive and viscous wave equation (DVWE), which introduces poroelastic theory and takes into account diffusive and viscous attenuation in diffusive-viscous theory. We derive a phase-shift wave extrapolation algorithm in the frequency-wavenumber domain for implementing the DVWE-based simulation method that can handle simultaneous lateral variations in velocity, diffusive coefficient and viscosity. Then, we design a distributary-channels model in which a hydrocarbon-saturated sand reservoir is embedded in one of the channels. Next, we calculate synthetic seismic data to analytically and comparatively illustrate the seismic frequency-dependent behaviors related to the hydrocarbon-saturated reservoir, employing the DVWE-based and the conventional acoustic wave equation (AWE) based methods, respectively. The results of the synthetic seismic data delineate the intrinsic energy loss, phase delay, lower instantaneous dominant frequency and narrower bandwidth due to the frequency-dependent dispersion and attenuation when seismic waves travel through the hydrocarbon-saturated reservoir. The numerical modeling method is expected to contribute to an improved understanding of the features and mechanism of the seismic frequency-dependent effects resulting from hydrocarbon-saturated porous rocks.
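For the purely acoustic case, the phase-shift extrapolation step referred to above can be written as

\[
\tilde{P}(k_x, z+\Delta z, \omega) = \tilde{P}(k_x, z, \omega)\, e^{\,i k_z \Delta z}, \qquad k_z = \sqrt{\frac{\omega^2}{v^2} - k_x^2},
\]

where \(\tilde{P}\) is the wavefield in the frequency-wavenumber domain and \(v\) is the velocity of the current depth interval. The DVWE version is assumed to modify the vertical wavenumber \(k_z\) with the diffusive and viscous terms, which are not reproduced here.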
NASA Astrophysics Data System (ADS)
Huang, Duruo; Du, Wenqi; Zhu, Hong
2017-10-01
In performance-based seismic design, ground-motion time histories are needed for analyzing the dynamic responses of nonlinear structural systems. However, the number of ground-motion records at the design level is often limited. In order to analyze the seismic performance of structures, ground-motion time histories need to be either selected from a recorded strong-motion database or numerically simulated using stochastic approaches. In this paper, a detailed procedure to select proper acceleration time histories from the Next Generation Attenuation (NGA) database for several cities in Taiwan is presented. Target response spectra are first determined based on a local ground-motion prediction equation under representative deterministic seismic hazard analyses. Then several suites of ground motions are selected for these cities using the Design Ground Motion Library (DGML), a recently proposed interactive ground-motion selection tool. The selected time histories are representative of the regional seismic hazard and should be beneficial to earthquake studies when comprehensive seismic hazard assessments and site investigations are unavailable. Note that this method is also applicable to site-specific motion selection with target spectra near the ground surface that account for the site effect.
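As a rough sketch of what spectrum-compatible selection involves (a generic amplitude-scaling scheme, not the DGML algorithm itself), candidate records can be ranked by their misfit to the target spectrum in log space; the function name and inputs below are assumptions made for the example.

```python
import numpy as np

def select_records(target_sa, candidate_sa, n_select=7, scale_limits=(0.5, 2.0)):
    """Rank candidate records by spectral-shape misfit to a target spectrum.

    target_sa    : target spectral accelerations at a common set of periods, shape (n_T,)
    candidate_sa : candidate record spectra at the same periods, shape (n_rec, n_T)
    Returns indices of the best-matching records and their scale factors.
    """
    log_t = np.log(target_sa)
    mse, scales = [], []
    for sa in candidate_sa:
        s = np.exp(np.mean(log_t - np.log(sa)))   # least-squares scale factor in log space
        s = float(np.clip(s, *scale_limits))      # keep amplitude scaling reasonable
        mse.append(np.mean((log_t - np.log(s * sa)) ** 2))
        scales.append(s)
    order = np.argsort(mse)[:n_select]
    return order, np.array(scales)[order]
```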
Design and development of digital seismic amplifier recorder
DOE Office of Scientific and Technical Information (OSTI.GOV)
Samsidar, Siti Alaa; Afuar, Waldy; Handayani, Gunawan, E-mail: gunawanhandayani@gmail.com
2015-04-16
Digital seismic recording is a technique for recording seismic data in digital systems. This method is more convenient because it is more accurate than other seismic recording methods. To improve the quality of the results of seismic measurements, the signal needs to be amplified to obtain better subsurface images. The purpose of this study is to improve the accuracy of measurement by amplifying the input signal. We use seismic sensors/geophones with a frequency of 4.5 Hz. The signal is amplified by means of 12 non-inverting amplifier units. The non-inverting amplifier uses an IC 741 op-amp with resistor values of 1 kΩ and 1 MΩ. The resulting amplification was 1,000 times. The results of the signal amplification were converted into digital form by using an Analog-to-Digital Converter (ADC). Quantitative analysis in this study was performed using LabVIEW 8.6, which was also used to control the ADC. The results of the qualitative analysis showed that the seismic signal conditioning can produce a large output, so that the data obtained are better than conventional data. This application can be used for geophysical methods that have low input voltage, such as microtremor applications.
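The quoted amplification is consistent with the standard non-inverting op-amp gain relation:

\[
G = 1 + \frac{R_f}{R_{\mathrm{in}}} = 1 + \frac{1\,\mathrm{M\Omega}}{1\,\mathrm{k\Omega}} = 1001 \approx 1000.
\]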
Seismic passive earth resistance using modified pseudo-dynamic method
NASA Astrophysics Data System (ADS)
Pain, Anindya; Choudhury, Deepankar; Bhattacharyya, S. K.
2017-04-01
In earthquake prone areas, understanding of the seismic passive earth resistance is very important for the design of different geotechnical earth retaining structures. In this study, the limit equilibrium method is used for estimation of critical seismic passive earth resistance for an inclined wall supporting horizontal cohesionless backfill. A composite failure surface is considered in the present analysis. Seismic forces are computed assuming the backfill soil as a viscoelastic material overlying a rigid stratum and the rigid stratum is subjected to a harmonic shaking. The present method satisfies the boundary conditions. The amplification of acceleration depends on the properties of the backfill soil and on the characteristics of the input motion. The acceleration distribution along the depth of the backfill is found to be nonlinear in nature. The present study shows that the horizontal and vertical acceleration distribution in the backfill soil is not always in-phase for the critical value of the seismic passive earth pressure coefficient. The effect of different parameters on the seismic passive earth pressure is studied in detail. A comparison of the present method with other theories is also presented, which shows the merits of the present study.
Forecasting of Energy Expenditure of Induced Seismicity with Use of Artificial Neural Network
NASA Astrophysics Data System (ADS)
Cichy, Tomasz; Banka, Piotr
2017-12-01
Coal mining in many Polish mines in the Upper Silesian Coal Basin is accompanied by high levels of induced seismicity. In mining plants, the methods of shock monitoring are being improved, allowing for more accurate localization of the occurring phenomena and determination of their seismic energy. Equally important is the development of ways of forecasting seismic hazards that may occur while implementing mine design projects. These methods, depending on the length of time for which the forecasts are made, can be divided into long-term, medium-term, short-term and so-called alarm forecasts. Long-term forecasts are particularly useful for the design of seam exploitation. The paper presents a method of predicting changes in the energy expenditure of tremors using a properly trained artificial neural network. This method allows long-term forecasts to be made at the design stage of the mine's exploitation, thus enabling the mining work plans to be reviewed to minimize the potential for tremors. The information given at the input of the neural network is indicative of the specific energy changes of the elastic deformation occurring in selected, thick, resistant rock layers (tremor-prone layers). Energy changes taking place in one or more tremor-prone layers are considered. These indicators describe only the specific energy changes of the elastic deformation accumulating in the rock as a consequence of the mining operation, but do not determine the amount of energy released during the destruction of a given volume of rock. In this process, the potential energy of elastic strain transforms into other, non-measurable energy types, including the seismic energy of recorded tremors. In this way, potential energy changes affect the observed induced seismicity. The parameters used are characterized by increases (declines) of specific energy, separated into those occurring before the hypothetical destruction of the rock and those occurring after it. Additional input information is an index characterizing the rate of tectonic faulting. This parameter was not included in previous research by the authors. At the output of the artificial neural network, values of the energy density of the mining tremors [J/m3] are obtained. An example of the predicted change in induced seismicity for a highly threatened region is presented. Relatively good agreement between predicted and observed energy expenditure of tremors was obtained. The presented method can complement existing (analytical and geophysical) methods of forecasting seismic hazard. It can be used primarily in those areas where the seismic level is determined by the configuration of the edges and remnants in the operating seam, as well as in adjacent seams, and to a lesser extent by the geological structure of the rock mass. The method is local, meaning that the artificial neural network prediction can only be performed for the region from which the data used for its learning originated. The developed method cannot be used in areas where mining is just beginning, and it is not possible to predict the level of induced seismicity in areas where no mining tremors have been recorded so far.
NASA Astrophysics Data System (ADS)
Maskar, A. D.; Madhekar, S. N.; Phatak, D. R.
2017-11-01
Knowledge of the seismic active earth pressure behind a rigid retaining wall is essential in the design of retaining walls in earthquake-prone regions. The commonly used Mononobe-Okabe (MO) method follows a pseudo-static approach. Recently, many pseudo-dynamic methods have been used to evaluate the seismic earth pressure. However, the available pseudo-static and pseudo-dynamic methods do not incorporate the effect of wall movement on the earth pressure distribution. Dubrova (Interaction between soils and structures, Rechnoi Transport, Moscow, 1963) was the first to consider such an effect, and to date it has been used for cohesionless soil without considering the effect of seismicity. In this paper, Dubrova's model, based on the redistribution principle and considering the seismic effect, has been developed. It is further used to compute the distribution of seismic active earth pressure in a more realistic manner, by considering the effect of wall movement on the earth pressure, since it is a displacement-based method. The effects of a wide range of parameters, such as the soil friction angle (ϕ), wall friction angle (δ), and horizontal and vertical seismic acceleration coefficients (kh and kv), on the seismic active earth pressure coefficient (Kae) have been studied. Results are presented for a comparison of pseudo-static and pseudo-dynamic methods, to highlight the realistic non-linearity of the seismic active earth pressure distribution. The current study finds that Kae varies with kh in the same manner as in the MO method and in the study of Choudhury and Nimbalkar (Geotech Geol Eng 24(5):1103-1113, 2006). With an increase in ϕ, there is a reduction in the static as well as the seismic earth pressure. Also, for a constant ϕ, as kh increases from 0 to 0.3 the earth pressure increases, whereas as δ increases the active earth pressure decreases. The seismic active earth pressure coefficient (Kae) obtained from the present study is approximately the same as that obtained by previous researchers. Though the seismic earth pressure obtained by the pseudo-dynamic approach and that obtained by the redistribution principle have different formulations, the final earth pressure distributions are approximately the same.
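In the pseudo-static (Mononobe-Okabe) framework mentioned above, the horizontal and vertical seismic coefficients enter through an equivalent seismic inertia angle, schematically

\[
\theta = \tan^{-1}\!\left(\frac{k_h}{1 - k_v}\right),
\]

with the sign of \(k_v\) chosen for the more critical case; the active coefficient \(K_{ae}\) then follows from Coulomb's wedge solution with gravity effectively rotated by \(\theta\). The redistribution-based method discussed here replaces this single-wedge limit state with a displacement-dependent pressure distribution.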
Steel Shear Walls, Behavior, Modeling and Design
NASA Astrophysics Data System (ADS)
Astaneh-Asl, Abolhassan
2008-07-01
In recent years steel shear walls have become one of the more efficient lateral load resisting systems in tall buildings. The basic steel shear wall system consists of a steel plate welded to boundary steel columns and boundary steel beams. In some cases the boundary columns have been concrete-filled steel tubes. The seismic behavior of steel shear wall systems during actual earthquakes and in laboratory cyclic tests indicates that the systems are quite ductile and can be designed in an economical way to have sufficient stiffness, strength, ductility and energy dissipation capacity to resist the seismic effects of strong earthquakes. This paper, after summarizing past research, presents the results of two tests of an innovative steel shear wall system where the boundary elements are concrete-filled tubes. Then, a review of currently available analytical models of steel shear walls is provided, with a discussion of the capabilities and limitations of each model. We have observed that the tension-only "strip model", which forms the basis of the current AISC seismic design provisions for steel shear walls, is not capable of predicting the behavior of steel shear walls with length-to-thickness ratios less than about 600, which is the range most common in buildings. The main reason for this shortcoming of the AISC seismic design provisions is that they ignore the compression field in the shear wall, which can be significant in typical shear walls. The AISC method is also not capable of incorporating stresses in the shear wall due to overturning moments. A more rational seismic design procedure for shear walls, proposed in 2000 by the author, is summarized in the paper. The design method, based on procedures used for the design of steel plate girders, takes into account both tension and compression stress fields and is applicable to all values of length-to-thickness ratio of steel shear walls. The method is also capable of including the effect of overturning moments and any normal forces that might act on the steel shear wall.
Research on the spatial analysis method of seismic hazard for island
NASA Astrophysics Data System (ADS)
Jia, Jing; Jiang, Jitong; Zheng, Qiuhong; Gao, Huiying
2017-05-01
Seismic hazard analysis (SHA) is a key component of earthquake disaster prevention for island engineering: microscopically, its results provide parameters for seismic design, and macroscopically it is requisite work for the earthquake and comprehensive disaster prevention components of island conservation planning, in the exploitation and construction of both inhabited and uninhabited islands. The existing seismic hazard analysis methods are compared in terms of their application, and their applicability and limitations for islands are analysed. Then a specialized spatial analysis method of seismic hazard for islands (SAMSHI) is given to support further work on earthquake disaster prevention planning, based on spatial analysis tools in GIS and a fuzzy comprehensive evaluation model. The basic spatial database of SAMSHI includes fault data, historical earthquake records, geological data and Bouguer gravity anomaly data, which are the data sources for the 11 indices of the fuzzy comprehensive evaluation model; these indices are calculated by the spatial analysis model constructed on ArcGIS's Model Builder platform.
Viability of using seismic data to predict hydrogeological parameters
DOE Office of Scientific and Technical Information (OSTI.GOV)
Mela, K.
1997-10-01
The design of modern contaminant mitigation and fluid extraction projects makes use of solutions from stochastic hydrogeologic models. These models rely heavily on hydraulic conductivity and the correlation length of hydraulic conductivity. Reliable values of these parameters must be acquired to successfully predict the flow of fluids through the aquifer of interest. An inexpensive method of acquiring these parameters by use of seismic reflection surveying would be beneficial. Relationships between seismic velocity and porosity, together with empirical observations relating porosity to permeability, may lead to a method of extracting the correlation length of hydraulic conductivity from shallow high-resolution seismic data, making the use of inexpensive high-density data sets commonplace for these studies.
NASA Astrophysics Data System (ADS)
Köktan, Utku; Demir, Gökhan; Kerem Ertek, M.
2017-04-01
The earthquake behavior of retaining walls is commonly calculated with pseudo-static approaches based on the Mononobe-Okabe method. The seismic earth pressure acting on the retaining wall according to the Mononobe-Okabe method does not give a definite idea of the distribution of the seismic earth pressure, because it is obtained by balancing the forces acting on the active wedge behind the wall. With this method, wave propagation effects and soil-structure interaction are neglected. The purpose of this study is to examine the earthquake behavior of a retaining wall taking into account the soil-structure interaction. For this purpose, time history seismic analysis of the soil-structure interaction system using the finite element method has been carried out for three different soil conditions. Seismic analysis of the soil-structure model was performed with the earthquake record "1971, San Fernando Pacoima Dam, 196 degree" from the library of the MIDAS GTS NX software. The results obtained from the analyses show that soil-structure interaction is very important for the seismic design of a retaining wall. Keywords: Soil-structure interaction, Finite element model, Retaining wall
NASA Astrophysics Data System (ADS)
Klügel, J.
2006-12-01
Deterministic scenario-based seismic hazard analysis has a long tradition in earthquake engineering for developing the design basis of critical infrastructures like dams, transport infrastructures, chemical plants and nuclear power plants. For many applications besides the design of infrastructures, it is of interest to assess the efficiency of the design measures taken. These applications require a method that allows a meaningful quantitative risk analysis to be performed. A new method for probabilistic scenario-based seismic risk analysis has been developed, based on a probabilistic extension of proven deterministic methods like the MCE methodology. The input data required for the method are entirely based on the information necessary to perform any meaningful seismic hazard analysis. The method follows the probabilistic risk analysis approach common in nuclear technology, developed originally by Kaplan & Garrick (1981). It is based on (1) a classification of earthquake events into different size classes (by magnitude), (2) the evaluation of the frequency of occurrence of the events assigned to the different classes (frequency of initiating events), (3) the development of bounding critical scenarios assigned to each class based on the solution of an optimization problem, and (4) the evaluation of the conditional probability of exceedance of critical design parameters (vulnerability analysis). The advantage of the method in comparison with traditional PSHA consists in (1) its flexibility, allowing the use of different probabilistic models of earthquake occurrence as well as the incorporation of advanced physical models into the analysis, (2) the mathematically consistent treatment of uncertainties, and (3) the explicit consideration of the lifetime of the critical structure as a criterion to formulate different risk goals. The method was applied to evaluate the risk of production interruption losses of a nuclear power plant during its residual lifetime.
Centrifuge modeling of rocking-isolated inelastic RC bridge piers
Loli, Marianna; Knappett, Jonathan A; Brown, Michael J; Anastasopoulos, Ioannis; Gazetas, George
2014-01-01
Experimental proof is provided of an unconventional seismic design concept, which is based on deliberately underdesigning shallow foundations to promote intense rocking oscillations and thereby to dramatically improve the seismic resilience of structures. Termed rocking isolation, this new seismic design philosophy is investigated through a series of dynamic centrifuge experiments on properly scaled models of a modern reinforced concrete (RC) bridge pier. The experimental method reproduces the nonlinear and inelastic response of both the soil-footing interface and the structure. To this end, a novel scale model RC (1:50 scale) that simulates reasonably well the elastic response and the failure of prototype RC elements is utilized, along with realistic representation of the soil behavior in a geotechnical centrifuge. A variety of seismic ground motions are considered as excitations. They result in consistent demonstrably beneficial performance of the rocking-isolated pier in comparison with the one designed conventionally. Seismic demand is reduced in terms of both inertial load and deck drift. Furthermore, foundation uplifting has a self-centering potential, whereas soil yielding is shown to provide a particularly effective energy dissipation mechanism, exhibiting significant resistance to cumulative damage. Thanks to such mechanisms, the rocking pier survived, with no signs of structural distress, a deleterious sequence of seismic motions that caused collapse of the conventionally designed pier. © 2014 The Authors Earthquake Engineering & Structural Dynamics Published by John Wiley & Sons Ltd. PMID:26300573
DOT National Transportation Integrated Search
1997-05-01
This document presents a series of five design examples illustrating the principles and methods of geotechnical earthquake engineering and seismic design for highway facilities. These principles and methods are described in Volume I - Design Principl...
Permafrost Active Layer Seismic Interferometry Experiment (PALSIE).
DOE Office of Scientific and Technical Information (OSTI.GOV)
Abbott, Robert; Knox, Hunter Anne; James, Stephanie
2016-01-01
We present findings from a novel field experiment conducted at Poker Flat Research Range in Fairbanks, Alaska that was designed to monitor changes in active layer thickness in real time. Results are derived primarily from seismic data streaming from seven Nanometric Trillium Posthole seismometers directly buried in the upper section of the permafrost. The data were evaluated using two analysis methods: Horizontal to Vertical Spectral Ratio (HVSR) and ambient noise seismic interferometry. Results from the HVSR conclusively illustrated the method's effectiveness at determining the active layer's thickness with a single station. Investigations with the multi-station method (ambient noise seismic interferometry) are continuing at the University of Florida and have not yet conclusively determined active layer thickness changes. Further work continues with the Bureau of Land Management (BLM) to determine if the ground based measurements can constrain satellite imagery, which provides measurements on a much larger spatial scale.
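The single-station HVSR workflow referred to above can be sketched as follows: compute horizontal and vertical amplitude spectra, take their ratio, pick the peak frequency, and convert it to a layer thickness with the quarter-wavelength rule. The synthetic traces, search band and shear-wave velocity below are placeholders chosen only to make the sketch self-contained.

    import numpy as np

    def hvsr_peak(north, east, vert, fs, nfft=4096):
        """Horizontal-to-Vertical Spectral Ratio and its peak frequency."""
        freqs = np.fft.rfftfreq(nfft, d=1.0 / fs)
        spec = lambda x: np.abs(np.fft.rfft(x * np.hanning(len(x)), nfft))
        h = np.sqrt(spec(north) * spec(east))          # geometric mean of horizontals
        ratio = h / np.maximum(spec(vert), 1e-12)
        band = (freqs > 1.0) & (freqs < 50.0)          # search band (placeholder)
        f0 = freqs[band][np.argmax(ratio[band])]
        return freqs, ratio, f0

    # Synthetic placeholder traces standing in for one three-component station.
    fs, n = 200.0, 4096
    rng = np.random.default_rng(0)
    t = np.arange(n) / fs
    n_tr = rng.normal(size=n) + 0.5 * np.sin(2 * np.pi * 8.0 * t)   # 8 Hz resonance
    e_tr = rng.normal(size=n) + 0.5 * np.sin(2 * np.pi * 8.0 * t)
    z_tr = rng.normal(size=n)

    freqs, ratio, f0 = hvsr_peak(n_tr, e_tr, z_tr, fs)

    # Quarter-wavelength rule: resonance of a soft layer over a stiff half-space.
    vs_layer = 250.0                      # assumed shear-wave velocity of the layer (m/s)
    thickness = vs_layer / (4.0 * f0)
    print(f"HVSR peak f0 = {f0:.1f} Hz -> estimated layer thickness ~ {thickness:.2f} m")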
ADVANCED SEISMIC BASE ISOLATION METHODS FOR MODULAR REACTORS
DOE Office of Scientific and Technical Information (OSTI.GOV)
E. Blanford; E. Keldrauk; M. Laufer
2010-09-20
Advanced technologies for structural design and construction have the potential for major impact not only on nuclear power plant construction time and cost, but also on the design process and on the safety, security and reliability of the next generation of nuclear power plants. In future Generation IV (Gen IV) reactors, structural and seismic design should be much more closely integrated with the design of nuclear and industrial safety systems, physical security systems, and international safeguards systems. Overall reliability will be increased through the use of replaceable and modular equipment, and through design to facilitate on-line monitoring, in-service inspection, maintenance, replacement, and decommissioning. Economics will also receive high design priority, through integrated engineering efforts to optimize building arrangements to minimize building heights and footprints. Finally, the licensing approach will be transformed by becoming increasingly performance based and technology neutral, using best-estimate simulation methods with uncertainty and margin quantification. In this context, two structural engineering technologies, seismic base isolation and modular steel-plate/concrete composite structural walls, are investigated. These technologies have major potential to (1) enable standardized reactor designs to be deployed across a wider range of sites, (2) reduce the impact of uncertainties related to site-specific seismic conditions, and (3) alleviate reactor equipment qualification requirements. For Gen IV reactors the potential for deliberate crashes of large aircraft must also be considered in design. This report concludes that base-isolated structures should be decoupled from the reactor external event exclusion system. As an example, a scoping analysis is performed for a rectangular, decoupled external event shell designed as a grillage. This report also reviews modular construction technology, particularly steel-plate/concrete construction using factory prefabricated structural modules, for application to external event shell and base isolated structures.
Campbell, David L.; Watts, Raymond D.
1978-01-01
Program listing, instructions, and example problems are given for 12 programs for the interpretation of geophysical data, for use on Hewlett-Packard models 67 and 97 programmable hand-held calculators. These are (1) gravity anomaly over a 2D prism with ≤ 9 vertices--Talwani method; (2) magnetic anomaly (ΔT, ΔV, or ΔH) over a 2D prism with ≤ 8 vertices--Talwani method; (3) total-field magnetic anomaly profile over a thick sheet/thin dike; (4) single dipping seismic refractor--interpretation and design; (5) ≤ 4 dipping seismic refractors--interpretation; (6) ≤ 4 dipping seismic refractors--design; (7) vertical electrical sounding over ≤ 10 horizontal layers--Schlumberger or Wenner forward calculation; (8) vertical electric sounding: Dar Zarrouk calculations; (9) magnetotelluric plane-wave apparent conductivity and phase angle over ≤ 9 horizontal layers--forward calculation; (10) petrophysics: a.c. electrical parameters; (11) petrophysics: elastic constants; (12) digital convolution with a ≤ 10-length filter.
NASA Astrophysics Data System (ADS)
Wang, L.; Toshioka, T.; Nakajima, T.; Narita, A.; Xue, Z.
2017-12-01
In recent years, more and more Carbon Capture and Storage (CCS) studies focus on seismicity monitoring. For the safety management of geological CO2 storage at Tomakomai, Hokkaido, Japan, an Advanced Traffic Light System (ATLS) combining different seismic messages (magnitudes, phases, distributions, etc.) is proposed for injection control. The primary task for the ATLS is the detection of seismic events in a long-term, continuously recorded time series. Because the time-varying Signal to Noise Ratio (SNR) of a long-term record and the uneven energy distribution of seismic event waveforms increase the difficulty of automatic seismic detection, an improved probabilistic autoregressive (AR) method for automatic seismic event detection is applied in this work. This algorithm, called sequentially discounting AR learning (SDAR), identifies effective seismic events in the time series through Change Point Detection (CPD) on the seismic record. In this method, an anomalous signal (seismic event) is treated as a change point in the time series (seismic record): the statistical model of the signal in the neighborhood of the event changes because of the occurrence of the seismic event. This means the SDAR aims to find statistical irregularities in the record through CPD. SDAR has three advantages. (1) Anti-noise ability: SDAR does not use waveform attributes (such as amplitude, energy, or polarization) for signal detection, so it is an appropriate technique for low-SNR data. (2) Real-time estimation: when new data appear in the record, the probability distribution models are automatically updated by SDAR for on-line processing. (3) Discounting property: SDAR introduces a discounting parameter that gradually reduces the influence of older data on the current estimates, which makes SDAR robust for non-stationary signal processing. With these three advantages, the SDAR method can handle non-stationary, time-varying long-term series and achieve real-time monitoring. Finally, we apply SDAR to a synthetic model and to Tomakomai Ocean Bottom Cable (OBC) baseline data to demonstrate the feasibility and advantages of our method.
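A much-simplified sketch of the discounting AR idea is given below: AR coefficients are updated recursively with a forgetting factor and each new sample is scored by the log-loss of its one-step prediction, so that change points show up as score peaks. This is not the full SDAR formulation used in the study; the model order, discounting rate and synthetic record are placeholders.

    import numpy as np

    def sdar_scores(x, order=4, r=0.01):
        """Simplified sequentially discounting AR scoring: AR(order) statistics
        are updated with discounting factor r, and each sample is scored by the
        negative log likelihood of its one-step prediction error."""
        n = len(x)
        mu = 0.0
        c = np.zeros(order + 1)           # discounted autocovariance estimates
        sigma2 = 1.0
        scores = np.zeros(n)
        for t in range(order, n):
            mu = (1 - r) * mu + r * x[t]
            for j in range(order + 1):
                c[j] = (1 - r) * c[j] + r * (x[t] - mu) * (x[t - j] - mu)
            # Solve Yule-Walker-like equations for the current AR coefficients.
            R = np.array([[c[abs(i - j)] for j in range(order)] for i in range(order)])
            w = np.linalg.solve(R + 1e-6 * np.eye(order), c[1:order + 1])
            pred = mu + w @ (x[t - order:t][::-1] - mu)
            err = x[t] - pred
            sigma2 = (1 - r) * sigma2 + r * err ** 2
            scores[t] = 0.5 * (np.log(2 * np.pi * sigma2) + err ** 2 / sigma2)
        return scores

    # Placeholder record: background noise with a small "event" inserted.
    rng = np.random.default_rng(1)
    rec = rng.normal(scale=1.0, size=5000)
    rec[3000:3200] += 3.0 * np.sin(2 * np.pi * 0.1 * np.arange(200))
    s = sdar_scores(rec)
    print("change-point candidate near sample", int(np.argmax(s)))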
NASA Astrophysics Data System (ADS)
Zhao, J. K.; Xu, X. S.
2017-11-01
The column-cutting and jacking technology is a method for increasing story height that has been widely used and has received much attention in engineering. The stiffness changes after the columns are cut and the structure is jacked, which directly affects the overall seismic performance, so seismic strengthening measures are usually necessary to enhance the stiffness. A five-story frame structure jacking project in the Jinan High-tech Zone was taken as an example, and three finite element models were established: the frame model before lifting, after lifting, and after strengthening. Based on the stiffness, dynamic time-history analyses were carried out to study the seismic performance under the El Centro seismic wave, the Taft seismic wave, and a Tianjin artificial seismic wave. The research can provide some guidance for the design and construction of the entire jack-lifting structure.
Optimal design of structures for earthquake loads by a hybrid RBF-BPSO method
NASA Astrophysics Data System (ADS)
Salajegheh, Eysa; Gholizadeh, Saeed; Khatibinia, Mohsen
2008-03-01
The optimal seismic design of structures requires that time history analyses (THA) be carried out repeatedly. This makes the optimal design process inefficient, in particular, if an evolutionary algorithm is used. To reduce the overall time required for structural optimization, two artificial intelligence strategies are employed. In the first strategy, radial basis function (RBF) neural networks are used to predict the time history responses of structures in the optimization flow. In the second strategy, a binary particle swarm optimization (BPSO) is used to find the optimum design. Combining the RBF and BPSO, a hybrid RBF-BPSO optimization method is proposed in this paper, which achieves fast optimization with high computational performance. Two examples are presented and compared to determine the optimal weight of structures under earthquake loadings using both exact and approximate analyses. The numerical results demonstrate the computational advantages and effectiveness of the proposed hybrid RBF-BPSO optimization method for the seismic design of structures.
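A compact sketch of the two strategies is shown below: a Gaussian RBF surrogate is trained on a small set of "exact" evaluations and a binary PSO then searches the design space on the cheap surrogate. The structural objective is a stand-in penalty function, and the binary encoding and all algorithm parameters are placeholders rather than values from the paper.

    import numpy as np

    rng = np.random.default_rng(0)
    n_var = 10                       # binary design variables (placeholder encoding)

    def exact_analysis(x):
        """Stand-in for a costly time history analysis: returns structural weight
        plus a penalty when the (fake) drift constraint is violated."""
        weight = 1.0 + x.sum()                       # heavier as members are added
        drift = 2.0 / (1.0 + x.sum())                # stiffer -> smaller drift
        penalty = 100.0 * max(0.0, drift - 0.4)
        return weight + penalty

    # --- Strategy 1: Gaussian RBF surrogate trained on a small exact-analysis sample ---
    X_train = rng.integers(0, 2, size=(60, n_var)).astype(float)
    y_train = np.array([exact_analysis(x) for x in X_train])
    eps = 1.0

    def rbf_kernel(A, B):
        d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
        return np.exp(-eps * d2)

    weights = np.linalg.solve(rbf_kernel(X_train, X_train) + 1e-8 * np.eye(len(X_train)), y_train)
    surrogate = lambda x: rbf_kernel(x[None, :], X_train) @ weights

    # --- Strategy 2: binary PSO searching the design space on the cheap surrogate ---
    n_part, n_iter = 30, 100
    pos = rng.integers(0, 2, size=(n_part, n_var)).astype(float)
    vel = rng.normal(size=(n_part, n_var))
    pbest, pbest_f = pos.copy(), np.array([float(surrogate(p)) for p in pos])
    gbest = pbest[np.argmin(pbest_f)].copy()

    for _ in range(n_iter):
        r1, r2 = rng.random((2, n_part, n_var))
        vel = 0.7 * vel + 1.5 * r1 * (pbest - pos) + 1.5 * r2 * (gbest - pos)
        pos = (rng.random((n_part, n_var)) < 1 / (1 + np.exp(-vel))).astype(float)  # sigmoid rule
        f = np.array([float(surrogate(p)) for p in pos])
        better = f < pbest_f
        pbest[better], pbest_f[better] = pos[better], f[better]
        gbest = pbest[np.argmin(pbest_f)].copy()

    print("best design:", gbest.astype(int), "exact objective:", exact_analysis(gbest))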
NASA Astrophysics Data System (ADS)
Matuła, Rafał; Lewińska, Paulina
2018-01-01
This paper revolves around a newly designed and constructed system for 2D seismic measurement in natural, subsoil conditions and the role of land surveying in obtaining accurate results and linking them to 3D surface maps. A new type of land streamer, designed for shallow subsurface exploration, is described. In land seismic data acquisition a vehicle tows a line of seismic cable mounted on a construction called a streamer. The measurements of points and shots are taken while the line is stationary, placed at an arbitrary position along the seismic profile. The presented land streamer consists of 24 innovative gimballed 10 Hz geophones, eliminating the need for hand planting of geophones and reducing time and costs. With current survey techniques, all data obtained with this instrument are transferred into 2D and 3D maps, and this process is becoming increasingly automatic.
NASA Astrophysics Data System (ADS)
Jiang, T.; Yue, Y.
2017-12-01
It is well known that mono-frequency directional seismic wave technology can concentrate seismic waves into a beam. However, little work has been done on the method and effect of variable frequency directional seismic waves under complex geological conditions. We studied variable frequency directional wave theory in several respects. First, we studied the relation between the directional parameters and the direction of the main beam. Second, we analyzed the parameters that significantly affect the width of the main beam, such as vibrator spacing, wavelet dominant frequency, and the number of vibrators. In addition, we studied the characteristics of variable frequency directional seismic waves in typical velocity models. To examine the propagation characteristics of directional seismic waves, we designed appropriate parameters according to the behavior of the direction parameters, which enhances the energy along the main beam direction. Directional seismic waves were further examined from the viewpoint of the power spectrum. The results indicate that the energy along the main beam direction increased 2 to 6 times for a multi-ore-body velocity model, showing that variable frequency directional seismic technology provides an effective way to strengthen target signals under complex geological conditions. For a concave interface model, we introduced a more elaborate directional seismic technology that supports multiple main beams to obtain high quality data. Finally, we applied the 9-element variable frequency directional seismic wave technology to process raw data acquired in an oil-shale exploration area. The results show that the depth of exploration increased 4 times with the directional seismic wave method. Based on the above analysis, we conclude that variable frequency directional seismic wave technology can improve target signals under different geologic conditions and increase exploration depth at little cost. Because hydraulic vibrators are inconvenient in areas with complicated surface conditions, we suggest that the combination of a high frequency portable vibrator and the variable frequency directional seismic wave method is an alternative technology to increase the depth of exploration or prospecting.
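The beam-forming principle behind such directional source arrays can be illustrated with a simple delay-and-sum calculation: firing each vibrator with a delay proportional to its position steers the main beam, and the array geometry and wavelet frequency control the beam width. The sketch below uses placeholder values (9 vibrators, 10 m spacing, 40 Hz, 2000 m/s) and is not the variable-frequency scheme of the study.

    import numpy as np

    def array_beam_pattern(n_src, spacing, freq, vel, steer_deg):
        """Far-field beam pattern of a line of n_src vibrators whose firing delays
        steer the main beam toward steer_deg (degrees from vertical)."""
        angles = np.radians(np.linspace(-90, 90, 721))
        k = 2 * np.pi * freq / vel                       # wavenumber in the medium
        positions = spacing * np.arange(n_src)
        delays = positions * np.sin(np.radians(steer_deg)) / vel   # steering delays (s)
        # Sum the phased contributions of each vibrator in every propagation direction.
        phase = k * positions[None, :] * np.sin(angles)[:, None] - 2 * np.pi * freq * delays[None, :]
        response = np.abs(np.exp(1j * phase).sum(axis=1)) / n_src
        return np.degrees(angles), response

    # Placeholder parameters: 9 vibrators, 10 m spacing, 40 Hz wavelet, 2000 m/s medium.
    ang, resp = array_beam_pattern(n_src=9, spacing=10.0, freq=40.0, vel=2000.0, steer_deg=20.0)
    main = ang[np.argmax(resp)]
    width = ang[resp > 0.7][-1] - ang[resp > 0.7][0]     # rough main-lobe width
    print(f"main beam at {main:.1f} deg, approximate beam width {width:.1f} deg")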
Seismic Response Control Of Structures Using Semi-Active and Passive Variable Stiffness Devices
NASA Astrophysics Data System (ADS)
Salem, Mohamed M. A.
Controllable devices such as Magneto-Rheological Fluid Dampers, Electro-Rheological Dampers, and controllable friction devices have been studied extensively with limited implementation in real structures. Such devices have shown great potential in reducing seismic demands, either as smart base isolation systems, or as smart devices for multistory structures. Although variable stiffness devices can be used for seismic control of structures, the vast majority of research effort has been given to the control of damping. The primary focus of this dissertation is to evaluate the seismic control of structures using semi-active and passive variable stiffness characteristics. Smart base isolation systems employing variable stiffness devices have been studied, and two semi-active control strategies are proposed. The control algorithms were designed to reduce the superstructure and base accelerations of seismically isolated structures subjected to near-fault and far-field ground motions. Computational simulations of the proposed control algorithms on the benchmark structure have shown that excessive base displacements associated with near-fault ground motions may be better mitigated with the use of variable stiffness devices. However, the device properties must be controllable to produce a wide range of stiffness changes for an effective control of the base displacements. The potential of controllable stiffness devices in limiting the base displacement due to near-fault excitation without compromising the performance of conventionally isolated structures is illustrated. The application of passive variable stiffness devices for seismic response mitigation of multistory structures is also investigated. A stiffening bracing system (SBS) is proposed to replace the conventional bracing systems of braced frames. An optimization process for the SBS parameters has been developed. The main objective of the design process is to maintain a uniform inter-story drift angle over the building's height, which in turn would evenly distribute the seismic demand over the building. This behavior is particularly essential so that any possible damage is not concentrated in a single story. Furthermore, the proposed design ensures that additional damping devices distributed over the building's height work efficiently with their maximum design capacity, leading to a cost efficient design. An integrated and comprehensive design procedure that can be readily adopted by the current seismic design codes is proposed. An equivalent lateral force distribution is developed that shows a good agreement with the response history analyses in terms of seismic performance and demand prediction. This lateral force pattern explicitly accounts for the higher mode effect, the dynamic characteristics of the structure, the supplemental damping, and the site specific seismic hazard. Therefore, the proposed design procedure is considered as a standalone method for the design of SBS equipped buildings.
Intelligent seismic risk mitigation system on structure building
NASA Astrophysics Data System (ADS)
Suryanita, R.; Maizir, H.; Yuniorto, E.; Jingga, H.
2018-01-01
Indonesia, located on the Pacific Ring of Fire, is one of the highest-risk seismic zones in the world. Strong ground motion can cause catastrophic collapse of buildings, leading to casualties and property damage. Therefore, it is imperative to properly design the structural response of buildings against seismic hazard. The seismic-resistant building design process requires structural analysis to obtain the necessary building responses, but such analysis can be difficult and time-consuming. This study aims to predict structural responses, including displacement, velocity, and acceleration, of a multi-storey building with a fixed floor plan using the Artificial Neural Network (ANN) method, based on the 2010 Indonesian seismic hazard map. By varying the building height, soil condition, and seismic location across 47 cities in Indonesia, 6345 data sets were obtained and fed into the ANN model for the learning process. The trained ANN can predict the displacement, velocity, and acceleration responses with a prediction rate of up to 96%. The trained ANN architecture and weight factors were later used to build a simple tool in a Visual Basic program that provides the structural response prediction features mentioned previously.
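A minimal sketch of this surrogate-prediction idea, assuming a generic feed-forward network and a synthetic stand-in for the 6345 analysis cases, might look as follows; the input features, the response target and the data-generating rule are placeholders, not the study's data.

    import numpy as np
    from sklearn.neural_network import MLPRegressor
    from sklearn.preprocessing import StandardScaler
    from sklearn.pipeline import make_pipeline
    from sklearn.model_selection import train_test_split

    # Placeholder data set: inputs are building height (m), a site-class index and a
    # mapped spectral acceleration (g); the target is roof displacement (m). The
    # generating rule below is synthetic, not the study's analysis results.
    rng = np.random.default_rng(0)
    n = 2000
    height = rng.uniform(10, 80, n)
    site_class = rng.integers(1, 5, n).astype(float)
    sa = rng.uniform(0.1, 1.0, n)
    disp = 0.004 * height * sa * (1 + 0.15 * site_class) + rng.normal(0, 0.01, n)

    X = np.column_stack([height, site_class, sa])
    X_tr, X_te, y_tr, y_te = train_test_split(X, disp, test_size=0.2, random_state=0)

    model = make_pipeline(
        StandardScaler(),
        MLPRegressor(hidden_layer_sizes=(20, 20), max_iter=3000, random_state=0),
    )
    model.fit(X_tr, y_tr)
    print(f"R^2 on held-out cases: {model.score(X_te, y_te):.3f}")
    print("predicted roof displacement for a 40 m building, site class 3, Sa=0.6 g:",
          float(model.predict([[40.0, 3.0, 0.6]])[0]))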
Structural vibration passive control and economic analysis of a high-rise building in Beijing
NASA Astrophysics Data System (ADS)
Chen, Yongqi; Cao, Tiezhu; Ma, Liangzhe; Luo, Chaoying
2009-12-01
Performance analysis of the Pangu Plaza under earthquake and wind loads is described in this paper. The plaza is a 39-story steel high-rise building, 191 m high, located in Beijing close to the 2008 Olympic main stadium. It has both fluid viscous dampers (FVDs) and buckling restrained braces or unbonded brace (BRB or UBB) installed. A repeated iteration procedure in its design and analysis was adopted for optimization. Results from the seismic response analysis in the horizontal and vertical directions show that the FVDs are highly effective in reducing the response of both the main structure and the secondary system. A comparative analysis of structural seismic performance and economic impact was conducted using traditional methods, i.e., increased size of steel columns and beams and/or use of an increased number of seismic braces versus using FVD. Both the structural response and economic analysis show that using FVD to absorb seismic energy not only satisfies the Chinese seismic design code for a “rare” earthquake, but is also the most economical way to improve seismic performance both for one-time direct investment and long term maintenance.
NASA Astrophysics Data System (ADS)
Martínez, K.; Mendoza, J. A.; Colberg-Larsen, J.; Ploug, C.
2009-05-01
Near surface geophysics applications are gaining more widespread use in geotechnical and engineering projects. The development of data acquisition, processing tools and interpretation methods has optimized survey time, reduced logistics costs and increased the reliability of seismic survey results during the last decades. However, the use of wide-scale geophysical methods in urban environments continues to face great challenges due to the multiple noise sources and obstacles inherent to cities. A seismic pre-investigation was conducted to assess the feasibility of using seismic methods to obtain information about subsurface layer locations and media properties in Copenhagen. Such information is needed for hydrological, geotechnical and groundwater modeling related to the Cityringen underground metro project. The pre-investigation objectives were to validate the methods in an urban environment and to optimize field survey procedures, processing and interpretation methods in urban settings in the event of further seismic investigations. The geological setting at the survey site is characterized by several interlaced layers of clay, till and sand. These layers are unevenly distributed throughout the city and present varying thickness, overlying several different unit types of limestone at shallow depths. Specific result objectives were to map the bedrock surface, ascertain a structural geological framework and investigate bedrock media properties relevant to the construction design. The seismic test consisted of combined seismic reflection and refraction analyses of a profile line along an approximately 1400 m section in the northern part of Copenhagen, along the projected metro city line. The data acquisition was carried out using a 192-channel array, receiver groups with 5 m spacing and a Vibroseis source at 10 m spacing. In addition, six vertical seismic profiles (VSP) were performed at boreholes located along the line. The reflection data underwent standard interpretation and the refraction data were processed with wavepath Eikonal traveltime tomography. The reflection results indicate the presence of horizontal reflectors with discontinuities likely related to deep-lying structural features in the underlying chalk layers. The refraction interpretation allowed identification of the upper limestone surface, which is relevant to map for tunneling design. The VSP provided additional information on limestone quality and correlation data for improved refraction interpretation. In general, the pre-investigation results demonstrated that it is possible to image the limestone surface using the seismic method. The satisfactory results led to the implementation of a 15 km survey planned for spring 2009. The survey will combine reflection, refraction, walkaway-VSP and electrical resistivity tomography (ERT). The authors wish to acknowledge Metroselskabet I/S for permission to present the preliminary results and the Cityringen Joint Venture partners Arup and Systra.
NASA Astrophysics Data System (ADS)
Fang, Yi; Huang, Yahong
2017-12-01
Evaluating sand liquefaction on the basis of design codes is an important part of geotechnical design. However, the results sometimes fail to conform to the damage actually observed in earthquakes. Based on the damage in the Tangshan earthquake and the engineering geological conditions, three typical sites were chosen, and the sand liquefaction probability was evaluated at the three sites using the method in the Code for Seismic Design of Buildings; the results were compared with the sand liquefaction actually observed in the earthquake. The comparison shows that the difference between code-based liquefaction estimates and the actual earthquake damage is mainly attributable to two aspects. The first includes the disparity between the seismic fortification intensity and the actual seismic shaking, changes in groundwater level, the thickness of the overlying non-liquefiable soil layer, local site effects, and personal error. The second is that, although the judgment methods in the codes are broadly applicable, the limitations of the basic data and anomalies in the judgment formulas also contribute to the difference.
Building configuration and seismic design: The architecture of earthquake resistance
NASA Astrophysics Data System (ADS)
Arnold, C.; Reitherman, R.; Whitaker, D.
1981-05-01
The architecture of a building in relation to its ability to withstand earthquakes is examined. Aspects of ground motion that are significant to building behavior are discussed. Results of a survey of configuration decisions that affect the performance of buildings are provided, with a focus on the architectural aspects of configuration design. Configuration derivation, building type as it relates to seismic design, and seismic issues in the design process are examined. Case studies of the Veterans' Administration Hospital in Loma Linda, California, and the Imperial Hotel in Tokyo, Japan, are presented. The seismic design process is described, paying special attention to configuration issues. The need is stressed for guidelines, codes, and regulations to ensure design solutions that respect and balance the full range of architectural, engineering, and material influences on seismic hazards.
NASA Astrophysics Data System (ADS)
Luo, D.; Cai, F.
2017-12-01
Small-scale, high-resolution marine multi-channel seismic surveys using large-energy sparkers are characterized by a high dominant source frequency, wide bandwidth, and high resolution. The technology was designed to improve the imaging quality of shallow sediments with high resolution and high detection precision. In this study, a 20 kJ sparker and a 24-channel streamer cable with a 6.25 m group interval were used as the seismic source and receiver system, respectively. Key factors for seismic imaging of gas hydrate are enhancement of the S/N ratio, amplitude compensation and detailed velocity analysis. However, the data in this study have the following characteristics: (1) small maximum offsets, which are adverse to velocity analysis and multiple attenuation; (2) a lack of low-frequency information, i.e., frequencies below 100 Hz are not usable; (3) a low S/N ratio owing to the low fold (only 12). These characteristics make it difficult to reach the targets of seismic imaging. In this study, targeted processing methods are used to improve the seismic imaging quality of gas hydrate. First, several noise suppression technologies are combined in the pre-stack seismic data to suppress noise and improve the S/N ratio, including a spectrum-sharing noise elimination method, median filtering and an exogenous interference suppression method. Second, a combination of three technologies, SRME, τ-p deconvolution and high-precision Radon transformation, is used to remove multiples. Third, an accurate velocity field is used in amplitude compensation to highlight the Bottom Simulating Reflector (BSR, an indicator of gas hydrates) and gas migration pathways (such as gas chimneys and hot spots). Fourth, fine velocity analysis is used to improve the accuracy of the velocity model. Fifth, pre-stack deconvolution is used to compensate for low-frequency energy and suppress the ghost, thereby highlighting formation reflection characteristics. The results show that small-scale, high-resolution marine sparker multi-channel seismic surveys are very effective in improving the resolution and quality of gas hydrate imaging compared with conventional seismic acquisition technology.
Earthquake design criteria for small hydro projects in the Philippines
DOE Office of Scientific and Technical Information (OSTI.GOV)
Martin, P.P.; McCandless, D.H.; Asce, M.
1995-12-31
The definition of the seismic environment and seismic design criteria of more than twenty small hydro projects in the northern part of the island of Luzon in the Philippines took on special urgency in the wake of the Magnitude 7.7 earthquake that shook the island on July 17, 1990. The paper describes the approach followed to determine design shaking level criteria at each hydro site consistent with the seismic environment estimated at that same site. The approach consisted of three steps: (1) Seismicity: understanding the mechanisms and tectonic features susceptible to generate seismicity and estimating the associated seismicity levels, (2) Seismic Hazard: in the absence of an accurate historical record, using statistics to determine the expected level of ground shaking at a site during the operational 100-year design life of each project, and (3) Criteria Selection: finally and most importantly, exercising judgment in estimating the final proposed level of shaking at each site. The resulting characteristics of estimated seismicity and seismic hazard and the proposed final earthquake design criteria are provided.
Seismic Response Analysis of an Unanchored Steel Tank under Horizontal Excitation
NASA Astrophysics Data System (ADS)
Rulin, Zhang; Xudong, Cheng; Youhai, Guan
2017-06-01
The seismic performance of liquid storage tanks affects the safety of people's lives and property. A 3-D finite element method (FEM) model of a storage tank is established, which considers the liquid-solid coupling effect. The displacement and stress distributions along the tank wall are then studied under the El Centro earthquake. Results show that large-amplitude, long-period sloshing appears on the liquid surface. Elephant-foot deformation occurs near the tank bottom, where the maximum hoop and axial stresses appear. The maximum axial compressive stress is very close to the allowable critical stress calculated by the design code, so local buckling failure may occur. The research can provide some reference for the seismic design of storage tanks.
IMPLEMENTATION OF THE SEISMIC DESIGN CRITERIA OF DOE-STD-1189-2008 APPENDIX A [FULL PAPER
DOE Office of Scientific and Technical Information (OSTI.GOV)
OMBERG SK
2008-05-14
This paper describes the approach taken by two Fluor Hanford projects for implementing the seismic design criteria from DOE-STD-1189-2008, Appendix A. The existing seismic design criteria and the new seismic design criteria are described, and an assessment of the primary differences is provided. The gaps within the new system of seismic design criteria, which necessitate conducting portions of the work to the existing technical standards pending availability of applicable industry standards, are discussed. Two Hanford Site projects currently in the Critical Decision (CD)-1 phase of design have developed an approach to implementation of the new criteria. Calculations have been performed to determine the seismic design category for one project, based on information available in early CD-1. The potential effects of the DOE-STD-1189-2008, Appendix A seismic design criteria on the process of project alternatives analysis are discussed. Presentation of this work is expected to benefit others in the DOE Complex that may be implementing DOE-STD-1189-2008.
Implied preference for seismic design level and earthquake insurance.
Goda, K; Hong, H P
2008-04-01
Seismic risk can be reduced by implementing newly developed seismic provisions in design codes. Furthermore, financial protection or enhanced utility and happiness for stakeholders could be gained through the purchase of earthquake insurance. If this is not so, there would be no market for such insurance. However, perceived benefit associated with insurance is not universally shared by stakeholders partly due to their diverse risk attitudes. This study investigates the implied seismic design preference with insurance options for decisionmakers of bounded rationality whose preferences could be adequately represented by the cumulative prospect theory (CPT). The investigation is focused on assessing the sensitivity of the implied seismic design preference with insurance options to model parameters of the CPT and to fair and unfair insurance arrangements. Numerical results suggest that human cognitive limitation and risk perception can affect the implied seismic design preference by the CPT significantly. The mandatory purchase of fair insurance will lead the implied seismic design preference to the optimum design level that is dictated by the minimum expected lifecycle cost rule. Unfair insurance decreases the expected gain as well as its associated variability, which is preferred by risk-averse decisionmakers. The obtained results of the implied preference for the combination of the seismic design level and insurance option suggest that property owners, financial institutions, and municipalities can take advantage of affordable insurance to establish successful seismic risk management strategies.
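The cumulative prospect theory evaluation referred to above can be sketched with the standard Tversky-Kahneman value and probability-weighting functions; the loss prospect, the fair premium and the comparison of an insured versus an uninsured option below are illustrative placeholders, not the decision variables of the study.

    import numpy as np

    # Tversky & Kahneman (1992) cumulative prospect theory pieces with their
    # commonly cited parameter estimates; the loss prospect and premium below are
    # illustrative placeholders.
    alpha, beta, lam, gamma = 0.88, 0.88, 2.25, 0.69

    def value(x):
        return x ** alpha if x >= 0 else -lam * (-x) ** beta

    def weight(p):
        return p ** gamma / (p ** gamma + (1 - p) ** gamma) ** (1 / gamma)

    def cpt_binary_loss(loss, p_loss):
        """CPT evaluation of a prospect that loses `loss` with probability p_loss."""
        return weight(p_loss) * value(-loss)

    annual_p = 0.01            # placeholder annual probability of a damaging event
    loss = 200_000.0           # placeholder repair loss for the chosen design level
    fair_premium = annual_p * loss

    uninsured = cpt_binary_loss(loss, annual_p)
    insured = value(-fair_premium)       # certain premium payment, no residual loss
    print(f"CPT value without insurance: {uninsured:,.0f}")
    print(f"CPT value with fair insurance: {insured:,.0f}")
    print("insurance preferred" if insured > uninsured else "no insurance preferred")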
Deghosting based on the transmission matrix method
NASA Astrophysics Data System (ADS)
Wang, Benfeng; Wu, Ru-Shan; Chen, Xiaohong
2017-12-01
As seismic exploration and subsequent exploitation advance, marine acquisition systems with towed streamers have become an important seismic data acquisition method. However, the air-water reflective interface generates surface-related multiples, including ghosts, which can affect the accuracy and performance of subsequent seismic data processing algorithms. We therefore derive a deghosting method from a new perspective, i.e. using the transmission matrix (T-matrix) method instead of inverse scattering series. The T-matrix-based deghosting algorithm includes all scattering effects and converges absolutely. The effectiveness of the proposed method is first demonstrated using synthetic data obtained from a designed layered model, and its noise-resistant property is also illustrated using noisy synthetic data contaminated by random noise. Numerical examples on complicated data from the open SMAART Pluto model and field marine data further demonstrate the validity and flexibility of the proposed method. After deghosting, low frequency components are recovered reasonably and spurious high frequency components are attenuated; the recovered low frequency components will be useful for subsequent full waveform inversion. The proposed deghosting method is currently suitable for two-dimensional towed streamer cases with accurate constant depth information, and its extension to variable-depth streamers in three-dimensional cases will be studied in the future.
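For orientation, the simple receiver-ghost physics that any deghosting scheme must undo (not the T-matrix formulation of the paper) can be sketched as a frequency-domain ghost operator applied to, and then removed from, a trace by regularized spectral division; streamer depth, water velocity and the sea-surface reflection coefficient are placeholder values.

    import numpy as np

    def add_and_remove_receiver_ghost(trace, dt, depth, vel=1500.0, r=-1.0, eps=1e-3):
        """Illustrative receiver-ghost model for a constant-depth towed streamer
        (vertical incidence): ghost operator G(f) = 1 + r*exp(-i*2*pi*f*tau) with
        tau = 2*depth/vel. Deghosting is regularized spectral division by G."""
        n = len(trace)
        freqs = np.fft.rfftfreq(n, d=dt)
        tau = 2.0 * depth / vel
        ghost_op = 1.0 + r * np.exp(-2j * np.pi * freqs * tau)

        ghosted = np.fft.irfft(np.fft.rfft(trace) * ghost_op, n)      # forward model
        deghosted = np.fft.irfft(
            np.fft.rfft(ghosted) * np.conj(ghost_op) / (np.abs(ghost_op) ** 2 + eps), n
        )
        return ghosted, deghosted

    # Placeholder example: a single Ricker-like arrival recorded at an 8 m deep streamer.
    dt, n = 0.002, 1024
    t = np.arange(n) * dt
    trace = (1 - 2 * (np.pi * 25 * (t - 0.5)) ** 2) * np.exp(-(np.pi * 25 * (t - 0.5)) ** 2)
    ghosted, deghosted = add_and_remove_receiver_ghost(trace, dt, depth=8.0)
    print("max error after deghosting:", float(np.max(np.abs(deghosted - trace))))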
NASA Astrophysics Data System (ADS)
Ashraf Mohamad Ismail, Mohd; Ng, Soon Min; Hazreek Zainal Abidin, Mohd; Madun, Aziman
2018-04-01
The application of geophysical seismic refraction to slope stabilization design using the soil nailing method is demonstrated in this study. The potential weak layer of the study area is first identified prior to determining the appropriate length and location of the soil nails. A total of 7 seismic refraction survey lines were acquired in the study area using standard procedures. The refraction data were then analyzed using the Pickwin and Plotrefa software package to obtain the seismic velocity profiles. These results were correlated with complementary borehole data to interpret the subsurface profile of the study area. Layers 1 to 3 were identified as the potential weak zone susceptible to slope failure. Hence, soil nails should be installed to transfer the tensile load from the less stable layer 3 to the more stable layer 4. The soil-nail interaction provides a reinforcing action to the soil mass, thereby increasing the stability of the slope.
Seismic Vulnerability and Performance Level of confined brick walls
NASA Astrophysics Data System (ADS)
Ghalehnovi, M.; Rahdar, H. A.
2008-07-01
There has been increasing interest among engineers and designers in design methods based on displacement and behavior (performance-based design), given the importance of designing structures to resist dynamic loads such as earthquakes and the difficulty of predicting the nonlinear behavior of elements caused by the nonlinear properties of construction materials. Economically speaking, the ease of construction and availability of masonry materials have caused an enormous increase in masonry structures in villages, towns and cities. On the other hand, it is necessary to study the behavior and seismic vulnerability of these kinds of structures, since Iran is located on the Alpide earthquake belt. Different environmental, economic, social and cultural factors and the availability of construction materials have led to different kinds of structural systems. In this study, several confined walls were modeled in software and subjected to dynamic analysis with accelerograms consistent with the local geological conditions, in order to investigate the seismic vulnerability and performance level of confined brick walls. The results of this analysis appear satisfactory when compared with the values in ATC-40, FEMA and Iranian Standard 2800.
Re-evaluation and updating of the seismic hazard of Lebanon
NASA Astrophysics Data System (ADS)
Huijer, Carla; Harajli, Mohamed; Sadek, Salah
2016-01-01
This paper presents the results of a study undertaken to evaluate the implications of the newly mapped offshore Mount Lebanon Thrust (MLT) fault system on the seismic hazard of Lebanon and the current seismic zoning and design parameters used by the local engineering community. This re-evaluation is critical, given that the MLT is located at close proximity to the major cities and economic centers of the country. The updated seismic hazard was assessed using probabilistic methods of analysis. The potential sources of seismic activities that affect Lebanon were integrated along with any/all newly established characteristics within an updated database which includes the newly mapped fault system. The earthquake recurrence relationships of these sources were developed from instrumental seismology data, historical records, and earlier studies undertaken to evaluate the seismic hazard of neighboring countries. Maps of peak ground acceleration contours, based on 10 % probability of exceedance in 50 years (as per Uniform Building Code (UBC) 1997), as well as 0.2 and 1 s peak spectral acceleration contours, based on 2 % probability of exceedance in 50 years (as per International Building Code (IBC) 2012), were also developed. Finally, spectral charts for the main coastal cities of Beirut, Tripoli, Jounieh, Byblos, Saida, and Tyre are provided for use by designers.
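The two hazard levels quoted above correspond, under the usual Poisson assumption, to mean return periods of roughly 475 and 2475 years, as the short conversion below shows.

    import numpy as np

    def return_period(p_exceed, years):
        """Mean return period implied by a Poisson exceedance probability over a window."""
        return -years / np.log(1.0 - p_exceed)

    for label, p, yrs in [("UBC 1997 (10% in 50 yr)", 0.10, 50.0),
                          ("IBC 2012 (2% in 50 yr)", 0.02, 50.0)]:
        print(f"{label}: return period ~ {return_period(p, yrs):.0f} years")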
NASA Astrophysics Data System (ADS)
Abdel Raheem, Shehata E.; Ahmed, Mohamed M.; Alazrak, Tarek M. A.
2015-03-01
Soil conditions have a great deal to do with damage to structures during earthquakes. Hence, investigating the energy transfer mechanism from soils to buildings during earthquakes is critical for the seismic design of multi-story buildings and for upgrading existing structures, and the need for research into soil-structure interaction (SSI) problems is greater than ever. Moreover, recent studies show that the effects of SSI may be detrimental to the seismic response of structures, and neglecting SSI in analysis may lead to unconservative design. Despite this, the conventional design procedure usually assumes fixity at the base of the foundation, neglecting the flexibility of the foundation, the compressibility of the underlying soil and, consequently, the effect of foundation settlement on further redistribution of bending moment and shear force demands. The SSI analysis of multi-story buildings is therefore the main focus of this research; the effects of SSI are analyzed for a typical multi-story building resting on a raft foundation. Three methods of analysis are used to evaluate the seismic demands of the target moment-resisting frame buildings: the equivalent static load method, the response spectrum method, and nonlinear time history analysis with a suite of nine time history records. A three-dimensional FE model is constructed to investigate the effects of different soil conditions and number of stories on the vibration characteristics and seismic response demands of building structures. Numerical results obtained using the SSI model with different soil conditions are compared to those corresponding to the fixed-base modeling assumption. The peak responses of story shear, story moment, story displacement, story drift, moments at beam ends, and forces in inner columns are analyzed. The results of the different analysis approaches are used to evaluate the advantages, limitations, and ease of application of each approach for seismic analysis.
DOT National Transportation Integrated Search
2011-01-01
The need to maintain the functionality of critical transportation lifelines after a large seismic event motivates the strategy to design certain bridges for performance standards beyond the minimum required by bridge design codes. To design a bri...
Real-time eruption forecasting using the material Failure Forecast Method with a Bayesian approach
NASA Astrophysics Data System (ADS)
Boué, A.; Lesage, P.; Cortés, G.; Valette, B.; Reyes-Dávila, G.
2015-04-01
Many attempts at deterministic forecasting of eruptions and landslides have been made using the material Failure Forecast Method (FFM). This method consists of fitting an empirical power law to precursory patterns of seismicity or deformation. Until now, most studies have presented hindsight forecasts based on complete time series of precursors and have not evaluated the ability of the method to carry out real-time forecasting with partial precursory sequences. In this study, we present a rigorous approach to the FFM designed for real-time applications on volcano-seismic precursors. We use a Bayesian approach based on FFM theory and an automatic classification of seismic events. The probability distributions of the data deduced from the performance of this classification are used as input. As output, it provides the probability of the forecast time at each observation time before the eruption. The spread of the a posteriori probability density function of the prediction time and its stability with respect to the observation time are used as criteria to evaluate the reliability of the forecast. We test the method on precursory accelerations of long-period seismicity prior to vulcanian explosions at Volcán de Colima (Mexico). For explosions preceded by a single phase of seismic acceleration, we obtain accurate and reliable forecasts using approximately 80% of the whole precursory sequence. It is, however, more difficult to apply the method to multiple acceleration patterns.
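For context, the classic deterministic FFM (the starting point of the Bayesian formulation described above) can be sketched as a linear fit to the inverse precursor rate, whose extrapolated zero-crossing gives the forecast failure time; the synthetic accelerating sequence below is a placeholder.

    import numpy as np

    def ffm_forecast(times, rates):
        """Classic deterministic Failure Forecast Method for the common case of the
        empirical exponent alpha ~ 2: the inverse event rate decreases linearly with
        time, and its extrapolated zero-crossing estimates the failure/eruption time."""
        inv_rate = 1.0 / np.asarray(rates, dtype=float)
        slope, intercept = np.polyfit(times, inv_rate, 1)
        return -intercept / slope                     # time where 1/rate -> 0

    # Placeholder precursory sequence: event rate accelerating toward t_f = 100 h.
    t_f_true = 100.0
    t_obs = np.linspace(20.0, 80.0, 30)                      # partial sequence (hours)
    rate_obs = 1.0 / (0.05 * (t_f_true - t_obs))             # hyperbolic acceleration
    rate_obs *= 1.0 + 0.05 * np.random.default_rng(2).normal(size=t_obs.size)  # noise

    print(f"forecast failure time ~ {ffm_forecast(t_obs, rate_obs):.1f} h "
          f"(true value in this synthetic test: {t_f_true} h)")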
Seismic performance of geosynthetic-soil retaining wall structures
NASA Astrophysics Data System (ADS)
Zarnani, Saman
Vertical inclusions of expanded polystyrene (EPS) placed behind rigid retaining walls were investigated as geofoam seismic buffers to reduce earthquake-induced loads. A numerical model was developed using the program FLAC, and the model was validated against 1-g shaking table test results of EPS geofoam seismic buffer models. Two constitutive models for the component materials were examined: elastic-perfectly plastic with a Mohr-Coulomb (M-C) failure criterion and a non-linear hysteresis damping model with the equivalent linear method (ELM) approach. The M-C model was judged to be sufficiently accurate for practical purposes. The mechanical property of interest for attenuating dynamic loads with a seismic buffer is the buffer stiffness, defined as K = E/t (E = buffer elastic modulus, t = buffer thickness). For the range of parameters investigated in this study, K ≤ 50 MN/m3 was observed to be the practical range for the optimal design of these systems. Parametric numerical analyses were performed to generate design charts that can be used for the preliminary design of these systems. A new high-capacity shaking table facility was constructed at RMC that can be used to study the seismic performance of earth structures. Reduced-scale models of geosynthetic reinforced soil (GRS) walls were built on this shaking table and then subjected to simulated earthquake loading conditions. In some shaking table tests, the combined use of EPS geofoam and horizontal geosynthetic reinforcement layers was investigated. Numerical models were developed using FLAC together with the ELM and M-C constitutive models. Physical and numerical results were compared against values predicted using analysis methods found in the journal literature and in current North American design guidelines. The comparison shows that current Mononobe-Okabe (M-O) based analysis methods could not consistently predict the measured reinforcement connection load distributions satisfactorily at all elevations under both static and dynamic loading conditions. The results from GRS model wall tests with combined EPS geofoam and geosynthetic reinforcement layers show that the inclusion of an EPS geofoam layer behind the GRS wall face can reduce earth loads acting on the wall facing to values well below those recorded for conventional GRS wall model configurations.
49 CFR 41.117 - Buildings built with Federal assistance.
Code of Federal Regulations, 2011 CFR
2011-10-01
... architect's authenticated verifications of seismic design codes, standards, and practices used in the design... financial assistance, after July 14, 1993 must be designed and constructed in accord with seismic standards... of compliance with the seismic design and construction requirements of this part is required prior to...
Recent Seismicity in Texas and Research Design and Progress of the TexNet-CISR Collaboration
NASA Astrophysics Data System (ADS)
Hennings, P.; Savvaidis, A.; Rathje, E.; Olson, J. E.; DeShon, H. R.; Datta-Gupta, A.; Eichhubl, P.; Nicot, J. P.; Kahlor, L. A.
2017-12-01
The recent increase in the rate of seismicity in Texas has prompted the establishment of an interdisciplinary, interinstitutional collaboration led by the Texas Bureau of Economic Geology which includes the TexNet Seismic Monitoring and Research project as funded by The State of Texas (roughly 2/3rds of our funding) and the industry-funded Center for Integrated Seismicity Research (CISR) (1/3 of funding). TexNet is monitoring and cataloging seismicity across Texas using a new backbone seismic network, investigating site-specific earthquake sequences by deploying temporary seismic monitoring stations, and conducting reservoir modeling studies. CISR expands TexNet research into the interdisciplinary realm to more thoroughly study the factors that contribute to seismicity, characterize the associated hazard and risk, develop strategies for mitigation and management, and develop methods of effective communication for all stakeholders. The TexNet-CISR research portfolio has 6 themes: seismicity monitoring, seismology, geologic and hydrologic description, geomechanics and reservoir modeling, seismic hazard and risk assessment, and seismic risk social science. Twenty+ specific research projects span and connect these themes. We will provide a synopsis of research progress including recent seismicity trends in Texas; Fort Worth Basin integrated studies including geological modeling and fault characterization, fluid injection data syntheses, and reservoir and geomechanical modeling; regional ground shaking characterization and mapping, infrastructure vulnerability assessment; and social science topics of public perception and information seeking behavior.
Fluid-structure interaction in fast breeder reactors
NASA Astrophysics Data System (ADS)
Mitra, A. A.; Manik, D. N.; Chellapandi, P. A.
2004-05-01
A finite element model for the seismic analysis of a scaled-down model of a fast breeder reactor (FBR) main vessel is proposed. The reactor vessel, which is a large shell structure with a relatively thin wall, contains a large volume of sodium coolant; therefore, fluid-structure interaction effects must be taken into account in the seismic design. As part of the study of fluid-structure interaction, the fundamental frequency of vibration of a circular cylindrical shell partially filled with a liquid has been estimated using Rayleigh's method. The bulging and sloshing frequencies of the first four modes of this system have been estimated using the Rayleigh-Ritz method. The finite element formulation of the axisymmetric fluid element with the Fourier option (required due to seismic loading) is also presented.
Determining the effective system damping of highway bridges.
DOT National Transportation Integrated Search
2009-06-01
This project investigates four methods for modeling modal damping ratios of short-span and isolated concrete bridges subjected to strong ground motion, which can be used for bridge seismic analysis and design based on the response spectrum method...
Verification/development of seismic design specifications for downstate zone.
DOT National Transportation Integrated Search
2014-07-01
The New York City Department of Transportation (NYCDOT) Seismic Design Guidelines Report was updated in September 2008 by Weidlinger Associates to reflect current state-of-the-art knowledge. The NYCDOT seismic design guidelines are for use in the...
NASA Astrophysics Data System (ADS)
Longobardi, M.; Bustin, A. M. M.; Johansen, K.; Bustin, R. M.
2017-12-01
One of our goals is to investigate the variables and processes controlling anomalous induced seismicity (AIS) and its associated ground motions, to better understand the anomalous induced seismicity due to hydraulic fracturing in Northeast British Columbia. Our other main objective is to optimize completions and well design. Although the vast majority of earthquakes that occur in the world each year have natural causes, some of these earthquakes and a number of lesser magnitude seismic events are induced by human activities. The induced seismicity recorded during fluid injection for hydraulic fracturing is generally small in magnitude (< M 1). Shale gas operations in Northeast British Columbia (BC) have induced the largest recorded occurrence and magnitude of AIS attributed to hydraulic fracturing. Anomalous induced seismicity has been recorded in seven clusters within the Montney area, with magnitudes up to ML 4.6; five of these clusters have been linked to hydraulic fracturing. To analyse our AIS data, we first calculated the earthquake hypocenters. The data were recorded on an array of real-time accelerometers, built from our modified design of the early earthquake detectors installed in BC schools for the Earthquake Early Warning System for British Columbia. We have developed a new technique for locating hypocenters and applied it to our dataset. The technique will enable near real-time event location, aiding both in mitigating induced events and in adjusting completions to optimize the stimulation. Our hypocenter program assumes an S-wave speed, fits the arrival times to the hypocenter, and uses a multivariate "amoeba method" (downhill simplex), which is well suited to minimizing the chi-squared function of the arrival time deviations. We show some preliminary results on the Montney dataset.
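The location strategy described above (least-squares fitting of arrival times with a downhill-simplex, or "amoeba", search) can be sketched as follows for a single event in a homogeneous half-space; the station geometry, velocity and pick uncertainty are placeholder assumptions.

    import numpy as np
    from scipy.optimize import minimize

    # Illustrative single-event location: minimize the chi-squared misfit of arrival
    # times over a homogeneous half-space with the Nelder-Mead ("amoeba") simplex
    # method. Station coordinates, velocity and picks are placeholders.
    vp = 5000.0                                  # assumed wave speed (m/s)
    stations = np.array([[0., 0., 0.], [4000., 0., 0.], [0., 4000., 0.],
                         [4000., 4000., 0.], [2000., 2000., 0.]])
    true_hypo = np.array([1500., 2500., 3000.])  # x, y, depth (m)
    true_t0 = 2.0                                # origin time (s)

    def travel_time(hypo, sta):
        return np.linalg.norm(sta - hypo, axis=1) / vp

    rng = np.random.default_rng(3)
    picks = true_t0 + travel_time(true_hypo, stations) + rng.normal(0, 0.01, len(stations))
    sigma = 0.01                                 # assumed pick uncertainty (s)

    def chi_squared(params):
        hypo, t0 = params[:3], params[3]
        resid = picks - (t0 + travel_time(hypo, stations))
        return np.sum((resid / sigma) ** 2)

    result = minimize(chi_squared, x0=np.array([2000., 2000., 1000., 0.0]),
                      method="Nelder-Mead", options={"xatol": 1.0, "fatol": 1e-4, "maxiter": 5000})
    print("estimated hypocenter (m):", result.x[:3].round(1), " origin time (s):", round(result.x[3], 3))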
Assessment of seismic design response factors of concrete wall buildings
NASA Astrophysics Data System (ADS)
Mwafy, Aman
2011-03-01
To verify the seismic design response factors of high-rise buildings, five reference structures, varying in height from 20- to 60-stories, were selected and designed according to modern design codes to represent a wide range of concrete wall structures. Verified fiber-based analytical models for inelastic simulation were developed, considering the geometric nonlinearity and material inelasticity of the structural members. The ground motion uncertainty was accounted for by employing 20 earthquake records representing two seismic scenarios, consistent with the latest understanding of the tectonic setting and seismicity of the selected reference region (UAE). A large number of Inelastic Pushover Analyses (IPAs) and Incremental Dynamic Collapse Analyses (IDCAs) were deployed for the reference structures to estimate the seismic design response factors. It is concluded that the factors adopted by the design code are adequately conservative. The results of this systematic assessment of seismic design response factors apply to a wide variety of contemporary concrete wall buildings with various characteristics.
EMERALD: Coping with the Explosion of Seismic Data
NASA Astrophysics Data System (ADS)
West, J. D.; Fouch, M. J.; Arrowsmith, R.
2009-12-01
The geosciences are currently generating an unparalleled quantity of new public broadband seismic data with the establishment of large-scale seismic arrays such as the EarthScope USArray, which are enabling new and transformative scientific discoveries of the structure and dynamics of the Earth’s interior. Much of this explosion of data is a direct result of the formation of the IRIS consortium, which has enabled an unparalleled level of open exchange of seismic instrumentation, data, and methods. The production of these massive volumes of data has generated new and serious data management challenges for the seismological community. A significant challenge is the maintenance and updating of seismic metadata, which includes information such as station location, sensor orientation, instrument response, and clock timing data. This key information changes at unknown intervals, and the changes are not generally communicated to data users who have already downloaded and processed data. Another basic challenge is the ability to handle massive seismic datasets when waveform file volumes exceed the fundamental limitations of a computer’s operating system. A third, long-standing challenge is the difficulty of exchanging seismic processing codes between researchers; each scientist typically develops his or her own unique directory structure and file naming convention, requiring that codes developed by another researcher be rewritten before they can be used. To address these challenges, we are developing EMERALD (Explore, Manage, Edit, Reduce, & Analyze Large Datasets). The overarching goal of the EMERALD project is to enable more efficient and effective use of seismic datasets ranging from just a few hundred to millions of waveforms with a complete database-driven system, leading to higher quality seismic datasets for scientific analysis and enabling faster, more efficient scientific research. We will present a preliminary (beta) version of EMERALD, an integrated, extensible, standalone database server system based on the open-source PostgreSQL database engine. The system is designed for fast and easy processing of seismic datasets, and provides the necessary tools to manage very large datasets and all associated metadata. EMERALD provides methods for efficient preprocessing of seismic records; large record sets can be easily and quickly searched, reviewed, revised, reprocessed, and exported. EMERALD can retrieve and store station metadata and alert the user to metadata changes. The system provides many methods for visualizing data, analyzing dataset statistics, and tracking the processing history of individual datasets. EMERALD allows development and sharing of visualization and processing methods using any of 12 programming languages. EMERALD is designed to integrate existing software tools; the system provides wrapper functionality for existing widely-used programs such as GMT, SOD, and TauP. Users can interact with EMERALD via a web browser interface, or they can directly access their data from a variety of database-enabled external tools. Data can be imported and exported from the system in a variety of file formats, or can be directly requested and downloaded from the IRIS DMC from within EMERALD.
Field test investigation of high sensitivity fiber optic seismic geophone
NASA Astrophysics Data System (ADS)
Wang, Meng; Min, Li; Zhang, Xiaolei; Zhang, Faxiang; Sun, Zhihui; Li, Shujuan; Wang, Chang; Zhao, Zhong; Hao, Guanghu
2017-10-01
Seismic reflection, which measures artificial seismic waves, is the most effective and widely used method in geophysical prospecting and is applied to exploration for oil, gas, and coal. When a seismic wave travelling through the Earth encounters an interface between two materials with different acoustic impedances, some of the wave energy reflects off the interface and some refracts through it. At its most basic, the seismic reflection technique consists of generating seismic waves and measuring the time taken for the waves to travel from the source, reflect off an interface, and be detected by an array of geophones at the surface. Compared to traditional electric, magnetic, mechanical, and gas geophones, optical fiber geophones have many advantages: they achieve sensing and signal transmission simultaneously. With the development of fiber grating sensor technology, fiber Bragg gratings (FBGs) are being applied in seismic exploration and are drawing increasing attention for their immunity to electromagnetic interference, high sensitivity, and insensitivity to meteorological conditions. In this paper, we design a high-sensitivity geophone based on the theory of FBG sensing and test its sensitivity. The frequency response range is 10 Hz to 100 Hz and the acceleration sensitivity of the fiber optic seismic geophone is over 1000 pm/g. A sixteen-element fiber optic seismic geophone array system is presented, and a field test is performed in the Shengli oilfield of China. The field test shows that (1) the fiber optic seismic geophone has higher sensitivity than the traditional geophone between 1 and 100 Hz, and (2) the low-frequency reflection wave continuity of the fiber Bragg grating geophone is better.
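The abstract quotes a sensitivity of over 1000 pm/g for the FBG geophone. As a minimal illustration of how such a figure is used, the sketch below converts a measured Bragg wavelength shift into acceleration, assuming a purely linear transducer response; the function name and the shift values are hypothetical.

```python
import numpy as np

def fbg_acceleration(wavelength_shift_pm, sensitivity_pm_per_g, g=9.81):
    """Convert an FBG wavelength shift (picometres) into acceleration.

    Assumes a linear transducer response: delta_lambda = S * a,
    where S is the sensitivity in pm/g quoted for the geophone.
    """
    accel_g = np.asarray(wavelength_shift_pm) / sensitivity_pm_per_g
    return accel_g * g  # acceleration in m/s^2

# Example with the ~1000 pm/g sensitivity mentioned in the abstract
# (the shift values below are illustrative only).
shifts_pm = np.array([12.0, 55.0, 230.0])
print(fbg_acceleration(shifts_pm, sensitivity_pm_per_g=1000.0))
```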
NASA Astrophysics Data System (ADS)
Dou, S.; Wood, T.; Lindsey, N.; Ajo Franklin, J. B.; Freifeld, B. M.; Gelvin, A.; Morales, A.; Saari, S.; Ekblaw, I.; Wagner, A. M.; Daley, T. M.; Robertson, M.; Martin, E. R.; Ulrich, C.; Bjella, K.
2016-12-01
Thawing of permafrost can cause ground deformations that threaten the integrity of civil infrastructure. It is essential to develop early warning systems that can identify critically warmed permafrost and issue warnings for hazard prevention and control. Seismic methods can play a pivotal role in such systems for at least two reasons: First, seismic velocities are indicative of mechanical strength of the subsurface and thus are directly relevant to engineering properties; Second, seismic velocities in permafrost systems are sensitive to pre-thaw warming, which makes it possible to issue early warnings before the occurrence of hazardous subsidence events. However, several questions remain: What are the seismic signatures that can be effectively used for early warning of permafrost thaw? Can seismic methods provide enough warning times for hazard prevention and control? In this study, we investigate the feasibility of using permanently installed seismic networks for early warnings of permafrost thaw. We conducted continuous active-source seismic monitoring of permafrost that was under controlled heating at CRREL's Fairbanks permafrost experiment station. We used a permanently installed surface orbital vibrator (SOV) as source and surface-trenched DAS arrays as receivers. The SOV is characterized by its excellent repeatability, automated operation, high energy level, and the rich frequency content (10-100 Hz) of the generated wavefields. The fiber-optic DAS arrays allow continuous recording of seismic data with dense spatial sampling (1-meter channel spacing), low cost, and low maintenance. This combination of SOV-DAS provides unique seismic datasets for observing time-lapse changes of warming permafrost at the field scale, hence providing an observational basis for design and development of early warning systems for permafrost thaw.
Dual Roadside Seismic Sensor for Moving Road Vehicle Detection and Characterization
Wang, Hua; Quan, Wei; Wang, Yinhai; Miller, Gregory R.
2014-01-01
This paper presents a method for using a dual roadside seismic sensor to detect moving vehicles on a roadway by installing the sensors on the road shoulder. The recorded seismic signals are split into fixed time intervals. In each interval, the time delay of arrival (TDOA) is estimated using a generalized cross-correlation approach with phase transform (GCC-PHAT). Various kinds of vehicle characterization information, including vehicle speed, axle spacing, axle detection, and moving direction, can also be extracted from the collected seismic signals, as demonstrated in this paper. The error in both vehicle speed and axle spacing detected by this approach has been shown to be less than 20% through field tests conducted on an urban street in Seattle. Compared to most existing sensors, this new dual seismic sensor design is cost effective, easy to install, and effective in gathering information for various traffic management applications. PMID:24526304
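The TDOA estimation step described above can be illustrated with a short sketch of GCC-PHAT, assuming two single-channel records sampled at the same rate. This is a generic implementation, not the authors' code; the sampling rate and synthetic pulse are invented for the example.

```python
import numpy as np

def gcc_phat(sig, ref, fs, max_tau=None):
    """Estimate the time delay of arrival between two sensor records using the
    generalized cross-correlation with phase transform (GCC-PHAT).

    A minimal sketch: in the paper the same idea is applied to seismic signals
    split into fixed-length intervals from the two roadside sensors.
    """
    n = len(sig) + len(ref)                  # zero-pad to avoid circular wrap
    SIG = np.fft.rfft(sig, n=n)
    REF = np.fft.rfft(ref, n=n)
    R = SIG * np.conj(REF)
    R /= np.abs(R) + 1e-12                   # phase transform (spectral whitening)
    cc = np.fft.irfft(R, n=n)
    max_shift = n // 2
    if max_tau is not None:
        max_shift = min(int(fs * max_tau), max_shift)
    cc = np.concatenate((cc[-max_shift:], cc[:max_shift + 1]))
    lag = np.argmax(np.abs(cc)) - max_shift
    return lag / fs                          # TDOA in seconds

# Illustrative use: a synthetic pulse arriving 30 ms later at the second sensor.
fs = 1000.0
t = np.arange(0, 1.0, 1 / fs)
pulse = np.exp(-((t - 0.3) / 0.01) ** 2)
delayed = np.roll(pulse, 30)                 # 30 samples = 0.03 s
print(gcc_phat(delayed, pulse, fs))          # ~ +0.03
```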
Influence of the new LRFD seismic guidelines on the design of bridges in Virginia.
DOT National Transportation Integrated Search
2004-01-01
The Virginia Department of Transportation is currently using the AASHTO Standard Specifications for Highway Bridges, with some modifications, for its seismic highway bridge design. In April 2001, the Recommended LRFD Guidelines for the Seismic Design...
Frozen soil lateral resistance for the seismic design of highway bridge foundations : [summary].
DOT National Transportation Integrated Search
2012-12-01
With recent seismic activity and earthquakes in Alaska and throughout the Pacific Rim, seismic design is becoming an increasingly important public safety concern for : highway bridge designers. Hoping to generate knowledge that can improve the seismi...
Detecting Seismic Activity with a Covariance Matrix Analysis of Data Recorded on Seismic Arrays
NASA Astrophysics Data System (ADS)
Seydoux, L.; Shapiro, N.; de Rosny, J.; Brenguier, F.
2014-12-01
Modern seismic networks record ground motion continuously all around the world with very broadband, high-sensitivity sensors. The aim of our study is to apply statistical array-based approaches to the processing of these records. We use methods drawn mainly from random matrix theory in order to give a statistical description of seismic wavefields recorded at the Earth's surface. We estimate the array covariance matrix and explore the distribution of its eigenvalues, which contains information about the coherency of the sources that generated the studied wavefields. With this approach, we can distinguish between signals generated by isolated deterministic sources and the "random" ambient noise. We design an algorithm that uses the distribution of the array covariance matrix eigenvalues to detect signals corresponding to coherent seismic events. We investigate the detection capability of our method at different scales and in different frequency ranges by applying it to the records of two networks: (1) the seismic monitoring network operating on the Piton de la Fournaise volcano at La Réunion island, composed of 21 receivers with an aperture of ~15 km, and (2) the transportable component of the USArray, composed of ~400 receivers with ~70 km inter-station spacing.
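The eigenvalue-based detection idea can be sketched in a few lines: for each time window of array data, the covariance matrix across stations is formed and the concentration of energy in its leading eigenvalue is used as a coherence measure. The sketch below is a simplified time-domain version under stated assumptions (station count, window length, and the synthetic coherent signal are illustrative), not the authors' frequency-domain implementation.

```python
import numpy as np

def coherence_index(window):
    """Eigenvalue-based coherence measure for one time window of array data.

    `window` has shape (n_stations, n_samples). A single coherent source
    concentrates the energy in the first eigenvalue of the array covariance
    matrix, whereas ambient noise spreads it across all eigenvalues.
    """
    x = window - window.mean(axis=1, keepdims=True)
    cov = x @ x.T / x.shape[1]                  # (n_stations, n_stations)
    eig = np.linalg.eigvalsh(cov)[::-1]         # eigenvalues, descending
    eig = np.maximum(eig, 0.0)
    return eig[0] / (eig.sum() + 1e-20)         # ~1/n_stations (noise) ... 1 (coherent)

rng = np.random.default_rng(0)
n_sta, n_samp = 21, 2048
noise = rng.standard_normal((n_sta, n_samp))
common = np.sin(2 * np.pi * 5 * np.arange(n_samp) / 200.0)
event = noise + 5.0 * common                    # same signal on every station
print(coherence_index(noise), coherence_index(event))
```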
NASA Astrophysics Data System (ADS)
Ahmed, Ali; Hasan, Rafiq; Pekau, Oscar A.
2016-12-01
Two recent developments have come to the forefront with reference to updating seismic design provisions in codes: (1) the publication of new seismic hazard maps for Canada by the Geological Survey of Canada (GSC), and (2) the emergence of a new spectral format that supersedes the conventional standardized spectral format. The fourth-generation seismic hazard maps are based on enriched seismic data, enhanced knowledge of regional seismicity and improved seismic hazard modeling techniques. The new maps are therefore more accurate and need to be incorporated into the next edition of the Canadian Highway Bridge Design Code (CHBDC), similar to its building counterpart, the National Building Code of Canada (NBCC); indeed, the code writers expressed such intentions in the commentary of CHBDC 2006. During their updates, the NBCC and the AASHTO Guide Specifications for LRFD Seismic Bridge Design (American Association of State Highway and Transportation Officials, Washington, 2009) lowered the probability level from 10% to 2% and from 10% to 5%, respectively. This study investigates five sets of hazard maps developed by the GSC, corresponding to 2%, 5% and 10% probabilities of exceedance in 50 years. To obtain sound statistical inference, 389 Canadian cities are selected. The study shows the implications of the new hazard maps for the design process (i.e., the extent of magnification or reduction of the design forces).
Analysis of the Earthquake Impact towards water-based fire extinguishing system
NASA Astrophysics Data System (ADS)
Lee, J.; Hur, M.; Lee, K.
2015-09-01
Fire extinguishing systems installed in buildings have recently been given separate performance requirements for earthquakes: they must continue to function as fire-suppression systems before the building itself collapses. In particular, automatic sprinkler systems must remain sealed, with no damage to their piping, even after a large earthquake. In this study, experiments were performed to assess the effect of an earthquake on water-based fire extinguishing piping installed in a building. Test structures for water-based fire extinguishing systems were built with seismic construction applied step by step, seismic excitation was then applied, and the earthquake response of the extinguishing piping in the building was measured. The magnitude of the acceleration imposed by the shaking and the resulting displacement were measured and compared with the piping response data from the shake table, in order to analyze the need for seismic strengthening of water-supply piping for fire extinguishing. Seismic design categories (SDCs) were defined for four groups of building structures designed to the Korean seismic criteria (KBC 2009), according to the importance group and the seismic intensity. The analysis indicates that, in a real earthquake, buildings in seismic design categories A and B can maintain the seismic performance of their current fire-fighting facilities, whereas buildings in seismic design categories C and D require seismic retrofit design to preserve the extinguishing function at the required level.
Raef, A.
2009-01-01
The recent proliferation of the 3D reflection seismic method into the near-surface area of geophysical applications, especially in response to the emergence of the need to comprehensively characterize and monitor near-surface carbon dioxide sequestration in shallow saline aquifers around the world, justifies the emphasis on cost-effective and robust quality control and assurance (QC/QA) workflow of 3D seismic data preprocessing that is suitable for near-surface applications. The main purpose of our seismic data preprocessing QC is to enable the use of appropriate header information, data that are free of noise-dominated traces, and/or flawed vertical stacking in subsequent processing steps. In this article, I provide an account of utilizing survey design specifications, noise properties, first breaks, and normal moveout for rapid and thorough graphical QC/QA diagnostics, which are easy to apply and efficient in the diagnosis of inconsistencies. A correlated vibroseis time-lapse 3D-seismic data set from a CO2-flood monitoring survey is used for demonstrating QC diagnostics. An important by-product of the QC workflow is establishing the number of layers for a refraction statics model in a data-driven graphical manner that capitalizes on the spatial coverage of the 3D seismic data. © China University of Geosciences (Wuhan) and Springer-Verlag GmbH 2009.
NASA Astrophysics Data System (ADS)
Chen, Linzhi; Lu, Xilin; Jiang, Huanjun; Zheng, Jianbo
2009-06-01
Reinforced concrete (RC) frame structures are one of the most commonly used structural systems, and their seismic performance is largely determined by the performance of columns and beams. This paper describes horizontal cyclic loading tests of ten column and three beam specimens, some of which were designed according to the current seismic design code and others according to the earlier non-seismic Chinese design code, with the aim of explaining the behavior of the damaged or collapsed RC frame structures observed during the Wenchuan earthquake. The effects of axial load ratio, shear span ratio, and transverse and longitudinal reinforcement ratio on hysteresis behavior, ductility and damage progression were incorporated in the experimental study. Test results indicate that the non-seismically designed columns show premature shear failure and exhibit larger maximum residual crack widths and more concrete spalling than the seismically designed columns. In addition, the longitudinal steel reinforcement bars were severely buckled. The axial load ratio and shear span ratio proved to be the most important factors affecting ductility, crack opening width and crack closing ability, while the longitudinal reinforcement ratio had only a minor effect on column ductility but more influence on beam ductility. Finally, the transverse reinforcement ratio did not influence the maximum residual crack width or the crack closing ability of the seismically designed columns.
NASA Astrophysics Data System (ADS)
Sun, Baitao; Zhao, Hexian; Yan, Peilei
2017-08-01
The damage to masonry structures in earthquakes is generally more severe than that to other structures. Through analysis of two typical buildings in Xuankou Middle School damaged in the Wenchuan earthquake, we found that the number of storeys and the construction measures had a great influence on the seismic performance of masonry structures. This paper takes a teachers' dormitory in Xuankou Middle School as an example and selects the structural arrangement and the number of storeys as two independent variables to define the analysis cases. Finally, the difference in seismic performance of the masonry structure under these two variables is investigated by the finite element analysis method.
Seismic hazard assessment: Issues and alternatives
Wang, Z.
2011-01-01
Seismic hazard and risk are two very important concepts in engineering design and other policy considerations. Although seismic hazard and risk have often been used interchangeably, they are fundamentally different. Furthermore, seismic risk is more important in engineering design and other policy considerations. Seismic hazard assessment is an effort by earth scientists to quantify seismic hazard and its associated uncertainty in time and space and to provide seismic hazard estimates for seismic risk assessment and other applications. Although seismic hazard assessment is more a scientific issue, it deserves special attention because of its significant implication to society. Two approaches, probabilistic seismic hazard analysis (PSHA) and deterministic seismic hazard analysis (DSHA), are commonly used for seismic hazard assessment. Although PSHA has been proclaimed as the best approach for seismic hazard assessment, it is scientifically flawed (i.e., the physics and mathematics that PSHA is based on are not valid). Use of PSHA could lead to either unsafe or overly conservative engineering design or public policy, each of which has dire consequences to society. On the other hand, DSHA is a viable approach for seismic hazard assessment even though it has been labeled as unreliable. The biggest drawback of DSHA is that the temporal characteristics (i.e., earthquake frequency of occurrence and the associated uncertainty) are often neglected. An alternative, seismic hazard analysis (SHA), utilizes earthquake science and statistics directly and provides a seismic hazard estimate that can be readily used for seismic risk assessment and other applications. © 2010 Springer Basel AG.
Masonry Infilling Effect On Seismic Vulnerability and Performance Level of High Ductility RC Frames
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ghalehnovi, M.; Shahraki, H.
2008-07-08
In recent years, researchers have preferred behavior-based design of structures over force-based design for the design and construction of earthquake-resistant structures; this approach is known as performance-based design. The main goal of this method is to design structural members for a specified performance or behavior level. On the other hand, in most buildings the load-bearing frames are infilled with masonry materials, which leads to considerable changes in the mechanical properties of the frames. However, the effect of infill walls is usually ignored in nonlinear analysis of structures because of the complexity of the problem and the lack of a simple rational solution. As a result, the lateral stiffness, strength, ductility and performance of the structure are computed with less accuracy. In this paper, using a smooth hysteretic model for the masonry infills, several high-ductility RC frames (4 and 8 stories with 1, 2 and 3 spans) designed according to the Iranian code are considered. They are analyzed by the nonlinear dynamic method in two states, with and without infill. Their performance is then determined using the ATC-40 criteria and compared with the performance recommended in the Iranian seismic code (Standard No. 2800).
Uncertainties in evaluation of hazard and seismic risk
NASA Astrophysics Data System (ADS)
Marmureanu, Gheorghe; Marmureanu, Alexandru; Ortanza Cioflan, Carmen; Manea, Elena-Florinela
2015-04-01
Two methods are commonly used for seismic hazard assessment: probabilistic (PSHA) and deterministic (DSHA) seismic hazard analysis. Selection of a ground motion for engineering design requires a clear understanding of seismic hazard and risk among stakeholders, seismologists and engineers. What is wrong with traditional PSHA or DSHA? PSHA as commonly used in engineering relies on four assumptions developed by Cornell in 1968: (1) a constant-in-time average occurrence rate of earthquakes; (2) a single point source; (3) independence of the ground-motion variability at a site; and (4) Poisson (or "memoryless") behavior of earthquake occurrences. It is a probabilistic method, and "when the causality dies, its place is taken by probability, prestigious term meant to define the inability of us to predict the course of nature" (Niels Bohr). The DSHA method was used for the original design of Fukushima Daiichi, but the Japanese authorities moved to probabilistic assessment methods and the probability of exceeding the design basis acceleration was expected to be 10^-4 to 10^-6. It was exceeded, and this was a violation of the principles of deterministic hazard analysis (ignoring historical events) (Klügel, J.U., EGU, 2014, ISSO). PSHA was developed from mathematical statistics and is not based on earthquake science (invalid physical models - point source and Poisson distribution; invalid mathematics; misinterpretation of annual probability of exceedance or return period, etc.) and has become a pure numerical "creation" (Wang, PAGEOPH, 168 (2011), 11-25). An uncertainty which is a key component of seismic hazard assessment, for both PSHA and DSHA, is the ground motion attenuation relationship, or so-called ground motion prediction equation (GMPE), which describes a relationship between a ground motion parameter (i.e., PGA, MMI, etc.), earthquake magnitude M, source-to-site distance R, and an uncertainty. So far, no one is taking into consideration the strong nonlinear behavior of soils during strong earthquakes. Yet how many cities, villages and metropolitan areas in seismic regions are constructed on rock? Most of them are located on soil deposits. A soil is of basic type sand or gravel (termed coarse soils) or silt or clay (termed fine soils), and the effect of nonlinearity is very large. For example, if we maintain the same spectral amplification factor (SAF = 5.8942) as for the relatively strong earthquake of May 3, 1990 (Mw = 6.4), then at the Bacău seismic station the peak acceleration for the Vrancea earthquake of May 30, 1990 (Mw = 6.9) should have been a*max = 0.154 g, whereas the actual recorded value was only amax = 0.135 g (-14.16%). Similarly, for the Vrancea earthquake of August 30, 1986 (Mw = 7.1), the predicted peak acceleration is a*max = 0.107 g versus a recorded value of 0.0736 g (-45.57%). There are many such data for more than 60 seismic stations. There is a strong nonlinear dependence of the SAF on earthquake magnitude at each site. The authors propose an alternative approach called "real spectral amplification factors" instead of GMPEs for the whole extra-Carpathian area, where all cities and villages are located on soil deposits. Key words: probabilistic seismic hazard; uncertainties; nonlinear seismology; spectral amplification factors (SAF).
Seismic Vulnerability and Performance Level of confined brick walls
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ghalehnovi, M.; Rahdar, H. A.
2008-07-08
There has been increasing interest among engineers and designers in design methods based on displacement and behavior (performance-based design), given the importance of designing structures to resist dynamic loads such as earthquakes and the inability of force-based design to predict the nonlinear behavior of elements caused by the nonlinear properties of construction materials. Economical construction, ease of execution, and the accessibility of masonry materials have led to an enormous number of masonry structures in villages, towns and cities. It is therefore necessary to study the behavior and seismic vulnerability of this kind of structure, since Iran is located on the Alpide earthquake belt. Environmental, economic, social and cultural factors, as well as the availability of construction materials, have produced different kinds of structures. In this study, confined brick walls were modeled in software and subjected to dynamic analysis using accelerograms appropriate to the local geological conditions, in order to investigate the seismic vulnerability and performance level of confined brick walls. The results of this analysis appear satisfactory when compared with the values in ATC-40, FEMA guidelines, and Iranian Standard No. 2800.
NASA Astrophysics Data System (ADS)
Veeraian, Parthasarathi; Gandhi, Uma; Mangalanathan, Umapathy
2018-04-01
Seismic transducers are widely used for the measurement of displacement, velocity, and acceleration. This paper presents the design of seismic transducers in the fractional domain for the measurement of displacement and acceleration. The fractional order transfer functions for the seismic displacement and acceleration transducers are derived using the Grünwald-Letnikov derivative. Frequency response analysis of the fractional order seismic displacement transducer (FOSDT) and the fractional order seismic acceleration transducer (FOSAT) is carried out for different damping ratios and fractional orders, and the maximum dynamic measurement range is identified. The results demonstrate that the fractional order seismic transducer has an increased dynamic measurement range and less phase distortion compared to the conventional seismic transducer, even with a lower damping ratio. The time responses of the FOSDT and FOSAT are derived analytically in terms of the Mittag-Leffler function, and the effect of fractional behavior in the time domain is evaluated from the impulse and step responses. The fractional order system is found to have significantly reduced overshoot compared to the conventional transducer. The fractional order seismic transducer design proposed in this paper is illustrated with a design example for the FOSDT and FOSAT. Finally, an electrical equivalent of the FOSDT and FOSAT is considered, and its frequency response is found to be in close agreement with the proposed fractional order seismic transducer.
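To make the frequency-domain behavior concrete, the sketch below evaluates the magnitude response of a fractional-order displacement transducer using ordinary complex arithmetic. The transfer function shown is one plausible fractional generalization of the classical second-order transducer; the exact FOSDT/FOSAT expressions in the paper are derived via the Grünwald-Letnikov definition and may differ, and the natural frequency, damping ratio, and orders below are placeholders.

```python
import numpy as np

def fo_displacement_response(w, wn, zeta, alpha):
    """Frequency response magnitude of a seismic displacement transducer
    with a fractional-order generalization.

    The classical transducer has
        H(jw) = (jw)^2 / ((jw)^2 + 2*zeta*wn*(jw) + wn^2),
    and one common fractional generalization scales each derivative order by
    alpha, as sketched here (alpha = 1 recovers the conventional transducer).
    """
    s = 1j * w
    num = s ** (2 * alpha)
    den = s ** (2 * alpha) + 2 * zeta * wn ** alpha * s ** alpha + wn ** (2 * alpha)
    return np.abs(num / den)

w = np.logspace(-1, 3, 500)          # rad/s
for alpha in (1.0, 0.9, 0.8):
    mag = fo_displacement_response(w, wn=10.0, zeta=0.3, alpha=alpha)
    print(alpha, mag.max())          # peak magnitude of each response (illustrative)
```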
Performance-Based Seismic Design Guidelines for Tall Buildings: Task 7 - Guidelines on Modeling and Acceptance Values; Task 8 - Input Ground Motions for Tall Buildings; Task 12 - Quantification of Seismic Performance. Published as Report No. 2017/06, "Guidelines for Performance-Based Seismic Design of Tall Buildings".
Repeating ice-earthquakes beneath David Glacier from the 2012-2015 TAMNNET array
NASA Astrophysics Data System (ADS)
Walter, J. I.; Peng, Z.; Hansen, S. E.
2017-12-01
The continent of Antarctica has approximately the same surface area as the continental United States, though we know significantly less about its underlying geology and seismic activity. In recent years, improvements in seismic instrumentation, battery technology, and field deployment practices have allowed for continuous broadband stations throughout the dark Antarctic winter. We utilize broadband seismic data from a recent experiment (TAMNNET), which was originally proposed as a structural seismology experiment, for seismic event detection. Our target is to address fundamental questions about regional-scale crustal and environmental seismicity in the study region that comprises the Transantarctic Mountain area of Victoria and Oates Land. We identify most seismicity emanating from David Glacier, upstream of the Drygalski Ice Tongue, which has been documented by several other studies. In order to improve the catalog completeness for the David Glacier area, we utilize a matched-filter technique to identify potential missing earthquakes that may not have been originally detected. This technique utilizes existing cataloged waveforms as templates to scan through continuous data and to identify repeating or nearby earthquakes. With a more robust catalog, we evaluate relative changes in icequake positions, recurrence intervals, and other first-order information. In addition, we attempt to further refine locations of other regional seismicity using a variety of methods including body and surface wave polarization, beamforming, surface wave dispersion, and other seismological methods. This project highlights the usefulness of archiving raw datasets (i.e., passive seismic continuous data), so that researchers may apply new algorithms or techniques to test hypotheses not originally or specifically targeted by the original experimental design.
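A matched-filter (template-matching) detector of the kind mentioned above can be sketched as a normalized cross-correlation scan with a MAD-based threshold. This is a generic single-channel illustration, not the TAMNNET processing chain; the template, noise level, and threshold are invented for the example.

```python
import numpy as np

def matched_filter_detect(continuous, template, threshold=8.0):
    """Scan continuous data with a waveform template and return detection indices.

    The normalized cross-correlation between the template and every window of
    the continuous record is computed, and windows exceeding `threshold` median
    absolute deviations (MAD) above the median are flagged as detections.
    """
    n = len(template)
    t = (template - template.mean()) / (template.std() + 1e-20)
    cc = np.empty(len(continuous) - n + 1)
    for i in range(len(cc)):
        w = continuous[i:i + n]
        w = (w - w.mean()) / (w.std() + 1e-20)
        cc[i] = np.dot(w, t) / n
    mad = np.median(np.abs(cc - np.median(cc))) + 1e-20
    picks = np.where(cc > np.median(cc) + threshold * mad)[0]
    return picks, cc

rng = np.random.default_rng(1)
template = np.sin(2 * np.pi * 5 * np.arange(200) / 100.0) * np.hanning(200)
data = 0.5 * rng.standard_normal(10_000)
data[3000:3200] += template            # hidden repeat of the template
picks, cc = matched_filter_detect(data, template)
print(picks[:5])                       # indices clustered near sample 3000
```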
Study on safety level of RC beam bridges under earthquake
NASA Astrophysics Data System (ADS)
Zhao, Jun; Lin, Junqi; Liu, Jinlong; Li, Jia
2017-08-01
Based on reliability theory, this study considers uncertainties in material strengths and in modeling, which have important effects on structural resistance. After analyzing the failure mechanism of an RC bridge, the structural performance functions and the corresponding reliability were established, and the seismic safety level of the piers of a reinforced concrete continuous girder bridge with stochastic structural parameters was analyzed. The response surface method was used to calculate the failure probabilities of the bridge piers under high-level earthquakes, and their seismic reliability for different damage states within the design reference period was calculated using a two-stage design approach, which characterizes to some extent the seismic safety level of the bridges as built.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Coleman, Justin
2015-02-01
Seismic isolation (SI) has the potential to drastically reduce the seismic response of structures, systems, or components (SSCs) and therefore the risk associated with large seismic events (a large seismic event could be defined as the design basis earthquake (DBE) and/or the beyond design basis earthquake (BDBE), depending on the site location). This corresponds to a potential increase in nuclear safety by minimizing the structural response and thus minimizing the risk of material release during large seismic events that have uncertainty associated with their magnitude and frequency. The national consensus standard, American Society of Civil Engineers (ASCE) Standard 4, Seismic Analysis of Safety Related Nuclear Structures, recently incorporated language and commentary for seismically isolating a large light water reactor or similar large nuclear structure. Some potential benefits of SI are: 1) substantially decoupling the SSC from the earthquake hazard, thus decreasing the risk of material release during large earthquakes, 2) cost savings for the facility and/or equipment, and 3) applicability to both nuclear (current and next generation) and high hazard non-nuclear facilities. Issue: To date no one has evaluated how the benefit of seismic risk reduction reduces the cost to construct a nuclear facility. Objective: Use seismic probabilistic risk assessment (SPRA) to evaluate the reduction in seismic risk and estimate the potential cost savings of seismic isolation of a generic nuclear facility. This project would leverage ongoing Idaho National Laboratory (INL) activities that are developing advanced SPRA methods using Nonlinear Soil-Structure Interaction (NLSSI) analysis. Technical Approach: The proposed study is intended to obtain an estimate of the reduction in seismic risk and construction cost that might be achieved by seismically isolating a nuclear facility. The nuclear facility is a representative pressurized water reactor building nuclear power plant (NPP) structure (Figure 1: Project activities). The study will consider a representative NPP reinforced concrete reactor building and a representative plant safety system. This study will leverage existing research and development (R&D) activities at INL. Figure 1 shows the proposed study steps, with the steps in blue representing activities already funded at INL and the steps in purple the activities that would be funded under this proposal. The following results will be documented: 1) a comparison of seismic risk for the non-seismically isolated (non-SI) and seismically isolated (SI) NPP, and 2) an estimate of construction cost savings when implementing SI at the site of the generic NPP.
Seismic hazard, risk, and design for South America
Petersen, Mark D.; Harmsen, Stephen; Jaiswal, Kishor; Rukstales, Kenneth S.; Luco, Nicolas; Haller, Kathleen; Mueller, Charles; Shumway, Allison
2018-01-01
We calculate seismic hazard, risk, and design criteria across South America using the latest data, models, and methods to support public officials, scientists, and engineers in earthquake risk mitigation efforts. Updated continental scale seismic hazard models are based on a new seismicity catalog, seismicity rate models, evaluation of earthquake sizes, fault geometry and rate parameters, and ground‐motion models. Resulting probabilistic seismic hazard maps show peak ground acceleration, modified Mercalli intensity, and spectral accelerations at 0.2 and 1 s periods for 2%, 10%, and 50% probabilities of exceedance in 50 yrs. Ground shaking soil amplification at each site is calculated by considering uniform soil that is applied in modern building codes or by applying site‐specific factors based on VS30 shear-wave velocities determined through a simple topographic proxy technique. We use these hazard models in conjunction with the Prompt Assessment of Global Earthquakes for Response (PAGER) model to calculate economic and casualty risk. Risk is computed by incorporating the new hazard values amplified by soil, PAGER fragility/vulnerability equations, and LandScan 2012 estimates of population exposure. We also calculate building design values using the guidelines established in the building code provisions. Resulting hazard and associated risk is high along the northern and western coasts of South America, reaching damaging levels of ground shaking in Chile, western Argentina, western Bolivia, Peru, Ecuador, Colombia, Venezuela, and in localized areas distributed across the rest of the continent where historical earthquakes have occurred. Constructing buildings and other structures to account for strong shaking in these regions of high hazard and risk should mitigate losses and reduce casualties from effects of future earthquake strong ground shaking. National models should be developed by scientists and engineers in each country using the best available science.
Numerical modeling of the 2017 active seismic infrasound balloon experiment
NASA Astrophysics Data System (ADS)
Brissaud, Q.; Komjathy, A.; Garcia, R.; Cutts, J. A.; Pauken, M.; Krishnamoorthy, S.; Mimoun, D.; Jackson, J. M.; Lai, V. H.; Kedar, S.; Levillain, E.
2017-12-01
We have developed a numerical tool to propagate acoustic and gravity waves in a coupled solid-fluid medium with topography. It is a hybrid method between a continuous Galerkin and a discontinuous Galerkin method that accounts for nonlinear atmospheric waves, visco-elastic waves and topography. We apply this method to a recent experiment that took place in the Nevada desert to study acoustic waves from seismic events. This experiment, developed by JPL and its partners, aims to demonstrate the viability of a new approach to probing seismic-induced acoustic waves from a balloon platform. To the best of our knowledge, this could be the only way for planetary missions to perform tomography when faced with challenging surface conditions of high pressure and temperature (e.g. Venus), where it is impossible to use the conventional electronics routinely employed on Earth. To fully demonstrate the effectiveness of such a technique, one should also be able to reconstruct the observed signals through numerical modeling. To model the seismic hammer experiment and the subsequent acoustic wave propagation, we rely on a subsurface seismic model constructed from the seismometer measurements made during the 2017 Nevada experiment and an atmospheric model built from meteorological data. The source is modeled as a Gaussian point source located at the surface. Comparison between the numerical modeling and the experimental data could help future mission designs and provide great insight into a planet's interior structure.
Earthquake detection through computationally efficient similarity search
Yoon, Clara E.; O’Reilly, Ossian; Bergen, Karianne J.; Beroza, Gregory C.
2015-01-01
Seismology is experiencing rapid growth in the quantity of data, which has outpaced the development of processing algorithms. Earthquake detection—identification of seismic events in continuous data—is a fundamental operation for observational seismology. We developed an efficient method to detect earthquakes using waveform similarity that overcomes the disadvantages of existing detection methods. Our method, called Fingerprint And Similarity Thresholding (FAST), can analyze a week of continuous seismic waveform data in less than 2 hours, or 140 times faster than autocorrelation. FAST adapts a data mining algorithm, originally designed to identify similar audio clips within large databases; it first creates compact “fingerprints” of waveforms by extracting key discriminative features, then groups similar fingerprints together within a database to facilitate fast, scalable search for similar fingerprint pairs, and finally generates a list of earthquake detections. FAST detected most (21 of 24) cataloged earthquakes and 68 uncataloged earthquakes in 1 week of continuous data from a station located near the Calaveras Fault in central California, achieving detection performance comparable to that of autocorrelation, with some additional false detections. FAST is expected to realize its full potential when applied to extremely long duration data sets over a distributed network of seismic stations. The widespread application of FAST has the potential to aid in the discovery of unexpected seismic signals, improve seismic monitoring, and promote a greater understanding of a variety of earthquake processes. PMID:26665176
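The fingerprint-and-group idea behind FAST can be caricatured in a few lines: turn each window into a compact binary fingerprint, then hash bands of the fingerprint into buckets so that similar windows collide. The sketch below is only a toy stand-in (real FAST uses wavelet-transformed spectral images and MinHash-based locality-sensitive hashing); window lengths and parameters are arbitrary.

```python
import numpy as np
from collections import defaultdict
from scipy.signal import spectrogram

def binary_fingerprints(trace, fs, win_sec=10.0, step_sec=2.0):
    """Very simplified waveform fingerprints, loosely inspired by FAST.

    Each fingerprint is the sign pattern of a window's spectrogram relative to
    its median; this is a toy illustration, not the published algorithm.
    """
    win, step = int(win_sec * fs), int(step_sec * fs)
    prints, starts = [], []
    for i in range(0, len(trace) - win + 1, step):
        f, t, S = spectrogram(trace[i:i + win], fs=fs, nperseg=256)
        prints.append((S > np.median(S)).flatten())
        starts.append(i)
    return np.array(prints), np.array(starts)

def candidate_pairs(prints, n_bands=16):
    """Group fingerprints by hashing bands of bits into buckets; windows that
    share a bucket are candidate similar pairs (a crude stand-in for LSH)."""
    buckets = defaultdict(list)
    band_len = prints.shape[1] // n_bands
    for idx, fp in enumerate(prints):
        for b in range(n_bands):
            key = (b, fp[b * band_len:(b + 1) * band_len].tobytes())
            buckets[key].append(idx)
    pairs = set()
    for members in buckets.values():
        for i in members:
            for j in members:
                if i < j:
                    pairs.add((i, j))
    return pairs
```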
NASA Astrophysics Data System (ADS)
Saleh, T.; Rico, H.; Solanki, K.; Hauksson, E.; Friberg, P.
2005-12-01
The Southern California Seismic Network (SCSN) handles more than 2500 high-data rate channels from more than 380 seismic stations distributed across southern California. These data are imported real-time from dataloggers, earthworm hubs, and partner networks. The SCSN also exports data to eight different partner networks. Both the imported and exported data are critical for emergency response and scientific research. Previous data acquisition systems were complex and difficult to operate, because they grew in an ad hoc fashion to meet the increasing needs for distributing real-time waveform data. To maximize reliability and redundancy, we apply best practices methods from computer science for implementing the software and hardware configurations for import, export, and acquisition of real-time seismic data. Our approach makes use of failover software designs, methods for dividing labor diligently amongst the network nodes, and state of the art networking redundancy technologies. To facilitate maintenance and daily operations we seek to provide some separation between major functions such as data import, export, acquisition, archiving, real-time processing, and alarming. As an example, we make waveform import and export functions independent by operating them on separate servers. Similarly, two independent servers provide waveform export, allowing data recipients to implement their own redundancy. The data import is handled differently by using one primary server and a live backup server. These data import servers, run fail-over software that allows automatic role switching in case of failure from primary to shadow. Similar to the classic earthworm design, all the acquired waveform data are broadcast onto a private network, which allows multiple machines to acquire and process the data. As we separate data import and export away from acquisition, we are also working on new approaches to separate real-time processing and rapid reliable archiving of real-time data. Further, improved network security is an integral part of the new design. Redundant firewalls will provide secure data imports, exports, and acquisition as well as DMZ zones for web servers and other publicly available servers. We will present the detailed design of this new configuration that is currently being implemented by the SCSN at Caltech. The design principals are general enough to be of use to most regional seismic networks.
Schwartz, D.P.; Joyner, W.B.; Stein, R.S.; Brown, R.D.; McGarr, A.F.; Hickman, S.H.; Bakun, W.H.
1996-01-01
Summary -- The U.S. Geological Survey was requested by the U.S. Department of the Interior to review the design values and the issue of reservoir-induced seismicity for a concrete gravity dam near the site of the previously-proposed Auburn Dam in the western foothills of the Sierra Nevada, central California. The dam is being planned as a flood-control-only dam with the possibility of conversion to a permanent water-storage facility. As a basis for planning studies the U.S. Army Corps of Engineers is using the same design values approved by the Secretary of the Interior in 1979 for the original Auburn Dam. These values were a maximum displacement of 9 inches on a fault intersecting the dam foundation, a maximum earthquake at the site of magnitude 6.5, a peak horizontal acceleration of 0.64 g, and a peak vertical acceleration of 0.39 g. In light of geological and seismological investigations conducted in the western Sierran foothills since 1979 and advances in the understanding of how earthquakes are caused and how faults behave, we have developed the following conclusions and recommendations: Maximum Displacement. Neither the pre-1979 nor the recent observations of faults in the Sierran foothills precisely define the maximum displacement per event on a fault intersecting the dam foundation. Available field data and our current understanding of surface faulting indicate a range of values for the maximum displacement. This may require the consideration of a design value larger than 9 inches. We recommend reevaluation of the design displacement using current seismic hazard methods that incorporate uncertainty into the estimate of this design value. Maximum Earthquake Magnitude. There are no data to indicate that a significant change is necessary in the use of an M 6.5 maximum earthquake to estimate design ground motions at the dam site. However, there is a basis for estimating a range of maximum magnitudes using recent field information and new statistical fault relations. We recommend reevaluating the maximum earthquake magnitude using current seismic hazard methodology. Design Ground Motions. A large number of strong-motion records have been acquired and significant advances in understanding of ground motion have been achieved since the original evaluations. The design value for peak horizontal acceleration (0.64 g) is larger than the median of one recent study and smaller than the median value of another. The value for peak vertical acceleration (0.39 g) is somewhat smaller than median values of two recent studies. We recommend a reevaluation of the design ground motions that takes into account new ground motion data with particular attention to rock sites at small source distances. Reservoir-Induced Seismicity. The potential for reservoir-induced seismicity must be considered for the Auburn Dam project. A reservoir-induced earthquake is not expected to be larger than the maximum naturally occurring earthquake. However, the probability of an earthquake may be enhanced by reservoir impoundment. A flood-control-only project may involve a lower probability of significant induced seismicity than a multipurpose water-storage dam. There is a need to better understand and quantify the likelihood of this hazard. A methodology should be developed to quantify the potential for reservoir induced seismicity using seismicity data from the Sierran foothills, new worldwide observations of induced and triggered seismicity, and current understanding of the earthquake process.
Reevaluation of Design Parameters. The reevaluation of the maximum displacement, maximum magnitude earthquake, and design ground motions can be made using available field observations from the Sierran foothills, updated statistical relations for faulting and ground motions, and current computational seismic hazard methodologies that incorporate uncertainty into the analysis. The reevaluation does not require significant new geological field studies.
NASA Astrophysics Data System (ADS)
Han, S. M.; Hahm, I.
2015-12-01
We evaluated the background noise level of seismic stations in order to collect high-quality observation data and produce accurate seismic information. In this study the background noise level was determined using the PSD (power spectral density) method of McNamara and Buland (2004). Because this method uses long-term data, it is influenced not only by the innate electronic noise of the sensor and pulses produced while the instrument stabilizes, but also by missing data, and it is controlled by specific frequency bands affected by irregular signals unrelated to site characteristics. It is difficult and inefficient to implement a process that filters out such abnormal signals within an automated system. To solve these problems, we devised a method that extracts the data that are normally distributed within 90 to 99% confidence intervals at each period. The applicability of the method was verified using 62 seismic stations with broadband and short-period sensors operated by the KMA (Korea Meteorological Administration). The evaluation standards were the NHNM (New High Noise Model) and NLNM (New Low Noise Model) published by the USGS (United States Geological Survey). These models were designed based on data from the western United States, whereas the Korean Peninsula, surrounded by the ocean on three sides, has a complicated geological structure and a high population density. We therefore re-designed an appropriate model for the Korean Peninsula from the statistically combined results. An important feature is that the secondary-microseism peak appears at a higher frequency band. Acknowledgements: This research was carried out as a part of "Research for the Meteorological and Earthquake Observation Technology and Its Application" supported by the 2015 National Institute of Meteorological Research (NIMR) in the Korea Meteorological Administration.
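The percentile-based data selection described above can be illustrated with a simple Welch-PSD sketch: segment the record, compute a PSD per segment, and keep only the central percentile range at each frequency before averaging. This is a generic sketch of the idea, not the KMA implementation (which works with the McNamara-Buland PDF of PSDs and 90-99% confidence intervals); the segment length and percentiles below are placeholders.

```python
import numpy as np
from scipy.signal import welch

def station_noise_level(trace, fs, seg_sec=3600, lo=5.0, hi=95.0):
    """Estimate a station's background-noise PSD from long-term data.

    The record is cut into segments, a Welch PSD is computed for each, and at
    every frequency only the central percentile range of segment PSDs is kept
    before averaging, so that transients, gaps, and instrument glitches do not
    bias the estimate.
    """
    seg = int(seg_sec * fs)
    psds = []
    for i in range(0, len(trace) - seg + 1, seg):
        f, p = welch(trace[i:i + seg], fs=fs, nperseg=4096)
        psds.append(p)
    psds = np.array(psds)                       # (n_segments, n_freqs)
    lo_p, hi_p = np.percentile(psds, [lo, hi], axis=0)
    trimmed = np.where((psds >= lo_p) & (psds <= hi_p), psds, np.nan)
    level_db = 10 * np.log10(np.nanmean(trimmed, axis=0))
    return f, level_db                          # dB relative to input units^2/Hz

# Two days of synthetic noise sampled at 20 Hz, purely for illustration.
fs = 20.0
trace = np.random.default_rng(2).standard_normal(int(48 * 3600 * fs))
f, level = station_noise_level(trace, fs, seg_sec=3600)
print(level[:5])
```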
Geophysical testing of rock and its relationships to physical properties
DOT National Transportation Integrated Search
2011-02-01
Testing techniques were designed to characterize spatial variability in geotechnical engineering physical parameters of : rock formations. Standard methods using seismic waves, which are routinely used for shallow subsurface : investigation, have lim...
NASA Astrophysics Data System (ADS)
Hazreek, Z. A. M.; Kamarudin, A. F.; Rosli, S.; Fauziah, A.; Akmal, M. A. K.; Aziman, M.; Azhar, A. T. S.; Ashraf, M. I. M.; Shaylinda, M. Z. N.; Rais, Y.; Ishak, M. F.; Alel, M. N. A.
2018-04-01
Geotechnical site investigation, also known as subsurface profile evaluation, is the process of determining the characteristics of subsurface layers, which are ultimately used in the design and construction phases. Traditionally, site investigation has been performed using drilling techniques, which suffer from several limitations related to cost, time, data coverage and sustainability. In order to overcome these problems, this study adopted surface techniques using the seismic refraction and ambient vibration methods for evaluating subsurface profile depth. Seismic refraction data acquisition and processing were performed using ABEM Terraloc equipment and OPTIM software, respectively, while ambient vibration data acquisition and processing were performed using CityShark II and Lennartz equipment and GEOPSY software, respectively. It was found that the studied area consists of two layers, representing overburden and bedrock geomaterials, based on the p-wave velocity values (vp = 300 – 2500 m/s and vp > 2500 m/s) and the natural frequency values (Fo = 3.37 – 3.90 Hz) analyzed. Further analysis found that both methods show good agreement in terms of depth and thickness, with a percentage accuracy of 60 – 97%. Consequently, this study has demonstrated that the seismic refraction and ambient vibration methods are applicable to estimating subsurface profile depth and thickness. Moreover, the surface techniques, which are considered non-destructive methods, were able to complement the conventional drilling method in terms of cost, time, data coverage and environmental sustainability.
Seismic stops for nuclear power plants
DOE Office of Scientific and Technical Information (OSTI.GOV)
Cloud, R.L.; Leung, J.S.M.; Anderson, P.H.
1989-10-01
In the regulated world of nuclear power, the need to have analytical proof of performance in hypothetical design-basis events such as earthquakes has placed a premium on design configurations that are mathematically tractable and easily analyzed. This is particularly true for the piping design. Depending on how the piping analyses are organized and on how old the plant is, there may be from 200 to 1000 separate piping runs to be designed, analyzed, and qualified. In this situation, the development of snubbers seemed like the answer to a piping engineer's prayer. At any place where seismic support was required but thermal motion had to be accommodated, a snubber could be specified. But, as experience has now shown, the problem was solved only on paper. This article presents an alternative to conventional snubbers. These new devices, termed Seismic Stops, are designed to replace snubbers directly and look like snubbers on the outside, but their design is based on a completely different principle. The original concept was adapted from early seismic-resistant pipe support designs used on fossil power plants in California. The fundamental idea is to provide a space envelope in which the pipe can expand freely between the hot and cold positions, but cannot move outside the envelope. Seismic Stops are designed to transmit any possible impact load, as would occur in an earthquake, away from the pipe itself to the Seismic Stop. The Seismic Stop pipe support is shown.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Harben, P E; Harris, D; Myers, S
Seismic imaging and tracking methods have intelligence and monitoring applications. Current systems, however, do not adequately calibrate or model the unknown geological heterogeneity, nor are they designed for rapid data acquisition and analysis in the field. This project seeks to build the core technological capabilities, coupled with innovative deployment, processing, and analysis methodologies, to allow seismic methods to be effectively utilized in the applications of seismic imaging and vehicle tracking where rapid (minutes to hours) and real-time analysis is required. The goal of this project is to build capabilities in acquisition system design and utilization, in full 3D finite difference modeling, and in statistical characterization of geological heterogeneity. Such capabilities, coupled with a rapid field analysis methodology based on matched field processing, are applied to problems associated with surveillance, battlefield management, finding hard and deeply buried targets, and portal monitoring. This project benefits the U.S. military and intelligence community in support of LLNL's national-security mission. FY03 was the final year of this project. In the 2.5 years this project was active, numerous and varied developments and milestones were accomplished. A wireless communication module for seismic data was developed to facilitate rapid seismic data acquisition and analysis. The E3D code was enhanced to include topographic effects. Codes were developed to implement the Karhunen-Loeve (K-L) statistical methodology for generating geological heterogeneity that can be utilized in E3D modeling. The matched field processing methodology applied to vehicle tracking, based on a field calibration to characterize geological heterogeneity, was tested and successfully demonstrated in a tank tracking experiment at the Nevada Test Site. A 3-seismic-array vehicle tracking testbed was installed on-site at LLNL for testing real-time seismic tracking methods. A field experiment was conducted over a tunnel at the Nevada Test Site that quantified the tunnel reflection signal and, coupled with modeling, identified key needs and requirements in the experimental layout of sensors. A large field experiment was conducted at the Lake Lynn Laboratory, a mine safety research facility in Pennsylvania, over a tunnel complex in realistic, difficult conditions. This experiment gathered the necessary data for a full 3D attempt to apply the methodology, and also collected data to analyze the capability to detect and locate in-tunnel explosions for mine safety and other applications.
NASA Astrophysics Data System (ADS)
Glezil, Dorothy
NEHRP's Provisions currently govern conventional seismic-resistant design. Although these provisions ensure the life safety of building occupants, extensive damage and economic losses may still occur in the structures. This minimum performance can be enhanced using the Performance-Based Earthquake Engineering (PBEE) methodology and passive control systems such as base isolation and energy dissipation systems. Even though these technologies and the PBEE methodology are effective in reducing economic losses and fatalities during earthquakes, getting them implemented into seismic-resistant design has been challenging. One of the many barriers to their implementation has been their upfront costs. The green building community has faced some of the same challenges that the high-performance seismic design community currently faces. The goal of this thesis is to draw on the success of the green building industry to provide recommendations that may be used to overcome the barriers that high-performance seismic design (HPSD) is currently facing.
Time Analysis of Building Dynamic Response Under Seismic Action. Part 2: Example of Calculation
NASA Astrophysics Data System (ADS)
Ufimtcev, E. M.
2017-11-01
The second part of the article illustrates the use of the time analysis method (TAM) through the calculation of a 3-storey building, whose design dynamic model (DDM) is adopted in the form of a flat vertical cantilever rod with 3 horizontal degrees of freedom associated with the floor and roof levels. The parameters of natural oscillations (frequencies and modes) are determined, and the elastic forced oscillations of the building's DDM are calculated as oscillograms of the response parameters over the time interval t ∈ [0; 131.25] s. The results are analyzed on the basis of the computed residual of the DDM equation of motion and a comparison of the results obtained by the numerical approach (FEM) with the normative method set out in SP 14.13330.2014 "Construction in Seismic Regions". The analysis testifies to the correctness of the computational model as well as the high accuracy of the results obtained. In conclusion, it is shown that the use of the TAM will improve the strength design of buildings and structures subject to seismic action.
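For readers who want a concrete baseline against which such results can be compared, the sketch below integrates the linear equations of motion of a small shear-building model with the standard Newmark average-acceleration method. It is an ordinary textbook integrator, not the TAM of the paper, and the masses, stiffnesses, damping, and base motion are invented for the example.

```python
import numpy as np

def newmark_mdof(M, C, K, ag, dt, gamma=0.5, beta=0.25):
    """Linear time-history response of an MDOF shear building to base motion
    using the Newmark average-acceleration method (unconditionally stable)."""
    n = M.shape[0]
    r = np.ones(n)                                  # influence vector for base motion
    u = np.zeros((len(ag), n)); v = np.zeros_like(u); a = np.zeros_like(u)
    p = -(M @ r)[None, :] * ag[:, None]             # effective load history
    a[0] = np.linalg.solve(M, p[0] - C @ v[0] - K @ u[0])
    keff = K + gamma / (beta * dt) * C + M / (beta * dt ** 2)
    for i in range(len(ag) - 1):
        peff = (p[i + 1]
                + M @ (u[i] / (beta * dt ** 2) + v[i] / (beta * dt) + (1 / (2 * beta) - 1) * a[i])
                + C @ (gamma / (beta * dt) * u[i] + (gamma / beta - 1) * v[i]
                       + dt * (gamma / (2 * beta) - 1) * a[i]))
        u[i + 1] = np.linalg.solve(keff, peff)
        a[i + 1] = ((u[i + 1] - u[i]) / (beta * dt ** 2)
                    - v[i] / (beta * dt) - (1 / (2 * beta) - 1) * a[i])
        v[i + 1] = v[i] + dt * ((1 - gamma) * a[i] + gamma * a[i + 1])
    return u, v, a

# Illustrative 3-storey shear building (hypothetical masses and stiffnesses).
m, k = 2.0e5, 4.0e7                                 # kg, N/m per storey
M = np.diag([m, m, m])
K = np.array([[2 * k, -k, 0], [-k, 2 * k, -k], [0, -k, k]], float)
C = 0.05 * M + 0.002 * K                            # crude Rayleigh damping
dt = 0.01
ag = 0.3 * 9.81 * np.sin(2 * np.pi * 2.0 * np.arange(0, 10, dt))  # toy base motion
u, v, a = newmark_mdof(M, C, K, ag, dt)
print(np.abs(u).max(axis=0))                        # peak storey displacements
```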
Petersen, Mark D.; Harmsen, Stephen C.; Rukstales, Kenneth S.; Mueller, Charles S.; McNamara, Daniel E.; Luco, Nicolas; Walling, Melanie
2012-01-01
American Samoa and the neighboring islands of the South Pacific lie near active tectonic-plate boundaries that host many large earthquakes which can result in strong earthquake shaking and tsunamis. To mitigate earthquake risks from future ground shaking, the Federal Emergency Management Agency requested that the U.S. Geological Survey prepare seismic hazard maps that can be applied in building-design criteria. This Open-File Report describes the data, methods, and parameters used to calculate the seismic shaking hazard as well as the output hazard maps, curves, and deaggregation (disaggregation) information needed for building design. Spectral acceleration hazard having a 2-percent probability of exceedance on a firm rock site condition (Vs30=760 meters per second) is 0.12 acceleration of gravity (1 second, 1 Hertz) and 0.32 acceleration of gravity (0.2 seconds, 5 Hertz) on American Samoa, 0.72 acceleration of gravity (1 Hertz) and 2.54 acceleration of gravity (5 Hertz) on Tonga, 0.15 acceleration of gravity (1 Hertz) and 0.55 acceleration of gravity (5 Hertz) on Fiji, and 0.89 acceleration of gravity (1 Hertz) and 2.77 acceleration of gravity (5 Hertz) on the Vanuatu Islands.
Effect of Different Groundwater Levels on Seismic Dynamic Response and Failure Mode of Sandy Slope
Huang, Shuai; Lv, Yuejun; Peng, Yanju; Zhang, Lifang; Xiu, Liwei
2015-01-01
Heavy seismic damage tends to occur in slopes when groundwater is present. The main objectives of this paper are to determine the dynamic response and failure mode of a sandy slope subjected simultaneously to seismic forces and variable groundwater conditions. The paper applies the finite element method, a fast and efficient design tool in modern engineering analysis, to evaluate the dynamic response of the slope under these combined conditions. A shaking table test is conducted to analyze the failure mode and verify the accuracy of the finite element results. The research results show that the dynamic response values of the slope follow different variation rules under near-field and far-field earthquakes, and that the damage location and pattern of the slope differ with groundwater conditions. When no groundwater is present, the destruction starts at the top of the slope, indicating an obvious whipping effect under the earthquake. At high groundwater levels, the destruction starts at the toe of the slope, and the top of the slope shows an obvious seismic subsidence phenomenon after the earthquake. Furthermore, the presence of groundwater has a certain damping effect. PMID:26560103
Scaled accelerographs for design of structures in Quetta, Baluchistan, Pakistan
NASA Astrophysics Data System (ADS)
Bhatti, Abdul Qadir
2016-12-01
Structural design for seismic excitation is usually based on peak values of forces and deformations over the duration of the earthquake. To determine these peak values, dynamic analysis is performed, which requires either response history analysis (RHA), also called time history analysis, or response spectrum analysis (RSA), both of which depend on the ground motion severity. In the past, PGA has been used to describe ground motion severity, because the seismic force on a rigid body is proportional to the ground acceleration. However, it has been pointed out that the single highest peak on an accelerogram is a very unreliable description of the accelerogram as a whole. In this study, we consider the 0.2-s and 1-s spectral accelerations. Seismic loading is defined in terms of a design spectrum and time histories, which lead to the two methods of dynamic analysis. A design spectrum for Quetta is constructed incorporating the parameters of ASCE 7-05/IBC 2006/2009, which are used by modern codes and regulations worldwide, such as IBC 2006/2009, ASCE 7-05, ATC-40, FEMA-356 and others. A suite of time histories representing the design earthquake is also prepared, which will be a helpful tool for carrying out time history dynamic analysis of structures in Quetta.
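The ASCE 7-05 design spectrum mentioned above has a simple closed-form shape once the design spectral accelerations SDS and SD1 and the long-period transition TL are known. The sketch below reproduces that shape; the input values are placeholders and do not represent the parameters derived for Quetta in the study.

```python
import numpy as np

def asce7_design_spectrum(SDS, SD1, TL, periods):
    """Design response spectrum following the ASCE 7-05 shape (Section 11.4.5),
    given the design spectral accelerations SDS (short period) and SD1 (1 s)
    and the long-period transition period TL."""
    T0, TS = 0.2 * SD1 / SDS, SD1 / SDS
    Sa = np.empty_like(periods, dtype=float)
    for i, T in enumerate(periods):
        if T < T0:
            Sa[i] = SDS * (0.4 + 0.6 * T / T0)      # rising branch
        elif T <= TS:
            Sa[i] = SDS                              # constant-acceleration plateau
        elif T <= TL:
            Sa[i] = SD1 / T                          # constant-velocity branch
        else:
            Sa[i] = SD1 * TL / T ** 2                # constant-displacement branch
    return Sa

T = np.linspace(0.01, 6.0, 600)
Sa = asce7_design_spectrum(SDS=1.0, SD1=0.4, TL=4.0, periods=T)   # placeholder values
print(Sa[:3], Sa.max())
```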
NASA Astrophysics Data System (ADS)
Dutta, Sekhar Chandra; Chakroborty, Suvonkar; Raychaudhuri, Anusrita
Vibration transmitted to a structure during an earthquake may vary in magnitude over a wide range. Design methodology should therefore enumerate steps so that structures are able to survive even severe ground motion. However, for economic reasons, strength can be provided to structures such that the structure remains in the elastic range under low-to-moderate earthquakes and is allowed to undergo inelastic deformation in a severe earthquake without collapse. To implement this design philosophy, a rigorous nonlinear dynamic analysis is needed to estimate the inelastic demands; such analysis is time consuming and requires expertise to interpret the results. In this context, the present paper discusses and demonstrates an alternative simple method known as the pushover method, which can be easily used by practicing engineers bypassing intricate nonlinear dynamic analysis and can be thought of as a substitute for the latter. The method is still being developed and is increasingly popular because of its simplicity. The objective of this paper is to emphasize and demonstrate the basic concept, strength, and ease of this state-of-the-art methodology for regular use in design offices in performance-based seismic design of structures.
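A common post-processing step in pushover-based assessment is to idealize the base shear versus roof displacement curve as a bilinear (here elastic-perfectly-plastic) system before estimating inelastic demands. The sketch below is a minimal, assumption-laden illustration of that equal-energy idealization, not the specific procedure advocated in the paper; it assumes the pushover curve is supplied only up to its peak, so that base shear increases monotonically with displacement.

import numpy as np

def elastoplastic_idealization(d, V, n_iter=30):
    """Equal-energy elastic-perfectly-plastic idealization of a pushover curve.

    d, V : roof displacement and base shear arrays (d increasing, V increasing).
    Returns (k_e, d_y, V_y): effective stiffness, yield displacement, yield strength.
    The elastic branch is taken as the secant through 60% of the yield strength,
    and V_y is iterated so the bilinear curve encloses the same energy.
    """
    d, V = np.asarray(d, float), np.asarray(V, float)
    area = np.trapz(V, d)          # energy under the computed pushover curve
    d_u = d[-1]
    V_y = V.max()                  # starting guess for the yield strength
    for _ in range(n_iter):
        k_e = 0.6 * V_y / np.interp(0.6 * V_y, V, d)   # secant stiffness at 0.6 V_y
        disc = d_u**2 - 2.0 * area / k_e
        V_y = k_e * (d_u - np.sqrt(max(disc, 0.0)))    # equal-energy yield strength
    d_y = V_y / k_e
    return k_e, d_y, V_y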
DOE Office of Scientific and Technical Information (OSTI.GOV)
Mauk, F.J.; Davis, R.A.
1982-01-01
To investigate the seismic risks associated with geopressured fluid production from the Pleasant Bayou No. 2 design well, a seismic monitoring program has been conducted in the vicinity of the Brazoria County design wells since 1979. The monitoring program was designed first to establish the nature of the local ambient seismicity prior to production, and second to provide continued surveillance of the area during the well tests to determine whether production altered ambient seismic conditions significantly. The operation, data analyses, results, and conclusions of the Brazoria seismic network during the operational period from 1 January through 31 December 1982 are described.
The Cross-Correlation and Reshuffling Tests in Discerning Induced Seismicity
NASA Astrophysics Data System (ADS)
Schultz, Ryan; Telesca, Luciano
2018-05-01
In recent years, cases of newly emergent induced earthquake clusters have increased seismic hazard and risk in locations with social, environmental, and economic consequence. Thus, the need for a quantitative and robust means to discern induced seismicity has become a critical concern. This paper reviews a Matlab-based algorithm designed to quantify the statistical confidence of correlation between two time-series datasets. Similar to prior approaches, our method uses the cross-correlation to delineate the strength and lag of correlated signals. In addition, surrogate reshuffling tests allow dynamic testing against statistical confidence intervals of anticipated spurious correlations. We demonstrate the robustness of our algorithm in a suite of synthetic tests that determine the limits of accurate signal detection in the presence of noise and sub-sampling. Overall, this routine has considerable merit for delineating the strength of correlated signals, one application of which is the discernment of induced seismicity from natural seismicity.
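The following Python sketch illustrates the general idea of the cross-correlation plus surrogate-reshuffling test described above; it is not the authors' Matlab implementation. The series are assumed to be evenly sampled (for example, monthly injection volumes and earthquake counts), the maximum lag is assumed smaller than the series length, and the significance threshold is taken as a percentile of the maximum absolute correlation obtained from randomly reshuffled copies of one series.

import numpy as np

def xcorr_with_surrogates(x, y, max_lag=24, n_surrogates=1000, conf=99.0, seed=0):
    """Cross-correlate two evenly sampled series and test significance
    against reshuffled surrogates.

    Returns (lags, cc, threshold): correlation at each lag and the conf-th
    percentile of max|cc| obtained from shuffled copies of y.
    """
    rng = np.random.default_rng(seed)
    x = (x - np.mean(x)) / np.std(x)
    y = (y - np.mean(y)) / np.std(y)
    lags = np.arange(-max_lag, max_lag + 1)

    def cc_at_lags(a, b):
        n = len(a)
        out = np.empty(len(lags))
        for i, L in enumerate(lags):
            if L >= 0:
                out[i] = np.mean(a[L:] * b[:n - L])   # a shifted forward by L
            else:
                out[i] = np.mean(a[:n + L] * b[-L:])  # a shifted backward by |L|
        return out

    cc = cc_at_lags(x, y)
    max_abs = np.empty(n_surrogates)
    for k in range(n_surrogates):
        y_shuffled = rng.permutation(y)               # destroys temporal structure
        max_abs[k] = np.max(np.abs(cc_at_lags(x, y_shuffled)))
    threshold = np.percentile(max_abs, conf)
    return lags, cc, threshold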
Seismic Hazard analysis of Adjaria Region in Georgia
NASA Astrophysics Data System (ADS)
Jorjiashvili, Nato; Elashvili, Mikheil
2014-05-01
The most commonly used approach to determining seismic-design loads for engineering projects is probabilistic seismic-hazard analysis (PSHA). The primary output from a PSHA is a hazard curve showing the variation of a selected ground-motion parameter, such as peak ground acceleration (PGA) or spectral acceleration (SA), against the annual frequency of exceedance (or its reciprocal, the return period). The design value is the ground-motion level that corresponds to a preselected design return period. For many engineering projects, such as standard buildings and typical bridges, the seismic loading is taken from the appropriate seismic-design code, the basis of which is usually a PSHA. For more important engineering projects, where the consequences of failure are more serious, such as dams and chemical plants, it is more usual to obtain the seismic-design loads from a site-specific PSHA, in general using much longer return periods than those governing code-based design. Calculation of the probabilistic seismic hazard was performed using the software CRISIS2007 by Ordaz, M., Aguilar, A., and Arboleda, J., Instituto de Ingeniería, UNAM, Mexico. CRISIS implements a classical probabilistic seismic-hazard methodology in which seismic sources can be modelled as points, lines, and areas. In the case of area sources, the software offers an integration procedure that takes advantage of a triangulation algorithm used for seismic source discretization. This solution improves calculation efficiency while maintaining a reliable description of source geometry and seismicity. Additionally, supplementary filters (e.g., fixing a site-source distance that excludes from the calculation sources at great distance) allow the program to balance precision and efficiency during the hazard calculation. Earthquake temporal occurrence is assumed to follow a Poisson process, and the code supports two types of magnitude-frequency distributions: a truncated exponential Gutenberg-Richter [1944] magnitude distribution and a characteristic magnitude distribution [Youngs and Coppersmith, 1985]. Notably, the software can deal with uncertainty in the seismicity input parameters, such as the maximum magnitude value. CRISIS offers a set of built-in GMPEs, as well as the possibility of defining new ones by providing information in tabular format. Our study shows that, in the case of the Ajaristkali HPP study area, a significant contribution to the seismic hazard comes from local sources with quite low Mmax values, and the two attenuation laws considered give quite different PGA and SA values.
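For readers unfamiliar with the classical (Cornell-McGuire) PSHA integral that CRISIS implements, the toy Python sketch below computes a hazard curve for a single source using a truncated exponential Gutenberg-Richter recurrence, a simple made-up GMPE with lognormal scatter, and a Poisson conversion to exceedance probability. All numerical values are illustrative assumptions, not parameters from the Adjaria study.

import numpy as np
from scipy import stats

def hazard_curve(pga_levels, dists_km, weights, a=4.0, b=1.0,
                 m_min=4.5, m_max=7.0, n_m=50):
    """Annual rate of exceeding each PGA level for one discretized source.

    dists_km, weights : source-to-site distances and their probability weights.
    a, b              : Gutenberg-Richter parameters (log10 N = a - b*M).
    A toy GMPE is assumed: ln PGA = -1.0 + 1.0*M - 1.5*ln(R + 10), sigma = 0.6.
    """
    beta = b * np.log(10.0)
    rate_mmin = 10.0 ** (a - b * m_min)          # annual rate of M >= m_min
    mags = np.linspace(m_min, m_max, n_m)
    # truncated exponential magnitude density, renormalized after discretization
    f_m = beta * np.exp(-beta * (mags - m_min)) / (1 - np.exp(-beta * (m_max - m_min)))
    f_m /= np.trapz(f_m, mags)

    rates = np.zeros_like(pga_levels, float)
    for r, w_r in zip(dists_km, weights):
        ln_med = -1.0 + 1.0 * mags - 1.5 * np.log(r + 10.0)   # toy GMPE (ln g)
        for j, pga in enumerate(pga_levels):
            p_exc = 1.0 - stats.norm.cdf((np.log(pga) - ln_med) / 0.6)
            rates[j] += rate_mmin * w_r * np.trapz(p_exc * f_m, mags)
    return rates

pga = np.logspace(-2, 0.3, 30)                 # 0.01 g to about 2 g
lam = hazard_curve(pga, dists_km=[20.0, 50.0], weights=[0.5, 0.5])
p_50yr = 1.0 - np.exp(-lam * 50.0)             # Poisson exceedance probability in 50 years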
A frozen Gaussian approximation-based multi-level particle swarm optimization for seismic inversion
DOE Office of Scientific and Technical Information (OSTI.GOV)
Li, Jinglai, E-mail: jinglaili@sjtu.edu.cn; Lin, Guang, E-mail: lin491@purdue.edu; Computational Sciences and Mathematics Division, Pacific Northwest National Laboratory, Richland, WA 99352
2015-09-01
In this paper, we propose a frozen Gaussian approximation (FGA)-based multi-level particle swarm optimization (MLPSO) method for seismic inversion of high-frequency wave data. The method addresses two challenges: First, the optimization problem is highly non-convex, which makes it hard for gradient-based methods to reach global minima; this is tackled by MLPSO, which can escape from undesired local minima. Second, the high-frequency character of seismic waves requires a large number of grid points in direct computational methods, and thus imposes an extremely high computational demand on the simulation of each sample in MLPSO. We overcome this difficulty in three steps: First, we use FGA to compute high-frequency wave propagation based on asymptotic analysis on the phase plane; then we design a constrained full waveform inversion problem to prevent the optimization search from entering regions of velocity where FGA is not accurate; last, we solve the constrained optimization problem by MLPSO employing FGA solvers with different fidelity. The performance of the proposed method is demonstrated by a two-dimensional full-waveform inversion example of the smoothed Marmousi model.
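As background, the sketch below shows a basic single-level, global-best particle swarm optimizer; the multi-level scheme and the FGA forward solver of the paper are not reproduced here, and the quadratic misfit used in the usage example is purely a stand-in for a waveform misfit.

import numpy as np

def pso(misfit, lower, upper, n_particles=40, n_iter=200,
        w=0.7, c1=1.5, c2=1.5, seed=0):
    """Basic particle swarm optimization (global-best variant).

    misfit       : callable mapping a parameter vector to a scalar to minimize
                   (e.g., a waveform data misfit evaluated by a forward solver).
    lower, upper : arrays bounding the search space (e.g., velocity model parameters).
    """
    rng = np.random.default_rng(seed)
    lower, upper = np.asarray(lower, float), np.asarray(upper, float)
    dim = lower.size
    x = rng.uniform(lower, upper, size=(n_particles, dim))   # particle positions
    v = np.zeros_like(x)                                     # particle velocities
    p_best, p_val = x.copy(), np.array([misfit(xi) for xi in x])
    g_best = p_best[np.argmin(p_val)].copy()
    for _ in range(n_iter):
        r1, r2 = rng.random((2, n_particles, dim))
        v = w * v + c1 * r1 * (p_best - x) + c2 * r2 * (g_best - x)
        x = np.clip(x + v, lower, upper)
        vals = np.array([misfit(xi) for xi in x])
        better = vals < p_val
        p_best[better], p_val[better] = x[better], vals[better]
        g_best = p_best[np.argmin(p_val)].copy()
    return g_best, p_val.min()

# Toy usage: recover a two-parameter model minimizing a quadratic stand-in misfit.
best, val = pso(lambda m: np.sum((m - np.array([2.0, 3.0]))**2),
                lower=[0, 0], upper=[5, 5])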
Wind/seismic comparisons for upgrading existing structures
DOE Office of Scientific and Technical Information (OSTI.GOV)
Giller, R.A.
1989-10-01
This paper describes the analysis procedures and methods used to evaluate three existing building structures for extreme wind loads. The three structures involved in this evaluation are located at the US Department of Energy's Hanford Site near Richland, Washington. This site is characterized by open flat grassland with few surrounding obstructions and has extreme winds, in lieu of tornados, as a design basis accident condition. The group of buildings represents a variety of construction types, including a concrete stack, a concrete load-bearing wall structure, and a rigid steel-frame building. The three structures had recently been evaluated for response to the design basis earthquake, including non-linear time history effects. The resulting loads and stresses from the wind analyses were compared to the loads and stresses resulting from the seismic analyses. This approach eliminated the need to prepare capacity calculations beyond those already contained in the seismic evaluations. 4 refs., 5 figs., 5 tabs.
NASA Astrophysics Data System (ADS)
Aziz Zanjani, F.; Lin, G.
2016-12-01
Seismic activity in Oklahoma has greatly increased since 2013, when the number of wastewater disposal wells associated with oil and gas production was significantly increased in the area. An M5.8 earthquake at about 5 km depth struck near Pawnee, Oklahoma, on September 3, 2016. This earthquake is postulated to be related to the anthropogenic activity in Oklahoma. In this study, we investigate the seismic characteristics in Oklahoma by using high-precision earthquake relocations and focal mechanisms. We acquire the seismic data recorded between January 2013 and October 2016 by the local and regional (within 200 km of the Pawnee mainshock) seismic stations from the Incorporated Research Institutions for Seismology (IRIS). We relocate all the earthquakes by applying the source-specific station term method and a differential-time relocation method based on waveform cross-correlation data. The high-precision earthquake relocation catalog is then used to perform full-waveform modeling. We use Muller's reflection method for Green's function construction and the mtinvers program for moment tensor inversion. The sensitivity of the solution to the station and component distribution is evaluated by jackknife resampling. These earthquake relocation and focal mechanism results will help constrain the fault orientation and the earthquake rupture length. In order to examine the static Coulomb stress change due to the 2016 Pawnee earthquake, we utilize the Coulomb 3 software in the vicinity of the mainshock and compare the aftershock pattern with the calculated stress variation. The stress change in the study area can be translated into the probability of seismic failure on other parts of the designated fault.
Seismic wavefield modeling based on time-domain symplectic and Fourier finite-difference method
NASA Astrophysics Data System (ADS)
Fang, Gang; Ba, Jing; Liu, Xin-xin; Zhu, Kun; Liu, Guo-Chang
2017-06-01
Seismic wavefield modeling is important for improving seismic data processing and interpretation. Calculations of wavefield propagation are sometimes unstable when forward modeling of seismic waves uses large time steps over long simulation times. Based on the Hamiltonian expression of the acoustic wave equation, we propose a structure-preserving method for seismic wavefield modeling by applying the symplectic finite-difference method on time grids and the Fourier finite-difference method on space grids to solve the acoustic wave equation. The proposed method, called the symplectic Fourier finite-difference (symplectic FFD) method, offers high computational accuracy and improved computational stability. Using the acoustic approximation, we extend the method to anisotropic media. We discuss the calculations in the symplectic FFD method for seismic wavefield modeling of isotropic and anisotropic media, and use the BP salt model and BP TTI model to test the proposed method. The numerical examples suggest that the proposed method can be used in seismic modeling with strongly variable velocities, offering high computational accuracy and low numerical dispersion. The symplectic FFD method overcomes the residual qSV wave in seismic modeling of anisotropic media and maintains the stability of wavefield propagation for large time steps.
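The symplectic FFD scheme itself is not reproduced here, but the following minimal sketch of conventional second-order-in-time, fourth-order-in-space acoustic finite-difference modeling shows the kind of explicit time stepping whose stability and dispersion the proposed method is designed to improve. The grid, time step, boundaries, and source handling are simplified assumptions, and the time step must still satisfy the usual CFL condition.

import numpy as np

def acoustic_fd_2d(vel, nt, dt, dx, src_pos, src_wavelet):
    """Minimal 2-D constant-density acoustic modeling, O(dt^2, dx^4) scheme.

    vel         : 2-D velocity model (m/s), shape (nz, nx)
    src_pos     : (iz, ix) grid indices of the source
    src_wavelet : array of nt source amplitudes (e.g., a Ricker wavelet)
    Returns the final pressure snapshot; boundaries are simply left reflective.
    """
    nz, nx = vel.shape
    c1, c2, c3 = -5.0 / 2.0, 4.0 / 3.0, -1.0 / 12.0   # 4th-order Laplacian weights
    p_prev = np.zeros((nz, nx))
    p_curr = np.zeros((nz, nx))
    fac = (vel * dt / dx) ** 2
    for it in range(nt):
        lap = np.zeros((nz, nx))
        lap[2:-2, 2:-2] = (
            2 * c1 * p_curr[2:-2, 2:-2]
            + c2 * (p_curr[1:-3, 2:-2] + p_curr[3:-1, 2:-2]
                    + p_curr[2:-2, 1:-3] + p_curr[2:-2, 3:-1])
            + c3 * (p_curr[:-4, 2:-2] + p_curr[4:, 2:-2]
                    + p_curr[2:-2, :-4] + p_curr[2:-2, 4:])
        )
        p_next = 2 * p_curr - p_prev + fac * lap          # explicit leapfrog update
        p_next[src_pos] += (vel[src_pos] * dt) ** 2 * src_wavelet[it] / dx**2
        p_prev, p_curr = p_curr, p_next
    return p_curr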
Reflection seismic imaging in the volcanic area of the geothermal field Wayang Windu, Indonesia
NASA Astrophysics Data System (ADS)
Polom, Ulrich; Wiyono, Wiyono; Pramono, Bambang; Krawczyk, Charlotte M.
2014-05-01
Reflection seismic exploration in volcanic areas is still a scientific challenge and requires major efforts to develop imaging workflows capable of economic utilization, e.g., for geothermal exploration. The SESaR (Seismic Exploration and Safety Risk study for decentral geothermal plants in Indonesia) project therefore tackles still unresolved issues concerning wave propagation and energy absorption in areas covered by pyroclastic sediments, using both active P-wave and S-wave seismics. Site-specific exploration procedures were tested in different tectonic and lithological regimes to compare imaging conditions. Based on the results of a small-scale, active seismic pre-site survey in the area of the Wayang Windu geothermal field in November 2012, an additional medium-scale active seismic experiment using P-waves was carried out in August 2013. The latter experiment was designed to investigate local changes of the seismic subsurface response, to expand the knowledge about the capabilities of the vibroseis method for seismic surveying in regions covered by pyroclastic material, and to achieve greater depth penetration. Thus, for the first time in the Wayang Windu geothermal area, a powerful, hydraulically driven seismic mini-vibrator of 27 kN peak force (LIAG's mini-vibrator MHV2.7) was used as the seismic source instead of the weaker hammer blow applied in former field surveys. Aiming at acquiring parameter-test and production data southeast of the Wayang Windu geothermal power plant, a 48-channel GEODE recording instrument of the Badan Geologi was used in a high-resolution configuration, with receiver group intervals of 5 m and source intervals of 10 m. Thereby, the LIAG field crew, Star Energy, GFZ Potsdam, and ITB Bandung acquired a nearly 600 m long profile. In general, we observe the successful applicability of the vibroseis method in such a difficult seismic acquisition environment. Taking into account the local conditions at Wayang Windu, the method is superior to the common seismic explosive-source techniques, both with respect to production rate and to resolution and data quality. Source signal frequencies of 20-80 Hz are most efficient for the attempted depth penetration, even though they were influenced by the dry subsurface conditions during the experiment. Depth penetration ranges between 0.5-1 km. Based on these new experimental data, processing workflows can be tested for the first time for adapted imaging strategies. This will not only allow a focus on larger exploration depths covering the geothermal reservoir at the Wayang Windu power plant site itself, but also opens the possibility of transferring the lessons learned to other sites.
Update of bridge design standards in Alabama for AASHTO LRFD seismic design requirements.
DOT National Transportation Integrated Search
2013-11-01
The Alabama Department of Transportation (ALDOT) has been required to update its bridge design to the LRFD Bridge Design Specifications. This transition has resulted in changes to the seismic design standards for bridges in the state. Multiple bridg...
NASA Astrophysics Data System (ADS)
Sil, Arjun; Longmailai, Thaihamdau
2017-09-01
The lateral displacement of a reinforced concrete (RC) frame building during an earthquake has an important impact on structural stability and integrity. Seismic analysis and design of RC buildings requires particular care because of the complex behavior of the system: the performance of the structure depends on many influencing parameters and other inherent uncertainties. A reliability approach takes into account the factors and uncertainties in design that influence the performance or response of the structure, so that the safety level or the probability of failure can be ascertained. The present study aims to assess the reliability of the seismic performance of a four-storey residential RC building located in seismic Zone V, as per the code provisions given in the Indian Standard IS: 1893-2002. The reliability assessment is performed by deriving an explicit expression for the maximum lateral roof displacement as a failure function using regression. A total of 319 four-storey RC buildings were analyzed by the linear static method using SAP2000, and the change in lateral roof displacement with the variation of the parameters (column dimension, beam dimension, grade of concrete, floor height, and total weight of the structure) was observed. A generalized relation established by regression can be used to estimate the expected lateral displacement from those selected parameters. A comparison between the displacements obtained from analysis and those from the derived equation shows that the proposed relation can be used directly to determine the expected maximum lateral displacement. The data obtained from the statistical computations were then used to obtain the probability of failure and the reliability.
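A minimal sketch of the two ingredients described above, a least-squares regression for roof displacement and a Monte-Carlo estimate of the probability of failure, is given below. The parameter list, the coefficient of variation, and the displacement limit are hypothetical placeholders rather than values from the study.

import numpy as np

# X columns might be column dimension, beam dimension, concrete grade, floor height,
# and total weight (units and values are placeholders, not the paper's data).
def fit_displacement_model(X, d_roof):
    """Least-squares linear regression: d_roof ~ beta0 + X @ beta."""
    A = np.column_stack([np.ones(len(X)), X])
    beta, *_ = np.linalg.lstsq(A, d_roof, rcond=None)
    resid_std = np.std(d_roof - A @ beta, ddof=A.shape[1])   # model-error scatter
    return beta, resid_std

def probability_of_failure(beta, resid_std, x_design, d_limit,
                           n_mc=100_000, cov=0.10, seed=0):
    """Monte-Carlo estimate of P(d_roof > d_limit) with parameter scatter.

    Each design parameter is perturbed with a coefficient of variation `cov`
    (an assumption for illustration); model error uses the regression residual std.
    """
    rng = np.random.default_rng(seed)
    x_design = np.asarray(x_design, float)
    x_samp = rng.normal(x_design, cov * np.abs(x_design), size=(n_mc, len(x_design)))
    d_pred = beta[0] + x_samp @ beta[1:] + rng.normal(0.0, resid_std, n_mc)
    return np.mean(d_pred > d_limit)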
Excavatability Assessment of Weathered Sedimentary Rock Mass Using Seismic Velocity Method
NASA Astrophysics Data System (ADS)
Bin Mohamad, Edy Tonnizam; Saad, Rosli; Noor, Muhazian Md; Isa, Mohamed Fauzi Bin Md.; Mazlan, Ain Naadia
2010-12-01
The seismic refraction method is one of the most popular methods for assessing surface excavation. The main objective of the seismic data acquisition is to delineate the subsurface into velocity profiles, as different velocities can be correlated to identify different materials. The physical principle used for the determination of excavatability is that seismic waves travel faster through denser material than through less consolidated material. In general, a lower velocity indicates soft material and a higher velocity indicates material that is more difficult to excavate. However, several researchers have noted that the seismic velocity method alone does not correlate well with the excavatability of the material. In this study, the seismic velocity method was used in Nusajaya, Johor, to assess how well it predicts the excavatability of the weathered sedimentary rock mass. A direct ripping trial, in which the actual ripping production was monitored, was carried out at a later stage and compared with the ripper manufacturer's recommendations. This paper presents the findings of the seismic velocity tests in the weathered sedimentary area. The reliability of this method relative to the actual rippability trials is also presented.
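As a simple illustration of how refraction first-arrival data are reduced to layer velocities and refractor depth before any rippability correlation is attempted, the sketch below fits a two-layer model using the crossover-distance formula. The branch split index is supplied by the interpreter; real surveys require more careful picking and more general layer models.

import numpy as np

def two_layer_refraction(x, t, k):
    """Two-layer interpretation of a first-arrival travel-time curve.

    x, t : offsets (m) and picked first-arrival times (s)
    k    : index separating the direct-wave branch (x[:k]) from the
           refracted-wave branch (x[k:])
    Returns (v1, v2, z): upper/lower layer velocities and refractor depth from
    z = (x_cross / 2) * sqrt((v2 - v1) / (v2 + v1)).
    """
    x, t = np.asarray(x, float), np.asarray(t, float)
    s1, i1 = np.polyfit(x[:k], t[:k], 1)   # direct wave: t = x / v1
    s2, i2 = np.polyfit(x[k:], t[k:], 1)   # head wave:   t = x / v2 + intercept
    v1, v2 = 1.0 / s1, 1.0 / s2
    x_cross = (i2 - i1) / (s1 - s2)        # intersection of the two branches
    z = 0.5 * x_cross * np.sqrt((v2 - v1) / (v2 + v1))
    return v1, v2, z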
Seismic design repair and retrofit strategies for steel roof deck diaphragms
NASA Astrophysics Data System (ADS)
Franquet, John-Edward
Structural engineers often rely on the roof diaphragm to transfer lateral seismic loads to the bracing system of single-storey structures. The implementation of capacity-based design in the NBCC 2005 has increased the diaphragm design load due to the need to use the probable capacity of the bracing system, thus resulting in thicker decks, closer connector patterns, and higher construction costs. Previous studies have shown that accounting for the in-plane flexibility of the diaphragm when calculating the overall building period can result in lower seismic forces and a more cost-efficient design. However, recent studies estimating the fundamental period of single-storey structures using ambient vibration testing showed that the in-situ value was much shorter than that obtained by analytical means. The difference lies partially in the diaphragm stiffness characteristics, which have been shown to decrease under increasing excitation amplitude. Using the diaphragm as the energy-dissipating element in the seismic force resisting system has also been investigated, as this would take advantage of the diaphragm's ductility and limited overstrength; thus, lower capacity-based seismic forces would result. An experimental program on 21.0 m by 7.31 m diaphragm test specimens was carried out to investigate the dynamic properties of diaphragms, including stiffness, ductility, and capacity. The specimens consisted of 20- and 22-gauge panels with nailed frame fasteners and screwed sidelap connections, as well as a welded and button-punched specimen. Repair strategies for diaphragms that have previously undergone inelastic deformations were devised in an attempt to restore the original stiffness and strength, and were then experimentally evaluated. Experimental estimates of strength and stiffness are compared with those predicted with the Steel Deck Institute (SDI) method. A building design comparative study was also completed. This study looks at the differences in design and cost yielded by previous and current design practice with EBF braced frames. Two alternate design methodologies, in which the period is not restricted by code limitations and in which the diaphragm force is limited to the equivalent shear force calculated with RdRo = 1.95, are also used for comparison. The study highlights the importance of incorporating the diaphragm stiffness in design and the potential cost savings.
78 FR 13911 - Proposed Revision to Design of Structures, Components, Equipment and Systems
Federal Register 2010, 2011, 2012, 2013, 2014
2013-03-01
... Analysis Reports for Nuclear Power Plants: LWR Edition,'' Section 3.7.1, ``Seismic Design Parameters,'' Section 3.7.2, ``Seismic System Analysis,'' Section 3.7.3, ``Seismic Subsystem Analysis,'' Section 3.8.1... and analysis issues, (2) updates to review interfaces to improve the efficiency and consistency of...
NASA Astrophysics Data System (ADS)
Malagnini, Luca; Herrmann, Robert B.; Munafò, Irene; Buttinelli, Mauro; Anselmi, Mario; Akinci, Aybige; Boschi, E.
2012-10-01
Inadequate seismic design codes can be dangerous, particularly when they underestimate the true hazard. In this study we use data from a sequence of moderate-sized earthquakes in northeast Italy to validate and test a regional wave propagation model which, in turn, is used to understand some weaknesses of the current design spectra. Our velocity model, while regionalized and somewhat ad hoc, is consistent with geophysical observations and the local geology. In the 0.02-0.1 Hz band, this model is validated by using it to calculate moment tensor solutions of 20 earthquakes (5.6 ≥ MW ≥ 3.2) in the 2012 Ferrara, Italy, seismic sequence. The seismic spectra observed for the relatively small main shock significantly exceeded the design spectra to be used in the area for critical structures. Observations and synthetics reveal that the ground motions are dominated by long-duration surface waves, which, apparently, the design codes do not adequately anticipate. In light of our results, the present seismic hazard assessment in the entire Pianura Padana, including the city of Milan, needs to be re-evaluated.
An Application of Reassigned Time-Frequency Representations for Seismic Noise/Signal Decomposition
NASA Astrophysics Data System (ADS)
Mousavi, S. M.; Langston, C. A.
2016-12-01
Seismic data recorded by surface arrays are often strongly contaminated by unwanted noise. This background noise makes the detection of small-magnitude events difficult. An automatic method for seismic noise/signal decomposition is presented based upon an enhanced time-frequency representation. Synchrosqueezing is a time-frequency reassignment method aimed at sharpening the time-frequency picture. Noise can be distinguished from the signal and suppressed more easily in this reassigned domain. The threshold level is estimated using a general cross-validation approach that does not rely on any prior knowledge about the noise level. The efficiency of thresholding has been improved by adding a pre-processing step based on higher-order statistics and a post-processing step based on adaptive hard thresholding. In doing so, both the accuracy and the speed of the denoising have been improved compared to our previous algorithms (Mousavi and Langston, 2016a, 2016b; Mousavi et al., 2016). The proposed algorithm can either remove the noise (white or colored) and keep the signal, or remove the signal and keep the noise. Hence, it can be used in normal denoising applications or in ambient noise studies. Application of the proposed method to synthetic and real seismic data shows its effectiveness for denoising/designaling of local microseismic and ocean-bottom seismic data. References: Mousavi, S.M., C. A. Langston, and S. P. Horton (2016), Automatic Microseismic Denoising and Onset Detection Using the Synchrosqueezed-Continuous Wavelet Transform, Geophysics, 81, V341-V355, doi: 10.1190/GEO2015-0598.1. Mousavi, S.M., and C. A. Langston (2016a), Hybrid Seismic Denoising Using Higher-Order Statistics and Improved Wavelet Block Thresholding, Bull. Seismol. Soc. Am., 106, doi: 10.1785/0120150345. Mousavi, S.M., and C. A. Langston (2016b), Adaptive noise estimation and suppression for improving microseismic event detection, Journal of Applied Geophysics, doi: 10.1016/j.jappgeo.2016.06.008.
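The sketch below conveys the underlying idea of thresholding in a time-frequency domain, but it substitutes a plain STFT with a robust per-frequency noise estimate for the synchrosqueezed CWT and the GCV threshold selection of the paper, so it should be read only as a simplified stand-in.

import numpy as np
from scipy.signal import stft, istft

def tf_threshold_denoise(trace, fs, thresh_factor=3.0, nperseg=256):
    """Simplified time-frequency hard thresholding of a single seismic trace.

    The noise level in each frequency row is estimated from the median absolute
    coefficient (a MAD-style assumption); coefficients below thresh_factor times
    that level are zeroed before inverse transforming.
    """
    f, t, Z = stft(trace, fs=fs, nperseg=nperseg)
    sigma = np.median(np.abs(Z), axis=1, keepdims=True) / 0.6745   # robust noise estimate
    mask = np.abs(Z) >= thresh_factor * sigma                      # keep strong coefficients
    _, clean = istft(Z * mask, fs=fs, nperseg=nperseg)
    return clean[: len(trace)]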
Performance-based design factors for pile foundations.
DOT National Transportation Integrated Search
2014-10-01
The seismic design of pile foundations is currently performed in a relatively simple, deterministic manner. This : report describes the development of a performance-based framework to create seismic designs of pile group : foundations that consider a...
Nonlinear seismic analysis of a reactor structure impact between core components
NASA Technical Reports Server (NTRS)
Hill, R. G.
1975-01-01
The seismic analysis of the FFTF-PIOTA (Fast Flux Test Facility-Postirradiation Open Test Assembly), subjected to a horizontal DBE (Design Base Earthquake), is presented. The PIOTA is the first in a set of open test assemblies to be designed for the FFTF. Employing the direct method of transient analysis, the governing differential equations describing the motion of the system are set up directly and implicitly integrated numerically in time. A simple lumped-mass beam model of the FFTF, which includes small clearances between core components, is used as a "driver" for a fine-mesh model of the PIOTA. The nonlinear forces due to the impact of the core components and their effect on the PIOTA are computed.
Effectiveness of damped braces to mitigate seismic torsional response of unsymmetric-plan buildings
NASA Astrophysics Data System (ADS)
Mazza, Fabio; Pedace, Emilia; Favero, Francesco Del
2017-02-01
The seismic retrofitting of unsymmetric-plan reinforced concrete (r.c.) framed buildings can be carried out by the incorporation of damped braces (DBs). Yet most proposals to mitigate the seismic response of asymmetric framed buildings by DBs rest on the hypothesis of an elastic (linear) structural response. The aim of the present work is to evaluate the effectiveness and reliability of a displacement-based design procedure for hysteretic damped braces (HYDBs) based on the nonlinear behavior of the frame members, which adopts the extended N2 method considered by Eurocode 8 to evaluate the higher-mode torsional effects. The Town Hall of Spilinga (Italy), a framed structure with an L-shaped plan built at the beginning of the 1960s, is assumed to be retrofitted with HYDBs to attain the performance levels imposed by the Italian seismic code (NTC08) in a high-risk zone. Ten structural solutions are compared by considering two in-plan distributions of the HYDBs, to eliminate (elastic) torsional effects, and different design values of the frame ductility combined with a constant design value of the damper ductility. A computer code for the nonlinear dynamic analysis of r.c. spatial framed structures is adopted to evaluate the critical incidence angle of bidirectional earthquakes. Beams and columns are simulated with a lumped plasticity model, including flat-surface modeling of the axial load-biaxial bending moment elastic domain at the end sections, while a bilinear law is used to idealize the behavior of the HYDBs. Damage index domains are adopted to estimate the directions of least seismic capacity, considering artificial earthquakes whose response spectra match those adopted by NTC08 at the serviceability and ultimate limit states.
Method of migrating seismic records
Ober, Curtis C.; Romero, Louis A.; Ghiglia, Dennis C.
2000-01-01
The present invention provides a method of migrating seismic records that retains the information in the seismic records and allows migration with significant reductions in computing cost. The present invention comprises phase encoding seismic records and combining the encoded seismic records before migration. Phase encoding can minimize the effect of unwanted cross terms while still allowing significant reductions in the cost to migrate a number of seismic records.
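A minimal sketch of the encode-and-stack step described in the abstract is shown below: each shot record receives a random phase encoding in the frequency domain and the encoded records are summed into a single supergather that can be migrated once. The specific encoding used here is an illustrative choice, not the encoding claimed in the patent.

import numpy as np

def phase_encode_and_stack(shot_gathers, seed=0):
    """Randomly phase-encode individual shot records and sum them.

    shot_gathers : array of shape (n_shots, n_receivers, n_time_samples)
    Returns one encoded "supergather" of shape (n_receivers, n_time_samples),
    which can then be migrated once instead of migrating every shot separately.
    """
    rng = np.random.default_rng(seed)
    n_shots, n_rec, nt = shot_gathers.shape
    spectra = np.fft.rfft(shot_gathers, axis=-1)
    n_freq = spectra.shape[-1]
    # one random phase per shot and frequency (an illustrative choice; practical
    # schemes pick the encoding so that unwanted cross terms tend to cancel)
    phases = np.exp(1j * rng.uniform(0, 2 * np.pi, size=(n_shots, 1, n_freq)))
    encoded = np.fft.irfft(spectra * phases, n=nt, axis=-1)
    return encoded.sum(axis=0)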
78 FR 59732 - Revisions to Design of Structures, Components, Equipment, and Systems
Federal Register 2010, 2011, 2012, 2013, 2014
2013-09-27
...,'' Section 3.7.2, ``Seismic System Analysis,'' Section 3.7.3, ``Seismic Subsystem Analysis,'' Section 3.8.1... Analysis,'' (Accession No. ML13198A223); Section 3.7.3, ``Seismic Subsystem Analysis,'' (Accession No..., ``Seismic System Analysis,'' Section 3.7.3, ``Seismic Subsystem Analysis,'' Section 3.8.1, ``Concrete...
Effects of Irregular Bridge Columns and Feasibility of Seismic Regularity
NASA Astrophysics Data System (ADS)
Thomas, Abey E.
2018-05-01
Bridges with unequal column heights are one of the main irregularities in bridge design, particularly when negotiating steep valleys, making such bridges vulnerable to seismic action. The desirable behaviour of bridge columns under seismic loading is that they perform in a regular fashion, i.e. the capacity of each column is utilized evenly. However, this type of behaviour is often missing when the column heights are unequal along the length of the bridge, leaving the short columns to carry the maximum lateral load. In the present study, the effects of unequal column height on the global seismic performance of bridges are studied using pushover analysis. Codes such as CalTrans (Engineering service center, earthquake engineering branch, 2013) and EC-8 (EN 1998-2: design of structures for earthquake resistance. Part 2: bridges, European Committee for Standardization, Brussels, 2005) suggest seismic regularity criteria for achieving a regular seismic performance level at all the bridge columns. The feasibility of adopting these seismic regularity criteria, along with those mentioned in the literature, is assessed in the present study for bridges designed as per the Indian Standards.
DOT National Transportation Integrated Search
2017-01-05
This report presents the analytical study of the shear capacity of reinforced concrete columns using both the AASHTO LRFD Bridge Design Specifications and the AASHTO Guide Specifications for the LRFD Seismic Bridge Design. The study investigates vari...
Modeling and Evaluation of Geophysical Methods for Monitoring and Tracking CO2 Migration
DOE Office of Scientific and Technical Information (OSTI.GOV)
Daniels, Jeff
2012-11-30
Geological sequestration has been proposed as a viable option for mitigating the vast amount of CO2 being released into the atmosphere daily. Test sites for CO2 injection have been appearing across the world to ascertain the feasibility of capturing and sequestering carbon dioxide. A major concern with full-scale implementation is monitoring and verifying the permanence of injected CO2. Geophysical methods, an exploration industry standard, are non-invasive imaging techniques that can be implemented to address that concern. Geophysical methods, seismic and electromagnetic, play a crucial role in monitoring the subsurface pre- and post-injection. Seismic techniques have been the most popular, but electromagnetic methods are gaining interest. The primary goal of this project was to develop a new geophysical tool, a software program called GphyzCO2, to investigate the implementation of geophysical monitoring for detecting injected CO2 at test sites. The GphyzCO2 software consists of interconnected programs that encompass well logging, seismic, and electromagnetic methods. The software enables users to design and execute 3D surface-to-surface (conventional surface seismic) and borehole-to-borehole (cross-hole seismic and electromagnetic) numerical modeling surveys. The generalized flow of the program begins with building a complex 3D subsurface geological model, assigning properties to the model that mimic a potential CO2 injection site, numerically forward modeling a geophysical survey, and analyzing the results. A site located in Warren County, Ohio, was selected as the test site for the full implementation of GphyzCO2. Specific interest was placed on a potential reservoir target, the Mount Simon Sandstone, and the cap rock, the Eau Claire Formation. Analysis of the test site included well log data, physical property measurements (porosity), core sample resistivity measurements, calculated electrical permittivity values, seismic data collection, and seismic interpretation. The data were input into GphyzCO2 to demonstrate a full implementation of the software capabilities. Part of the implementation investigated the limits of using geophysical methods to monitor CO2 injection sites. The results show that cross-hole EM numerical surveys are limited to borehole separations of under 100 meters. Those results were utilized in executing numerical EM surveys that contain hypothetical CO2 injections. The outcome of the forward modeling shows that EM methods can detect the presence of CO2.
2008 United States National Seismic Hazard Maps
Petersen, M.D.; ,
2008-01-01
The U.S. Geological Survey recently updated the National Seismic Hazard Maps by incorporating new seismic, geologic, and geodetic information on earthquake rates and associated ground shaking. The 2008 versions supersede those released in 1996 and 2002. These maps are the basis for seismic design provisions of building codes, insurance rate structures, earthquake loss studies, retrofit priorities, and land-use planning. Their use in design of buildings, bridges, highways, and critical infrastructure allows structures to better withstand earthquake shaking, saving lives and reducing disruption to critical activities following a damaging event. The maps also help engineers avoid costs from over-design for unlikely levels of ground motion.
NASA Astrophysics Data System (ADS)
Frassetto, A.; Busby, R. W.; Hafner, K.; Woodward, R.; Sauter, A.
2013-12-01
In preparation for the upcoming deployment of EarthScope's USArray Transportable Array (TA) in Alaska, the National Science Foundation (NSF) has supported exploratory work on seismic station design, sensor emplacement, and communication concepts appropriate for this challenging high-latitude environment. IRIS has installed several experimental stations to evaluate different sensor emplacement schemes, both in Alaska and in the lower 48 states. The goal of these tests is to maintain or enhance a station's noise performance while minimizing its footprint and the weight of the equipment, materials, and overall expense required for its construction. Motivating this approach are recent developments in posthole broadband seismometer design and the unique conditions for operating in Alaska, where there are few roads, cellular communications are scarce, most areas are accessible only by small plane or helicopter, and permafrost underlies much of the state. We review the methods used for directly emplacing broadband seismometers in comparison to the current methods used for the lower-48 TA. These new methods primarily focus on using a portable drill to bore a hole three to five meters deep, beneath the active layer of the permafrost, or to core 1-2 meters into surface bedrock. Both methods proved logistically effective in preliminary trials. Subsequent station performance has been assessed quantitatively using probability density functions summed from power spectral density estimates, calculated for the continuous time series of seismic data recorded on each channel of the seismometer. Five test stations are currently operating in Alaska; one was deployed in August 2011 and the remaining four in October 2012. Our results show that the performance of seismometers in Alaska with auger-hole or core-hole installations can sometimes exceed that of the quietest TA stations in the lower 48, particularly for horizontal components at long periods. A comparison of the performance of the various installations is discussed.
NASA Astrophysics Data System (ADS)
Mannon, Timothy Patrick, Jr.
Improving well design has been and always will be a primary goal in drilling operations in the oil and gas industry. Oil and gas plays are continuing to move into increasingly hostile drilling environments, including near-salt and/or sub-salt settings. The ability to reduce the risk and uncertainty involved in drilling operations in unconventional geologic settings starts with improving the techniques for mudweight window modeling. To address this issue, an analysis of wellbore stability and well design improvement has been conducted. This study shows a systematic approach to well design by focusing on best practices for mudweight window projection for a field in Mississippi Canyon, Gulf of Mexico. The field includes depleted reservoirs and is in close proximity to salt intrusions. Analysis of offset wells has been conducted to develop an accurate picture of the subsurface environment by making connections between depth, non-productive time (NPT) events, and the mudweights used. Commonly practiced petrophysical methods of pore pressure, fracture pressure, and shear failure gradient prediction have been applied to key offset wells in order to improve the well design for two proposed wells. For the first time in the literature, the accuracy of the commonly accepted seismic interval velocity based methodology and the relatively new seismic frequency based methodology for pore pressure prediction are qualitatively and quantitatively compared. Accuracy standards are based on the agreement of the seismic outputs with pressure data obtained while drilling and with petrophysically based pore pressure outputs for each well. The results show significantly higher accuracy for the seismic frequency based approach in wells in near-salt/sub-salt environments and higher overall accuracy for all of the wells in the study as a whole.
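Velocity-based pore-pressure prediction of the kind compared in the study is often illustrated with Eaton-type relations; the sketch below is such a generic illustration (not Shell's workflow or the frequency-based method), with the overburden gradient, normal-compaction velocities, and exponent treated as placeholder assumptions.

import numpy as np

def eaton_pore_pressure(depth_m, v_obs, v_norm, obg_psi_ft=1.0,
                        pn_psi_ft=0.465, exponent=3.0):
    """Eaton-style pore-pressure gradient from interval velocity.

    Pp_grad = OBG - (OBG - Pn) * (v_obs / v_norm)**n, in psi/ft gradient form.
    obg_psi_ft : overburden gradient (a constant placeholder here; in practice
                 it is integrated from a density log)
    pn_psi_ft  : normal (hydrostatic) pressure gradient
    v_norm     : normal-compaction trend velocity at each depth
    """
    ratio = np.clip(np.asarray(v_obs, float) / np.asarray(v_norm, float), 0.0, 1.5)
    pp_grad = obg_psi_ft - (obg_psi_ft - pn_psi_ft) * ratio ** exponent
    pp_psi = pp_grad * np.asarray(depth_m, float) * 3.281   # gradient times depth in ft
    return pp_grad, pp_psi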
Method can improve efficiency of heli-portable seismic operations
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kingsbury, O.J.
1995-11-13
There are regions of the world where the only viable means of conducting a seismic survey on land involves helicopters as the primary means of transport. The high operating cost of helicopters means that such heli-portable work is expensive compared with the more common land and marine surveys. This article is addressed to exploration companies contemplating heli-portable seismic surveys. Its aim is to show how these operations work and to enable a dramatic reduction in the cost and timescale of future operations compared with the numerous operations this writer has witnessed and been involved with in recent years. The core of this article concerns distinct designs of drilling machinery used in these activities and the most efficient ways of configuring this machinery in the field.
NASA Astrophysics Data System (ADS)
Gogoladze, Z.; Moscatelli, M.; Giallini, S.; Avalle, A.; Gventsadze, A.; Kvavadze, N.; Tsereteli, N.
2016-12-01
Seismic risk is a crucial issue for the South Caucasus, which is the main gateway between Asia and Europe. The goal of this work is to propose new methods and criteria for defining an overall approach aimed at assessing and mitigating seismic risk in Georgia. In this regard, seismic microzonation represents a highly useful tool for seismic risk assessment in land management, for the design of buildings or structures, and for emergency planning. Seismic microzonation is the assessment of local seismic hazard, the component of seismicity resulting from specific local characteristics that cause local amplification and soil instability, through the identification of zones with seismically homogeneous behavior. This paper presents the results of a preliminary seismic microzonation study of Gori, Georgia. Gori is located in the Shida Kartli region on both sides of the Liachvi and Mtkvari rivers, with an area of about 135 km2 around the Gori fortress. Gori lies in the Achara-Trialeti fold-thrust belt, which is tectonically unstable. Half of all earthquakes in the Gori area with magnitude M ≥ 3.5 have happened along this fault zone, and on the basis of damage caused by previous earthquakes, this territory shows the highest level of risk (the maximum value of direct losses) in the central part of the town. The level-1 seismic microzonation map of Gori was produced using: 1) already available data (i.e., topographic maps and borehole data), 2) results of new geological surveys, and 3) geophysical measurements (i.e., MASW and noise measurements processed with the HVSR technique). Our preliminary results highlight the presence of both stable zones susceptible to local amplification and unstable zones susceptible to geological instability. Our results are directed at establishing a set of actions aimed at risk mitigation before the initial onset of an emergency, and at the management of the emergency once a seismic event has occurred. The products obtained will contain the basic elements of an integrated system aimed at reducing risk and improving the overall safety of people and infrastructure in Georgia.
Seismic Design of a Single Bored Tunnel: Longitudinal Deformations and Seismic Joints
NASA Astrophysics Data System (ADS)
Oh, J.; Moon, T.
2018-03-01
A large-diameter bored tunnel passing through rock and alluvial deposits subjected to seismic loading is analyzed to estimate the longitudinal deformations and member forces on the segmental tunnel liner. The project site has challenges including high hydrostatic pressure, a variable ground profile, and high seismic loading. To ensure the safety of the segmental tunnel liner under the seismic demands, a performance-based two-level design earthquake approach, with a Functional Evaluation Earthquake and a Safety Evaluation Earthquake, has been adopted. The longitudinal tunnel and ground response seismic analyses are performed using a three-dimensional quasi-static model with linear elastic discrete beam elements and nonlinear elastic spring elements representing the segmental liner and the ground, respectively. Three components (longitudinal, transverse, and vertical) of free-field ground displacement time histories, evaluated from site response analyses considering wave passage effects, are applied at the far end of the strain-compatible ground springs. The results of the longitudinal seismic analyses suggest that seismic joints with a design deflection capacity of 5-7.5 cm should be furnished as a mitigation measure at the transition zone between hard and soft ground conditions, where the maximum member forces on the segmental liner (i.e., axial forces, shear forces, and bending moments) are induced. The paper illustrates how detailed numerical analyses can be applied in practice to evaluate the axial and curvature deformations along the tunnel alignment under difficult ground conditions and to provide seismic joints at the proper locations to effectively reduce the seismic demands below the allowable levels.
NASA Astrophysics Data System (ADS)
Valente, Marco; Milani, Gabriele
2017-07-01
Many existing reinforced concrete buildings in Southern Europe were built (and hence designed) before the introduction of displacement-based design in national seismic codes, and they are therefore highly vulnerable to seismic actions. In such a situation, simplified methodologies for the seismic assessment and retrofitting of existing structures are required. In this study, a displacement-based procedure using non-linear static analyses is applied to an existing four-story RC frame. The aim is to obtain an estimate of its overall structural inadequacy as well as of the effectiveness of a specific retrofitting intervention by means of GFRP laminates and RC jacketing. Accurate numerical models are developed within a displacement-based approach to reproduce the seismic response of the RC frame in the original configuration and after strengthening.
NASA Astrophysics Data System (ADS)
Larsen, C. F.; Bartholomaus, T. C.; O'Neel, S.; West, M. E.
2010-12-01
We observe ice motion, calving and seismicity simultaneously and at high resolution on an advancing tidewater glacier in Icy Bay, Alaska. Icy Bay’s tidewater glaciers dominate regional glacier-generated seismicity in Alaska. Yahtse Glacier emanates from the St. Elias Range near the Bering-Bagley-Seward-Malaspina Icefield system, the most extensive glacier cover outside the polar regions. Rapid rates of change and fast flow (>16 m/d near the terminus) at Yahtse Glacier provide a direct analog to the disintegrating outlet systems in Greenland. Our field experiment co-locates GPS receivers and seismometers on the surface of the glacier, with a greater network of bedrock seismometers surrounding the glacier. Time-lapse photogrammetry, fjord wave-height sensors, and optical survey methods monitor iceberg calving and ice velocity near the terminus. This suite of geophysical instrumentation enables us to characterize glacier motion and geometry changes while concurrently listening for seismic energy release. We are performing a close examination of calving as a seismic source and of the associated mechanisms of energy transfer to seismic waves. Detailed observations of ice motion (GPS and optical surveying), glacier geometry and iceberg calving (direct observations and time-lapse photogrammetry) have been made in concert with a passive seismic network. Combined, the observations form the basis of a rigorous analysis exploring the relationship between glacier-generated seismic events and motion, glacier-fjord interactions, calving and hydraulics. Our work is designed to demonstrate the applicability and utility of seismology for studying the impact of climate forcing on calving glaciers.
Proceedings, Seminar on Probabilistic Methods in Geotechnical Engineering
NASA Astrophysics Data System (ADS)
Hynes-Griffin, M. E.; Buege, L. L.
1983-09-01
Contents: Applications of Probabilistic Methods in Geotechnical Engineering; Probabilistic Seismic and Geotechnical Evaluation at a Dam Site; Probabilistic Slope Stability Methodology; Probability of Liquefaction in a 3-D Soil Deposit; Probabilistic Design of Flood Levees; Probabilistic and Statistical Methods for Determining Rock Mass Deformability Beneath Foundations: An Overview; Simple Statistical Methodology for Evaluating Rock Mechanics Exploration Data; New Developments in Statistical Techniques for Analyzing Rock Slope Stability.
49 CFR 41.110 - New DOT owned buildings and additions to buildings.
Code of Federal Regulations, 2011 CFR
2011-10-01
... architect's authenticated verifications of seismic design codes, standards, and practices used in the design... for the design and construction of new DOT Federally owned buildings will ensure that each building is designed and constructed in accord with the seismic design and construction standards set out in § 41.120...
Design and prototype tests of a seismic attenuation system for the advanced-LIGO output mode cleaner
NASA Astrophysics Data System (ADS)
Bertolini, A.; DeSalvo, R.; Galli, C.; Gennaro, G.; Mantovani, M.; Márka, S.; Sannibale, V.; Takamori, A.; Torrie, C.
2006-04-01
Both present LIGO and advanced LIGO (Ad-LIGO) will need an output mode cleaner (OMC) to reach the desired sensitivity. We designed a seismically attenuated optical table for the OMC that fits the existing vacuum chambers (horizontal access module, HAM chambers). The most straightforward and cost-effective solution satisfying the Ad-LIGO seismic attenuation specifications was to implement a single passive seismic attenuation stage, derived from the 'seismic attenuation system' (SAS) concept. We built and tested prototypes of all critical components. On the basis of these tests and past experience, we expect that the passive attenuation performance of this new design, called HAM-SAS, will match all requirements for the LIGO OMC and all Ad-LIGO optical tables. Its performance can be improved, if necessary, by implementation of a simple active attenuation loop at marginal additional cost. The design can easily be modified to equip the LIGO basic symmetric chambers (BSC) and leaves room for extensive performance upgrades for future evolutions of Ad-LIGO. Design parameters and prototype test results are presented.
NASA Astrophysics Data System (ADS)
Takao, M.; Mizutani, H.
2009-05-01
At about 10:13 on July 16, 2007, a strong earthquake named the 'Niigata-ken Chuetsu-oki Earthquake', of Mj6.8 on the Japan Meteorological Agency's scale, occurred offshore Niigata prefecture in Japan. However, all of the nuclear reactors at the Kashiwazaki-Kariwa Nuclear Power Station (KKNPS) in Niigata prefecture, operated by Tokyo Electric Power Company, shut down safely. In other words, the automatic safety functions of shutdown, cooling, and containment worked as designed immediately after the earthquake. During the earthquake, the peak acceleration of the ground motion exceeded the design-basis ground motion (DBGM), but the force applied by the earthquake to safety-significant facilities was about the same as or less than the design basis taken into account as static seismic force. In order to assess anew the safety of the nuclear power plant, we have evaluated a new DBGM after conducting geomorphological, geological, geophysical, and seismological surveys and analyses. [Geomorphological, geological and geophysical survey] In the land area, aerial photograph interpretation was performed within at least a 30-km radius to extract landforms that could possibly be tectonic reliefs. Geological reconnaissance was then conducted to confirm whether the extracted landforms are tectonic reliefs or not. In particular, we carefully investigated the Nagaoka Plain Western Boundary Fault Zone (NPWBFZ), which consists of the Kakuda-Yahiko, Kihinomiya, and Katakai faults, because NPWBFZ is one of the active faults in Japan with the potential for an Mj8-class event. In addition to the geological survey, seismic reflection profiling of approximately 120 km in total length was completed to evaluate the geological structure of the faults and to assess the continuity of the component faults of NPWBFZ. As a result of the geomorphological, geological, and geophysical surveys, we evaluated the three component faults of NPWBFZ as independent of each other from the viewpoint of geological structure; however, we decided to take into consideration the simultaneous movement of the three faults, 91 km long in total, in seismic design as a case of uncertainty. In the sea area, we conducted seismic reflection profiling with a sonic source in an area stretching about 140 km along the coastline and 50 km perpendicular to the coastline. In analyzing the seismic profiles, we evaluated the activity of faults and folds carefully on the basis of the fault-related fold concept, because the sedimentary layers offshore of Niigata prefecture are very thick and the geological structures are characterized by folding. As a result of the seismic reflection survey and analyses, we assessed that five active faults (folds) are to be taken into consideration in seismic design in the sea area, and we evaluated that the 36-km-long F-B fault will have the largest impact on the KKNPS. [Seismological survey] As a result of analyses of the geological survey, data from the NCOE, and data from the 2004 Chuetsu Earthquake, it became clear that there are factors that intensify seismic motions in this area. For each of the two selected earthquake sources, namely NPWBFZ and the F-B fault, we calculated seismic ground motions on the free surface of the base stratum as the design-basis ground motion (DBGM) Ss, using both empirical and numerical ground-motion evaluation methods.
The PGA value of the DBGM is 2,300 Gal for units 1 to 4, located in the southern part of the KKNPS, and 1,050 Gal for units 5 to 7, in the northern part of the site.
Development of Maximum Considered Earthquake Ground Motion Maps
Leyendecker, E.V.; Hunt, R.J.; Frankel, A.D.; Rukstales, K.S.
2000-01-01
The 1997 NEHRP Recommended Provisions for Seismic Regulations for New Buildings use a design procedure that is based on spectral response acceleration rather than the traditional peak ground acceleration, peak ground velocity, or zone factors. The spectral response accelerations are obtained from maps prepared following the recommendations of the Building Seismic Safety Council's (BSSC) Seismic Design Procedures Group (SDPG). The SDPG-recommended maps, the Maximum Considered Earthquake (MCE) Ground Motion Maps, are based on the U.S. Geological Survey (USGS) probabilistic hazard maps with additional modifications incorporating deterministic ground motions in selected areas and the application of engineering judgment. The MCE ground motion maps included with the 1997 NEHRP Provisions also serve as the basis for the ground motion maps used in the seismic design portions of the 2000 International Building Code and the 2000 International Residential Code. Additionally, the design maps prepared for the 1997 NEHRP Provisions, combined with selected USGS probabilistic maps, are used with the 1997 NEHRP Guidelines for the Seismic Rehabilitation of Buildings.
41 CFR 128-1.8004 - Seismic Safety Coordinators.
Code of Federal Regulations, 2014 CFR
2014-01-01
... 41 Public Contracts and Property Management 3 2014-01-01 2014-01-01 false Seismic Safety... Management Regulations System (Continued) DEPARTMENT OF JUSTICE 1-INTRODUCTION 1.80-Seismic Safety Program § 128-1.8004 Seismic Safety Coordinators. (a) The Justice Management Division shall designate an...
41 CFR 128-1.8004 - Seismic Safety Coordinators.
Code of Federal Regulations, 2012 CFR
2012-01-01
... 41 Public Contracts and Property Management 3 2012-01-01 2012-01-01 false Seismic Safety... Management Regulations System (Continued) DEPARTMENT OF JUSTICE 1-INTRODUCTION 1.80-Seismic Safety Program § 128-1.8004 Seismic Safety Coordinators. (a) The Justice Management Division shall designate an...
41 CFR 128-1.8004 - Seismic Safety Coordinators.
Code of Federal Regulations, 2013 CFR
2013-07-01
... 41 Public Contracts and Property Management 3 2013-07-01 2013-07-01 false Seismic Safety... Management Regulations System (Continued) DEPARTMENT OF JUSTICE 1-INTRODUCTION 1.80-Seismic Safety Program § 128-1.8004 Seismic Safety Coordinators. (a) The Justice Management Division shall designate an...
41 CFR 128-1.8004 - Seismic Safety Coordinators.
Code of Federal Regulations, 2011 CFR
2011-01-01
... 41 Public Contracts and Property Management 3 2011-01-01 2011-01-01 false Seismic Safety... Management Regulations System (Continued) DEPARTMENT OF JUSTICE 1-INTRODUCTION 1.80-Seismic Safety Program § 128-1.8004 Seismic Safety Coordinators. (a) The Justice Management Division shall designate an...
DOE Office of Scientific and Technical Information (OSTI.GOV)
Nie, J.; Braverman, J.; Hofmayer, C.
2010-06-30
The Korea Atomic Energy Research Institute (KAERI) is conducting a five-year research project to develop a realistic seismic risk evaluation system which includes the consideration of aging of structures and components in nuclear power plants (NPPs). The KAERI research project includes three specific areas that are essential to seismic probabilistic risk assessment (PRA): (1) probabilistic seismic hazard analysis, (2) seismic fragility analysis including the effects of aging, and (3) plant seismic risk analysis. Since 2007, Brookhaven National Laboratory (BNL) has been engaged in a collaboration agreement with KAERI to support its development of seismic capability evaluation technology for degraded structures and components. The collaborative research effort is intended to continue over a five-year period. The goal of this collaboration is to assist KAERI in developing seismic fragility analysis methods that consider the potential effects of age-related degradation of structures, systems, and components (SSCs). The research results of this multi-year collaboration will be utilized as input to seismic PRAs. In the Year 1 scope of work, BNL collected and reviewed degradation occurrences in US NPPs and identified important aging characteristics needed for the seismic capability evaluations. This information is presented in the Annual Report for the Year 1 Task, identified as BNL Report-81741-2008 and also designated as KAERI/RR-2931/2008. That report presents results of the statistical and trending analysis of the data and compares the results to prior aging studies. In addition, it provides a description of current U.S. regulatory requirements, regulatory guidance documents, generic communications, industry standards and guidance, and past research related to aging degradation of SSCs. In the Year 2 scope of work, BNL carried out a research effort to identify and assess degradation models for the long-term behavior of dominant materials that are determined to be risk significant to NPPs. Multiple models have been identified for concrete, carbon and low-alloy steel, and stainless steel. These models are documented in the Annual Report for the Year 2 Task, identified as BNL Report-82249-2009 and also designated as KAERI/TR-3757/2009. This report describes the research effort performed by BNL for the Year 3 scope of work. The objective is for BNL to develop the seismic fragility capacity for a condensate storage tank with various degradation scenarios. The conservative deterministic failure margin method has been utilized for the undegraded case and has been modified to accommodate the degraded cases. A total of five seismic fragility analysis cases are described: (1) the undegraded case, (2) a degraded stainless steel tank shell, (3) degraded anchor bolts, (4) anchorage concrete cracking, and (5) a combination of the three degradation scenarios. Insights from these fragility analyses are also presented.
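Seismic fragility results of the kind developed in this work are conventionally expressed with the lognormal fragility model; the sketch below evaluates such curves and the associated HCLPF capacity. The median capacity and logarithmic standard deviations shown are hypothetical, not the values computed for the condensate storage tank.

import numpy as np
from scipy.stats import norm

def fragility(a, A_m, beta_r, beta_u, confidence=None):
    """Lognormal seismic fragility: probability of failure at ground-motion level a (g).

    A_m    : median ground-acceleration capacity (g)
    beta_r : logarithmic standard deviation for randomness
    beta_u : logarithmic standard deviation for uncertainty
    If `confidence` is given (e.g. 0.95), the curve at that confidence level is
    returned; otherwise the composite (mean) fragility curve is returned.
    """
    a = np.asarray(a, float)
    if confidence is None:
        beta_c = np.hypot(beta_r, beta_u)
        return norm.cdf(np.log(a / A_m) / beta_c)
    return norm.cdf((np.log(a / A_m) + beta_u * norm.ppf(confidence)) / beta_r)

# HCLPF capacity (roughly what a CDFM evaluation targets): about 1% failure
# probability at 95% confidence.
A_m, beta_r, beta_u = 1.2, 0.25, 0.35            # hypothetical tank capacity values
hclpf = A_m * np.exp(-1.645 * (beta_r + beta_u))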
76 FR 39133 - Agency Information Collection Activities: Proposed Collection; Comment Request
Federal Register 2010, 2011, 2012, 2013, 2014
2011-07-05
... assess the adequacy of proposed seismic design bases and the design bases for other site hazards for... sited, designed, constructed, and maintained to withstand geologic hazards, such as faulting, seismic... potential man-made hazards will be appropriately accounted for in the design of nuclear power and test...
49 CFR 41.115 - New buildings to be leased for DOT occupancy.
Code of Federal Regulations, 2011 CFR
2011-10-01
... compliance may include the engineer's and architect's authenticated verifications of seismic design codes... design and construction of new buildings to be leased for DOT occupancy or use will ensure that each building is designed and constructed in accord with the seismic design and construction standards set out...
49 CFR 41.115 - New buildings to be leased for DOT occupancy.
Code of Federal Regulations, 2010 CFR
2010-10-01
... compliance may include the engineer's and architect's authenticated verifications of seismic design codes... design and construction of new buildings to be leased for DOT occupancy or use will ensure that each building is designed and constructed in accord with the seismic design and construction standards set out...
Design and implementation of a unified certification management system based on seismic business
NASA Astrophysics Data System (ADS)
Tang, Hongliang
2018-04-01
Much business software for seismic systems is web-based: users can simply open a browser and enter an IP address. However, achieving unified management and security management of the many IP addresses involved remains a challenge. This paper introduces a design concept based on seismic business and builds a unified authentication management system using ASP technology.
Dealing With Shallow-Water Flow in the Deepwater Gulf of Mexico
NASA Astrophysics Data System (ADS)
Ostermeier, R.
2006-05-01
Some of the Shell experience in dealing with the shallow-water flow problem in the Deepwater Gulf of Mexico (GOM) will be presented. The nature of the problem, including areal extent and over-pressuring mechanisms, will be discussed. Methods for sand prediction and shallow sediment and flow characterization will be reviewed. These include seismic techniques, the use of geo-technical wells, regional trends, and various MWD methods. Some examples of flow incidents with pertinent drilling issues, including well failures and abandonment, will be described. To address the shallow-water flow problem, Shell created a multi-disciplinary team of specialists in geology, geophysics, petrophysics, drilling, and civil engineering. The team developed several methodologies to deal with various aspects of the problem. These include regional trends and databases, shallow seismic interpretation and sand prediction, well site and casing point selection, geo-technical well design and data interpretation, logging program design and interpretation, cementing design and fluids formulation, methods for remediation and mitigation of lost circulation, and so on. Shell's extensive Deepwater GOM drilling experience has led to a new understanding of the problem. Examples include delineation of trends in shallow-water flow occurrence and severity, trends and departures in PP/FG, rock properties pertaining to seismic identification of sands, and so on. New knowledge has also been acquired through the use of geo-technical wells. One example is the observed rapid onset and growth of over-pressures below the mudline. Total trouble costs due to shallow-water flow for all GOM operators almost certainly run into several hundred million dollars. Though the problem remains a concern, advances in our knowledge and understanding make it a problem that is manageable and not the "show stopper" once feared.
NASA Astrophysics Data System (ADS)
Gu, N.; Zhang, H.
2017-12-01
Seismic imaging of fault zones generally involves seismic velocity tomography using first arrival times or full waveforms from earthquakes occurring around the fault zones. However, in most cases seismic velocity tomography only gives a smooth image of the fault zone structure. To obtain high-resolution structure of the fault zones, seismic migration using active seismic data needs to be used. However, it is generally too expensive to conduct active seismic surveys, even for 2D. Here we propose to apply a passive seismic imaging method based on seismic interferometry to image detailed fault zone structures. Seismic interferometry generally refers to the construction of new seismic records for virtual sources and receivers by cross correlating and stacking the seismic records on physical receivers from physical sources. In this study, we utilize seismic waveforms recorded on surface seismic stations for each earthquake to construct a zero-offset seismic record at each earthquake location, as if there were a virtual receiver at each earthquake location. We have applied this method to image the fault zone structure around the 2013 Mw6.6 Lushan earthquake. After the occurrence of the mainshock, a 29-station temporary array was installed to monitor aftershocks. In this study, we first select aftershocks along several vertical cross sections approximately normal to the fault strike. Then we create several zero-offset seismic reflection sections by seismic interferometry with seismic waveforms from aftershocks around each section. Finally, we migrate these zero-offset sections to create seismic structures around the fault zones. From these migration images, we can clearly identify strong reflectors, which correspond to the major reverse fault on which the mainshock occurred. This application shows that it is possible to image detailed fault zone structures with passive seismic sources.
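As a rough illustration of the cross-correlate-and-stack idea described in this abstract, the following NumPy sketch builds a virtual trace from many passive recordings at two stations; the array shapes, sample interval, and random input data are assumptions for demonstration, not the authors' processing workflow.

```python
import numpy as np

def virtual_trace(records_a, records_b, dt):
    """Construct a virtual trace between two stations by cross-correlating
    their recordings of many passive events and stacking the correlations.

    records_a, records_b -- arrays of shape (n_events, n_samples)
    dt                   -- sample interval in seconds
    Returns the lag axis and the stacked correlation.
    """
    n_events, n_samples = records_a.shape
    stack = np.zeros(2 * n_samples - 1)
    for a, b in zip(records_a, records_b):
        stack += np.correlate(a, b, mode="full")
    stack /= n_events
    lags = (np.arange(2 * n_samples - 1) - (n_samples - 1)) * dt
    return lags, stack

# Hypothetical use with random data standing in for aftershock waveforms.
rng = np.random.default_rng(0)
lags, trace = virtual_trace(rng.standard_normal((50, 1000)),
                            rng.standard_normal((50, 1000)), dt=0.004)
```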
Estimating Local and Near-Regional Velocity and Attenuation Structure from Seismic Noise
2008-09-30
Mean-phase velocity-dispersion curves are calculated for the TUCAN seismic array in Costa Rica and Nicaragua from ambient seismic noise using two independent methods, noise cross correlation and beamforming. ... stations of the TUCAN seismic array (Figure 4c) using a method similar to Harmon et al. (2007). Variations from Harmon et al. (2007) include removing the ...
49 CFR 41.119 - DOT regulated buildings.
Code of Federal Regulations, 2011 CFR
2011-10-01
... compliance may include the engineer's and architect's authenticated verification of seismic design codes... and additions to existing buildings will ensure that each DOT regulated building is designed and constructed in accord with seismic design and construction standards as provided by this part. (b) This...
DSOD Procedures for Seismic Hazard Analysis
NASA Astrophysics Data System (ADS)
Howard, J. K.; Fraser, W. A.
2005-12-01
DSOD, which has jurisdiction over more than 1200 dams in California, routinely evaluates their dynamic stability using seismic shaking input ranging from simple pseudostatic coefficients to spectrally matched earthquake time histories. Our seismic hazard assessments assume maximum earthquake scenarios of nearest active and conditionally active seismic sources. Multiple earthquake scenarios may be evaluated depending on sensitivity of the design analysis (e.g., to certain spectral amplitudes, duration of shaking). Active sources are defined as those with evidence of movement within the last 35,000 years. Conditionally active sources are those with reasonable expectation of activity, which are treated as active until demonstrated otherwise. The Division's Geology Branch develops seismic hazard estimates using spectral attenuation formulas applicable to California. The formulas were selected, in part, to achieve a site response model similar to the 2000 IBC's for rock, soft rock, and stiff soil sites. The level of dynamic loading used in the stability analysis (50th, 67th, or 84th percentile ground shaking estimates) is determined using a matrix that considers consequence of dam failure and fault slip rate. We account for near-source directivity amplification along such faults by adjusting target response spectra and developing appropriate design earthquakes for analysis of structures sensitive to long-period motion. Based on in-house studies, the orientation of the dam analysis section relative to the fault-normal direction is considered for strike-slip earthquakes, but directivity amplification is assumed in any orientation for dip-slip earthquakes. We do not have probabilistic standards, but we evaluate the probability of our ground shaking estimates using hazard curves constructed from the USGS Interactive De-Aggregation website. Typically, return periods for our design loads exceed 1000 years. Excessive return periods may warrant a lower design load. Minimum shaking levels are provided for sites far from active faulting. Our procedures and standards are presented at the DSOD website http://damsafety.water.ca.gov/. We review our methods and tools periodically under the guidance of our Consulting Board for Earthquake Analysis (and expect to make changes pending NGA completion), mindful that frequent procedural changes can interrupt design evaluations.
NASA Astrophysics Data System (ADS)
Liang, Li; Takaaki, Ohkubo; Guang-hui, Li
2018-03-01
In recent years, earthquakes have occurred frequently, and the seismic performance of existing school buildings has become particularly important. The main method for improving the seismic resistance of existing buildings is reinforcement. However, there are few effective methods to evaluate the effect of reinforcement. Ambient vibration measurement experiments were conducted before and after seismic retrofitting using a wireless measurement system, and the changes in vibration characteristics were compared. The changes in the acceleration response spectrum, natural periods and vibration modes indicate that the wireless vibration measurement system can be effectively applied to evaluate the effect of seismic retrofitting. Although the method can evaluate the effect of seismic retrofitting qualitatively, it remains difficult to evaluate it quantitatively at this stage.
NASA Astrophysics Data System (ADS)
Wang, Yayong
2010-06-01
A large number of buildings were seriously damaged or collapsed in the “5.12” Wenchuan earthquake. Based on field surveys and studies of damage to different types of buildings, seismic design codes have been updated. This paper briefly summarizes some of the major revisions that have been incorporated into the “Standard for classification of seismic protection of building constructions GB50223-2008” and “Code for Seismic Design of Buildings GB50011-2001.” The definition of seismic fortification class for buildings has been revisited, and as a result, the seismic classifications for schools, hospitals and other buildings that hold large populations such as evacuation shelters and information centers have been upgraded in the GB50223-2008 Code. The main aspects of the revised GB50011-2001 code include: (a) modification of the seismic intensity specified for the Provinces of Sichuan, Shanxi and Gansu; (b) basic conceptual design for retaining walls and building foundations in mountainous areas; (c) regularity of building configuration; (d) integration of masonry structures and pre-cast RC floors; (e) requirements for calculating and detailing stair shafts; and (f) limiting the use of single-bay RC frame structures. Some significant examples of damage in the epicenter areas are provided as a reference in the discussion on the consequences of collapse, the importance of duplicate structural systems, and the integration of RC and masonry structures.
NASA Astrophysics Data System (ADS)
Moschetti, M. P.; Mueller, C. S.; Boyd, O. S.; Petersen, M. D.
2013-12-01
In anticipation of the update of the Alaska seismic hazard maps (ASHMs) by the U. S. Geological Survey, we report progress on the comparison of smoothed seismicity models developed using fixed and adaptive smoothing algorithms, and investigate the sensitivity of seismic hazard to the models. While fault-based sources, such as those for great earthquakes in the Alaska-Aleutian subduction zone and for the ~10 shallow crustal faults within Alaska, dominate the seismic hazard estimates for locations near to the sources, smoothed seismicity rates make important contributions to seismic hazard away from fault-based sources and where knowledge of recurrence and magnitude is not sufficient for use in hazard studies. Recent developments in adaptive smoothing methods and statistical tests for evaluating and comparing rate models prompt us to investigate the appropriateness of adaptive smoothing for the ASHMs. We develop smoothed seismicity models for Alaska using fixed and adaptive smoothing methods and compare the resulting models by calculating and evaluating the joint likelihood test. We use the earthquake catalog, and associated completeness levels, developed for the 2007 ASHM to produce fixed-bandwidth-smoothed models with smoothing distances varying from 10 to 100 km and adaptively smoothed models. Adaptive smoothing follows the method of Helmstetter et al. and defines a unique smoothing distance for each earthquake epicenter from the distance to the nth nearest neighbor. The consequence of the adaptive smoothing methods is to reduce smoothing distances, causing locally increased seismicity rates, where seismicity rates are high and to increase smoothing distances where seismicity is sparse. We follow guidance from previous studies to optimize the neighbor number (n-value) by comparing model likelihood values, which estimate the likelihood that the observed earthquake epicenters from the recent catalog are derived from the smoothed rate models. We compare likelihood values from all rate models to rank the smoothing methods. We find that adaptively smoothed seismicity models yield better likelihood values than the fixed smoothing models. Holding all other (source and ground motion) models constant, we calculate seismic hazard curves for all points across Alaska on a 0.1 degree grid, using the adaptively smoothed and fixed smoothed seismicity models separately. Because adaptively smoothed models concentrate seismicity near the earthquake epicenters where seismicity rates are high, the corresponding hazard values are higher, locally, but reduced with distance from observed seismicity, relative to the hazard from fixed-bandwidth models. We suggest that adaptively smoothed seismicity models be considered for implementation in the update to the ASHMs because of their improved likelihood estimates relative to fixed smoothing methods; however, concomitant increases in seismic hazard will cause significant changes in regions of high seismicity, such as near the subduction zone, northeast of Kotzebue, and along the NNE trending zone of seismicity in the Alaskan interior.
Moschetti, Morgan P.; Mueller, Charles S.; Boyd, Oliver S.; Petersen, Mark D.
2014-01-01
In anticipation of the update of the Alaska seismic hazard maps (ASHMs) by the U. S. Geological Survey, we report progress on the comparison of smoothed seismicity models developed using fixed and adaptive smoothing algorithms, and investigate the sensitivity of seismic hazard to the models. While fault-based sources, such as those for great earthquakes in the Alaska-Aleutian subduction zone and for the ~10 shallow crustal faults within Alaska, dominate the seismic hazard estimates for locations near to the sources, smoothed seismicity rates make important contributions to seismic hazard away from fault-based sources and where knowledge of recurrence and magnitude is not sufficient for use in hazard studies. Recent developments in adaptive smoothing methods and statistical tests for evaluating and comparing rate models prompt us to investigate the appropriateness of adaptive smoothing for the ASHMs. We develop smoothed seismicity models for Alaska using fixed and adaptive smoothing methods and compare the resulting models by calculating and evaluating the joint likelihood test. We use the earthquake catalog, and associated completeness levels, developed for the 2007 ASHM to produce fixed-bandwidth-smoothed models with smoothing distances varying from 10 to 100 km and adaptively smoothed models. Adaptive smoothing follows the method of Helmstetter et al. and defines a unique smoothing distance for each earthquake epicenter from the distance to the nth nearest neighbor. The consequence of the adaptive smoothing methods is to reduce smoothing distances, causing locally increased seismicity rates, where seismicity rates are high and to increase smoothing distances where seismicity is sparse. We follow guidance from previous studies to optimize the neighbor number (n-value) by comparing model likelihood values, which estimate the likelihood that the observed earthquake epicenters from the recent catalog are derived from the smoothed rate models. We compare likelihood values from all rate models to rank the smoothing methods. We find that adaptively smoothed seismicity models yield better likelihood values than the fixed smoothing models. Holding all other (source and ground motion) models constant, we calculate seismic hazard curves for all points across Alaska on a 0.1 degree grid, using the adaptively smoothed and fixed smoothed seismicity models separately. Because adaptively smoothed models concentrate seismicity near the earthquake epicenters where seismicity rates are high, the corresponding hazard values are higher, locally, but reduced with distance from observed seismicity, relative to the hazard from fixed-bandwidth models. We suggest that adaptively smoothed seismicity models be considered for implementation in the update to the ASHMs because of their improved likelihood estimates relative to fixed smoothing methods; however, concomitant increases in seismic hazard will cause significant changes in regions of high seismicity, such as near the subduction zone, northeast of Kotzebue, and along the NNE trending zone of seismicity in the Alaskan interior.
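The adaptive smoothing idea described in these two records, a per-event bandwidth set by the distance to the n-th nearest epicenter in the spirit of Helmstetter et al., can be sketched as follows; the flat-earth distance conversion, kernel form, and parameter values are simplifying assumptions rather than the implementation used for the ASHMs.

```python
import numpy as np

def adaptive_bandwidths(lon, lat, n_neighbor=5):
    """Per-event smoothing distance: distance (km) to the n-th nearest epicenter.
    A flat-earth degree-to-km conversion is used for brevity (an assumption)."""
    lon, lat = np.asarray(lon), np.asarray(lat)
    x = np.radians(lon) * 6371.0 * np.cos(np.radians(lat.mean()))
    y = np.radians(lat) * 6371.0
    pts = np.column_stack([x, y])
    d = np.linalg.norm(pts[:, None, :] - pts[None, :, :], axis=-1)
    d.sort(axis=1)                      # column 0 is the zero self-distance
    return pts, d[:, n_neighbor]

def smoothed_rate(pts, bandwidths, grid_pts):
    """Seismicity rate at grid nodes as a sum of 2-D Gaussian kernels,
    one per epicenter, each with its own adaptive bandwidth."""
    rate = np.zeros(len(grid_pts))
    for p, h in zip(pts, bandwidths):
        r2 = np.sum((grid_pts - p) ** 2, axis=1)
        rate += np.exp(-0.5 * r2 / h**2) / (2.0 * np.pi * h**2)
    return rate
```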
The design of L1-norm visco-acoustic wavefield extrapolators
NASA Astrophysics Data System (ADS)
Salam, Syed Abdul; Mousa, Wail A.
2018-04-01
Explicit depth frequency-space (f - x) prestack imaging is an attractive mechanism for seismic imaging. To date, the main focus of this method has been migration under the assumption of an acoustic medium, and very little work has considered visco-acoustic media. Real seismic data usually suffer from attenuation and dispersion effects. To compensate for attenuation in a visco-acoustic medium, new operators are required. We propose using the L1-norm minimization technique to design visco-acoustic f - x extrapolators. To show the accuracy and compensation of the operators, prestack depth migration is performed on the challenging Marmousi model for both acoustic and visco-acoustic datasets. The final migrated images show that the proposed L1-norm extrapolation is practically stable and improves image resolution.
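The abstract does not give the details of the L1-norm design, but a generic way to pose such a problem is min over h of ||A h - d||_1, where h holds the extrapolator coefficients and d is a desired complex response. The sketch below solves that generic problem with iteratively reweighted least squares (IRLS); the matrices, tolerances, and the use of IRLS itself are assumptions for illustration, not the authors' algorithm.

```python
import numpy as np

def l1_filter_design(A, d, n_iter=30, eps=1e-6):
    """IRLS sketch for min_h ||A h - d||_1 with complex A (m x n) and d (m,)."""
    h = np.linalg.lstsq(A, d, rcond=None)[0]              # least-squares start
    for _ in range(n_iter):
        r = A @ h - d
        sw = 1.0 / np.sqrt(np.maximum(np.abs(r), eps))    # sqrt of IRLS weights 1/|r|
        h = np.linalg.lstsq(A * sw[:, None], sw * d, rcond=None)[0]
    return h

# Hypothetical use: fit 15 coefficients to a desired response sampled at 200 points.
rng = np.random.default_rng(1)
A = rng.standard_normal((200, 15)) + 1j * rng.standard_normal((200, 15))
d = rng.standard_normal(200) + 1j * rng.standard_normal(200)
h = l1_filter_design(A, d)
```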
NASA Astrophysics Data System (ADS)
Zhou, L.; Xiao, G.
2014-12-01
The engineering geological and hydrological conditions of current tunnels are increasingly complicated as tunnels become longer and deeper. In constructing these complicated tunnels, geological hazards are prone to occur, induced by unfavorable geological bodies such as fault zones, karst or water-bearing structures. The emphasis and difficulty of advanced geological exploration for complicated tunnels lie mainly in the structure and water content of these unfavorable geological bodies. This paper systematically studies the theory and application of advanced geological exploration for complicated tunnels, discussing the key technical points and drawing useful conclusions. To ensure the completeness and accuracy of advanced geological exploration results, the objective of this paper is a comprehensive examination of the structure and water-bearing characteristics of unfavorable geological bodies in complicated tunnels. Multi-component seismic modeling on a more realistic model that includes the air medium allows the wave-field response characteristics of unfavorable geological bodies to be analyzed, providing a theoretical foundation for observation-system layout, signal processing and interpretation of seismic methods. Based on the tomographic imaging theory of seismic and electromagnetic methods, 2D integrated seismic and electromagnetic tomographic imaging and visualization software was designed and applied in advanced drilling holes in the tunnel face, after validation of the forward and inverse modeling results on theoretical models. The transmission-wave imaging technology introduced in this paper can serve as a new criterion for the detection of unfavorable geological bodies. After careful study of the basic theory, data processing and interpretation, and practical applications of the TSP and ground penetrating radar (GPR) methods, together with a close examination of their application examples, this paper formulates a comprehensive application system of seismic and electromagnetic methods for the advanced geological exploration of complicated tunnels. This research is funded by the National Natural Science Foundation of China (Grant No. 41202223).
NASA Astrophysics Data System (ADS)
Gao, Lingli; Pan, Yudi
2018-05-01
The correct estimation of the seismic source signature is crucial to exploration geophysics. Based on seismic interferometry, the virtual real source (VRS) method provides a model-independent way for source signature estimation. However, when encountering multimode surface waves, which are commonly seen in the shallow seismic survey, strong spurious events appear in seismic interferometric results. These spurious events introduce errors in the virtual-source recordings and reduce the accuracy of the source signature estimated by the VRS method. In order to estimate a correct source signature from multimode surface waves, we propose a mode-separated VRS method. In this method, multimode surface waves are mode separated before seismic interferometry. Virtual-source recordings are then obtained by applying seismic interferometry to each mode individually. Therefore, artefacts caused by cross-mode correlation are excluded in the virtual-source recordings and the estimated source signatures. A synthetic example showed that a correct source signature can be estimated with the proposed method, while strong spurious oscillation occurs in the estimated source signature if we do not apply mode separation first. We also applied the proposed method to a field example, which verified its validity and effectiveness in estimating seismic source signature from shallow seismic shot gathers containing multimode surface waves.
Seismic methods are the most commonly conducted geophysical surveys for engineering investigations. Seismic refraction provides engineers and geologists with the most basic of geologic data via simple procedures with common equipment.
Effects of seismic devices on transverse responses of piers in the Sutong Bridge
NASA Astrophysics Data System (ADS)
Shen, Xing; Camara, Alfredo; Ye, Aijun
2015-12-01
The Sutong Bridge in China opened to traffic in 2008, and is an arterial connection between the cities of Nantong and Suzhou. It is a cable-stayed bridge with a main span of 1,088 m. Due to a tight construction schedule and a lack of suitable seismic devices at the time, fixed supports were installed between the piers and the girder in the transverse direction. As a result, significant transverse seismic forces could occur in the piers and foundations, especially for an earthquake with a 2,500-year return period. Therefore, the piers, foundations and fixed bearings had to be designed to be extraordinarily strong. However, when larger earthquakes occur, the bearings, piers and foundations are still vulnerable. The recent rapid developments in seismic technology and the performance-based design approach offer a better opportunity to optimize the transverse seismic design for the Sutong Bridge piers. The optimized design can be applied to the Sutong Bridge (as a retrofit), as well as to other bridges. Seismic design alternatives utilizing viscous fluid dampers (VFD), friction pendulum sliding bearings (FPSB), or transverse yielding metallic dampers (TYMD) are thoroughly studied in this work, and the results are compared with those from the current condition with fixed transverse supports and a hypothetical condition in which only sliding bearings are provided on top of the piers (the girder can move "freely" in the transverse direction during the earthquake, except for frictional forces of the sliding bearings). Parametric analyses were performed to optimize the design of these proposed seismic devices. From the comparison of the peak bridge responses in these configurations, it was found that both VFD and TYMD are very effective in reducing transverse seismic forces in the piers, while keeping the relative transverse displacements between the piers and the box girder within acceptable limits. However, compared to VFD, TYMD do not interact with the longitudinal displacements of the girder, and have simpler details and lower initial and maintenance costs. Although the use of FPSB can also reduce seismic forces, it generally causes the transverse relative displacements to exceed acceptable limits.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Erlangga, Mokhammad Puput
Separation of signal from noise, whether incoherent or coherent, is important in seismic data processing. Even after processing, coherent noise can remain mixed with the primary signal, and multiple reflections are one kind of coherent noise. In this research, we processed seismic data to attenuate multiple reflections in both synthetic data and real seismic data from Mentawai. Among the several methods available to attenuate multiples, the Radon filter discriminates between primary and multiple reflections in the τ-p domain based on the moveout difference between them. However, where the moveout difference is too small, the Radon filter is not sufficient to attenuate the multiples, and it also produces artifacts on the gathers. In addition to the Radon filter, we also used the Wave Equation Multiple Elimination (WEMR) method to attenuate long-period multiple reflections. The WEMR method attenuates long-period multiples based on wave-equation inversion: from the inversion of the wave equation and the magnitude of the seismic wave amplitude observed at the free surface, we obtain the water-bottom reflectivity, which is used to eliminate the multiples. Because the WEMR method does not depend on the moveout difference, it can be applied to seismic data with small moveout differences such as the Mentawai data, where the small moveout difference is caused by the limited far offset of only 705 m. We compared the multiple-free stacked data after processing with the Radon filter and with the WEMR process. The conclusion is that, on the real (Mentawai) seismic data, the WEMR method attenuates long-period multiple reflections more effectively than the Radon filter.
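For readers unfamiliar with the moveout discrimination mentioned above, the following is a minimal forward parabolic Radon (tau-q) stack. Real Radon demultiple uses a least-squares (often sparse) inverse transform and muting in the tau-q domain, so this nearest-sample forward stack is only an assumed, simplified illustration.

```python
import numpy as np

def parabolic_radon(gather, dt, offsets, q_values):
    """Forward parabolic Radon (tau-q) stack of an NMO-corrected CMP gather.

    gather   -- array (n_samples, n_offsets)
    dt       -- sample interval (s)
    offsets  -- offsets in metres
    q_values -- residual-moveout curvatures (s/m^2) to scan
    Primaries map near q = 0; multiples map to larger q and could be muted
    before an inverse transform.
    """
    n_t, _ = gather.shape
    model = np.zeros((n_t, len(q_values)))
    t = np.arange(n_t) * dt
    for iq, q in enumerate(q_values):
        for ix, x in enumerate(offsets):
            shift = t + q * x**2                    # parabolic residual moveout
            idx = np.rint(shift / dt).astype(int)   # nearest-sample interpolation
            valid = idx < n_t
            model[valid, iq] += gather[idx[valid], ix]
    return model
```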
A Numerical and Theoretical Study of Seismic Wave Diffraction in Complex Geologic Structure
1989-04-14
element methods for analyzing linear and nonlinear seismic effects in the surficial geologies relevant to several Air Force missions. The second...exact solution evaluated here indicates that edge-diffracted seismic wave fields calculated by discrete numerical methods probably exhibits significant...study is to demonstrate and validate some discrete numerical methods essential for analyzing linear and nonlinear seismic effects in the surficial
Borehole prototype for seismic high-resolution exploration
NASA Astrophysics Data System (ADS)
Giese, Rüdiger; Jaksch, Katrin; Krauß, Felix; Krüger, Kay; Groh, Marco; Jurczyk, Andreas
2014-05-01
Target reservoirs for the exploitation of hydrocarbons or hot water for geothermal energy supply can comprise small layered structures, for instance thin layers or faults. The resolution of 2D and 3D surface seismic methods is often not sufficient to determine and locate these structures. Borehole seismic methods like vertical seismic profiling (VSP) and seismic while drilling (SWD) use either receivers or sources within the borehole. Thus, the distance to the target horizon is reduced and higher resolution images of the geological structures can be achieved. Even these methods are limited in their resolution capabilities with increasing target depth. To localize structures more accurately, methods with a resolution in the range of meters are necessary. The project SPWD -- Seismic Prediction While Drilling -- aims at the development of a borehole prototype which combines seismic sources and receivers in one device to improve the seismic resolution. Within SPWD such a prototype has been designed, manufactured and tested. The SPWD-wireline prototype is divided into three main parts. The upper section comprises the electronic unit. The middle section includes the upper receiver, the upper clamping unit as well as the source unit and the lower clamping unit. The lower section consists of the lower receiver unit and the hydraulic unit. The total length of the prototype is nearly seven meters and its weight is about 750 kg. To focus the seismic waves in predefined directions along the borehole axis, a phased-array method is used. The source unit is equipped with four magnetostrictive vibrators. Each can be controlled independently to produce a common wave front in the desired direction of exploration. Source signal frequencies up to 5000 Hz are used, which allows resolutions up to one meter. In May and September 2013, field tests with the SPWD-wireline prototype were carried out at the KTB Deep Crustal Lab in Windischeschenbach (Bavaria). The aim was to prove the pressure-tightness and the functionality of the hydraulic system components of the borehole device. To monitor the prototype, four cameras and several moisture sensors were installed along the source and receiver units close to the extendable coupling stamps, where an infiltration of fluid is most probable. The tests lasted about 48 hours each. It was possible to extend and to retract the coupling stamps of the prototype down to a depth of 2100 m. No infiltration of borehole fluids into the SPWD-tool was observed. In preparation for the acoustic calibration measurements in the research and education mine of the TU Bergakademie Freiberg, seismic sources and receivers as well as the recording electronic devices were installed in the SPWD-wireline prototype at the GFZ. Afterwards, the SPWD-borehole device was transported to the GFZ-Underground-Lab, and preliminary test measurements to characterize the radiation pattern were carried out in the newly drilled vertical borehole in December 2013. Previous measurements with a laboratory borehole prototype demonstrated a dependence of the radiated seismic energy on the predefined amplification direction, the wave type and the signal frequencies. SPWD is funded by the German Federal Environment Ministry.
NASA Astrophysics Data System (ADS)
Brunt, M. R.; Ellins, K. K.; Frohlich, C. A.
2011-12-01
In 2008, during my participation in the NSF-sponsored Texas Earth & Space Science (TXESS) Revolution professional development program, I was awarded an AS-1 seismograph through IRIS's Seismographs in Schools Program. This program serves to create an international educational seismic network that allows teachers across the country and around the world to share seismic data in real-time using online tools, classroom activities, and technical support documents for seismic instruments. Soon after receiving my AS-1, I founded and began sponsoring the Eagle Pass Jr. High Seismology Team, which consists of selected 7th and 8th grade students. Eagle Pass Jr. High is a Title 1 school that serves a predominantly "at-risk" Hispanic population. We meet after school once a week to learn about earthquakes and seismic waves, analyze recorded seismic event data using computer software, and correspond with other students from schools around the country. This team approach has been well received by fellow TXESS Revolution teachers with AS-1 seismographs and will be implemented by David Boyd, STEM coordinator for Williams Preparatory Academy in Dallas, Texas, in fall 2011. All earthquakes recorded by our seismograph station (EPTX), which has remained online and actively recording seismic data since 2008, are catalogued and then plotted on a large world map displayed on my classroom wall. A real-time seismogram image updates every five minutes and, along with all earthquakes recorded since installation, can be viewed on our webpage http://www.iris.edu/hq/ssn/schools/view/eptx. During the 2010-2011 school year, my seismology team and I participated in an earthquake research study led by Dr. Cliff Frohlich at the Institute for Geophysics. The study examined seismograms and felt reports for the 25 April 2010 Alice, Texas, earthquake, in order to investigate its possible connection to oil and gas production in the Stratton oil and gas field. A research paper detailing our findings has been submitted for publication in the Bulletin of the Seismological Society of America. Most recently, I was one of 15 teachers selected for a summer seismic methods workshop at UT-Austin offered by Dr. Clark Wilson. We conducted field seismic imaging, field shear wave velocity measurements for geotechnical earthquake engineering design, data reduction, and science curriculum design. I plan to incorporate these seismic methods concepts into my school seismology team program. Since my participation in the TXESS Revolution, I have been blessed with opportunities that I never could have imagined. As a teacher, these experiences increased my knowledge and skills, provided tools and resources, and enabled me to create authentic research experiences for my students that promote teamwork and teach the nature of science.
Development and Performance of the Alaska Transportable Array Posthole Broadband Seismic Station
NASA Astrophysics Data System (ADS)
Aderhold, K.; Enders, M.; Miner, J.; Bierma, R. M.; Bloomquist, D.; Theis, J.; Busby, R. W.
2017-12-01
The final stations of the Alaska Transportable Array (ATA) will be constructed in 2017, completing the full footprint of 280 new and existing broadband seismic stations stretching across 19 degrees of latitude from western Alaska to western Canada. Through significant effort in planning, site reconnaissance, and permitting, and the concerted effort of field crews, the IRIS Alaska TA team is on schedule to successfully complete the construction of 194 new stations and upgrades at 28 existing stations over four field seasons. The station design and installation method was developed over the course of several years, leveraging the experience of the L48 TA deployments and existing network operators in Alaska as well as incorporating newly engineered components and procedures. A purpose-built lightweight drill was designed and fabricated to facilitate the construction of shallow boreholes to incorporate newly available posthole seismometers. This allowed for the development of a streamlined system of procedures to manufacture uniform seismic stations with minimal crew and minimal time required at each station location. A new station can typically be constructed in a single day with a four-person field crew. The ATA utilizes a hammer-drilled, cased posthole emplacement method adapted to the remote and harsh working environment of Alaska. The same emplacement design is implemented in all ground conditions to preserve uniformity across the array and eliminate the need for specialized mechanical equipment. All components for station construction are ideally suited for transport via helicopter, and can be adapted to utilize more traditional methods of transportation when available. This emplacement design delivers high quality data when embedded in bedrock or permafrost, reaching the low noise levels of benchmark permanent global broadband stations, especially at long periods over 70 seconds. The TA will operate the network of real-time stations through at least 2019, with service trips planned on an "as needed" basis to continue providing greater than 95% data return.
An automated multi-scale network-based scheme for detection and location of seismic sources
NASA Astrophysics Data System (ADS)
Poiata, N.; Aden-Antoniow, F.; Satriano, C.; Bernard, P.; Vilotte, J. P.; Obara, K.
2017-12-01
We present a recently developed method - BackTrackBB (Poiata et al. 2016) - which allows imaging of energy radiation from different seismic sources (e.g., earthquakes, LFEs, tremors) in different tectonic environments using continuous seismic records. The method exploits multi-scale frequency-selective coherence in the wave field, recorded by regional seismic networks or local arrays. The detection and location scheme is based on space-time reconstruction of the seismic sources through an imaging function built from the sum of station-pair time-delay likelihood functions, projected onto theoretical 3D time-delay grids. This imaging function is interpreted as the location likelihood of the seismic source. A signal pre-processing step constructs a multi-band statistical representation of the non-stationary signal (i.e., the time series) by means of higher-order statistics or energy envelope characteristic functions. Such signal processing is designed to detect signal transients in time - of different scales and a priori unknown predominant frequency - potentially associated with a variety of sources (e.g., earthquakes, LFEs, tremors), and to improve the performance and the robustness of the detection-and-location step. The initial detection-location, based on a single-phase analysis with the P- or S-phase only, can then be improved recursively in a station selection scheme. This scheme - exploiting the 3-component records - makes use of P- and S-phase characteristic functions, extracted after a polarization analysis of the event waveforms, and combines the single-phase imaging functions with the S-P differential imaging functions. The performance of the method is demonstrated here in different tectonic environments: (1) analysis of the one-year-long precursory phase of the 2014 Iquique earthquake in Chile; (2) detection and location of tectonic tremor sources and low-frequency earthquakes during the multiple episodes of tectonic tremor activity in southwestern Japan.
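A heavily simplified, assumed stand-in for the backprojection idea (not the BackTrackBB implementation, which uses station-pair time-delay likelihood functions) is a delay-and-stack of envelope characteristic functions over a grid of trial source locations:

```python
import numpy as np
from scipy.signal import hilbert

def backproject(waveforms, travel_times, dt):
    """Delay-and-stack imaging sketch for network-based detection/location.

    waveforms    -- (n_stations, n_samples) continuous records
    travel_times -- (n_nodes, n_stations) P (or S) travel times to each grid node (s)
    Returns an (n_nodes, n_samples) image whose maxima indicate candidate
    source positions and origin times. Travel times are assumed much shorter
    than the record length.
    """
    cf = np.abs(hilbert(waveforms, axis=1))          # envelope characteristic function
    n_nodes, n_sta = travel_times.shape
    n_samp = cf.shape[1]
    image = np.zeros((n_nodes, n_samp))
    for i in range(n_nodes):
        for s in range(n_sta):
            shift = min(int(round(travel_times[i, s] / dt)), n_samp)
            image[i, : n_samp - shift] += cf[s, shift:]
        image[i] /= n_sta
    return image
```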
Development of the Multi-Level Seismic Receiver (MLSR)
NASA Astrophysics Data System (ADS)
Sleefe, G. E.; Engler, B. P.; Drozda, P. M.; Franco, R. J.; Morgan, Jeff
1995-02-01
The Advanced Geophysical Technology Department (6114) and the Telemetry Technology Development Department (2664) have, in conjunction with the Oil Recovery Technology Partnership, developed a Multi-Level Seismic Receiver (MLSR) for use in crosswell seismic surveys. The MLSR was designed and evaluated with the significant support of many industry partners in the oil exploration industry. The unit was designed to record and process superior quality seismic data operating in severe borehole environments, including high temperature (up to 200 C) and static pressure (10,000 psi). This development has utilized state-of-the-art technology in transducers, data acquisition, and real-time data communication and data processing. The mechanical design of the receiver has been carefully modeled and evaluated to insure excellent signal coupling into the receiver.
Signal-to-noise ratio application to seismic marker analysis and fracture detection
NASA Astrophysics Data System (ADS)
Xu, Hui-Qun; Gui, Zhi-Xian
2014-03-01
Seismic data with high signal-to-noise ratios (SNRs) are useful in reservoir exploration. To obtain high-SNR seismic data, significant effort is required to achieve noise attenuation in seismic data processing, which is costly in material, human, and financial resources. We introduce a method for improving the SNR of seismic data. The SNR is calculated using a frequency-domain method. Furthermore, we optimize and discuss the critical parameters and the calculation procedure. We applied the proposed method to real data and found that the SNR is high at the seismic marker and low in the fracture zone. Consequently, this can be used to extract detailed information about fracture zones that are inferred by structural analysis but not observed in conventional seismic data.
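The abstract does not spell out the frequency-domain SNR formula, so the sketch below uses one common convention: the coherent signal is approximated by the stack of a gather and the noise by each trace's residual from that stack. This choice is an assumption for illustration only.

```python
import numpy as np

def freq_domain_snr(traces):
    """Frequency-domain SNR estimate for a gather sharing the same signal.

    traces -- (n_traces, n_samples) array
    Returns the SNR spectrum and a broadband SNR in dB.
    """
    stack = traces.mean(axis=0)                      # signal estimate
    noise = traces - stack                           # residuals as noise estimate
    sig_power = np.abs(np.fft.rfft(stack)) ** 2
    noise_power = np.mean(np.abs(np.fft.rfft(noise, axis=1)) ** 2, axis=0)
    snr_spectrum = sig_power / np.maximum(noise_power, 1e-12)
    return snr_spectrum, 10.0 * np.log10(sig_power.sum() / noise_power.sum())
```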
DOE Office of Scientific and Technical Information (OSTI.GOV)
Spears, Robert Edward; Coleman, Justin Leigh
Currently the Department of Energy (DOE) and the nuclear industry perform seismic soil-structure interaction (SSI) analysis using equivalent linear numerical analysis tools. For lower levels of ground motion, these tools should produce reasonable in-structure response values for evaluation of existing and new facilities. For larger levels of ground motion these tools likely overestimate the in-structure response (and therefore structural demand) since they do not consider geometric nonlinearities (such as gaping and sliding between the soil and structure) and are limited in the ability to model nonlinear soil behavior. The current equivalent linear SSI (SASSI) analysis approach either joins the soil and structure together in both tension and compression or releases the soil from the structure for both tension and compression. It also makes linear approximations for material nonlinearities and generalizes energy absorption with viscous damping. This produces the potential for inaccurately establishing where the structural concerns exist and/or inaccurately establishing the amplitude of the in-structure responses. Seismic hazard curves at nuclear facilities have continued to increase over the years as more information has been developed on seismic sources (i.e., faults), additional information has been gathered on seismic events, and additional research has been performed to determine local site effects. Seismic hazard curves are used to develop design basis earthquakes (DBE) that are used to evaluate nuclear facility response. As the seismic hazard curves increase, the input ground motions (DBEs) used to numerically evaluate nuclear facility response increase, causing larger in-structure responses. As ground motions increase, so does the importance of including nonlinear effects in numerical SSI models. To include material nonlinearity in the soil and geometric nonlinearity using contact (gaping and sliding), it is necessary to develop a nonlinear time domain methodology. This methodology will be known as NonLinear Soil-Structure Interaction (NLSSI). In general, NLSSI analysis should provide a more accurate representation of the seismic demands on nuclear facilities, their systems, and components. INL, in collaboration with a Nuclear Power Plant Vendor (NPP-V), will develop a generic Nuclear Power Plant (NPP) structural design to be used in development of the methodology and for comparison with SASSI. This generic NPP design has been evaluated for the INL soil site because of the ease of access and quality of the site-specific data. It is now being evaluated for a second site at Vogtle, which is located approximately 15 miles east-northeast of Waynesboro, Georgia, adjacent to the Savannah River. The Vogtle site consists of many soil layers spanning down to a depth of 1058 feet. Two soil sites are chosen in order to demonstrate the methodology across multiple soil sites. The project will drive the models (soil and structure) using acceleration time histories with successively increasing amplitudes. The models will be run in time domain codes such as ABAQUS, LS-DYNA, and/or ESSI and compared with the same models run in SASSI. The project is focused on developing and documenting a method for performing time domain, nonlinear seismic soil-structure interaction (SSI) analysis. Development of this method will provide the Department of Energy (DOE) and industry with another tool to perform seismic SSI analysis.
Safety Identifying of Integral Abutment Bridges under Seismic and Thermal Loads
Easazadeh Far, Narges; Barghian, Majid
2014-01-01
Integral abutment bridges (IABs) have many advantages over conventional bridges in terms of strength and maintenance cost. Due to the integrity of these structures, uniform thermal and seismic loads are known to be important to their performance. Although all bridge design codes consider temperature and earthquake loads separately in their load combinations for conventional bridges, the thermal load is an "always on" load and, during the occurrence of an earthquake, these two important loads act on the bridge simultaneously. Evaluating the safety level of IABs under the combination of these loads therefore becomes important. In this paper, the safety of IABs designed to the AASHTO LRFD bridge design code is studied under the combination of thermal and seismic loads. To fulfill this aim, the target reliability indexes under seismic load were first calculated. Then, the analyses were repeated for the same bridge under the combination of thermal and seismic loads, and the resulting reliability indexes were compared with the target indexes. It is shown that, for an IAB designed to AASHTO LRFD, the indexes are reduced under the combined effects, so the target level of safety during the design life is not achieved and the code's load combination should be changed. PMID:25405232
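For orientation, the reliability index being compared against target values is, in its simplest first-order second-moment form, beta = (mu_R - mu_S) / sqrt(sigma_R^2 + sigma_S^2) for independent, normally distributed resistance R and load effect S. The numbers in the sketch below are hypothetical and only illustrate how adding a coincident thermal demand can lower beta; the paper's analysis is far more detailed.

```python
import numpy as np

def reliability_index(mu_R, sigma_R, mu_S, sigma_S):
    """First-order second-moment reliability index for independent normal R and S."""
    return (mu_R - mu_S) / np.sqrt(sigma_R**2 + sigma_S**2)

# Hypothetical numbers: a coincident thermal demand raises the mean load effect
# and lowers beta relative to the seismic-only case.
beta_seismic_only = reliability_index(mu_R=1200.0, sigma_R=150.0, mu_S=700.0, sigma_S=120.0)
beta_with_thermal = reliability_index(mu_R=1200.0, sigma_R=150.0, mu_S=850.0, sigma_S=130.0)
print(round(beta_seismic_only, 2), round(beta_with_thermal, 2))
```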
Blind tests of methods for InSight Mars mission: Open scientific challenge
NASA Astrophysics Data System (ADS)
Clinton, John; Ceylan, Savas; Giardini, Domenico; Khan, Amir; van Driel, Martin; Böse, Maren; Euchner, Fabian; Garcia, Raphael F.; Drilleau, Mélanie; Lognonné, Philippe; Panning, Mark; Banerdt, Bruce
2017-04-01
The Marsquake Service (MQS) will be the ground segment service within the InSight mission to Mars, which will deploy a single seismic station on Elysium Planitia in November 2018. The main tasks of the MQS are the identification and characterisation of seismicity, and managing the Martian seismic event catalogue. In advance of the mission, we have developed a series of single-station event location methods that rely on a priori 1D and 3D structural models. In coordination with the Mars Structural Service, we expect to use iterative inversion techniques to revise these structural models and event locations. In order to seek methodological advancements and test our current approaches, we have designed a blind test case using Martian synthetics combined with realistic noise models for the Martian surface. We invite all scientific parties that are interested in single-station approaches and in exploring the Martian time series to participate and contribute to our blind test. We anticipate the test will improve currently developed location and structural inversion techniques, and also allow us to explore new single-station techniques for moment tensor and magnitude determination. The waveforms for our test case are computed employing AxiSEM and Instaseis for a randomly selected 1D background model and an event catalogue that is statistically consistent with our current expectation of Martian seismicity. Realistic seismic surface noise is superimposed to generate a continuous time series spanning 6 months. The event catalogue includes impacts as well as Martian quakes. The temporal distribution of the seismicity in the time series, as well as the true structural model, will not be known to any participating party, including the MQS, until the end of the competition. We provide our internal tools, such as event location codes, a suite of background models, and seismic phase travel times, to support researchers who are willing to use or improve our current methods. Following the deadline of our blind test in late 2017, we plan to combine all outcomes in an article with all participants as co-authors.
NASA Astrophysics Data System (ADS)
Watkins, M.; Busby, R.; Rico, H.; Johnson, M.; Hauksson, E.
2003-12-01
We provide enhanced network robustness by apportioning redundant data communications paths for seismic stations in the field. By providing for more than one telemetry route, either physical or logical, network operators can improve availability of seismic data while experiencing occasional network outages, and also during the loss of key gateway interfaces such as a router or central processor. This is especially important for seismic stations in sparsely populated regions where the loss of a single site may result in a significant gap in the network's monitoring capability. A number of challenges arise in the application of a circuit-detour mechanism. One requirement is that it fits well within the existing framework of our real-time system processing. It is also necessary to craft a system that is not needlessly complex to maintain or implement, particularly during a crisis. The method that we use for circuit-detours does not require the reconfiguration of dataloggers or communications equipment in the field. Remote network configurations remain static; changes are only required at the central site. We have implemented standardized procedures to detour circuits on similar transport media, such as virtual circuits on the same leased line, as well as on physically different communications pathways, such as a microwave link backed up by a leased line. The lessons learned from these reliability improvements and optimization efforts could be applied to other real-time seismic networks. A fundamental tenet of most seismic networks is that they are reliable and have a high percentage of real-time data availability. A reasonable way to achieve these expectations is to provide alternate means of delivering data to the central processing sites, with a simple method for utilizing these alternate paths.
NASA Astrophysics Data System (ADS)
Benjumea, Beatriz; Macau, Albert; Gabàs, Anna; Figueras, Sara
2016-04-01
We combine geophysical well logging and passive seismic measurements to characterize the near-surface geology of an area located in Hontomin, Burgos (Spain). This area presents some near-surface challenges for a geophysical study. The irregular topography is characterized by limestone outcrops and areas of unconsolidated sediments. Additionally, the near-surface geology includes an upper layer of pure limestones overlying marly limestones and marls (Upper Cretaceous). These materials lie on top of Lower Cretaceous siliciclastic sediments (sandstones, clays, gravels). In any case, a layer with reduced velocity is expected. The geophysical data sets used in this study include sonic and gamma-ray logs at two boreholes and passive seismic measurements: three arrays and 224 seismic stations for applying the horizontal-to-vertical spectral ratio method (H/V). Well-logging data define two significant changes in the P-wave-velocity log within the Upper Cretaceous layer and one more at the Upper to Lower Cretaceous contact. This technique has also been used for refining the geological interpretation. The passive seismic measurements provide a map of sediment thickness with a maximum of around 40 m, and shear-wave velocity profiles from the array technique. A comparison between the seismic velocities from well logging and from the array measurements defines the resolution limits of the passive seismic techniques and aids their interpretation. This study shows how these low-cost techniques can provide useful information about near-surface complexity that could be used for designing a geophysical field survey or for seismic processing steps such as statics or imaging.
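A minimal sketch of the H/V computation for a single noise window is given below; production processing averages many windows and typically applies Konno-Ohmachi smoothing, so the simple moving-average smoothing and window handling here are assumptions.

```python
import numpy as np

def hv_ratio(north, east, vertical, dt, smooth=9):
    """Horizontal-to-vertical spectral ratio for one ambient-noise window.

    Returns the frequency axis and the H/V curve; the peak frequency relates
    to sediment thickness and shear-wave velocity.
    """
    def spec(x):
        x = np.asarray(x, dtype=float)
        return np.abs(np.fft.rfft(x * np.hanning(len(x))))

    freqs = np.fft.rfftfreq(len(vertical), dt)
    h = np.sqrt(spec(north) * spec(east))          # geometric mean of horizontals
    v = spec(vertical)
    kernel = np.ones(smooth) / smooth              # crude moving-average smoothing
    hv = np.convolve(h, kernel, "same") / np.maximum(np.convolve(v, kernel, "same"), 1e-12)
    return freqs, hv
```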
Seismic design guidelines for highway bridges
NASA Astrophysics Data System (ADS)
Mayes, R. L.; Sharpe, R. L.
1981-10-01
Guidelines for the seismic design of highway bridges are given. The guidelines are the recommendations of a team of nationally recognized experts which included consulting engineers, academicians, State highway, and Federal agency representatives from throughout the United States. The guidelines are comprehensive in nature and they embody several new concepts which are significant departures from existing design provisions. An extensive commentary documenting the basis for the guidelines and an example demonstrating their use are included. A draft of the guidelines was used to seismically redesign twenty-one bridges. A summary of the redesigns is included.
Towards a first design of a Newtonian-noise cancellation system for Advanced LIGO
NASA Astrophysics Data System (ADS)
Coughlin, M.; Mukund, N.; Harms, J.; Driggers, J.; Adhikari, R.; Mitra, S.
2016-12-01
Newtonian gravitational noise from seismic fields is predicted to be a limiting noise source at low frequency for second generation gravitational-wave detectors. Mitigation of this noise will be achieved by Wiener filtering using arrays of seismometers deployed in the vicinity of all test masses. In this work, we present optimized configurations of seismometer arrays using a variety of simplified models of the seismic field based on seismic observations at LIGO Hanford. The model that best fits the seismic measurements leads to noise reduction limited predominantly by seismometer self-noise. A first simplified design of seismic arrays for Newtonian-noise cancellation at the LIGO sites is presented, which suggests that it will be sufficient to monitor surface displacement inside the buildings.
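As a simplified, single-witness illustration of the Wiener filtering mentioned above, the sketch below solves the FIR normal equations R w = p for one seismometer channel predicting a target channel; the multi-seismometer array case generalizes R and p to blocks over all witness channels, and the tap count here is an arbitrary assumption.

```python
import numpy as np
from scipy.linalg import toeplitz

def wiener_fir(witness, target, n_taps=64):
    """Single-witness FIR Wiener filter: taps w minimising E[(target - w*witness)^2].

    witness, target -- equal-length 1-D arrays
    Returns the filter taps and the residual after subtracting the prediction.
    """
    n = len(witness)
    r = np.correlate(witness, witness, "full")[n - 1:] / n   # autocorrelation at lags >= 0
    p = np.correlate(target, witness, "full")[n - 1:] / n    # cross-correlation at lags >= 0
    R = toeplitz(r[:n_taps])
    w = np.linalg.solve(R, p[:n_taps])                       # normal equations R w = p
    prediction = np.convolve(witness, w)[: len(target)]
    return w, target - prediction
```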
Opto-mechanical lab-on-fibre seismic sensors detected the Norcia earthquake.
Pisco, Marco; Bruno, Francesco Antonio; Galluzzo, Danilo; Nardone, Lucia; Gruca, Grzegorz; Rijnveld, Niek; Bianco, Francesca; Cutolo, Antonello; Cusano, Andrea
2018-04-27
We have designed and developed lab-on-fibre seismic sensors containing a micro-opto-mechanical cavity on the fibre tip. The mechanical cavity is designed as a double cantilever suspended on the fibre end facet and connected to a proof mass to tune its response. Ground acceleration leads to displacement of the cavity length, which in turn can be remotely detected using an interferometric interrogation technique. After the sensors characterization, an experimental validation was conducted at the Italian National Institute of Geophysics and Volcanology (INGV), which is responsible for seismic surveillance over the Italian country. The fabricated sensors have been continuously used for long periods to demonstrate their effectiveness as seismic accelerometer sensors. During the tests, fibre optic seismic accelerometers clearly detected the seismic sequence that culminated in the severe Mw6.5 Norcia earthquake that struck central Italy on October 30, 2016. The seismic data provided by the optical sensors were analysed by specialists at the INGV. The wave traces were compared with state-of-the-art traditional sensors typically incorporated into the INGV seismic networks. The comparison verifies the high fidelity of the optical sensors in seismic wave detection, indicating their suitability for a novel class of seismic sensors to be employed in practical scenarios.
Federal Register 2010, 2011, 2012, 2013, 2014
2011-10-19
... the NRC to assess the adequacy of proposed seismic design bases and the design bases for other site..., designed, constructed, and maintained to withstand geologic hazards, such as faulting, seismic hazards, and... potential man-made hazards will be appropriately accounted for in the design of nuclear power and test...
Code of Federal Regulations, 2013 CFR
2013-07-01
... (Continued) FEDERAL MANAGEMENT REGULATION REAL PROPERTY 76-DESIGN AND CONSTRUCTION Design and Construction... 41 Public Contracts and Property Management 3 2013-07-01 2013-07-01 false What seismic safety standards must Federal agencies follow in the design and construction of Federal facilities? 102-76.30...
Code of Federal Regulations, 2011 CFR
2011-01-01
... (Continued) FEDERAL MANAGEMENT REGULATION REAL PROPERTY 76-DESIGN AND CONSTRUCTION Design and Construction... 41 Public Contracts and Property Management 3 2011-01-01 2011-01-01 false What seismic safety standards must Federal agencies follow in the design and construction of Federal facilities? 102-76.30...
Code of Federal Regulations, 2014 CFR
2014-01-01
... (Continued) FEDERAL MANAGEMENT REGULATION REAL PROPERTY 76-DESIGN AND CONSTRUCTION Design and Construction... 41 Public Contracts and Property Management 3 2014-01-01 2014-01-01 false What seismic safety standards must Federal agencies follow in the design and construction of Federal facilities? 102-76.30...
Code of Federal Regulations, 2012 CFR
2012-01-01
... (Continued) FEDERAL MANAGEMENT REGULATION REAL PROPERTY 76-DESIGN AND CONSTRUCTION Design and Construction... 41 Public Contracts and Property Management 3 2012-01-01 2012-01-01 false What seismic safety standards must Federal agencies follow in the design and construction of Federal facilities? 102-76.30...
Code of Federal Regulations, 2010 CFR
2010-07-01
... (Continued) FEDERAL MANAGEMENT REGULATION REAL PROPERTY 76-DESIGN AND CONSTRUCTION Design and Construction... 41 Public Contracts and Property Management 3 2010-07-01 2010-07-01 false What seismic safety standards must Federal agencies follow in the design and construction of Federal facilities? 102-76.30...
Seismic facies analysis based on self-organizing map and empirical mode decomposition
NASA Astrophysics Data System (ADS)
Du, Hao-kun; Cao, Jun-xing; Xue, Ya-juan; Wang, Xing-jian
2015-01-01
Seismic facies analysis plays an important role in seismic interpretation and reservoir model building by offering an effective way to identify changes in geofacies between wells. The selection of input seismic attributes and their time window has a strong effect on the validity of the classification and requires iterative experimentation and prior knowledge. In general, clustering is sensitive to noise when the waveform serves as the input data, especially with a narrow window. To overcome this limitation, the Empirical Mode Decomposition (EMD) method is introduced into waveform classification based on the SOM. We first de-noise the seismic data using EMD and then cluster the data using a 1D grid SOM. The main advantages of this method are resolution enhancement and noise reduction. 3D seismic data from the western Sichuan basin, China, are used for validation. The application results show that seismic facies analysis can be improved and better support interpretation. The strong tolerance to noise makes the proposed method a better seismic facies analysis tool than the classical 1D grid SOM method, especially for waveform clustering with a narrow window.
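A minimal sketch of the EMD-then-SOM workflow, not the authors' implementation: it assumes the third-party PyEMD package (import name PyEMD) for the decomposition, implements a tiny 1D self-organizing map in NumPy, and uses synthetic waveform windows with placeholder parameters.

import numpy as np
from PyEMD import EMD   # third-party "EMD-signal" package; assumed available

def emd_denoise(trace, drop=1):
    # Discard the first (highest-frequency) IMFs, which usually carry most of the noise.
    imfs = EMD().emd(trace)
    return imfs[drop:].sum(axis=0) if len(imfs) > drop else trace

def som_1d(vectors, n_nodes=8, n_iter=2000, lr0=0.5, sigma0=2.0, seed=0):
    # Minimal 1-D self-organizing map: returns node weights and a facies label per input vector.
    rng = np.random.default_rng(seed)
    w = vectors[rng.integers(0, len(vectors), n_nodes)].astype(float)
    for t in range(n_iter):
        x = vectors[rng.integers(len(vectors))]
        bmu = np.argmin(((w - x) ** 2).sum(axis=1))
        lr = lr0 * np.exp(-t / n_iter)
        sigma = sigma0 * np.exp(-t / n_iter)
        h = np.exp(-((np.arange(n_nodes) - bmu) ** 2) / (2 * sigma ** 2))
        w += lr * h[:, None] * (x - w)
    labels = np.array([np.argmin(((w - v) ** 2).sum(axis=1)) for v in vectors])
    return w, labels

# Toy usage: classify denoised waveform windows extracted around a horizon.
rng = np.random.default_rng(1)
traces = rng.normal(size=(50, 64)) + np.sin(np.linspace(0, 6, 64))
windows = np.array([emd_denoise(tr) for tr in traces])
_, facies = som_1d(windows)
print(facies)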
DOE Office of Scientific and Technical Information (OSTI.GOV)
Paulsson, Bjorn N.P.
2015-02-28
To address the critical site characterization and monitoring needs for CCS programs, the US Department of Energy (DOE) awarded Paulsson, Inc. in 2010 a contract to design, build and test a fiber optic based ultra-large bandwidth clamped borehole seismic vector array capable of deploying up to one thousand 3C sensor pods suitable for deployment into high temperature and high pressure boreholes. Paulsson, Inc. has completed the design of a unique borehole seismic system consisting of a novel drill pipe based deployment system that includes a hydraulic clamping mechanism for the sensor pods, a new sensor pod design and, most importantly, a unique fiber optic seismic vector sensor with technical specifications and capabilities that far exceed state-of-the-art seismic sensor technologies. These novel technologies were all applied to the new borehole seismic system. In combination, these technologies will allow for the deployment of up to 1,000 3C sensor pods in vertical, deviated or horizontal wells. Laboratory tests of the fiber optic seismic vector sensors developed during this project have shown that the new borehole seismic sensor technology is capable of generating outstanding high vector fidelity data with extremely large bandwidth: 0.01 – 6,000 Hz. Field tests have shown that the system can record events at magnitudes much smaller than M-2.3 at frequencies up to 2,000 Hz. The sensors have also proved to be about 100 times more sensitive than the regular coil geophones that are used in borehole seismic systems today. The fiber optic seismic sensors have furthermore been qualified to operate at temperatures over 300°C (572°F). The fibers used for the seismic sensors in the system are also used to record Distributed Temperature Sensor (DTS) data, allowing additional value-added data to be recorded simultaneously with the seismic vector sensor data.
Combined seismic plus live-load analysis of highway bridges.
DOT National Transportation Integrated Search
2011-10-01
"The combination of seismic and vehicle live loadings on bridges is an important design consideration. There are well-established design : provisions for how the individual loadings affect bridge response: structural components that carry vertical li...
NASA Astrophysics Data System (ADS)
Pucinotti, Raffaele; Ferrario, Fabio; Bursi, Oreste S.
2008-07-01
A multi-objective advanced design methodology dealing with seismic actions followed by fire on steel-concrete composite full-strength joints with concrete-filled tubes is proposed in this paper. The specimens were designed in detail in order to exhibit suitable fire behaviour after a severe earthquake. The major aspects of the cyclic behaviour of composite joints are presented and commented upon. The data obtained from monotonic and cyclic experimental tests have been used to calibrate a model of the joint in order to perform seismic simulations on several moment-resisting frames. A hysteretic law was used to take into account the seismic degradation of the joints. Finally, fire tests were conducted to evaluate the fire resistance of connections already damaged by an earthquake. The experimental activity, together with the FE simulations, demonstrated the adequacy of the advanced design methodology.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Sesigur, Haluk; Cili, Feridun
Seismic isolation is an effective design strategy to mitigate the seismic hazard wherein the structure and its contents are protected from the damaging effects of an earthquake. This paper presents the Hangar Project in Sabiha Goekcen Airport which is located in Istanbul, Turkey. Seismic isolation system where the isolation layer arranged at the top of the columns is selected. The seismic hazard analysis, superstructure design, isolator design and testing were based on the Uniform Building Code (1997) and met all requirements of the Turkish Earthquake Code (2007). The substructure which has the steel vertical trusses on facades and RC Hmore » shaped columns in the middle axis of the building was designed with an R factor limited to 2.0 in accordance with Turkish Earthquake Code. In order to verify the effectiveness of the isolation system, nonlinear static and dynamic analyses are performed. The analysis revealed that isolated building has lower base shear (approximately 1/4) against the non-isolated structure.« less
Initial results from seismic monitoring at the Aquistore CO2 storage site, Saskatchewan, Canada
White, D. J.; Roach, L. A.N.; Roberts, B.; ...
2014-12-31
The Aquistore Project, located near Estevan, Saskatchewan, is one of the first integrated commercial-scale CO2 storage projects in the world that is designed to demonstrate CO2 storage in a deep saline aquifer. Starting in 2014, CO2 captured from the nearby Boundary Dam coal-fired power plant will be transported via pipeline to the storage site and to nearby oil fields for enhanced oil recovery. At the Aquistore site, the CO2 will be injected into a brine-filled sandstone formation at ~3200 m depth using the deepest well in Saskatchewan. The suitability of the geological formations that will host the injected CO2 has been predetermined through 3D characterization using high-resolution 3D seismic images and deep well information. These data show that 1) there are no significant faults in the immediate area of the storage site, 2) the regional sealing formation is continuous in the area, and 3) the reservoir is not adversely affected by knolls on the surface of the underlying Precambrian basement. Furthermore, the Aquistore site is located within an intracratonic region characterized by extremely low levels of seismicity. This is in spite of oil-field related water injection in the nearby Weyburn-Midale field, where a total of 656 million m3 of water has been injected since the 1960s with no demonstrable related induced seismicity. A key element of the Aquistore research program is the further development of methods to monitor the security and subsurface distribution of the injected CO2. Toward this end, a permanent areal seismic monitoring array was deployed in 2012, comprising 630 vertical-component geophones installed at 20 m depth on a 2.5x2.5 km regular grid. This permanent array is designed to provide improved 3D time-lapse seismic imaging for monitoring subsurface CO2. Prior to the onset of CO2 injection, calibration 3D surveys were acquired in May and November of 2013. Comparison of the data from these surveys relative to the baseline 3D survey data from 2012 shows excellent repeatability (NRMS less than 10%), which will provide enhanced monitoring sensitivity to smaller amounts of CO2. The permanent array also provides continuous passive monitoring for injection-related microseismicity. Passive monitoring has been ongoing since the summer of 2012 in order to establish levels of background seismicity before CO2 injection starts in 2014. Microseismic monitoring was augmented in 2013 by the installation of 3 broadband seismograph stations surrounding the Aquistore site. These surface installations should provide a detection capability of seismic events with magnitudes as low as ~0. Downhole seismic methods are also being utilized for CO2 monitoring at the Aquistore site. Baseline crosswell tomographic images depict details (meters-scale) of the reservoir in the 150-m interval between the observation and injection wells. This level of resolution is designed to track the CO2 migration between the wells during the initial injection period. A baseline 3D vertical seismic profile (VSP) was acquired in the fall of 2013 to provide seismic images with resolution on a scale between that provided by the surface seismic array and the downhole tomography. The 3D VSP was recorded simultaneously using both a conventional array of downhole geophones (60 levels) and an optical fibre system. The latter utilized an optical fiber cable deployed on the outside of the monitor well casing and cemented in place. A direct comparison of these two methodologies will determine the suitability of using the fiber cable for ongoing time-lapse VSP monitoring.
NASA Astrophysics Data System (ADS)
Grandjean, Gilles; Leparoux, Donatienne
2004-06-01
One of the recurring problems in civil engineering and landscape management is the detection of natural and man-made cavities in order to mitigate the problems of collapse and subsurface subsidence. In general, the position of the cavities is not known, either because they are not recorded in a database or because location maps are not available. In such cases, geophysical methods can provide an effective alternative for cavity detection, particularly ground-penetrating radar (GPR) and seismic methods, for which pertinent results have been recently obtained. Many studies carried out under real conditions have revealed that the signatures derived from interaction between seismic signals and voids are affected by complex geology, thus making them difficult to interpret. We decided to analyze this interaction under physical conditions as simple as possible, i.e., at a test site built specifically for that purpose. The test site was constructed of a homogeneous material and a void-equivalent body so that the ratio between wavelength and heterogeneity size was compatible with that encountered in reality. Numerical modeling was initially used to understand wave interaction with the body, prior to the design of various data-processing protocols. P-wave imagery and surface-wave sections were then acquired and processed. The work involved in this experiment and the associated results are presented, followed by a discussion concerning the reliability of such a study, and its consequences for future seismic projects.
The Wenchuan, China M8.0 Earthquake: A Lesson and Implication for Seismic Hazard Mitigation
NASA Astrophysics Data System (ADS)
Wang, Z.
2008-12-01
The Wenchuan, China M8.0 earthquake caused great damage and enormous casualties: 69,197 people were killed, 374,176 people were injured, and 18,341 people are still missing. The estimated direct economic loss is about 126 billion U.S. dollars. The Wenchuan earthquake again demonstrated that the earthquake itself does not kill people; the built environment and induced hazards, landslides in particular, do. Therefore, it is critical to strengthen the built environment, such as buildings and bridges, and to mitigate the induced hazards in order to avoid such disasters. As part of the so-called North-South Seismic Zone in China, the Wenchuan earthquake occurred along the Longmen Shan thrust belt, which forms a boundary between the Qinghai-Tibet Plateau and the Sichuan basin, and there is a long history (~4,000 years) of seismicity in the area. The historical records show that the area experienced high intensity (i.e., greater than IX) in the past several thousand years. In other words, the area is well known to have high seismic hazard because of its tectonic setting and seismicity. However, only intensity VII (0.1 to 0.15g PGA) had been considered for the seismic design of the built environment in the area. This was one of the main reasons why so many buildings, particularly school buildings, collapsed during the Wenchuan earthquake. It is clear that the seismic design (i.e., the design ground motion or intensity) was not adequate in the Wenchuan earthquake stricken area. A lesson can be learned from the Wenchuan earthquake on seismic hazard and risk assessment. A lesson can also be learned from this earthquake on seismic hazard mitigation and/or seismic risk reduction.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zucca, J J; Walter, W R; Rodgers, A J
2008-11-19
The last ten years have brought rapid growth in the development and use of three-dimensional (3D) seismic models of Earth structure at crustal, regional and global scales. In order to explore the potential for 3D seismic models to contribute to important societal applications, Lawrence Livermore National Laboratory (LLNL) hosted a 'Workshop on Multi-Resolution 3D Earth Models to Predict Key Observables in Seismic Monitoring and Related Fields' on June 6 and 7, 2007 in Berkeley, California. The workshop brought together academic, government and industry leaders in the research programs developing 3D seismic models and methods for the nuclear explosion monitoring and seismic ground motion hazard communities. The workshop was designed to assess the current state of work in 3D seismology and to discuss a path forward for determining if and how 3D Earth models and techniques can be used to achieve measurable increases in our capabilities for monitoring underground nuclear explosions and characterizing seismic ground motion hazards. This paper highlights some of the presentations, issues, and discussions at the workshop and proposes two specific paths by which to begin quantifying the potential contribution of progressively refined 3D seismic models in critical applied arenas. Seismic monitoring agencies are tasked with detection, location, and characterization of seismic activity in near real time. In the case of nuclear explosion monitoring or seismic hazard, decisions to further investigate a suspect event or to launch disaster relief efforts may rely heavily on real-time analysis and results. Because these are weighty decisions, monitoring agencies are regularly called upon to meticulously document and justify every aspect of their monitoring system. In order to meet this level of scrutiny and maintain operational robustness requirements, only mature technologies are considered for operational monitoring systems, and operational technology necessarily lags contemporary research. Current monitoring practice is to use relatively simple Earth models that generally afford analytical prediction of seismic observables (see Examples of Current Monitoring Practice below). Empirical relationships or corrections to predictions are often used to account for unmodeled phenomena, such as the generation of S-waves from explosions or the effect of 3-dimensional Earth structure on wave propagation. This approach produces fast and accurate predictions in areas where empirical observations are available. However, accuracy may diminish away from empirical data. Further, much of the physics is wrapped into an empirical relationship or correction, which limits the ability to fully understand the physical processes underlying the seismic observation. Every generation of seismology researchers works toward quantitative results, with leaders who are active at or near the forefront of what has been computationally possible. While recognizing that only a 3-dimensional model can capture the full physics of seismic wave generation and propagation in the Earth, computational seismology has, until recently, been limited to simplifying model parameterizations (e.g. 1D Earth models) that lead to efficient algorithms. What is different today is the fact that the largest and fastest machines are at last capable of evaluating the effects of generalized 3D Earth structure, at levels of detail that improve significantly over past efforts, with potentially wide application.
Advances in numerical methods to compute travel times and complete seismograms for 3D models are enabling new ways to interpret available data. This includes algorithms such as the Fast Marching Method (Rawlinson and Sambridge, 2004) for travel time calculations and full waveform methods such as the spectral element method (SEM; Komatitsch et al., 2002, Tromp et al., 2005), higher order Galerkin methods (Kaser and Dumbser, 2006; Dumbser and Kaser, 2006) and advances in more traditional Cartesian finite difference methods (e.g. Pitarka, 1999; Nilsson et al., 2007). The ability to compute seismic observables using a 3D model is only half of the challenge; models must be developed that accurately represent true Earth structure. Indeed, advances in seismic imaging have followed improvements in 3D computing capability (e.g. Tromp et al., 2005; Rawlinson and Urvoy, 2006). Advances in seismic imaging methods have been fueled in part by theoretical developments and the introduction of novel approaches for combining different seismological observables, both of which can increase the sensitivity of observations to Earth structure. Examples of such developments are finite-frequency sensitivity kernels for body-wave tomography (e.g. Marquering et al., 1998; Montelli et al., 2004) and joint inversion of receiver functions and surface wave group velocities (e.g. Julia et al., 2000).
Broadening the Quality and Capabilities of the EarthScope Alaska Transportable Array
NASA Astrophysics Data System (ADS)
Busby, R. W.
2016-12-01
In 2016, the EarthScope Transportable Array (TA) program will have 195 broadband seismic stations operating in Alaska and western Canada. This ambitious project will culminate in a network of 268 new or upgraded real-time seismic stations operating through 2019. The challenging environmental conditions and the remoteness of Alaska have motivated a new method for constructing a high-quality, temporary seismic network. The Alaska TA station design builds on experience from the Lower 48 TA deployment and adds design requirements because most stations are accessible only by helicopter. The stations utilize new high-performance posthole sensors, a specially built hammer/auger drill, and lightweight lithium-ion batteries to minimize sling loads. A uniform station design enables a modest crew to build the network on a short timeline and operate it through the difficult conditions of rural Alaska. The Alaska TA deployment has increased the quality of seismic data, with some well-sited 2-3 m posthole stations approaching the performance of permanent Global Seismic Network stations emplaced in 100 m boreholes. The real-time data access, power budget, protective enclosure and remote logistics of these TA stations have attracted collaborations with NASA, NOAA, USGS, AVO and other organizations to add auxiliary sensors to the suite of instruments at many TA stations. Strong-motion sensors have been added to 18 stations near the subduction trench to complement SM stations operated by AEC, ANSS and GSN. All TA and most upgraded stations have pressure and infrasound sensors, and 150 TA stations are receiving a Vaisala weather sensor, supplied by the National Weather Service Alaska Region and NASA, capable of measuring temperature, pressure, relative humidity, wind speed/direction, and precipitation intensity. We are also installing about 40 autonomous soil temperature profile kits adjacent to northern stations. While the priority continues to be collecting seismic data, these additional strong-motion, atmospheric, and soil temperature sensors may motivate extending the operation of certain stations in cooperation with these organizations. The TA has always been amenable to partnerships in the research and education communities that extend the capabilities and reach of the EarthScope Transportable Array.
Rippability Assessment of Weathered Sedimentary Rock Mass using Seismic Refraction Methods
NASA Astrophysics Data System (ADS)
Ismail, M. A. M.; Kumar, N. S.; Abidin, M. H. Z.; Madun, A.
2018-04-01
Rippability, or ease of excavation, in sedimentary rocks is a significant aspect of the preliminary work of any civil engineering project. A rippability assessment was performed in this study to select an available ripping machine for excavating earth materials using the seismic velocity chart provided by Caterpillar. The research area is located at the proposed construction site for the development of a water reservoir and related infrastructure at Kampus Pauh Putra, Universiti Malaysia Perlis. The research aimed to obtain the seismic P-wave velocity (Vp) using a seismic refraction method to produce a 2D tomography model. The 2D seismic model was used to delineate the layers in the velocity profile. The conventional geotechnical method of using a borehole was integrated with the seismic velocity method to provide an appropriate correlation. The correlated data can be used to categorize machinery for excavation activities based on the available systematic analysis procedure to predict rock rippability. The seismic velocity profile obtained was used to interpret rock layers within the ranges labelled as rippable, marginal, and non-rippable. Based on the seismic velocity method, the site materials range from loose sandstone to moderately weathered rock. Laboratory test results show that the site's rock material falls between low strength and high strength. Integration of the seismic velocity method with the laboratory tests suggests that Caterpillar's smallest ripper, the D8R, can successfully excavate the materials.
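A toy sketch of the velocity-based classification step, not the study's procedure: the velocity thresholds below are placeholders standing in for the machine-specific Caterpillar chart values, which must be read from the chart for the ripper actually being considered.

def rippability_class(vp_ms, rippable_max=1500.0, marginal_max=2000.0):
    # Placeholder Vp thresholds (m/s); real limits depend on the ripper model and rock type.
    if vp_ms <= rippable_max:
        return "rippable"
    if vp_ms <= marginal_max:
        return "marginal"
    return "non-rippable"

for vp in (900.0, 1700.0, 2600.0):
    print(vp, rippability_class(vp))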
NASA Astrophysics Data System (ADS)
Zhang, Yu; Pan, Peng; Gong, Runhua; Wang, Tao; Xue, Weichen
2017-10-01
An online hybrid test was carried out on a 40-story, 120-m high concrete shear wall structure. The structure was divided into two substructures, whereby a physical model of the bottom three stories was tested in the laboratory and the upper 37 stories were simulated numerically using ABAQUS. An overlapping domain method was employed for the bottom three stories to ensure the validity of the boundary conditions of the superstructure. Mixed control was adopted in the test: displacement control was used to apply the horizontal displacement, while two force-controlled actuators were used to simulate the overturning moment, which is very large and cannot be ignored in substructure hybrid tests of high-rise buildings. A series of tests with earthquake inputs of sequentially increasing intensity was carried out. The test results indicate that the proposed hybrid test method is a viable way to reproduce the seismic response of high-rise concrete shear wall buildings. The seismic performance of the tested precast high-rise building satisfies the requirements of the Chinese seismic design code.
Accurately determining direction of arrival by seismic array based on compressive sensing
NASA Astrophysics Data System (ADS)
Hu, J.; Zhang, H.; Yu, H.
2016-12-01
Seismic array analysis plays an important role in detecting weak signals and determining their locations and rupture processes. In these applications, reliably estimating the direction of arrival (DOA) of the seismic wave is very important. The DOA is generally determined by the conventional beamforming method (CBM) [Rost et al., 2000]. However, for a fixed seismic array the resolution of the CBM is generally poor for low-frequency seismic signals, while for high-frequency signals the CBM may produce many local peaks, making it difficult to pick the one corresponding to the true DOA. In this study, we develop a new seismic array method based on compressive sensing (CS) to determine the DOA with high resolution for both low- and high-frequency seismic signals. The new method takes advantage of the spatial sparsity of the incoming wavefronts. The CS method has been successfully used to determine spatial and temporal earthquake rupture distributions with seismic arrays [Yao et al., 2011; Yao et al., 2013; Yin, 2016]. In this method, we first formulate the problem of solving for the DOA as an L1-norm minimization problem. The measurement matrix for CS is constructed by dividing the slowness-angle domain into many grid nodes, and it needs to satisfy the restricted isometry property (RIP) for optimized reconstruction of the image. The L1-norm minimization is solved by the interior point method. We first test the CS-based DOA array determination method on synthetic data constructed for the Shanghai seismic array. Compared to the CBM, the synthetic test for noise-free data shows that the new method can determine the true DOA with super-high resolution. In the case of multiple sources, the new method can easily separate multiple DOAs. When data are contaminated by noise at various levels, the CS method is stable as long as the noise amplitude is lower than the signal amplitude. We also test the CS method on the Wenchuan earthquake. For different arrays with different apertures, we are able to obtain reliable DOAs with uncertainties lower than 10 degrees.
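A simplified stand-in for the sparse DOA idea described above (not the authors' interior-point solver): a narrowband plane-wave dictionary over a slowness grid is built for a hypothetical linear array, and the sparse slowness spectrum is recovered with scikit-learn's Lasso after stacking real and imaginary parts. All geometry and numbers are illustrative.

import numpy as np
from sklearn.linear_model import Lasso   # generic L1 solver used here instead of an interior-point method

# Hypothetical linear array and one narrowband plane wave.
n_sensors, freq = 16, 2.0                               # frequency in Hz
x = np.arange(n_sensors) * 100.0                        # sensor positions (m)
true_slowness = 2.5e-4                                  # s/m along the array
d = np.exp(-2j * np.pi * freq * true_slowness * x)      # observed phase delays
rng = np.random.default_rng(0)
d += 0.05 * (rng.normal(size=n_sensors) + 1j * rng.normal(size=n_sensors))

# Dictionary over a grid of candidate slownesses (the "slowness-angle" grid).
grid = np.linspace(-5e-4, 5e-4, 201)
A = np.exp(-2j * np.pi * freq * np.outer(x, grid))      # (n_sensors, n_grid)

# Stack real and imaginary parts so a real-valued L1 solver can be used.
A_ri = np.vstack([np.hstack([A.real, -A.imag]),
                  np.hstack([A.imag,  A.real])])
d_ri = np.concatenate([d.real, d.imag])

lasso = Lasso(alpha=0.01, fit_intercept=False, max_iter=50000).fit(A_ri, d_ri)
coef = lasso.coef_[:len(grid)] + 1j * lasso.coef_[len(grid):]
print("estimated slowness:", grid[np.argmax(np.abs(coef))])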
Mini-Sosie high-resolution seismic method aids hazards studies
Stephenson, W.J.; Odum, J.; Shedlock, K.M.; Pratt, T.L.; Williams, R.A.
1992-01-01
The Mini-Sosie high-resolution seismic method has been effective in imaging shallow-structure and stratigraphic features that aid in seismic-hazard and neotectonic studies. The method is not an alternative to Vibroseis acquisition for large-scale studies. However, it has two major advantages over Vibroseis as it is being used by the USGS in its seismic-hazards program. First, the sources are extremely portable and can be used in both rural and urban environments. Second, the shifting-and-summation process during acquisition improves the signal-to-noise ratio and cancels out seismic noise sources such as cars and pedestrians. -from Authors
NASA Technical Reports Server (NTRS)
Kovach, R. L.; Watkins, J. S.; Talwani, P.
1972-01-01
The Apollo 16 active seismic experiment (ASE) was designed to generate and monitor seismic waves for the study of the lunar near-surface structure. Several seismic energy sources are used: an astronaut-activated thumper device, a mortar package that contains rocket-launched grenades, and the impulse produced by the lunar module ascent. Analysis of some seismic signals recorded by the ASE has provided data concerning the near-surface structure at the Descartes landing site. Two compressional seismic velocities have so far been recognized in the seismic data. The deployment of the ASE is described, and the significant results obtained are discussed.
Research notes : base isolation bearings hold up.
DOT National Transportation Integrated Search
2003-05-01
Existing bridges, such as the Marquam Bridge in Portland, are given seismic retrofits, and new bridges are designed to more demanding seismic standards. Other structures may have a seismic device built in. The ODOT Research Unit has monitored one suc...
Seismic analysis and design of bridge abutments considering sliding and rotation
DOT National Transportation Integrated Search
1997-09-15
Current displacement based seismic design of gravity retaining walls utilizes a sliding block idealization, and considers only a translation mode of deformation. Authors update and extend the coupled equations of motion that appear in the literature....
Optimal observables for multiparameter seismic tomography
NASA Astrophysics Data System (ADS)
Bernauer, Moritz; Fichtner, Andreas; Igel, Heiner
2014-08-01
We propose a method for the design of seismic observables with maximum sensitivity to a target model parameter class, and minimum sensitivity to all remaining parameter classes. The resulting optimal observables thereby minimize interparameter trade-offs in multiparameter inverse problems. Our method is based on the linear combination of fundamental observables that can be any scalar measurement extracted from seismic waveforms. Optimal weights of the fundamental observables are determined with an efficient global search algorithm. While most optimal design methods assume variable source and/or receiver positions, our method has the flexibility to operate with a fixed source-receiver geometry, making it particularly attractive in studies where the mobility of sources and receivers is limited. In a series of examples we illustrate the construction of optimal observables, and assess the potential and limitations of the method. The combination of Rayleigh-wave traveltimes in four frequency bands yields an observable with strongly enhanced sensitivity to 3-D density structure. Simultaneously, sensitivity to S velocity is reduced, and sensitivity to P velocity is eliminated. The original three-parameter problem thereby collapses into a simpler two-parameter problem with one dominant parameter. By defining parameter classes to equal earth model properties within specific regions, our approach mimics the Backus-Gilbert method where data are combined to focus sensitivity in a target region. This concept is illustrated using rotational ground motion measurements as fundamental observables. Forcing dominant sensitivity in the near-receiver region produces an observable that is insensitive to the Earth structure at more than a few wavelengths' distance from the receiver. This observable may be used for local tomography with teleseismic data. While our test examples use a small number of well-understood fundamental observables, few parameter classes and a radially symmetric earth model, the method itself does not impose such restrictions. It can easily be applied to large numbers of fundamental observables and parameter classes, as well as to 3-D heterogeneous earth models.
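For intuition only: the paper determines weights with a global search, but the core idea of combining observables to favor one parameter class can be sketched with a quadratic surrogate, maximizing target sensitivity over off-target sensitivity via a generalized eigenvalue problem. The sensitivity kernels below are random placeholders, not computed from any earth model.

import numpy as np
from scipy.linalg import eigh

rng = np.random.default_rng(0)
n_obs, n_cells = 12, 400       # fundamental observables x model cells

# Placeholder sensitivity kernels of each observable to three parameter classes.
K_rho, K_vs, K_vp = (rng.normal(size=(n_obs, n_cells)) for _ in range(3))

# Maximize w^T A w (target: density) relative to w^T B w (off-target + regularization).
A = K_rho @ K_rho.T
B = K_vs @ K_vs.T + K_vp @ K_vp.T + 1e-3 * np.eye(n_obs)
vals, vecs = eigh(A, B)        # generalized symmetric eigenproblem, eigenvalues ascending
w_opt = vecs[:, -1]            # weights of the optimal combined observable

print("target/off-target sensitivity ratio:", vals[-1])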
A method to establish seismic noise baselines for automated station assessment
McNamara, D.E.; Hutt, C.R.; Gee, L.S.; Benz, H.M.; Buland, R.P.
2009-01-01
We present a method for quantifying station noise baselines and characterizing the spectral shape of out-of-nominal noise sources. Our intent is to automate this method in order to ensure that only the highest-quality data are used in rapid earthquake products at NEIC. In addition, the station noise baselines provide a valuable tool to support the quality control of GSN and ANSS backbone data and metadata. The procedures addressed here are currently in development at the NEIC, and work is underway to understand how quickly changes from nominal can be observed and used within the NEIC processing framework. The spectral methods and software used to compute station baselines and described herein (PQLX) can be useful to both permanent and portable seismic station operators. Applications include: general seismic station and data quality control (QC), evaluation of instrument responses, assessment of near real-time communication system performance, characterization of site cultural noise conditions, and evaluation of sensor vault design, as well as assessment of gross network capabilities (McNamara et al. 2005). Future PQLX development plans include incorporating station baselines for automated QC methods and automating station status report generation and notification based on user-defined QC parameters. The PQLX software is available through the USGS (http://earthquake.usgs.gov/research/software/pqlx.php) and IRIS (http://www.iris.edu/software/pqlx/).
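A minimal sketch of the underlying baseline idea, not the PQLX implementation: hourly power spectral densities are computed with SciPy's Welch estimator and reduced to percentile curves per frequency, which act as a station noise baseline. Sample rate, window lengths and the synthetic data are placeholders.

import numpy as np
from scipy.signal import welch

fs = 40.0                                    # sample rate (Hz), placeholder
rng = np.random.default_rng(0)
day = rng.normal(size=int(fs * 86400))       # one day of synthetic ground-motion samples

# Hourly power spectral densities.
hour = int(fs * 3600)
psds = []
for k in range(24):
    f, pxx = welch(day[k * hour:(k + 1) * hour], fs=fs, nperseg=4096)
    psds.append(10 * np.log10(pxx))
psds = np.array(psds)

# Station noise baseline: low/median/high percentiles of the PSD distribution per frequency.
p10, p50, p90 = np.percentile(psds, [10, 50, 90], axis=0)
print(f[:5], p50[:5])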
NASA Astrophysics Data System (ADS)
Itzá Balam, Reymundo; Iturrarán-Viveros, Ursula; Parra, Jorge O.
2018-03-01
Two main stages of seismic modeling are geological model building and numerical computation of seismic response for the model. The quality of the computed seismic response is partly related to the type of model that is built. Therefore, the model building approaches become as important as seismic forward numerical methods. For this purpose, three petrophysical facies (sands, shales and limestones) are extracted from reflection seismic data and some seismic attributes via the clustering method called Self-Organizing Maps (SOM), which, in this context, serves as a geological model building tool. This model with all its properties is the input to the Optimal Implicit Staggered Finite Difference (OISFD) algorithm to create synthetic seismograms for poroelastic, poroacoustic and elastic media. The results show a good agreement between observed and 2-D synthetic seismograms. This demonstrates that the SOM classification method enables us to extract facies from seismic data and allows us to integrate the lithology at the borehole scale with the 2-D seismic data.
Zhang, Heng; Pan, Zhongming; Zhang, Wenna
2018-06-07
An acoustic-seismic mixed feature extraction method based on the wavelet coefficient energy ratio (WCER) of the target signal is proposed in this study for classifying vehicle targets in wireless sensor networks. The signal was decomposed into a set of wavelet coefficients using the à trous algorithm, which is a concise method used to implement the wavelet transform of a discrete signal sequence. After the wavelet coefficients of the target acoustic and seismic signals were obtained, the energy ratio of each layer's coefficients was calculated as the feature vector of the target signals. Subsequently, the acoustic and seismic features were merged into an acoustic-seismic mixed feature to improve the target classification accuracy, after the acoustic and seismic WCER features of the target signal were simplified using the hierarchical clustering method. We selected the support vector machine method for classification and utilized the data acquired from a real-world experiment to validate the proposed method. The calculated results show that the WCER feature extraction method can effectively extract the target features from target signals. Feature simplification can reduce the time consumption of feature extraction and classification, with no effect on the target classification accuracy. The use of acoustic-seismic mixed features effectively improved target classification accuracy by approximately 12% compared with either the acoustic signal or seismic signal alone.
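A rough sketch of the WCER-plus-SVM idea, under assumptions: PyWavelets' stationary wavelet transform (pywt.swt) stands in for the à trous decomposition, the wavelet, level and synthetic "vehicle" signals are placeholders, and only one signal type is used instead of merged acoustic and seismic channels.

import numpy as np
import pywt                                # PyWavelets; swt used as an a-trous-style transform
from sklearn.svm import SVC

def wcer_features(signal, wavelet="db2", level=4):
    # Energy of each stationary-wavelet detail level, normalized to total detail energy.
    coeffs = pywt.swt(signal, wavelet, level=level)            # list of (cA, cD) per level
    energies = np.array([np.sum(cD ** 2) for _, cD in coeffs])
    return energies / energies.sum()

rng = np.random.default_rng(0)
n, length = 60, 1024                                           # length must be divisible by 2**level
t = np.arange(length)
class_a = [np.sin(0.30 * t) + 0.5 * rng.normal(size=length) for _ in range(n)]
class_b = [np.sin(0.05 * t) + 0.5 * rng.normal(size=length) for _ in range(n)]

X = np.array([wcer_features(s) for s in class_a + class_b])    # acoustic and seismic features would be concatenated here
y = np.array([0] * n + [1] * n)

clf = SVC(kernel="rbf").fit(X[::2], y[::2])                    # train on half, test on the rest
print("accuracy:", clf.score(X[1::2], y[1::2]))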
Stochastic seismic inversion based on an improved local gradual deformation method
NASA Astrophysics Data System (ADS)
Yang, Xiuwei; Zhu, Peimin
2017-12-01
A new stochastic seismic inversion method based on the local gradual deformation method is proposed, which can incorporate seismic data, well data, geology and their spatial correlations into the inversion process. Geological information, such as sedimentary facies and structures, can provide significant a priori information to constrain an inversion and arrive at reasonable solutions. The local a priori conditional cumulative distributions at each node of the model to be inverted are first established by indicator cokriging, which integrates well data as hard data and geological information as soft data. Probability field simulation is used to simulate different realizations consistent with the spatial correlations and local conditional cumulative distributions. The corresponding probability field is generated by the fast Fourier transform moving average method. Then, optimization is performed to match the seismic data via an improved local gradual deformation method. Two strategies are proposed to make the approach suitable for seismic inversion. The first is to select and update local areas where the fit between synthetic and real seismic data is poor. The second is to divide each seismic trace into several parts and obtain the optimal parameters for each part individually. Applications to a synthetic example and a real case study demonstrate that our approach can effectively find fine-scale acoustic impedance models and provide uncertainty estimates.
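A minimal sketch of the basic (global) gradual deformation step that the paper localizes and improves: two Gaussian realizations are combined with sine/cosine weights, which preserves their distribution for any deformation parameter, and the parameter is optimized against a misfit. The "target" and misfit are stand-ins for the synthetic-versus-real seismic comparison.

import numpy as np
from scipy.optimize import minimize_scalar

rng = np.random.default_rng(0)
n = 200
target = np.sin(np.linspace(0, 4 * np.pi, n))    # stand-in for an observed seismic-derived property

# Two independent Gaussian realizations honouring the same (here trivial) spatial statistics.
m1, m2 = rng.normal(size=n), rng.normal(size=n)

def deform(t):
    # Gradual deformation: any t yields another realization with the same Gaussian statistics.
    return m1 * np.cos(np.pi * t) + m2 * np.sin(np.pi * t)

def misfit(t):
    # Stand-in objective; the paper's objective compares synthetic with real seismic data.
    return np.sum((deform(t) - target) ** 2)

res = minimize_scalar(misfit, bounds=(-1.0, 1.0), method="bounded")
print("best deformation parameter:", res.x, "misfit:", res.fun)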
Local spatiotemporal time-frequency peak filtering method for seismic random noise reduction
NASA Astrophysics Data System (ADS)
Liu, Yanping; Dang, Bo; Li, Yue; Lin, Hongbo
2014-12-01
To achieve a higher level of seismic random noise suppression, the Radon transform was adopted to implement spatiotemporal time-frequency peak filtering (TFPF) in our previous studies. Those studies performed TFPF in the full-aperture Radon domain, including the linear Radon and parabolic Radon transforms. Although the superiority of this method over conventional TFPF has been demonstrated on synthetic seismic models and field seismic data, the method still has limitations. Both full-aperture linear Radon and parabolic Radon are applicable and effective for relatively simple situations (e.g., curved reflection events with regular geometry) but not for complicated situations such as reflection events with irregular shapes, or interlaced events with very different slope or curvature parameters. Therefore, a localized application of the Radon transform is required, adapting the transform to the local character of the data variations. In this article, we propose the use of a local Radon transform, referred to as piecewise full-aperture Radon, to realize spatiotemporal TFPF, called local spatiotemporal TFPF. Through experiments on synthetic seismic models and field seismic data, this study demonstrates the advantage of our method in seismic random noise reduction and reflection event recovery for relatively complicated seismic data.
Analysis of longitudinal seismic response of bridge with magneto-rheological elastomeric bearings
NASA Astrophysics Data System (ADS)
Li, Rui; Li, Xi; Wu, Yueyuan; Chen, Shiwei; Wang, Xiaojie
2016-04-01
As one of the weakest parts of the bridge system, a traditional bridge bearing is incapable of isolating impact loads such as those from an earthquake. A magneto-rheological elastomeric bearing (MRB) with adjustable stiffness and damping parameters is designed, tested and modeled. The developed Bouc-Wen model is adopted to represent the constitutive relation and force-displacement behavior of the MRB. Then, the lead rubber bearing (LRB), the passive MRB and the controllable MRB are modeled by the finite element method (FEM). Furthermore, two typical seismic waves are adopted as inputs for the bridge seismic response analysis of the isolation system. Experiments are carried out to investigate the response along the bridge with on-off controlled MRBs. The results show that the isolating performance of the MRB is similar to that of the traditional LRB, which ensures the fail-safe capability of a bridge with MRBs under seismic excitation. In addition, the controllable MRBs demonstrated superior isolation capacity and energy dissipation, reducing the peak acceleration of the bridge beam by 33.3% and the bearing displacement by 34.1%. The shear force at the pier top is also alleviated.
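For orientation only, a sketch of the standard Bouc-Wen hysteresis form under an imposed sinusoidal bearing displacement, integrated with SciPy; all parameter values are invented, and for an MRB the stiffness and damping terms would additionally depend on the applied magnetic field.

import numpy as np
from scipy.integrate import solve_ivp

# Illustrative parameters (not identified from any MRB test).
k, c, alpha = 2.0e6, 1.0e4, 5.0e5            # elastic stiffness (N/m), damping, hysteretic weight
A, beta, gamma, n = 1.0, 0.5, 0.5, 1.0       # Bouc-Wen shape parameters
amp, omega = 0.05, 2 * np.pi * 1.0           # imposed displacement: 5 cm at 1 Hz

def x(t):    return amp * np.sin(omega * t)
def xdot(t): return amp * omega * np.cos(omega * t)

def bouc_wen(t, z):
    # Evolution of the hysteretic variable z under the imposed motion.
    return A * xdot(t) - beta * np.abs(xdot(t)) * np.abs(z) ** (n - 1) * z \
           - gamma * xdot(t) * np.abs(z) ** n

t_eval = np.linspace(0, 5, 2000)
sol = solve_ivp(bouc_wen, (0, 5), [0.0], t_eval=t_eval, max_step=1e-3)
z = sol.y[0]
force = k * x(t_eval) + c * xdot(t_eval) + alpha * z     # total bearing restoring force
print("peak restoring force (N):", np.abs(force).max())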
NASA Astrophysics Data System (ADS)
Liu, Chuncheng; Wang, Chongyang; Mao, Long; Zha, Chuanming
2016-11-01
Substation high-voltage electrical equipment, such as instrument transformers, circuit breakers and disconnect switches, plays a key role in maintaining the normal operation of the power system. During an earthquake, the porcelain-housed equipment in a substation is the most easily damaged, causing great economic losses. In this paper, a three-dimensional finite element model of typical high-voltage electrical equipment is established using numerical analysis to study the seismic response of a typical SF6 circuit breaker. At the same time, a ring-shaped tuned mass damper (TMD) is installed on the equipment, and its vibration control effect is studied by varying the damper's damping coefficient and mass block. The results of the study provide a valuable reference for guiding the seismic design of high-voltage electrical equipment.
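As context for how such a damper might be sized, the classical Den Hartog tuning rules for a TMD attached to a lightly damped mode are sketched below; the modal mass and frequency are illustrative placeholders, not values from the paper's finite element model, which would supply them in practice.

import numpy as np

def den_hartog_tmd(m_structure, f_structure_hz, mass_ratio=0.05):
    # Classical Den Hartog tuning: optimal frequency ratio and damping ratio for the absorber.
    mu = mass_ratio
    f_ratio = 1.0 / (1.0 + mu)                          # TMD/structure frequency ratio
    zeta = np.sqrt(3.0 * mu / (8.0 * (1.0 + mu) ** 3))  # TMD damping ratio
    m_d = mu * m_structure
    f_d = f_ratio * f_structure_hz
    k_d = m_d * (2.0 * np.pi * f_d) ** 2                # TMD spring stiffness
    c_d = 2.0 * zeta * m_d * (2.0 * np.pi * f_d)        # TMD damping coefficient
    return m_d, k_d, c_d

# Illustrative numbers for a porcelain column's fundamental mode (placeholders).
print(den_hartog_tmd(m_structure=800.0, f_structure_hz=4.0, mass_ratio=0.05))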
Advanced Seismic While Drilling System
DOE Office of Scientific and Technical Information (OSTI.GOV)
Robert Radtke; John Fontenot; David Glowka
A breakthrough has been discovered for controlling seismic sources to generate selectable low frequencies. Conventional seismic sources, including sparkers, rotary mechanical, hydraulic, air guns, and explosives, by their very nature produce high frequencies. This is counter to the need for long signal transmission through rock. The patent pending SeismicPULSER{trademark} methodology has been developed for controlling otherwise high-frequency seismic sources to generate selectable low-frequency peak spectra applicable to many seismic applications. Specifically, we have demonstrated the application of a low-frequency sparker source which can be incorporated into a drill bit for Drill Bit Seismic While Drilling (SWD). To create the methodology of a controllable low-frequency sparker seismic source, it was necessary to learn how to maximize sparker efficiencies to couple to, and transmit through, rock with the study of sparker designs and mechanisms for (a) coupling the sparker-generated gas bubble expansion and contraction to the rock, (b) the effects of fluid properties and dynamics, (c) linear and non-linear acoustics, and (d) imparted force directionality. After extensive seismic modeling, the design of high-efficiency sparkers, laboratory high frequency sparker testing, and field tests were performed at the University of Texas Devine seismic test site. The conclusion of the field test was that extremely high power levels would be required to have the range required for deep, 15,000+ ft, high-temperature, high-pressure (HTHP) wells. Thereafter, more modeling and laboratory testing led to the discovery of a method to control a sparker that could generate the low frequencies required for deep wells. The low frequency sparker was successfully tested at the Department of Energy Rocky Mountain Oilfield Test Center (DOE RMOTC) field test site in Casper, Wyoming. An 8-in diameter by 26-ft long SeismicPULSER{trademark} drill string tool was designed and manufactured by TII. An APS Turbine Alternator powered the SeismicPULSER{trademark} to produce two Hz frequency peak signals repeated every 20 seconds. Since the ION Geophysical, Inc. (ION) seismic survey surface recording system was designed to detect a minimum downhole signal of three Hz, successful performance was confirmed with a 5.3 Hz recording with the pumps running. The two Hz signal generated by the sparker was modulated with the 3.3 Hz signal produced by the mud pumps to create an intense 5.3 Hz peak frequency signal. The low frequency sparker source is ultimately capable of generating selectable peak frequencies of 1 to 40 Hz with high-frequency spectral content to 10 kHz. The lower frequencies and, perhaps, low-frequency sweeps, are needed to achieve sufficient range and resolution for realtime imaging in deep (15,000 ft+), high-temperature (150 C) wells for (a) geosteering, (b) accurate seismic hole depth, (c) accurate pore pressure determinations ahead of the bit, (d) near wellbore diagnostics with a downhole receiver and wired drill pipe, and (e) reservoir model verification. Furthermore, the pressure of the sparker bubble will disintegrate rock, resulting in increased overall rates of penetration. Other applications for the SeismicPULSER{trademark} technology are to deploy a low-frequency source for greater range on a wireline for Reverse Vertical Seismic Profiling (RVSP) and Cross-Well Tomography.
Commercialization of the technology is being undertaken by first contacting stakeholders to define the value proposition for rig site services utilizing SeismicPULSER{trademark} technologies. Stakeholders include national oil companies, independent oil companies, independents, service companies, and commercial investors. Service companies will introduce a new Drill Bit SWD service for deep HTHP wells. Collaboration will be encouraged between stakeholders in the form of joint industry projects to develop prototype tools and initial field trials. No barriers have been identified for developing, utilizing, and exploiting the low-frequency SeismicPULSER{trademark} source in a variety of applications. Risks will be minimized since Drill Bit SWD will not interfere with the drilling operation, and can be performed in a relatively quiet environment when the pumps are turned off. The new source must be integrated with other Measurement While Drilling (MWD) tools. To date, each of the oil companies and service companies contacted have shown interest in participating in the commercialization of the low-frequency SeismicPULSER{trademark} source. A technical paper has been accepted for presentation at the 2009 Offshore Technology Conference (OTC) in a Society of Exploration Geophysicists/American Association of Petroleum Geologists (SEG/AAPG) technical session.
NASA Astrophysics Data System (ADS)
Filiatrault, Andre; Sullivan, Timothy
2014-08-01
With the development and implementation of performance-based earthquake engineering, harmonization of performance levels between structural and nonstructural components becomes vital. Even if the structural components of a building achieve a continuous or immediate occupancy performance level after a seismic event, failure of architectural, mechanical or electrical components can lower the performance level of the entire building system. This reduction in performance caused by the vulnerability of nonstructural components has been observed during recent earthquakes worldwide. Moreover, nonstructural damage has limited the functionality of critical facilities, such as hospitals, following major seismic events. The investment in nonstructural components and building contents is far greater than that of structural components and framing. Therefore, it is not surprising that in many past earthquakes, losses from damage to nonstructural components have exceeded losses from structural damage. Furthermore, the failure of nonstructural components can become a safety hazard or can hamper the safe movement of occupants evacuating buildings, or of rescue workers entering buildings. In comparison to structural components and systems, there is relatively limited information on the seismic design of nonstructural components. Basic research work in this area has been sparse, and the available codes and guidelines are usually, for the most part, based on past experiences, engineering judgment and intuition, rather than on objective experimental and analytical results. Often, design engineers are forced to start almost from square one after each earthquake event: to observe what went wrong and to try to prevent repetitions. This is a consequence of the empirical nature of current seismic regulations and guidelines for nonstructural components. This review paper summarizes current knowledge on the seismic design and analysis of nonstructural building components, identifying major knowledge gaps that will need to be filled by future research. Furthermore, considering recent trends in earthquake engineering, the paper explores how performance-based seismic design might be conceived for nonstructural components, drawing on recent developments made in the field of seismic design and hinting at the specific considerations required for nonstructural components.
Seismic vulnerability of new highway construction, executive summary.
DOT National Transportation Integrated Search
2002-03-01
This executive summary gives an overview of the results of FHWA Contract DTFH61-92-C-00112, Seismic Research Program, which performed a series of special studies addressing the seismic design of new construction. The objectives of this project were t...
The Advanced National Seismic System; management and implementation
Benz, H.M.; Shedlock, K.M.; Buland, R.P.
2001-01-01
What is the Advanced National Seismic System? The Advanced National Seismic System (ANSS) is designed to organize, modernize, and standardize operations of seismic networks in the United States to improve the Nation’s ability to respond effectively to damaging earthquakes, volcanoes, and tsunamis. To achieve this, the ANSS will link more than 7,000 national, regional and urban monitoring stations in real time
NASA Astrophysics Data System (ADS)
Wei, Jia; Liu, Huaishan; Xing, Lei; Du, Dong
2018-02-01
The stability of submarine geological structures has a crucial influence on the construction of offshore engineering projects and the exploitation of seabed resources, so marine geologists need a detailed understanding of common submarine geological hazards. Marine seismic exploration methods are among the most effective detection technologies, and current research therefore focuses on improving the resolution and precision of shallow stratum structure detection. In this article, the feasibility of shallow seismic structure imaging is assessed by building a complex model, and the differences between the seismic interferometry imaging method and the traditional imaging method are discussed. The imaging of the model is better for shallow layers than for deep layers, because coherent noise produced by the method degrades the image at depth. The seismic interferometry method has clear advantages for structural imaging of shallow submarine strata, yielding continuous horizontal events, high resolution, clear faults, and distinct structural boundaries. Application to field data from the Shenhu area fully illustrates these advantages. Thus, the method has the potential to provide new insights for shallow submarine strata imaging in the area.
Impacts of potential seismic landslides on lifeline corridors.
DOT National Transportation Integrated Search
2015-02-01
This report presents a fully probabilistic method for regional seismically induced landslide hazard analysis and mapping. The method considers the most current predictions for strong ground motions and seismic sources through use of the U.S.G.S. ...
VS30, site amplifications and some comparisons: The Adapazari (Turkey) case
NASA Astrophysics Data System (ADS)
Ozcep, Tazegul; Ozcep, Ferhat; Ozel, Oguz
The aim of this study was to investigate the role of VS30 in site amplification in the Adapazari region, Turkey. To fulfil this aim, amplifications derived from VS30 measurements were compared with earthquake data for the different soil types in seismic design codes. The Adapazari area was selected as the study area, and the shear-wave velocity distribution for the top 50 m of soil was obtained at 100 sites by the multichannel analysis of surface waves (MASW) method. Aftershocks of the Mw 7.4 Izmit earthquake of 17 August 1999, with magnitudes between 4.0 and 5.6, were recorded at six stations installed in and around the Adapazari Basin, at Babalı, Şeker, Genç, Hastane, Toyota and Imar. These data were used to estimate site amplifications by the reference-station method. In addition, the fundamental periods of the station sites were estimated by the single-station method. Site classifications based on VS30 in the seismic design codes were compared with the fundamental periods and amplification values. It was found that site amplifications (from earthquake data) and the corresponding spectra (from VS30) are not in good agreement for soils in Adapazari (Turkey).
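For reference, VS30 as used in seismic design codes is the time-averaged shear-wave velocity over the top 30 m, VS30 = 30 / Σ(h_i / Vs_i). A small sketch with an illustrative layered profile (not the Adapazari data) is given below.

def vs30(thicknesses_m, velocities_ms):
    # Time-averaged shear-wave velocity of the top 30 m: VS30 = 30 / sum(h_i / Vs_i).
    depth, travel_time = 0.0, 0.0
    for h, v in zip(thicknesses_m, velocities_ms):
        use = min(h, 30.0 - depth)       # only the portion of the layer above 30 m depth counts
        if use <= 0.0:
            break
        travel_time += use / v
        depth += use
    return 30.0 / travel_time

# Illustrative MASW-style layered model (placeholder values).
print(vs30([5.0, 10.0, 20.0, 15.0], [150.0, 220.0, 350.0, 600.0]))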
NASA Astrophysics Data System (ADS)
Yang, X.; Zhu, P.; Gu, Y.; Xu, Z.
2015-12-01
Small-scale heterogeneities of the subsurface medium can be characterized conveniently and effectively using a few simple random medium parameters (RMP), such as the autocorrelation lengths, angle and roughness factor. The estimation of these parameters is significant in both oil reservoir prediction and metallic mine exploration. The poor accuracy and low stability of current estimation approaches limit the application of random medium theory in seismic exploration. This study focuses on improving the accuracy and stability of RMP estimation from post-stack seismic data and on its application in seismic inversion. Experiments and theoretical analysis indicate that, although the autocorrelation of a random medium is related to that of the corresponding post-stack seismic data, the relationship is clearly affected by the seismic dominant frequency, the autocorrelation length, the roughness factor and so on. In addition, errors in computing the autocorrelation for finite, discrete models reduce the accuracy. In order to improve the precision of RMP estimation, we design two improvements. First, we apply a region-growing algorithm, often used in image processing, to reduce the influence of noise in the autocorrelation calculated by the power spectrum method. Second, the orientation of the autocorrelation is used as a new constraint in the estimation algorithm. Numerical experiments prove that this is feasible. In addition, in post-stack seismic inversion of random media, the estimated RMP may be used to constrain the inversion procedure and to construct the initial model. The experimental results indicate that treating the inverted model as a random medium and using relatively accurate estimated RMP to construct the initial model yields better inversion results, containing more details consistent with the actual subsurface medium.
Fast 3D elastic micro-seismic source location using new GPU features
NASA Astrophysics Data System (ADS)
Xue, Qingfeng; Wang, Yibo; Chang, Xu
2016-12-01
In this paper, we describe new GPU features and their applications in passive seismic (micro-seismic) event location. Locating micro-seismic events is quite important in seismic exploration, especially when searching for unconventional oil and gas resources. Unlike traditional ray-based methods, wave equation methods, such as the one we use in this paper, have a remarkable advantage in adapting to low signal-to-noise conditions and do not require manual data selection. However, because of their high computational cost, these methods are not widely used in industry. To make the method practical, we implement imaging-like wave equation micro-seismic location in 3D elastic media and use GPUs to accelerate the algorithm. We also introduce some new GPU features into the implementation to solve data transfer and GPU utilization problems. Numerical and field data experiments show that our method can achieve a more than 30% performance improvement in the GPU implementation just by using these new features.
Seismic behavior of outrigger truss-wall shear connections using multiple steel angles
NASA Astrophysics Data System (ADS)
Li, Xian; Wang, Wei; Lü, Henglin; Zhang, Guangchang
2016-06-01
An experimental investigation on the seismic behavior of a type of outrigger truss-reinforced concrete wall shear connection using multiple steel angles is presented. Six large-scale shear connection models, which involved a portion of reinforced concrete wall and a shear tab welded onto a steel endplate with three steel angles, were constructed and tested under combined actions of cyclic axial load and eccentric shear. The effects of embedment lengths of steel angles, wall boundary elements, types of anchor plates, and thicknesses of endplates were investigated. The test results indicate that properly detailed connections exhibit desirable seismic behavior and fail due to the ductile fracture of steel angles. Wall boundary elements provide beneficial confinement to the concrete surrounding steel angles and thus increase the strength and stiffness of connections. Connections using whole anchor plates are prone to suffer concrete pry-out failure while connections with thin endplates have a relatively low strength and fail due to large inelastic deformations of the endplates. The current design equations proposed by Chinese Standard 04G362 and Code GB50011 significantly underestimate the capacities of the connection models. A revised design method to account for the influence of previously mentioned test parameters was developed.
DOT National Transportation Integrated Search
2012-11-01
Generic, code-based design procedures cannot account for the anticipated short-period attenuation and long-period amplification of earthquake ground motions in the deep, soft sediments of the Mississippi Embayment within the New Madrid Seismic Zone (...
DOT National Transportation Integrated Search
2012-03-01
NJDOT has adopted the AASHTO Guide Specifications for LRFD Seismic Bridge Design approved by the Highway Subcommittee on Bridges and Structures in 2007. The main objective of the research presented in this report has been to resolve the following issue...
Is 3D true non linear traveltime tomography reasonable?
NASA Astrophysics Data System (ADS)
Herrero, A.; Virieux, J.
2003-04-01
Data sets requiring 3D analysis tools, whether from seismic exploration (onshore and offshore experiments) or from natural seismicity (micro-seismicity surveys or post-event measurements), are increasingly numerous. Classical linearized tomographies, and also earthquake-location codes, need an accurate 3D background velocity model. However, if the medium is complex and a priori information is not available, a 1D analysis cannot provide an adequate background velocity image. Moreover, the design of acquisition layouts is often intrinsically 3D and makes even 2D approaches difficult, especially in natural-seismicity cases. The solution therefore relies on a true 3D nonlinear approach, which allows exploration of the model space and identification of an optimal velocity image. The problem then becomes practical, and its feasibility depends on the available computing resources (memory and time). In this presentation, we show that tackling a 3D traveltime tomography problem with an extensive nonlinear approach, combining fast traveltime estimators based on level-set methods with optimisation techniques such as a multiscale strategy, is feasible. Moreover, because inhomogeneous inversion parameters are handled more naturally in a nonlinear approach, we describe how to perform a joint nonlinear inversion for the seismic velocities and the source locations.
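The authors' level-set traveltime estimator is not reproduced here, but the idea of cheap first-arrival traveltimes inside a nonlinear search can be sketched with the third-party fast-marching eikonal solver scikit-fmm (an assumption of this note, not the tool used in the presentation):

```python
import numpy as np
import skfmm  # scikit-fmm fast-marching eikonal solver (assumed stand-in)

# Simple 3D velocity model: 3 km/s background with a faster block inside.
nx, ny, nz = 60, 60, 40
velocity = np.full((nx, ny, nz), 3.0)
velocity[20:40, 20:40, 10:25] = 4.5

# The zero level set of phi marks the source position.
phi = np.ones((nx, ny, nz))
phi[10, 30, 5] = -1.0

dx = 0.1  # grid spacing in km
travel_times = skfmm.travel_time(phi, velocity, dx=dx)  # first arrivals (s)
print(travel_times[50, 30, 35])
```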
NASA Astrophysics Data System (ADS)
Chia, Kenny; Lau, Tze Liang
2017-07-01
Although categorized as a low-seismicity region, Malaysia, after being affected by distant earthquake ground motions from Sumatra and by the recent 2015 Sabah earthquake, has come to realize that seismic hazard in the country is real and has the potential to threaten public safety and welfare. The major concern in this paper is the effect of local site conditions, which can amplify ground vibration at a site. The aim of this study is to correlate the thickness of the soft stratum with the predominant frequency of the soil. Single-point microtremor measurements were carried out at 24 selected points where site investigation reports are available. The predominant period and frequency at each site were determined by Nakamura's method. The predominant period varies from 0.22 s to 0.98 s. Generally, the predominant period increases closer to the shoreline, where the sediments are thicker. Because the thickness of the soft stratum influences the amplification of seismic waves, using microtremor observations to predict the thickness of the soft stratum h from the predominant frequency fr is of particular interest. An empirical relationship h = 54.917 fr^(-1.314) is therefore developed from the microtremor observation data. The relationship will benefit the prediction of soft-stratum thickness from microtremor observations for seismic design at minimal cost compared with conventional boring methods.
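A quick check of the reported power law (the coefficients come directly from the abstract; everything else is illustrative):

```python
def soft_stratum_thickness(fr_hz):
    """Soft-stratum thickness h (m) from the predominant frequency fr (Hz),
    using the empirical relation h = 54.917 * fr**(-1.314) from the abstract."""
    return 54.917 * fr_hz ** -1.314

# The reported predominant periods of 0.22 s and 0.98 s correspond to
# frequencies of roughly 4.5 Hz and 1.0 Hz.
for period in (0.22, 0.98):
    fr = 1.0 / period
    print(f"fr = {fr:.2f} Hz -> h = {soft_stratum_thickness(fr):.1f} m")
```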
Intensity Based Seismic Hazard Map of Republic of Macedonia
NASA Astrophysics Data System (ADS)
Dojcinovski, Dragi; Dimiskovska, Biserka; Stojmanovska, Marta
2016-04-01
The territory of the Republic of Macedonia and the adjacent border terrains are among the most seismically active parts of the Balkan Peninsula, belonging to the Mediterranean-Trans-Asian seismic belt. The seismological data on the R. Macedonia from the past 16 centuries point to the occurrence of very strong, catastrophic earthquakes. The hypocenters of the earthquakes that have occurred are located above the Mohorovicic discontinuity, most frequently at depths of 10-20 km. Accurate short-term prognosis of earthquake occurrence, i.e., simultaneous prediction of the time, place and intensity of occurrence, is still not possible. The present methods of seismic zoning, however, have advanced to such an extent that, with great probability, they enable efficient protection against earthquake effects. The seismic hazard maps of the Republic of Macedonia are the result of the analysis and synthesis of data from seismological, seismotectonic and other corresponding investigations necessary for defining the expected level of seismic hazard for certain time periods. They should be amended from time to time with new data and scientific knowledge. The elaboration of this map does not completely solve all issues related to earthquakes, but it provides the basic empirical data necessary for updating the existing regulations for the construction of engineering structures in seismically active areas, which are governed by legal regulations and technical norms of which the seismic hazard map is a constituent part. The map has been elaborated on the basis of complex seismological and geophysical investigations of the considered area and a synthesis of the results of these investigations. The map was elaborated in two phases. In the first phase, a map of focal zones characterized by the maximum magnitudes of possible earthquakes was elaborated. In the second phase, the intensities of expected earthquakes were computed according to the MCS scale. The map is prognostic, i.e., it provides an assessment of the probability of occurrence of future earthquakes, with a defined areal distribution of their seismic intensity depending on the natural characteristics of the terrain. A period of 10,000 years represents the greatest expected seismic threat for the considered area. From the aspect of low-cost construction, it is also necessary to know the seismicity over shorter time periods. Therefore, maps for return periods of 50, 100, 200, 500 and 1000 years have also been elaborated. The maps show a 63% probability of occurrence of the expected earthquakes with maximum intensities expressed on the MCS scale (see the note after this list). The map has been elaborated at a scale of 1:1,000,000, and the obtained isolines of seismic intensity are drawn with an error of 5 km. The seismic hazard map of R. Macedonia is used for: • the needs of the Rulebook on Technical Norms for the Construction of Structures in Seismic Areas and for the needs of physical and urban planning and design; • defining the seismic design parameters for the construction of structures in zones with intensity I ≥ VII degrees MSK, for which detailed seismic zoning and microzoning of the terrain should be carried out in compliance with the technical regulations for construction in seismically prone areas; • the areas on the map indicated with intensity X MCS, which are not covered by the valid regulations and should therefore be treated in practice as areas in which no structures may be constructed without prior surveys.
• Revision of the map is carried out every five years, or after any earthquake whose parameters require modifications and amendments of the map.
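The 63% probability quoted above follows from the standard Poisson recurrence assumption (a routine derivation, not spelled out in the abstract): for an exposure time $t$ equal to the return period $T_R$ of the map, the probability of at least one exceedance is

$$P = 1 - e^{-t/T_R} = 1 - e^{-1} \approx 0.63 .$$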
NASA Astrophysics Data System (ADS)
Su, Chin-Kuo; Sung, Yu-Chi; Chang, Shuenn-Yih; Huang, Chao-Hsun
2007-09-01
Strong near-fault ground motion, usually caused by fault rupture and characterized by a pulse-like velocity waveform, often imparts dramatic instantaneous seismic energy (Jadhav and Jangid 2006). Some reinforced concrete (RC) bridge columns, even those built according to ductile design principles, were damaged in the 1999 Chi-Chi earthquake. It is therefore very important to evaluate the seismic response of an RC bridge column in order to improve its seismic design and prevent future damage. Nonlinear time history analysis using step-by-step integration is capable of tracing the dynamic response of a structure over the entire vibration period and can accommodate the pulse-like waveform. However, the accuracy of the numerical results is very sensitive to the modeling of the nonlinear load-deformation relationship of the structural member. FEMA 273 and ATC-40 provide modeling parameters for structural nonlinear analyses of RC beams and RC columns: three parameters define the plastic rotation angles and a residual strength ratio describes the nonlinear load-deformation relationship of an RC member. Structural nonlinear analyses are performed based on these parameters. This method provides a convenient way to obtain the nonlinear seismic responses of RC structures; however, the accuracy of the numerical solutions might be further improved. For this purpose, results from a previous study on modeling static pushover analyses of RC bridge columns (Sung et al. 2005) are adopted for the nonlinear time history analysis presented herein to evaluate the structural responses excited by a near-fault ground motion. To ensure the reliability of this approach, the numerical results were compared with experimental results, and the comparison confirms that the proposed approach is valid.
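As a rough illustration of how such a parameterized load-deformation relationship can be tabulated for analysis (a simplified sketch only: the plateau idealization and all numerical values below are illustrative assumptions, not the FEMA 273/ATC-40 tables or the authors' model):

```python
import numpy as np

def backbone_curve(theta_y, m_y, a, b, c, n=200):
    """Simplified generalized load-deformation backbone for an RC member:
    elastic to yield, plateau over plastic rotation a, degradation to the
    residual strength ratio c, and loss of capacity beyond plastic rotation b.
    All numerical values are illustrative placeholders."""
    theta = np.linspace(0.0, theta_y + 1.2 * b, n)
    m = np.empty_like(theta)
    for i, t in enumerate(theta):
        if t <= theta_y:                       # elastic branch
            m[i] = m_y * t / theta_y
        elif t <= theta_y + a:                 # plastic plateau
            m[i] = m_y
        elif t <= theta_y + b:                 # degrading branch to residual
            m[i] = m_y * (1.0 - (1.0 - c) * (t - theta_y - a) / (b - a))
        else:                                  # capacity lost
            m[i] = 0.0
    return theta, m

theta, moment = backbone_curve(theta_y=0.005, m_y=1.0, a=0.010, b=0.025, c=0.2)
```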
A seismic fault recognition method based on ant colony optimization
NASA Astrophysics Data System (ADS)
Chen, Lei; Xiao, Chuangbai; Li, Xueliang; Wang, Zhenli; Huo, Shoudong
2018-05-01
Fault recognition is an important step in seismic interpretation, and although many methods exist for this task, none of them can recognize faults with sufficient accuracy. To address this problem, we propose a new fault recognition method based on ant colony optimization that can locate faults precisely and extract them from the seismic section. First, seismic horizons are extracted by a connected-component labeling algorithm; second, the fault locations are decided according to the horizontal endpoints of each horizon; third, the whole seismic section is divided into several rectangular blocks, and the top and bottom endpoints of each block are treated as the nest and the food, respectively, for the ant colony optimization algorithm. In addition, the positive section is treated as an actual three-dimensional terrain by using the seismic amplitude as a height. The optimal route from nest to food computed by the ant colony in each block is then judged to be a fault. Finally, extensive comparative tests were performed on real seismic data. The availability and advancement of the proposed method were validated by the experimental results.
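The horizon-extraction step can be sketched with a standard connected-component labeling routine (scipy.ndimage is an assumed stand-in; the threshold and connectivity are illustrative, and the ant-colony search itself is omitted):

```python
import numpy as np
from scipy import ndimage

def extract_horizons(section, threshold=0.6):
    """Label connected strong-amplitude samples as horizon candidates
    (connected-component step only; threshold and connectivity are
    illustrative assumptions)."""
    mask = section / np.max(np.abs(section)) > threshold
    structure = np.array([[0, 1, 0], [1, 1, 1], [0, 1, 0]])  # 4-connectivity
    labels, n_horizons = ndimage.label(mask, structure=structure)
    return labels, n_horizons

def horizon_endpoints(labels, horizon_id):
    """Leftmost and rightmost samples of one labeled horizon; in the paper
    these horizontal endpoints are used to decide candidate fault locations."""
    rows, cols = np.nonzero(labels == horizon_id)
    return (rows[np.argmin(cols)], cols.min()), (rows[np.argmax(cols)], cols.max())
```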
Risk-targeted versus current seismic design maps for the conterminous United States
Luco, Nicolas; Ellingwood, Bruce R.; Hamburger, Ronald O.; Hooper, John D.; Kimball, Jeffrey K.; Kircher, Charles A.
2007-01-01
The probabilistic portions of the seismic design maps in the NEHRP Provisions (FEMA, 2003/2000/1997), the International Building Code (ICC, 2006/2003/2000) and ASCE Standard 7-05 (ASCE, 2005a) provide ground motion values from the USGS that have a 2% probability of being exceeded in 50 years. Under the assumption that the capacity against collapse of structures designed for these "uniform-hazard" ground motions is equal, without uncertainty, to the corresponding mapped value at the location of the structure, the probability of collapse in 50 years is also uniform. This is not the case, however, when it is recognized that there is, in fact, uncertainty in the structural capacity. In that case, site-to-site variability in the shape of ground motion hazard curves results in a lack of uniformity. This paper explains the basis for proposed adjustments to the uniform-hazard portions of the seismic design maps currently in the NEHRP Provisions that result in uniform estimated collapse probability. For the seismic design of nuclear facilities, analogous but specialized adjustments have recently been defined in ASCE Standard 43-05 (ASCE, 2005b). In support of the 2009 update of the NEHRP Provisions currently being conducted by the Building Seismic Safety Council (BSSC), we provide examples of the adjusted ground motions for a selected target collapse probability (or target risk). Relative to the probabilistic MCE ground motions currently in the NEHRP Provisions, the risk-targeted ground motions for design are smaller (by as much as about 30%) in the New Madrid Seismic Zone, near Charleston, South Carolina, and in the coastal region of Oregon, with relatively little (<15%) change almost everywhere else in the conterminous U.S.
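Risk targeting of this kind is typically computed by convolving the site hazard curve with a lognormal collapse fragility and converting the annual collapse rate to a 50-year probability; a minimal sketch of that risk integral (the toy hazard curve, dispersion and median capacity are illustrative assumptions, not the paper's values):

```python
import numpy as np
from scipy.stats import lognorm

# Illustrative hazard curve: annual frequency of exceedance vs. spectral accel.
sa = np.logspace(-2, 1, 400)                   # intensity measure (g)
annual_rate = 1e-3 * (sa / 0.1) ** -2.0        # toy power-law hazard curve

def collapse_risk_50yr(capacity_median, beta=0.8):
    """Probability of collapse in 50 years obtained by integrating a
    lognormal collapse fragility against the hazard curve (risk integral)."""
    fragility = lognorm.cdf(sa, s=beta, scale=capacity_median)
    annual_collapse_rate = -np.trapz(fragility, annual_rate)  # d(rate) < 0
    return 1.0 - np.exp(-50.0 * annual_collapse_rate)

print(collapse_risk_50yr(capacity_median=1.0))
```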
Geophysical remote sensing of water reservoirs suitable for desalinization.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Aldridge, David Franklin; Bartel, Lewis Clark; Bonal, Nedra
2009-12-01
In many parts of the United States, as well as other regions of the world, competing demands for fresh water or water suitable for desalination are outstripping sustainable supplies. In these areas, new water supplies are necessary to sustain economic development and agricultural uses, as well as to support expanding populations, particularly in the Southwestern United States. Increasing the supply of water will more than likely come through desalinization of water reservoirs that are not suitable for present use. Surface-deployed seismic and electromagnetic (EM) methods have the potential to address these critical issues within large volumes of an aquifer at a lower cost than drilling and sampling. However, for detailed analysis of the water quality, some sampling utilizing boreholes would be required, with geophysical methods employed to extrapolate the sampled results to non-sampled regions of the aquifer. The research in this report addresses using seismic and EM methods in two complementary ways to aid in the identification of water reservoirs that are suitable for desalinization. The first method uses the seismic data to constrain the earth structure so that detailed EM modeling can estimate the pore-water conductivity, and hence the salinity. The second method utilizes the coupling of seismic and EM waves through the seismo-electric (conversion of seismic energy to electrical energy) and electro-seismic (conversion of electrical energy to seismic energy) effects to estimate the salinity of the target aquifer. Analytic 1D solutions to coupled pressure and electric wave propagation demonstrate the types of waves one expects when using a seismic or electric source. A 2D seismo-electric/electro-seismic modeling capability is developed to demonstrate the coupled seismic and EM system. For finite-difference modeling, the seismic and EM wave propagation algorithms are on different spatial and temporal scales. We present a method to solve multiple, coupled finite-difference physics problems that has application beyond the present use. A limited field experiment was conducted to assess the seismo-electric effect. Due to a variety of problems, the observation of the electric field due to a seismic source was not definitive.
NASA Astrophysics Data System (ADS)
Xu, Jincheng; Liu, Wei; Wang, Jin; Liu, Linong; Zhang, Jianfeng
2018-02-01
De-absorption pre-stack time migration (QPSTM) compensates for the absorption and dispersion of seismic waves by introducing an effective Q parameter, thereby making it an effective tool for 3D, high-resolution imaging of seismic data. Although the optimal aperture obtained via stationary-phase migration reduces the computational cost of 3D QPSTM and yields 3D stationary-phase QPSTM, computational efficiency is still the main problem in processing 3D, high-resolution images for real large-scale seismic data. In the current paper, we propose a division method for large-scale, 3D seismic data to optimize the performance of stationary-phase QPSTM on clusters of graphics processing units (GPUs). We then design an imaging-point parallel strategy to achieve optimal parallel computing performance, and adopt an asynchronous double-buffering scheme for multiple streams to perform GPU/CPU parallel computing. Moreover, several key optimization strategies of computation and storage based on the compute unified device architecture (CUDA) were adopted to accelerate the 3D stationary-phase QPSTM algorithm. Compared with the initial GPU code, the implementation of the key optimization steps, including thread optimization, shared memory optimization, register optimization and special function units (SFU), greatly improved the efficiency. A numerical example employing real large-scale, 3D seismic data showed that our scheme is nearly 80 times faster than the CPU-QPSTM algorithm. Our GPU/CPU heterogeneous parallel computing framework significantly reduces the computational cost and facilitates 3D high-resolution imaging for large-scale seismic data.
NASA Technical Reports Server (NTRS)
Cousineau, R. D.; Crook, R., Jr.; Leeds, D. J.
1985-01-01
This report discusses a geological and seismological investigation of the NASA Ames-Dryden Flight Research Facility site at Edwards, California. Results are presented as seismic design criteria, with design values of the pertinent ground motion parameters, probability of recurrence, and recommended analogous time-history accelerograms with their corresponding spectra. The recommendations apply specifically to the Dryden site and should not be extrapolated to other sites with varying foundation and geologic conditions or different seismic environments.
ERIC Educational Resources Information Center
Donovan, Neville
1979-01-01
Provides a survey and a review of earthquake activity and global tectonics from the advancement of the theory of continental drift to the present. Topics include: an identification of the major seismic regions of the earth, seismic measurement techniques, seismic design criteria for buildings, and the prediction of earthquakes. (BT)
DOE Office of Scientific and Technical Information (OSTI.GOV)
Paulsson, Bjorn N.P.
2016-06-29
To address the critical site characterization and monitoring needs of Enhanced Geothermal Systems (EGS) programs, the US Department of Energy (DOE) awarded Paulsson, Inc. a contract in 2011 to design, build and test a high-temperature, fiber-optic-based, ultra-large-bandwidth clamped borehole seismic vector array capable of deploying a large number of 3C sensor pods suitable for deployment into high-temperature and high-pressure boreholes. Paulsson, Inc. has completed the design of a unique borehole seismic system consisting of a novel drill-pipe-based deployment system that includes a hydraulic clamping mechanism for the sensor pods, a new sensor pod design and, most importantly, a unique fiber optic seismic vector sensor with technical specifications and capabilities that far exceed state-of-the-art seismic sensor technologies. These novel technologies were all applied to the new borehole seismic system. In combination, these technologies will allow the deployment of up to 1,000 3C sensor pods in vertical, deviated or horizontal wells. Laboratory tests of the fiber optic seismic vector sensors developed during this project have shown that the new borehole seismic sensor technology is capable of generating outstanding high-vector-fidelity data with extremely large bandwidth: 0.01-6,000 Hz. Field tests have shown that the system can record events at magnitudes much smaller than M -4.0 at frequencies over 2,000 Hz. The sensors have also proved to be about 100 times more sensitive than the regular coil geophones used in borehole seismic systems today. The fiber optic seismic sensors have furthermore been qualified to operate at temperatures over 300°C (572°F). The data telemetry fibers used for the seismic vector sensors in the system are also used to simultaneously record Distributed Temperature Sensing (DTS) and Distributed Acoustic Sensing (DAS) data, allowing additional value-added data to be recorded simultaneously with the seismic vector sensor data.
Evaluation of ground motion scaling methods for analysis of structural systems
O'Donnell, A. P.; Beltsar, O.A.; Kurama, Y.C.; Kalkan, E.; Taflanidis, A.A.
2011-01-01
Ground motion selection and scaling is undoubtedly the most important component of any seismic risk assessment study that involves time-history analysis. Ironically, it is also the single parameter with the least guidance provided in current building codes, resulting in the use of mostly subjective choices in design. The relevant research to date has focused primarily on single-degree-of-freedom systems, with only a few studies using multi-degree-of-freedom systems. Furthermore, the previous research is based solely on numerical simulations, with no experimental data available for validation of the results. By contrast, the research effort described in this paper focuses on an experimental evaluation of selected ground motion scaling methods based on small-scale shake-table experiments of re-configurable linear-elastic and nonlinear multi-story building frame structure models. Ultimately, the experimental results will lead to the development of guidelines and procedures to achieve reliable demand estimates from nonlinear response history analysis in seismic design. In this paper, an overview of this research effort is discussed and preliminary results based on linear-elastic dynamic response are presented. © ASCE 2011.
DOT National Transportation Integrated Search
2008-05-01
This study was undertaken with the objective of assessing the current provisions in SDC-2006 for incorporating vertical effects of ground motions in seismic evaluation and design of ordinary highway bridges. A comprehensive series of simulations ...
NASA Astrophysics Data System (ADS)
Nawaz, Muhammad Atif; Curtis, Andrew
2018-04-01
We introduce a new Bayesian inversion method that estimates the spatial distribution of geological facies from attributes of seismic data, showing how the usual probabilistic inverse problem can be solved within an optimization framework while still providing full probabilistic results. Our mathematical model treats the seismic attributes as observed data, which are assumed to have been generated by the geological facies. The method infers the post-inversion (posterior) probability density of the facies, plus some other unknown model parameters, from the seismic attributes and geological prior information. Most previous research in this domain is based on the localized-likelihood assumption, whereby the seismic attributes at a location are assumed to depend only on the facies at that location. Such an assumption is unrealistic because of imperfect seismic data acquisition and processing, and fundamental limitations of seismic imaging methods. In this paper, we relax this assumption: we allow probabilistic dependence between the seismic attributes at a location and the facies in any neighbourhood of that location through a spatial filter. We term such likelihoods quasi-localized.
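A toy contrast between a localized and a quasi-localized forward model, in which dependence on neighbouring facies enters through a spatial filter (the Gaussian filter, its width and the noise level are illustrative assumptions, not the authors' choices):

```python
import numpy as np
from scipy.ndimage import gaussian_filter

rng = np.random.default_rng(1)
facies = (rng.random((100, 100)) > 0.5).astype(float)   # toy binary facies field

# Localized forward model: the attribute depends only on the facies at a point.
attr_localized = 2.0 * facies + 0.3 * rng.standard_normal(facies.shape)

# Quasi-localized forward model: the attribute at a point depends on the facies
# in a neighbourhood, expressed through a spatial (here Gaussian) filter that
# mimics the blurring introduced by acquisition, processing and imaging.
attr_quasi = 2.0 * gaussian_filter(facies, sigma=3.0) \
             + 0.3 * rng.standard_normal(facies.shape)
```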
Romanian Educational Seismic Network Project
NASA Astrophysics Data System (ADS)
Tataru, Dragos; Ionescu, Constantin; Zaharia, Bogdan; Grecu, Bogdan; Tibu, Speranta; Popa, Mihaela; Borleanu, Felix; Toma, Dragos; Brisan, Nicoleta; Georgescu, Emil-Sever; Dobre, Daniela; Dragomir, Claudiu-Sorin
2013-04-01
Romania is one of the most seismically active countries in Europe, with more than 500 earthquakes occurring every year. The seismic hazard of Romania is relatively high, and understanding earthquake phenomena and their effects at the Earth's surface is an important step toward educating the population in earthquake-affected regions of the country and raising awareness of earthquake risk and possible mitigation actions. In this direction, the first national educational project in the field of seismology has recently started in Romania: the ROmanian EDUcational SEISmic NETwork (ROEDUSEIS-NET) project. It involves four partners: the National Institute for Earth Physics as coordinator, the National Institute for Research and Development in Construction, Urban Planning and Sustainable Spatial Development "URBAN - INCERC" Bucharest, the Babeş-Bolyai University (Faculty of Environmental Sciences and Engineering) and the software firm "BETA Software". The project has many educational, scientific and social goals. The main educational objectives are: training students and teachers in the analysis and interpretation of seismological data, preparing several comprehensive educational materials, and designing and testing didactic activities using informatics and web-oriented tools. The scientific objective is to introduce into schools the use of advanced instruments and experimental methods that are usually restricted to research laboratories, with the main product being the creation of an earthquake waveform archive. A large amount of such data will thus be used by students and teachers for educational purposes. Regarding the social objectives, the project represents an effective instrument for informing the public and creating awareness of seismic risk, for experimenting with the efficacy of scientific communication, and for increasing the direct involvement of schools and the general public. A network of nine seismic stations with SEP seismometers will be installed in several schools in the most important seismic areas (Vrancea, Dobrogea), vulnerable cities (Bucharest, Ploiesti, Iasi) or highly populated places (Cluj, Sibiu, Timisoara, Zalău). All elements of the seismic station are especially designed for educational purposes and can be operated independently by the students and teachers themselves. The first stage of the ROEDUSEIS project was centered on the preparation of educational materials for all levels of pre-university education (kindergarten, primary, secondary and high school). A needs assessment preceded the preparation of the educational materials and was carried out through a set of questionnaires for teachers and students sent to the participating schools. Their responses served as feedback for editing the materials properly. The topics covered in the educational materials include: seismicity (general principles, characteristics of Romanian seismicity, historical local events), the structure of the Earth, the measurement of earthquakes, and seismic hazard and risk.
Moment tensor analysis of very shallow sources
Chiang, Andrea; Dreger, Douglas S.; Ford, Sean R.; ...
2016-10-11
An issue for moment tensor (MT) inversion of shallow seismic sources is that some components of the Green's functions have vanishing amplitudes at the free surface, which can result in bias in the MT solution. The effects of the free surface on the stability of the MT method become important as we continue to investigate and improve the capabilities of regional full MT inversion for source-type identification and discrimination. It is important to understand free-surface effects on discriminating shallow explosive sources for nuclear monitoring purposes. It may also be important in natural systems that have very shallow seismicity, such as volcanic and geothermal systems. We examine the effects of the free surface on the MT via synthetic testing and apply the MT-based discrimination method to three quarry blasts from the HUMMING ALBATROSS experiment. These shallow chemical explosions at ~10 m depth and recorded up to several kilometers distance represent rather severe source-station geometry in terms of free-surface effects. We show that the method is capable of recovering a predominantly explosive source mechanism, and the combined waveform and first-motion method enables the unique discrimination of these events. Furthermore, recovering the design yield using seismic moment estimates from MT inversion remains challenging, but we can begin to put error bounds on our moment estimates using the network sensitivity solution technique.
Receiver deghosting in the t-x domain based on super-Gaussianity
NASA Astrophysics Data System (ADS)
Lu, Wenkai; Xu, Ziqiang; Fang, Zhongyu; Wang, Ruiliang; Yan, Chengzhi
2017-01-01
Deghosting methods in the time-space (t-x) domain have attracted a lot of attention because of their flexibility for various source/receiver configurations. Based on the well-known fact that the seismic signal has a super-Gaussian distribution, we present a Super-Gaussianity based Receiver Deghosting (SRD) method in the t-x domain. In our method, we denote the upgoing wave and its ghost (downgoing wave) as a single seismic signal, and express the relationship between the upgoing wave and its ghost using two ghost parameters: the sea-surface reflection coefficient and the time shift between the upgoing wave and its ghost. For a single seismic signal, we estimate these two parameters by maximizing the super-Gaussianity of the deghosted output, which is achieved by a 2D grid search using an adaptively predefined discrete solution space. Since a large number of seismic signals are usually mixed together in a seismic trace, in the proposed method we divide the trace into overlapping frames using a sliding time window with a step of one time sample, and consider each frame as a replacement for a single seismic signal. For a 2D seismic gather, we obtain two 2D maps of the ghost parameters. By assuming that these two parameters vary slowly in the t-x domain, we apply a 2D average filter to these maps to improve their reliability further. Finally, the deghosted outputs are merged to form the final deghosted result. To demonstrate the flexibility of the proposed method for arbitrary variable depths of the receivers, we apply it to several synthetic and field seismic datasets acquired with variable-depth streamers.
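The core of the method, estimating the two ghost parameters of a frame by maximizing super-Gaussianity, can be sketched as a 2D grid search (kurtosis is used here as the super-Gaussianity measure, and the frequency-domain inverse operator and its stabilization are assumptions, not the authors' exact implementation):

```python
import numpy as np
from scipy.stats import kurtosis

def deghost_frame(frame, dt, r_grid, tau_grid, eps=1e-2):
    """Grid search over the sea-surface reflection coefficient r and ghost
    delay tau, keeping the pair that maximizes the super-Gaussianity
    (measured here by kurtosis) of the deghosted output."""
    freqs = np.fft.rfftfreq(len(frame), d=dt)
    spectrum = np.fft.rfft(frame)
    best, best_score = None, -np.inf
    for r in r_grid:
        for tau in tau_grid:
            ghost_op = 1.0 + r * np.exp(-2j * np.pi * freqs * tau)
            up = np.fft.irfft(spectrum / (ghost_op + eps), len(frame))
            score = kurtosis(up)
            if score > best_score:
                best, best_score = (r, tau, up), score
    return best

# toy frame: a spike plus its ghost with r = -0.9 and a 24 ms delay
dt = 0.004
trace = np.zeros(256); trace[60] = 1.0; trace[66] = -0.9
r, tau, upgoing = deghost_frame(trace, dt,
                                r_grid=np.linspace(-1.0, -0.7, 7),
                                tau_grid=np.arange(1, 12) * dt)
```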
Shi, Z.; Tian, G.; Dong, S.; Xia, J.; He, H.
2004-01-01
In a desert area, it is difficult to couple geophones with dry sands. A low-velocity near-surface layer can also seriously attenuate the high-frequency components of seismic data. As a result, the resolution and signal-to-noise (S/N) ratio of the seismic data deteriorate. To enhance the resolution and S/N ratio, we designed a coupling compensatory inverse filter using single-trace seismic data recorded under equal conditions by the Seismic Wave Detect System (SWDS) and by common receivers. We also designed an attenuation compensatory inverse filter using seismic data from a microseismogram log. To convert a shot gather recorded with common receivers into a shot gather equivalent to SWDS recordings, we applied the coupling compensatory inverse filter to the common-receiver shot gather, and then applied the attenuation compensatory inverse filter to the coupling-compensated stacked seismic data to increase its resolution and S/N ratio. The results show that the resolution of the common-receiver data after processing with the coupling compensatory inverse filter is nearly comparable with that of the SWDS data, and that the resolution and S/N ratio are further enhanced by the attenuation compensatory inverse filter. We conclude that the filters can compensate the high frequencies of seismic data while leaving the low frequencies nearly unchanged.
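A coupling compensatory inverse filter of the kind described can be sketched as a stabilized spectral-division matching filter between traces recorded under equal conditions (the stabilization constant is an illustrative assumption):

```python
import numpy as np

def coupling_compensation_filter(ref_trace, raw_trace, eps=1e-3):
    """Frequency-domain matching (inverse) filter that maps a common-receiver
    trace onto the better-coupled reference (e.g. SWDS) trace recorded under
    equal conditions; stabilized spectral division."""
    ref = np.fft.rfft(ref_trace)
    raw = np.fft.rfft(raw_trace)
    return ref * np.conj(raw) / (np.abs(raw) ** 2 + eps)

def apply_compensation(trace, filt):
    """Apply the compensation filter to any common-receiver trace of the
    same length as the design traces."""
    return np.fft.irfft(np.fft.rfft(trace) * filt, len(trace))
```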
NASA Astrophysics Data System (ADS)
Ramanathan, Karthik Narayan
Quantitative and qualitative assessment of the seismic risk to highway bridges is crucial in pre-earthquake planning and post-earthquake response of transportation systems. Such assessments provide valuable knowledge about a number of principal effects of earthquakes, such as traffic disruption of the overall highway system and impacts on the region's economy and on post-earthquake response and recovery, and more recently they serve as measures to quantify resilience. Unlike previous work, this study captures unique bridge design attributes specific to California bridge classes along with their evolution over three significant design eras, separated by the historic 1971 San Fernando and 1989 Loma Prieta earthquakes (events that prompted changes in bridge seismic design philosophy). This research developed next-generation fragility curves for four multispan concrete bridge classes by synthesizing new knowledge and emerging modeling capabilities, and by closely coordinating new and ongoing national research initiatives with expertise from bridge designers. A multi-phase framework was developed for generating fragility curves, which provides decision makers with essential tools for emergency response, design, planning, policy support, and maximizing investments in bridge retrofit. This framework encompasses generational changes in bridge design and construction details. Parameterized high-fidelity three-dimensional nonlinear analytical models are developed for the portfolios of bridge classes within the different design eras. These models incorporate a wide range of geometric and material uncertainties, and their responses are characterized under seismic loadings. Fragility curves were then developed considering the vulnerability of multiple components, and thereby help to quantify the performance of highway bridge networks and to study the impact of seismic design principles on the performance within a bridge class. This not only leads to fragility relations that are unique and better suited for bridges in California, but also leads to the creation of better bridge classes and sub-bins that have more consistent performance characteristics than those currently provided by the National Bridge Inventory. Another important feature of this research is the development of damage state definitions and the grouping of bridge components in a way that they have similar consequences in terms of repair and traffic implications following a seismic event. These definitions are in alignment with the California Department of Transportation's design and operational experience, thereby enabling better performance assessment, emergency response, and management in the aftermath of a seismic event. The fragility curves developed as part of this research will be employed in ShakeCast, a web-based post-earthquake situational awareness application that automatically retrieves earthquake shaking data and generates potential damage assessment notifications for emergency managers and responders.
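Fragility curves of the kind developed here are commonly expressed as lognormal functions of an intensity measure; a minimal sketch (the median and dispersion values are placeholders, not the study's results):

```python
import numpy as np
from scipy.stats import norm

def fragility(im, median, beta):
    """Lognormal fragility: probability of reaching or exceeding a damage
    state given an intensity measure IM (median and dispersion are
    placeholders, not the study's results)."""
    return norm.cdf(np.log(im / median) / beta)

im = np.linspace(0.05, 2.5, 200)          # e.g. spectral acceleration in g
p_slight = fragility(im, median=0.3, beta=0.6)
p_complete = fragility(im, median=1.2, beta=0.6)
```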
Seismic Performance Evaluation of Reinforced Concrete Frames Subjected to Seismic Loads
NASA Astrophysics Data System (ADS)
Zameeruddin, Mohd.; Sangle, Keshav K.
2017-06-01
A ten-storey, three-bay reinforced concrete bare frame, designed for gravity loads following the guidelines of IS 456 and detailed for ductility according to IS 13920, is subjected to seismic loads. The seismic demands on this building were calculated following IS 1893, using a response spectrum with 5% damping (for hard soil). Plastic hinges were assigned at both ends of the beams and columns to represent the failure mode when a member yields. Non-linear static (pushover) analysis was performed to evaluate the performance of the building with reference to the first-generation (ATC 40), second-generation (FEMA 356) and next-generation (FEMA 440) performance-based seismic design procedures. The base-shear versus roof-displacement curve of the structure, known as the pushover curve, was obtained for two types of plastic hinge behavior: force-controlled (brittle) and deformation-controlled (ductile) actions. The lateral deformation corresponding to the performance point demonstrates the building's capability to sustain a certain level of seismic load. Failure is represented by the sequence of formation of plastic hinges. The deformation-controlled action of the hinges showed that the building behaves with a strong-column-weak-beam mechanism, whereas the force-controlled action showed the formation of hinges in the columns. The study aims to assess the first-, second- and next-generation performance-based design procedures in predicting actual building responses and the conservatism embedded in their acceptance criteria.
Measurement of Compressional-Wave Seismic Velocities in 29 Wells at the Hanford Site
DOE Office of Scientific and Technical Information (OSTI.GOV)
Peterson, S.W.
2010-10-08
Check shot seismic velocity surveys were collected in the 100 B/C, 200 East, 200-PO-1 Operational Unit (OU), and Gable Gap areas in order to provide time-depth correlation information to aid the interpretation of existing seismic reflection data acquired at the Hanford Site (Figure 1). This report details results from 5 wells surveyed in fiscal year (FY) 2008, 7 wells in FY 2009, and 17 wells in FY 2010, and provides summary compressional-wave seismic velocity information to help guide future seismic survey design as well as improve current interpretations of the seismic data (SSC 1979/1980; SGW-39675; SGW-43746). Augmenting the check shot database are four surveys acquired in 2007 in support of the Bechtel National, Inc. Waste Treatment Plant construction design (PNNL-16559, PNNL-16652), and check shot surveys in three wells to support seismic testing in the 200 West Area (Waddell et al., 1999). Additional sonic logging was conducted during the late 1970s and early 1980s as part of the Basalt Waste Isolation Program (BWIP) (SSC 1979/1980), and check shot/sonic surveys were acquired as part of the safety report for the Skagit/Hanford Nuclear project (RDH/10-AMCP-0164). Check shot surveys are used to obtain an in situ measure of compressional-wave seismic velocity for sediment and rock in the vicinity of the well point, and provide the seismic-wave travel time to geologic horizons of interest. The check shot method deploys a downhole seismic receiver (geophone) to record the arrival of seismic waves generated by a source at the ground surface. The travel time of the first arriving seismic wave is determined and used to create a time-depth function to correlate encountered geologic intervals with the seismic data. This critical tie with the underlying geology improves the interpretation of seismic reflection profile information. Fieldwork for this investigation was conducted by in-house staff during the weeks of September 22, 2008 for 5 wells in the 200 East Area (Figure 2); June 1, 2009 for 7 wells in the 200-PO-1 OU and Gable Gap regions (see Figure 3 and Figure 4); and March 22, 2010 and April 19, 2010 for 17 wells in the 200 East. The initial scope of survey work was planned for Wells 299-E18-1, 699-2-E14, 699-12-18, 699-16-51, 699-42-30, 699-53-55B, 699-54-18D, and 699-84-34B. Well 299-E18-1 could not be entered due to bent casing (which prevented removal of the pump), Wells 699-12-18 and 699-42-30 could not be safely reached by the logging truck, Well 699-16-51 was decommissioned prior to survey start, Well 699-53-55B did not have its pump pulled, and Wells 699-2-E14, 699-54-18D, and 699-84-34B are artesian and capped with an igloo structure. Table 1 provides a list of the wells that were surveyed, and Figure 1 through Figure 5 show the well locations relative to the Hanford Site.
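The reduction from check shot first-arrival times to a time-depth function and interval velocities can be sketched in a few lines (the depths and times below are illustrative, not Hanford data):

```python
import numpy as np

def interval_velocities(depths_m, first_arrival_times_s):
    """Convert check shot first-arrival times at geophone depths into
    interval velocities between adjacent stations (standard reduction)."""
    depths = np.asarray(depths_m, dtype=float)
    times = np.asarray(first_arrival_times_s, dtype=float)
    return np.diff(depths) / np.diff(times)   # m/s between stations

# illustrative values only
depths = [50, 100, 150, 200, 250]
times = [0.033, 0.062, 0.088, 0.111, 0.132]
print(interval_velocities(depths, times))
```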
Evaluating the Use of Declustering for Induced Seismicity Hazard Assessment
NASA Astrophysics Data System (ADS)
Llenos, A. L.; Michael, A. J.
2016-12-01
The recent dramatic seismicity rate increase in the central and eastern US (CEUS) has motivated the development of seismic hazard assessments for induced seismicity (e.g., Petersen et al., 2016). Standard probabilistic seismic hazard assessment (PSHA) relies fundamentally on the assumption that seismicity is Poissonian (Cornell, BSSA, 1968); therefore, the earthquake catalogs used in PSHA are typically declustered (e.g., Petersen et al., 2014) even though this may remove earthquakes that may cause damage or concern (Petersen et al., 2015; 2016). In some induced earthquake sequences in the CEUS, the standard declustering can remove up to 90% of the sequence, reducing the estimated seismicity rate by a factor of 10 compared to estimates from the complete catalog. In tectonic regions the reduction is often only about a factor of 2. We investigate how three declustering methods treat induced seismicity: the window-based Gardner-Knopoff (GK) algorithm, often used for PSHA (Gardner and Knopoff, BSSA, 1974); the link-based Reasenberg algorithm (Reasenberg, JGR, 1985); and a stochastic declustering method based on a space-time Epidemic-Type Aftershock Sequence model (Ogata, JASA, 1988; Zhuang et al., JASA, 2002). We apply these methods to three catalogs that likely contain some induced seismicity. For the Guy-Greenbrier, AR earthquake swarm from 2010-2013, declustering reduces the seismicity rate by factors of 6-14, depending on the algorithm. In northern Oklahoma and southern Kansas from 2010-2015, the reduction varies from factors of 1.5-20. In the Salton Trough of southern California from 1975-2013, the rate is reduced by factors of 3-20. Stochastic declustering tends to remove the most events, followed by the GK method, while the Reasenberg method removes the fewest. Given that declustering and the choice of algorithm have such a large impact on the resulting seismicity rate estimates, we suggest that more accurate hazard assessments may be obtained using the complete catalog.
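A window-based declustering of the Gardner-Knopoff type can be sketched as follows (the window functions below are illustrative placeholders, not the published Gardner-Knopoff tables, and the real algorithms differ in detail):

```python
import numpy as np

def window_decluster(times_days, mags, xs_km, ys_km, time_window, dist_window):
    """Window-based (Gardner-Knopoff style) declustering sketch: events inside
    the space-time window of a larger event are flagged as dependent. The
    window functions are supplied by the caller; the placeholders below are
    illustrative, not the published Gardner-Knopoff tables."""
    t = np.asarray(times_days, float)
    m = np.asarray(mags, float)
    x = np.asarray(xs_km, float)
    y = np.asarray(ys_km, float)
    keep = np.ones(len(m), dtype=bool)
    for i in np.argsort(m)[::-1]:              # largest events first
        if not keep[i]:
            continue
        in_window = (np.abs(t - t[i]) <= time_window(m[i])) & \
                    (np.hypot(x - x[i], y - y[i]) <= dist_window(m[i]))
        in_window[i] = False
        keep &= ~(in_window & (m < m[i]))      # drop smaller events in window
    return keep

t_win = lambda m: 10.0 * 2.0 ** (m - 3.0)      # days (placeholder)
d_win = lambda m: 5.0 * 2.0 ** (m - 3.0)       # km (placeholder)
```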
NASA Astrophysics Data System (ADS)
Eichhubl, Peter; Frohlich, Cliff; Gale, Julia; Olson, Jon; Fan, Zhiqiang; Gono, Valerie
2014-05-01
Induced seismicity during or following the subsurface injection of waste fluids, such as well-stimulation flowback and production fluids, has recently received heightened public and industry attention. It is understood that induced seismicity occurs through the reactivation of existing faults that are generally present in the injection intervals. We seek to address the question of why fluid injection triggers earthquakes in some areas and not in others, with the aim of improving injection methods so that injection volume and cost are optimized while induced seismicity is avoided. A GIS database has been built of natural and induced earthquakes in four hydrocarbon-producing basins: the Fort Worth Basin, South Texas, East Texas/Louisiana, and the Williston Basin. These areas are associated with disposal from the Barnett, Eagle Ford, Haynesville, and Bakken Shales respectively. In each region we analyzed data that were collected using temporary seismographs of the National Science Foundation's USArray Transportable Array. Injection well locations, formations, histories, and volumes are also mapped using public and licensed datasets. Faults are mapped at a range of scales for selected areas that show different levels of seismic activity, and scaling relationships are used to extrapolate between the seismic and wellbore scales. The reactivation potential of these faults is assessed using fault occurrence and in-situ stress conditions, identifying areas of high and low fault reactivation potential. A correlation analysis between fault reactivation potential, induced seismicity, and fluid injection will use spatial statistics to quantify the probability of seismic fault reactivation for a given injection pressure in the studied reservoirs. The limiting conditions inducing fault reactivation will be compared to actual injection parameters (volume, rate, injection duration and frequency) where available. The objective of this project is a statistical reservoir- to basin-scale assessment of fault reactivation and seismicity induced by fluid injection. By assessing the occurrence of earthquakes (M>2) evenly across large geographic regions, this project differs from previous studies of injection-induced seismicity that focused on earthquakes large enough to cause public concern in well-populated areas. The understanding of triggered seismicity gained through this project is expected to enable improved design strategies for waste fluid injection for industry and public decision makers.
Elastic-Waveform Inversion with Compressive Sensing for Sparse Seismic Data
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lin, Youzuo; Huang, Lianjie
2015-01-28
Accurate velocity models of compressional- and shear-waves are essential for geothermal reservoir characterization and microseismic imaging. Elastic-waveform inversion of multi-component seismic data can provide high-resolution inversion results of subsurface geophysical properties. However, the method requires seismic data acquired using dense source and receiver arrays. In practice, seismic sources and/or geophones are often sparsely distributed on the surface and/or in a borehole, such as in 3D vertical seismic profiling (VSP) surveys. We develop a novel elastic-waveform inversion method with compressive sensing for inversion of sparse seismic data. We employ an alternating-minimization algorithm to solve the optimization problem of our new waveform inversion method. We validate our new method using synthetic VSP data for a geophysical model built using geologic features found at the Raft River enhanced-geothermal-system (EGS) field. We apply our method to synthetic VSP data with a sparse source array and compare the results with those obtained with a dense source array. Our numerical results demonstrate that the velocity models produced with our new method using a sparse source array are almost as accurate as those obtained using a dense source array.
NASA Astrophysics Data System (ADS)
Gischig, Valentin; Broccardo, Marco; Amann, Florian; Jalali, Mohammadreza; Esposito, Simona; Krietsch, Hannes; Doetsch, Joseph; Madonna, Claudio; Wiemer, Stefan; Loew, Simon; Giardini, Domenico
2016-04-01
A decameter-scale in-situ stimulation experiment is currently being performed at the Grimsel Test Site in Switzerland by the Swiss Competence Center for Energy Research - Supply of Electricity (SCCER-SoE). The underground research laboratory lies in crystalline rock at a depth of 480 m and exhibits well-documented geology that presents some analogies with the crystalline basement targeted for the exploitation of deep geothermal energy resources in Switzerland. The goal is to perform a series of stimulation experiments, spanning from hydraulic fracturing to controlled fault-slip experiments, in an experimental volume approximately 30 m in diameter. The experiments will contribute to a better understanding of hydro-mechanical phenomena and induced seismicity associated with high-pressure fluid injections. Comprehensive monitoring during stimulation will include observation of injection rate and pressure, pressure propagation in the reservoir, permeability enhancement, 3D dislocation along the faults, rock mass deformation near the fault zone, as well as micro-seismicity. The experimental volume is surrounded by other in-situ experiments (at 50 to 500 m distance) and by infrastructure of the local hydropower company (at ~100 m to several kilometres distance). Although it is generally agreed among stakeholders that levels of induced seismicity may be low given the small total injection volumes of less than 1 m3, a detailed analysis of the potential impact of the stimulation on other experiments and surrounding infrastructure is essential to ensure operational safety. In this contribution, we present a procedure for estimating induced seismic hazard in an experimental situation that is atypical for injection-induced seismicity in terms of injection volumes, injection depths and proximity to affected objects. Both deterministic and probabilistic methods are employed to estimate the maximum possible and the maximum expected induced earthquake magnitude. The deterministic methods are based on McGarr's upper limit for the maximum induced seismic moment. The probabilistic methods rely on estimates of Shapiro's seismogenic index and on seismicity rates from past stimulation experiments scaled to the injection volumes of interest. Using rate-and-state frictional modelling coupled to a hydro-mechanical fracture flow model, we demonstrate that large uncontrolled rupture events are unlikely to occur and that the deterministic upper limits may be sufficiently conservative. The proposed workflow can be applied to similar injection experiments for which hazard to nearby infrastructure may limit the experimental design.
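McGarr's deterministic bound mentioned above limits the cumulative seismic moment to roughly the shear modulus times the injected volume; a short worked example under assumed values (G ≈ 30 GPa is an assumption, not a number quoted in the abstract):

```python
import math

G = 30e9   # shear modulus of crystalline rock in Pa (assumed value)
dV = 1.0   # injected volume in m^3 (upper end quoted in the abstract)

M0_max = G * dV                                    # maximum seismic moment, N*m
Mw_max = (2.0 / 3.0) * (math.log10(M0_max) - 9.1)  # Hanks-Kanamori magnitude
print(f"M0_max = {M0_max:.1e} N*m -> Mw_max ~ {Mw_max:.1f}")   # about Mw 0.9
```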
DOE Office of Scientific and Technical Information (OSTI.GOV)
Mauk, F.J.; Kimball, B.; Davis, R.A.
1984-01-01
The Brazoria seismic network, instrumentation, design, and specifications are described. The data analysis procedures are presented. Seismicity is described in relation to the Pleasant Bayou production history. Seismicity originating near the chemical plant east of the geopressured/geothermal well is discussed. (MHR)
NASA Astrophysics Data System (ADS)
Meletti, C.
2013-05-01
In 2003, a large national project for updating the seismic hazard map and the seismic zoning of Italy started, according to the rules fixed by an Ordinance of the Italian Prime Minister. New input elements for probabilistic seismic hazard assessment were compiled: the earthquake catalogue, the seismogenic zonation, the catalogue completeness, and a set of new attenuation relationships. The map of expected PGA on rock soil conditions with a 10% probability of exceedance is the new reference seismic hazard map for Italy (http://zonesismiche.mi.ingv.it). Subsequently, a further 9 probabilities of exceedance, the uniform hazard spectra up to 2 seconds, and the disaggregation of the PGA were also released. A comprehensive seismic hazard model that fully describes the seismic hazard in Italy was then available, accessible through a webGIS application (http://esse1-gis.mi.ingv.it/en.php). The detailed information made it possible to change the approach for evaluating the proper seismic action for design: from a zone-dependent approach (Italy previously had 4 seismic zones, each with a single design spectrum) to a site-dependent approach in which the design spectrum is defined at each site of a grid of about 11,000 points covering the whole national territory. The new building code became mandatory only after the 6 April 2009 L'Aquila earthquake, the first strong event in Italy after the release of the seismic hazard map. The large number of recordings and the values of the experienced accelerations suggested comparisons between the recorded spectra and the spectra defined in the seismic codes. Even if such comparisons could be robust only after several consecutive 50-year periods of observation, and even though in a probabilistic approach a single observation cannot by itself validate or invalidate the hazard estimate, some of the comparisons that can be undertaken between the observed ground motions and the hazard model used for the seismic code have been performed and have shown that the assumptions and modeling choices made in the Italian hazard study are in line with the observations, considering different return periods, the soil conditions at the recording stations and the uncertainties of the model. A further application of the Italian seismic hazard model was the identification of buildings and factories struck by the 2012 Emilia (Italy) earthquakes that had to be investigated in order to determine whether they were still safe. The law states that no safety check is needed if the construction experienced, without abandoning elastic behavior, a shaking greater than 70% of the design acceleration expected at the site. The ground motion values are evaluated from the available shakemaps (http://shakemap.rm.ingv.it) and the design accelerations are derived from the Building Code, which is based on the reference Italian seismic hazard model. Finally, the national seismic hazard model was one of the most debated elements during the L'Aquila trial against the seismologists and Civil Protection Department experts sentenced to six years in prison on charges of manslaughter because, according to the judge, they had underestimated the risk in the region, giving a wrong message to the people before the strong 2009 L'Aquila earthquake.
New Geophysical Techniques for Offshore Exploration.
ERIC Educational Resources Information Center
Talwani, Manik
1983-01-01
New seismic techniques have been developed recently that borrow theory from academic institutions and technology from industry, allowing scientists to explore deeper into the earth with much greater precision than possible with older seismic methods. Several of these methods are discussed, including the seismic reflection common-depth-point…
DOE Office of Scientific and Technical Information (OSTI.GOV)
LaSalle, F.R.; Golbeg, P.R.; Chenault, D.M.
For reactor and nuclear facilities, both Title 10, Code of Federal Regulations, Part 50, and US Department of Energy Order 6430.1A require assessments of the interaction of non-Safety Class 1 piping and equipment with Safety Class 1 piping and equipment during a seismic event to maintain the safety function. The safety class systems of nuclear reactors or nuclear facilities are designed to the applicable American Society of Mechanical Engineers standards and Seismic Category 1 criteria that require rigorous analysis, construction, and quality assurance. Because non-safety class systems are generally designed to lesser standards and seismic criteria, they may become missiles during a safe shutdown earthquake. The resistance of piping, tubing, and equipment to seismically generated missiles is addressed in the paper. Gross plastic and local penetration failures are considered with applicable test verification. Missile types and seismic zones of influence are discussed. Field qualification data are also developed for missile evaluation.
Attenuation and velocity dispersion in the exploration seismic frequency band
NASA Astrophysics Data System (ADS)
Sun, Langqiu
In an anelastic medium, seismic waves are distorted by attenuation and velocity dispersion, which depend on the petrophysical properties of reservoir rocks. The effective attenuation and velocity dispersion are a combination of intrinsic attenuation and apparent attenuation due to scattering, the transmission response, and the data acquisition system. Velocity dispersion is usually neglected in seismic data processing, partly because of insufficient observations in the exploration seismic frequency band. This thesis investigates methods of measuring velocity dispersion in the exploration seismic frequency band and interprets the velocity dispersion data in terms of petrophysical properties. Broadband, uncorrelated vibrator data are suitable for measuring velocity dispersion in the exploration seismic frequency band, and a broad bandwidth optimizes the observability of velocity dispersion. Four methods of measuring velocity dispersion in uncorrelated vibrator VSP data are investigated: the sliding window crosscorrelation (SWCC) method, the instantaneous phase method, the spectral decomposition method, and the cross spectrum method. Among them, the SWCC method is a new method with satisfactory robustness, accuracy, and efficiency. Using the SWCC method, velocity dispersion is measured in uncorrelated vibrator VSP data from three areas with different geological settings: the Mallik gas hydrate zone, the McArthur River uranium mines, and the Outokumpu crystalline rocks. The observed velocity dispersion is fitted to a straight line with respect to log frequency for a constant (frequency-independent) Q value, which provides an alternative method for calculating Q. A constant Q value does not link directly to petrophysical properties, so a modeling study is implemented for the Mallik and McArthur River data to interpret the velocity dispersion observations in terms of petrophysical properties. Detailed multi-parameter petrophysical reservoir models are built according to the well logs, and the model parameters are adjusted by fitting the synthetic data to the observed data. In this way, seismic attenuation and velocity dispersion provide new insight into petrophysical properties at the Mallik and McArthur River sites. Potentially, observations of attenuation and velocity dispersion in the exploration seismic frequency band can improve the deconvolution process for vibrator data, Q-compensation, near-surface analysis, and first-break picking for seismic data.
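For a constant (frequency-independent) Q, phase velocity is commonly modeled as linear in the logarithm of frequency (the Kolsky/Futterman form), so the straight-line fit mentioned above yields Q from the slope; a minimal sketch under that standard assumption (reference frequency and values are illustrative):

```python
import numpy as np

def q_from_dispersion(freqs_hz, velocities):
    """Estimate a constant Q from phase velocities using the standard
    (Kolsky/Futterman) relation v(f) = v(f0) * [1 + ln(f/f0) / (pi*Q)],
    i.e. a straight line in ln(f) whose slope is v(f0) / (pi*Q)."""
    slope, intercept = np.polyfit(np.log(freqs_hz), velocities, 1)
    v0 = intercept + slope * np.log(freqs_hz[0])   # velocity at lowest frequency
    return v0 / (np.pi * slope)

# synthetic check: dispersion generated with Q = 40 and v(10 Hz) = 2500 m/s
f = np.linspace(10.0, 200.0, 50)
v = 2500.0 * (1.0 + np.log(f / 10.0) / (np.pi * 40.0))
print(q_from_dispersion(f, v))   # ~40
```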
NASA Astrophysics Data System (ADS)
Tibuleac, I. M.; Iovenitti, J. L.; Pullammanappallil, S. K.; von Seggern, D. H.; Ibser, H.; Shaw, D.; McLachlan, H.
2015-12-01
A new, cost-effective and non-invasive exploration method using ambient seismic noise has been tested at Soda Lake, NV, with promising results. Seismic interferometry was used to extract Green's functions (P and surface waves) from 21 days of continuous ambient seismic noise. With the advantage of S-velocity models estimated from surface waves, an ambient-noise seismic reflection survey along a line (named Line 2) reproduced the results of the active survey, although with lower resolution, wherever the ambient seismic noise was not contaminated by strong cultural noise. Ambient noise resolution was lower at depth (below 1000 m) compared to the active survey. Useful information could be recovered from the ambient seismic noise, including dipping features and fault locations. Processing method tests were developed with the potential to improve the virtual reflection survey results. Through innovative signal processing techniques, periods not typically analyzed with high-frequency sensors were used in this study to obtain seismic velocity model information to a depth of 1.4 km. New seismic parameters such as lateral variations of the Green's function reflection component, waveform entropy, stochastic parameters (correlation length and Hurst number) and spectral frequency content extracted from the active and passive surveys showed potential to indicate geothermal favorability through their correlation with high-temperature anomalies, and showed potential as fault indicators, thus reducing the uncertainty in fault identification. Geothermal favorability maps along ambient seismic Line 2 were generated considering temperature, lithology and the seismic parameters investigated in this study, and were compared to the active Line 2 results. Pseudo-favorability maps were also generated using only the seismic parameters analyzed in this study.
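The interferometric step, cross-correlating and stacking ambient noise between two stations to approximate the inter-station Green's function, can be sketched as follows (preprocessing such as whitening or one-bit normalization is omitted, and all parameters are illustrative):

```python
import numpy as np
from scipy.signal import fftconvolve

def noise_crosscorrelation(trace_a, trace_b, max_lag):
    """Cross-correlation of ambient noise at two stations; the stack of many
    such windows approximates the inter-station Green's function."""
    cc = fftconvolve(trace_a, trace_b[::-1], mode="full")
    mid = len(cc) // 2                         # zero-lag sample
    return cc[mid - max_lag: mid + max_lag + 1]

def stack_windows(noise_a, noise_b, win_len, max_lag):
    """Split long noise records into windows, correlate each, and stack."""
    n_win = min(len(noise_a), len(noise_b)) // win_len
    stack = np.zeros(2 * max_lag + 1)
    for k in range(n_win):
        sl = slice(k * win_len, (k + 1) * win_len)
        stack += noise_crosscorrelation(noise_a[sl], noise_b[sl], max_lag)
    return stack / max(n_win, 1)
```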
NASA Astrophysics Data System (ADS)
de Macedo, Isadora A. S.; da Silva, Carolina B.; de Figueiredo, J. J. S.; Omoboya, Bode
2017-01-01
Wavelet estimation as well as seismic-to-well tie procedures are at the core of every seismic interpretation workflow. In this paper we perform a comparative study of wavelet estimation methods for seismic-to-well tie. Two approaches to wavelet estimation are discussed: a deterministic estimation, based on both seismic and well log data, and a statistical estimation, based on predictive deconvolution and the classical assumptions of the convolutional model, which provides a minimum-phase wavelet. Our algorithms, for both wavelet estimation methods, introduce a semi-automatic approach to determine the optimum parameters of the deterministic and statistical wavelet estimation and, further, to estimate the optimum seismic wavelets by searching for the highest correlation coefficient between the recorded trace and the synthetic trace when the time-depth relationship is accurate. Tests with numerical data yield some qualitative conclusions, obtained by comparing deterministic and statistical wavelet estimation in detail, which are likely to be useful for seismic inversion and interpretation of field data. The feasibility of this approach is verified on real seismic and well data from the Viking Graben field, North Sea, Norway. Our results also show the influence of washout zones in the well log data on the quality of the seismic-to-well tie.
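The parameter search described above reduces, in its simplest form, to convolving a well-log reflectivity series with candidate wavelets and keeping the one whose synthetic trace correlates best with the recorded trace. The sketch below illustrates that idea with Ricker wavelets standing in for the deterministic or statistical estimates; all names and the choice of candidate parameters are illustrative, not the authors' algorithm.

```python
import numpy as np

def ricker(f_peak, dt, n=101):
    """Zero-phase Ricker wavelet with peak frequency f_peak (Hz)."""
    t = (np.arange(n) - n // 2) * dt
    a = (np.pi * f_peak * t) ** 2
    return (1.0 - 2.0 * a) * np.exp(-a)

def best_wavelet(reflectivity, recorded, dt, candidate_peaks):
    """Return the candidate peak frequency whose synthetic ties best to the data."""
    best_f, best_cc = None, -np.inf
    for f in candidate_peaks:
        synthetic = np.convolve(reflectivity, ricker(f, dt), mode="same")
        cc = np.corrcoef(synthetic, recorded)[0, 1]      # correlation coefficient
        if cc > best_cc:
            best_f, best_cc = f, cc
    return best_f, best_cc
```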
He, W.; Anderson, R.N.
1998-08-25
A method is disclosed for inverting 3-D seismic reflection data obtained from seismic surveys to derive impedance models for a subsurface region, and for inversion of multiple 3-D seismic surveys (i.e., 4-D seismic surveys) of the same subsurface volume, separated in time to allow for dynamic fluid migration, such that small scale structure and regions of fluid and dynamic fluid flow within the subsurface volume being studied can be identified. The method allows for the mapping and quantification of available hydrocarbons within a reservoir and is thus useful for hydrocarbon prospecting and reservoir management. An iterative seismic inversion scheme constrained by actual well log data which uses a time/depth dependent seismic source function is employed to derive impedance models from 3-D and 4-D seismic datasets. The impedance values can be region grown to better isolate the low impedance hydrocarbon bearing regions. Impedance data derived from multiple 3-D seismic surveys of the same volume can be compared to identify regions of dynamic evolution and bypassed pay. Effective Oil Saturation or net oil thickness can also be derived from the impedance data and used for quantitative assessment of prospective drilling targets and reservoir management. 20 figs.
He, Wei; Anderson, Roger N.
1998-01-01
A method is disclosed for inverting 3-D seismic reflection data obtained from seismic surveys to derive impedance models for a subsurface region, and for inversion of multiple 3-D seismic surveys (i.e., 4-D seismic surveys) of the same subsurface volume, separated in time to allow for dynamic fluid migration, such that small scale structure and regions of fluid and dynamic fluid flow within the subsurface volume being studied can be identified. The method allows for the mapping and quantification of available hydrocarbons within a reservoir and is thus useful for hydrocarbon prospecting and reservoir management. An iterative seismic inversion scheme constrained by actual well log data which uses a time/depth dependent seismic source function is employed to derive impedance models from 3-D and 4-D seismic datasets. The impedance values can be region grown to better isolate the low impedance hydrocarbon bearing regions. Impedance data derived from multiple 3-D seismic surveys of the same volume can be compared to identify regions of dynamic evolution and bypassed pay. Effective Oil Saturation or net oil thickness can also be derived from the impedance data and used for quantitative assessment of prospective drilling targets and reservoir management.
Seismic analysis for translational failure of landfills with retaining walls.
Feng, Shi-Jin; Gao, Li-Ya
2010-11-01
In the seismic impact zone, seismic force can be a major triggering mechanism for translational failures of landfills. The scope of this paper is to develop a three-part wedge method for seismic analysis of translational failures of landfills with retaining walls. An approximate solution of the factor of safety can be calculated. Unlike previous conventional limit equilibrium methods, the new method is capable of revealing the effects of both the solid waste shear strength and the retaining wall on the translational failures of landfills during an earthquake. Parameter studies of the developed method show that the factor of safety decreases with increasing seismic coefficient, while it increases quickly with increasing minimum friction angle beneath the waste mass for various horizontal seismic coefficients. Increasing the minimum friction angle beneath the waste mass appears to be more effective than any other parameter for increasing the factor of safety under the considered conditions. Thus, selecting liner materials with a higher friction angle will considerably reduce the potential for translational failures of landfills during an earthquake. The factor of safety gradually increases with increasing height of the retaining wall for various horizontal seismic coefficients. A higher retaining wall is beneficial to the seismic stability of the landfill. Simply ignoring the retaining wall will lead to serious underestimation of the factor of safety. In addition, an approximate solution of the yield acceleration coefficient of the landfill is also presented based on the proposed method.
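The following is not the paper's three-part wedge solution but a minimal single-block pseudo-static sketch that reproduces the trend the abstract describes: the factor of safety against translational sliding falls as the horizontal seismic coefficient kh rises and grows with the interface friction angle. Liner slope, friction angle, and kh values are hypothetical.

```python
import numpy as np

def sliding_factor_of_safety(beta_deg, phi_deg, kh):
    """Pseudo-static FS of a rigid block on an inclined liner (no cohesion)."""
    b, p = np.radians(beta_deg), np.radians(phi_deg)
    resisting = (np.cos(b) - kh * np.sin(b)) * np.tan(p)   # frictional resistance
    driving = np.sin(b) + kh * np.cos(b)                   # gravity + inertial driving
    return resisting / driving

for kh in (0.0, 0.1, 0.2):   # horizontal seismic coefficient
    print(kh, round(sliding_factor_of_safety(beta_deg=10.0, phi_deg=20.0, kh=kh), 2))
```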
What controls the maximum magnitude of injection-induced earthquakes?
NASA Astrophysics Data System (ADS)
Eaton, D. W. S.
2017-12-01
Three different approaches for estimation of maximum magnitude are considered here, along with their implications for managing risk. The first approach is based on a deterministic limit for seismic moment proposed by McGarr (1976), which was originally designed for application to mining-induced seismicity. This approach has since been reformulated for earthquakes induced by fluid injection (McGarr, 2014). In essence, this method assumes that the upper limit for seismic moment release is constrained by the pressure-induced stress change. A deterministic limit is given by the product of shear modulus and the net injected fluid volume. This method is based on the assumptions that the medium is fully saturated and in a state of incipient failure. An alternative geometrical approach was proposed by Shapiro et al. (2011), who postulated that the rupture area for an induced earthquake falls entirely within the stimulated volume. This assumption reduces the maximum-magnitude problem to one of estimating the largest potential slip surface area within a given stimulated volume. Finally, van der Elst et al. (2016) proposed that the maximum observed magnitude, statistically speaking, is the expected maximum value for a finite sample drawn from an unbounded Gutenberg-Richter distribution. These three models imply different approaches for risk management. The deterministic method proposed by McGarr (2014) implies that a ceiling on the maximum magnitude can be imposed by limiting the net injected volume, whereas the approach developed by Shapiro et al. (2011) implies that the time-dependent maximum magnitude is governed by the spatial size of the microseismic event cloud. Finally, the sample-size hypothesis of Van der Elst et al. (2016) implies that the best available estimate of the maximum magnitude is based upon observed seismicity rate. The latter two approaches suggest that real-time monitoring is essential for effective management of risk. A reliable estimate of maximum plausible magnitude would clearly be beneficial for quantitative risk assessment of injection-induced seismicity.
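A small, hedged illustration of the first (deterministic) approach described above, following McGarr (2014): the cumulative seismic moment is capped by the product of shear modulus and net injected volume, which converts to a moment magnitude through the standard relation. The shear modulus is a typical crustal value and the injected volume is hypothetical.

```python
import numpy as np

G = 3.0e10              # Pa, a typical crustal shear modulus
delta_V = 1.0e5         # m^3, hypothetical net injected fluid volume
M0_max = G * delta_V    # N*m, deterministic cap on cumulative seismic moment
Mw_max = (2.0 / 3.0) * (np.log10(M0_max) - 9.1)   # standard moment-magnitude relation
print(f"M0_max = {M0_max:.2e} N*m  ->  Mw_max = {Mw_max:.2f}")
```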
Automatic identification of alpine mass movements based on seismic and infrasound signals
NASA Astrophysics Data System (ADS)
Schimmel, Andreas; Hübl, Johannes
2017-04-01
The automatic detection and identification of alpine mass movements like debris flows, debris floods or landslides is gaining importance for mitigation measures in the densely populated and intensively used alpine regions. Since these mass movement processes emit characteristic seismic and acoustic waves in the low frequency range, the events can be detected and identified based on these signals. Several approaches for detection and warning systems based on seismic or infrasound signals have therefore already been developed. However, a combination of both methods, which can increase detection probability and reduce false alarms, is currently used very rarely and is a promising basis for an automatic detection and identification system. This work presents an approach for a detection and identification system based on a combination of seismic and infrasound sensors, which can detect sediment-related mass movements from a remote location unaffected by the process. The system is based on one infrasound sensor and one geophone placed co-located, and a microcontroller running a specially designed detection algorithm that can detect mass movements in real time directly at the sensor site. Furthermore, this work attempts to extract more information from the seismic and infrasound spectra produced by different sediment-related mass movements in order to identify the process type and estimate the magnitude of the event. The system is currently installed and tested on five test sites in Austria, two in Italy, one in Switzerland and one in Germany. This high number of test sites is used to build a large database of very different events, which will be the basis for a new identification method for alpine mass movements. These tests show promising results, so the system provides an easy-to-install and inexpensive approach for a detection and warning system.
A method of gravity and seismic sequential inversion and its GPU implementation
NASA Astrophysics Data System (ADS)
Liu, G.; Meng, X.
2011-12-01
In this abstract, we introduce a gravity and seismic sequential inversion method to invert for density and velocity together. For the gravity inversion, we use an iterative method based on a correlation imaging algorithm; for the seismic inversion, we use full waveform inversion. The link between density and velocity is an empirical formula called the Gardner equation; for large volumes of data, we use the GPU to accelerate the computation. The gravity inversion method is iterative: first we calculate the correlation imaging of the observed gravity anomaly, which takes values between -1 and +1, and multiply this value by a small density increment to form the initial density model. We compute a forward result with this initial model, calculate the correlation imaging of the misfit between the observed data and the forward data, multiply that correlation imaging result by a small density increment and add it to the current model, and repeat the procedure until a final inverted density model is obtained. For the seismic inversion, we use a method based on the linearity of the acoustic wave equation written in the frequency domain; with an initial velocity model we can obtain a good velocity result. In the sequential inversion of gravity and seismic data, we need a link formula to convert between density and velocity; in our method, we use the Gardner equation. Driven by the insatiable market demand for real-time, high-definition 3D images, the programmable NVIDIA Graphics Processing Unit (GPU) has been developed as a co-processor of the CPU for high performance computing. Compute Unified Device Architecture (CUDA) is a parallel programming model and software environment provided by NVIDIA, designed to overcome the challenge of using traditional general purpose GPUs while maintaining a low learning curve for programmers familiar with standard programming languages such as C. In our inversion processing, we use the GPU to accelerate the gravity and seismic inversion. Taking the gravity inversion as an example, its kernels are the gravity forward simulation and the correlation imaging; after parallelization on the GPU in the 3D case, the original five CPU loops of the inversion module are reduced to three, and the original five CPU loops of the forward module are reduced to two. Acknowledgments: We acknowledge the financial support of the Sinoprobe project (201011039 and 201011049-03), the Fundamental Research Funds for the Central Universities (2010ZY26 and 2011PY0183), the National Natural Science Foundation of China (41074095) and the Open Project of the State Key Laboratory of Geological Processes and Mineral Resources (GPMR0945).
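The density-velocity link used above is the Gardner relation, rho = a * Vp^0.25. The sketch below uses the commonly quoted default coefficient (a = 0.31 for Vp in m/s and rho in g/cm^3), which is not necessarily the calibrated value used by the authors, to convert between the seismic velocity model and the gravity density model.

```python
import numpy as np

def gardner_density(vp_ms, a=0.31, exponent=0.25):
    """Density (g/cm^3) from P-wave velocity (m/s) via the Gardner relation."""
    return a * np.power(vp_ms, exponent)

def gardner_velocity(rho_gcc, a=0.31, exponent=0.25):
    """Inverse relation: P-wave velocity (m/s) from density (g/cm^3)."""
    return np.power(rho_gcc / a, 1.0 / exponent)

print(gardner_density(3000.0))     # about 2.29 g/cm^3
print(gardner_velocity(2.29))      # roughly back to 3000 m/s
```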
NASA Astrophysics Data System (ADS)
Wang, Qian; Gao, Jinghuai
2018-02-01
As a powerful tool for hydrocarbon detection and reservoir characterization, the quality factor, Q, provides useful information in seismic data processing and interpretation. In this paper, we propose a novel method for Q estimation. The generalized seismic wavelet (GSW) function is introduced to fit the amplitude spectrum of seismic waveforms with two parameters: a fractional value and a reference frequency. We then derive an analytical relation between the GSW function and the Q factor of the medium. When a seismic wave propagates through a viscoelastic medium, the GSW function can be employed to fit the amplitude spectra of the source and attenuated wavelets; the fractional values and reference frequencies can then be evaluated numerically from the discrete Fourier spectrum. After calculating the peak frequency based on the obtained fractional value and reference frequency, the relationship between the GSW function and the Q factor can be built using the conventional peak frequency shift method. Synthetic tests indicate that our method achieves higher accuracy and is more robust to random noise than existing methods. Furthermore, the proposed method is applicable to different types of source wavelet. Field data application also demonstrates the effectiveness of our method in estimating seismic attenuation and its potential for reservoir characterization.
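For context, the conventional peak-frequency-shift idea that the GSW-based method builds on can be stated compactly: for a Ricker-like source spectrum, the downshift of the spectral peak from f_m (source) to f_p (attenuated arrival) over travel time t gives Q = pi*t*f_p*f_m^2 / (2*(f_m^2 - f_p^2)). The sketch below evaluates that classical formula with made-up numbers; it is the baseline method, not the proposed GSW approach.

```python
import math

def q_from_peak_frequency_shift(f_source_peak, f_received_peak, travel_time):
    """Q from the downshift of a Ricker-like spectral peak over travel time (s)."""
    fm, fp, t = f_source_peak, f_received_peak, travel_time
    return math.pi * t * fp * fm**2 / (2.0 * (fm**2 - fp**2))

# e.g. a 50 Hz source peak shifted down to 40 Hz after 0.5 s of propagation
print(q_from_peak_frequency_shift(50.0, 40.0, 0.5))   # about 87
```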
Design risk assessment for burst-prone mines: Application in a Canadian mine
NASA Astrophysics Data System (ADS)
Cheung, David J.
A proactive stance towards improving the effectiveness and consistency of risk assessments has recently been adopted by mining companies and industry. Forecasts for the next 10-20 years indicate that ore deposits accessible using shallow mining techniques will diminish. The industry continues to strive for success in "deeper" mining projects in order to keep up with the continuing demand for raw materials. Although the returns are quite profitable, many projects have been sidelined due to high uncertainty and technical risk in the mining of the mineral deposit. Several hardrock mines have faced rockbursting and seismicity problems. Among those reported, mines in countries such as South Africa, Australia and Canada have documented cases of severe rockburst conditions attributed to the mining depth. Severe rockburst conditions, known as "burst-prone", can be effectively managed with design. Adopting a more robust design can reduce the exposure of workers and equipment to adverse conditions and minimize the economic consequences, which can hinder the bottom line of an operation. This thesis presents a methodology created for assessing the design risk in burst-prone mines. The methodology includes an evaluation of relative risk ratings for scenarios with options of risk reduction through several design principles. Because rockbursts are a hazard associated with seismic events, the methodology is based on research in the area of mining seismicity, factoring in rockmass failure mechanisms resulting from a combination of mining-induced stress, geological structures, rockmass properties and mining influences. The methodology was applied to case studies at the Craig Mine of Xstrata Nickel in Sudbury, Ontario, which is known to contain seismically active fault zones. A customized risk assessment was created and applied to rockburst case studies, evaluating the seismic vulnerability and consequence for each case. Application of the methodology to Craig Mine demonstrates that changes in the design can reduce both exposure risk (personnel and equipment) and economic risk (revenue and costs). Fatal and catastrophic consequences can be averted through robust planning and design. Two customized approaches were developed to conduct the risk assessment of case studies at Craig Mine. First, the Brownfield Approach utilizes the seismic database to determine the seismic hazard from a rating system that evaluates frequency-magnitude, event size, and event-blast relation. Second, the Greenfield Approach utilizes the seismic database, focusing on larger magnitude events, rock type, and geological structure. The customized Greenfield Approach can also be applied in the evaluation of design risk in deep mines with the same setting and conditions as Craig Mine. Other mines with different settings and conditions can apply the principles in the methodology to evaluate design alternatives and risk reduction strategies for burst-prone mines.
Contemporary Tectonics of China
1978-02-01
that it would be of value to the United States to understand seismicity in China because their methods used in predicting large intraplate seismic...ability to discriminate between natural events and nuclear explosions. General Method In order to circumvent the limitations placed on studies of...accurate relative locations. Fault planes may be determined with this method, thereby removing the ambiguity of the choice of fault plane from a fault plane
A Parametric Study of Nonlinear Seismic Response Analysis of Transmission Line Structures
Wang, Yanming; Yi, Zhenhua
2014-01-01
The nonlinear seismic response of transmission line structures subjected to earthquake loading is studied parametrically in this paper. The transmission lines are modeled by cable elements which account for the nonlinearity of the cable, based on a real project. Nonuniform ground motions are generated using a stochastic approach based on random vibration analysis. The effects of multicomponent ground motions, correlations among multicomponent ground motions, wave travel, coherency loss, and local site conditions on the responses of the cables are investigated using the nonlinear time history analysis method. The results show that multicomponent seismic excitations should be considered, but the correlations among multicomponent ground motions can be neglected. The wave passage effect has a significant influence on the responses of the cables. The change of the degree of coherency loss has little influence on the response of the cables, but the responses of the cables are affected significantly by the effect of coherency loss itself. The responses of the cables change little as the degree of difference in site conditions changes. The effects of multicomponent ground motions, wave passage, coherency loss, and local site conditions should be considered in the seismic design of transmission line structures. PMID:25133215
Automated Processing Workflow for Ambient Seismic Recordings
NASA Astrophysics Data System (ADS)
Girard, A. J.; Shragge, J.
2017-12-01
Structural imaging using body-wave energy present in ambient seismic data remains a challenging task, largely because these wave modes are commonly much weaker than surface wave energy. In a number of situations body-wave energy has been extracted successfully; however, nearly all successful body-wave extraction and imaging approaches have focused on cross-correlation processing. While this is useful for interferometric purposes, it can also lead to the inclusion of unwanted noise events that dominate the resulting stack, leaving body-wave energy overpowered by the coherent noise. Conversely, wave-equation imaging can be applied directly on non-correlated ambient data that has been preprocessed to mitigate unwanted energy (i.e., surface waves, burst-like and electromechanical noise) to enhance body-wave arrivals. Following this approach, though, requires a significant preprocessing effort on ambient seismic data volumes that often reach terabytes, which is expensive and requires automation to be feasible. In this work we outline an automated processing workflow designed to optimize body wave energy from an ambient seismic data set acquired on a large-N array at a mine site near Lalor Lake, Manitoba, Canada. We show that processing ambient seismic data in the recording domain, rather than the cross-correlation domain, allows us to mitigate energy that is inappropriate for body-wave imaging. We first develop a method for window selection that automatically identifies and removes data contaminated by coherent high-energy bursts. We then apply time- and frequency-domain debursting techniques to mitigate the effects of remaining strong amplitude and/or monochromatic energy without severely degrading the overall waveforms. After each processing step we implement a QC check to investigate improvements in the convergence rates - and the emergence of reflection events - in the cross-correlation plus stack waveforms over hour-long windows. Overall, the QC analyses suggest that automated preprocessing of ambient seismic recordings in the recording domain successfully mitigates unwanted coherent noise events in both the time and frequency domain. Accordingly, we assert that this method is beneficial for direct wave-equation imaging with ambient seismic recordings.
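A minimal sketch of the kind of automated window selection described above: flag recording windows whose peak amplitude is dominated by coherent bursts using a robust (median/MAD) threshold and keep only the quiet ones before any further processing. The window length and threshold factor are illustrative, not the authors' tuned values.

```python
import numpy as np

def keep_quiet_windows(trace, fs, window_s=60.0, mad_factor=8.0):
    """Indices of windows whose peak amplitude is not burst-dominated."""
    n = int(window_s * fs)
    nwin = len(trace) // n
    peaks = np.array([np.abs(trace[i * n:(i + 1) * n]).max() for i in range(nwin)])
    med = np.median(peaks)
    mad = np.median(np.abs(peaks - med)) + 1e-12        # robust spread estimate
    return np.where(peaks < med + mad_factor * mad)[0]  # windows to keep
```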
Accurate Measurement of Velocity and Acceleration of Seismic Vibrations near Nuclear Power Plants
NASA Astrophysics Data System (ADS)
Arif, Syed Javed; Imdadullah; Asghar, Mohammad Syed Jamil
In spite of all precautions based on prerequisite geological studies, the sites of nuclear power plants are susceptible to seismic vibrations and their consequent effects. The effect of the ongoing nuclear tragedy in Japan caused by an earthquake and its consequent tsunami on March 11, 2011 is currently beyond contemplation, and it has led to a rethinking of nuclear power stations by various governments around the world. Therefore, the prediction of the location and time of large earthquakes has regained great importance. The earth's crust is made up of several wide, thin and rigid plate-like blocks which are in constant motion with respect to each other. A series of vibrations on the earth's surface is produced by the generation of elastic seismic waves due to sudden rupture within the plates during the release of accumulated strain energy. The frequency range of seismic vibrations is from 0 to 10 Hz; however, there is a large variation in the magnitude, velocity and acceleration of these vibrations. The response of existing or conventional methods of measuring seismic vibrations is very slow, of the order of tens of seconds. A systematic, high-resolution measurement of the velocity and acceleration of these vibrations is useful to interpret the pattern of waves and their anomalies more accurately, which is useful for the prediction of an earthquake. In the proposed work, a fast rotating magnetic field (RMF) is used to measure the velocity and acceleration of seismic vibrations in the millisecond range. The broad spectrum of pulses within a one-second range, measured by the proposed method, gives all possible values of the instantaneous velocity and instantaneous acceleration of the seismic vibrations. The spectrum of pulses in the millisecond range becomes available, which is useful to measure the pattern of foreshocks to predict the time and location of large earthquakes more accurately. Moreover, the peak values of these quantities, rather than averages, are helpful in the proper design of earthquake-resistant nuclear power plants, buildings and structures. The proposed measurement scheme is successfully tested with a microprocessor-based rocking vibration arrangement and the overall performance is recorded under dynamic conditions.
Automated seismic waveform location using Multichannel Coherency Migration (MCM)-I. Theory
NASA Astrophysics Data System (ADS)
Shi, Peidong; Angus, Doug; Rost, Sebastian; Nowacki, Andy; Yuan, Sanyi
2018-03-01
With the proliferation of dense seismic networks sampling the full seismic wavefield, recorded seismic data volumes are getting bigger and automated analysis tools to locate seismic events are essential. Here, we propose a novel Multichannel Coherency Migration (MCM) method to locate earthquakes in continuous seismic data and reveal the location and origin time of seismic events directly from recorded waveforms. By continuously calculating the coherency between waveforms from different receiver pairs, MCM greatly expands the available information which can be used for event location. MCM does not require phase picking or phase identification, which allows fully automated waveform analysis. By migrating the coherency between waveforms, MCM leads to improved source energy focusing. We have tested and compared MCM to other migration-based methods in noise-free and noisy synthetic data. The tests and analysis show that MCM is noise resistant and can achieve more accurate results compared with other migration-based methods. MCM is able to suppress strong interference from other seismic sources occurring at a similar time and location. It can be used with arbitrary 3D velocity models and is able to obtain reasonable location results with smooth but inaccurate velocity models. MCM exhibits excellent location performance and can be easily parallelized giving it large potential to be developed as a real-time location method for very large datasets.
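As a hedged sketch of the migration idea behind MCM (not the published implementation): for a candidate source position and origin time, align the records on predicted travel times, window them, and use the average pairwise correlation coefficient as the imaging value; grid search over candidates then locates the event. Travel-time prediction, grid definition, and window length are left abstract and the parameters are illustrative.

```python
import itertools
import numpy as np

def mcm_value(records, fs, predicted_tt, origin_time, window_s=0.5):
    """Average pairwise waveform coherency for one candidate source/origin time.

    records: list of 1-D traces; predicted_tt: travel time (s) to each receiver.
    """
    n = int(window_s * fs)
    windows = []
    for trace, tt in zip(records, predicted_tt):
        i0 = int(round((origin_time + tt) * fs))
        if i0 < 0 or i0 + n > len(trace):
            return 0.0
        w = trace[i0:i0 + n]
        windows.append((w - w.mean()) / (w.std() + 1e-12))
    pairs = itertools.combinations(windows, 2)
    return float(np.mean([np.dot(a, b) / n for a, b in pairs]))  # high = coherent
```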
Seismic instantaneous frequency extraction based on the SST-MAW
NASA Astrophysics Data System (ADS)
Liu, Naihao; Gao, Jinghuai; Jiang, Xiudi; Zhang, Zhuosheng; Wang, Ping
2018-06-01
The instantaneous frequency (IF) extraction of seismic data has been widely applied in seismic exploration for decades, for example for detecting seismic absorption and characterizing depositional thicknesses. Based on complex-trace analysis, the Hilbert transform (HT) can extract the IF directly; it is a traditional method but is susceptible to noise. In this paper, a robust approach based on the synchrosqueezing transform (SST) is proposed to extract the IF from seismic data. In this process, a novel analytical wavelet is developed and chosen as the basic wavelet, which is called the modified analytical wavelet (MAW) and is derived from the three-parameter wavelet. After transforming the seismic signal into a sparse time-frequency domain via the SST with the MAW (SST-MAW), an adaptive threshold is introduced to improve the noise immunity and accuracy of the IF extraction in a noisy environment. Note that the SST-MAW reconstructs a complex trace to extract the seismic IF. To demonstrate the effectiveness of the proposed method, we apply the SST-MAW to synthetic data and field seismic data. Numerical experiments suggest that the proposed procedure yields higher resolution and better anti-noise performance compared to conventional IF extraction methods based on the HT and the continuous wavelet transform. Moreover, geological features (such as channels) are well characterized, which is useful for further oil/gas reservoir identification.
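For reference, the traditional baseline the abstract compares against can be written in a few lines: the instantaneous frequency is the time derivative of the unwrapped phase of the analytic (Hilbert-transformed) trace. This is a minimal sketch of that baseline, whose noise sensitivity motivates the SST-MAW approach; it is not the proposed method.

```python
import numpy as np
from scipy.signal import hilbert

def instantaneous_frequency(trace, fs):
    """IF (Hz) per sample from the analytic signal of a seismic trace."""
    analytic = hilbert(trace)                        # complex analytic trace
    phase = np.unwrap(np.angle(analytic))            # instantaneous phase (rad)
    return np.gradient(phase) * fs / (2.0 * np.pi)   # d(phase)/dt in Hz
```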
USGS National Seismic Hazard Maps
Frankel, A.D.; Mueller, C.S.; Barnhard, T.P.; Leyendecker, E.V.; Wesson, R.L.; Harmsen, S.C.; Klein, F.W.; Perkins, D.M.; Dickman, N.C.; Hanson, S.L.; Hopper, M.G.
2000-01-01
The U.S. Geological Survey (USGS) recently completed new probabilistic seismic hazard maps for the United States, including Alaska and Hawaii. These hazard maps form the basis of the probabilistic component of the design maps used in the 1997 edition of the NEHRP Recommended Provisions for Seismic Regulations for New Buildings and Other Structures, prepared by the Building Seismic Safety Council and published by FEMA. The hazard maps depict peak horizontal ground acceleration and spectral response at 0.2, 0.3, and 1.0 sec periods, with 10%, 5%, and 2% probabilities of exceedance in 50 years, corresponding to return times of about 500, 1000, and 2500 years, respectively. In this paper we outline the methodology used to construct the hazard maps. There are three basic components to the maps. First, we use spatially smoothed historic seismicity as one portion of the hazard calculation. In this model, we apply the general observation that moderate and large earthquakes tend to occur near areas of previous small or moderate events, with some notable exceptions. Second, we consider large background source zones based on broad geologic criteria to quantify hazard in areas with little or no historic seismicity, but with the potential for generating large events. Third, we include the hazard from specific fault sources. We use about 450 faults in the western United States (WUS) and derive recurrence times from either geologic slip rates or the dating of pre-historic earthquakes from trenching of faults or other paleoseismic methods. Recurrence estimates for large earthquakes in New Madrid and Charleston, South Carolina, were taken from recent paleoliquefaction studies. We used logic trees to incorporate different seismicity models, fault recurrence models, Cascadia great earthquake scenarios, and ground-motion attenuation relations. We present disaggregation plots showing the contribution to hazard at four cities from potential earthquakes with various magnitudes and distances.
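A minimal sketch of the "spatially smoothed historic seismicity" component: grid the earthquake counts and smooth them with a Gaussian kernel so that hazard concentrates near past small and moderate events. The grid spacing and correlation distance here are illustrative and are not the values used for the USGS maps.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def smoothed_seismicity(lons, lats, lon_edges, lat_edges, corr_km=50.0, cell_km=10.0):
    """Gaussian-smoothed grid of earthquake counts (a proxy for future rate)."""
    counts, _, _ = np.histogram2d(lons, lats, bins=[lon_edges, lat_edges])
    return gaussian_filter(counts, sigma=corr_km / cell_km)   # sigma in grid cells
```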
NASA Astrophysics Data System (ADS)
Faizah Bawadi, Nor; Anuar, Shamilah; Rahim, Mustaqqim A.; Mansor, A. Faizal
2018-03-01
Conventional and seismic methods for determining the ultimate pile bearing capacity were proposed and compared. The Spectral Analysis of Surface Waves (SASW) method, one of the non-destructive seismic techniques that do not require drilling and sampling of soils, was used to determine the shear wave velocity (Vs) and damping (D) profiles of the soil. The soil strength was found to be directly proportional to Vs, and its value has been successfully applied to obtain shallow bearing capacity empirically. A method is proposed in this study to determine the pile bearing capacity using Vs and D measurements for the design of piles, and also as an alternative method to verify the bearing capacity obtained from other conventional methods of evaluation. The objectives of this study are to determine the Vs and D profiles through frequency response data from SASW measurements and to compare the pile bearing capacities obtained from this method with those from conventional methods. All SASW test arrays were conducted near the borehole and the location of conventional pile load tests. In obtaining skin and end bearing pile resistance, the Hardin and Drnevich equation was used with reference strains obtained from the method proposed by Abbiss. Back-analysis results of pile bearing capacities from SASW were found to be 18981 kN and 4947 kN, compared to 18014 kN and 4633 kN from IPLT, with differences of 5% and 6% for the Damansara and Kuala Lumpur test sites, respectively. The results of this study indicate that the seismic method proposed in this study has the potential to be used in estimating the pile bearing capacity.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Klein, F.W.
1994-03-28
This bibliography is divided into the following four sections: Seismicity of Hawaii and Kilauea Volcano; Occurrence, locations and accelerations from large historical Hawaiian earthquakes; Seismic hazards of Hawaii; and Methods of seismic hazard analysis. It contains 62 references, most of which are accompanied by short abstracts.
NASA Technical Reports Server (NTRS)
Phillips, Roger J.; Grimm, Robert E.
1991-01-01
The design and ultimate success of network seismology experiments on Mars depends on the present level of Martian seismicity. Volcanic and tectonic landforms observed from imaging experiments show that Mars must have been a seismically active planet in the past and there is no reason to discount the notion that Mars is seismically active today but at a lower level of activity. Models are explored for present day Mars seismicity. Depending on the sensitivity and geometry of a seismic network and the attenuation and scattering properties of the interior, it appears that a reasonable number of Martian seismic events would be detected over the period of a decade. The thermoelastic cooling mechanism as estimated is surely a lower bound, and a more refined estimate would take into account specifically the regional cooling of Tharsis and lead to a higher frequency of seismic events.
Effect of URM infills on seismic vulnerability of Indian code designed RC frame buildings
NASA Astrophysics Data System (ADS)
Haldar, Putul; Singh, Yogendra; Paul, D. K.
2012-03-01
Unreinforced Masonry (URM) is the most common partitioning material in framed buildings in India and many other countries. Although it is well-known that under lateral loading the behavior and modes of failure of the frame buildings change significantly due to infill-frame interaction, the general design practice is to treat infills as nonstructural elements and their stiffness, strength and interaction with the frame is often ignored, primarily because of difficulties in simulation and lack of modeling guidelines in design codes. The Indian Standard, like many other national codes, does not provide explicit insight into the anticipated performance and associated vulnerability of infilled frames. This paper presents an analytical study on the seismic performance and fragility analysis of Indian code-designed RC frame buildings with and without URM infills. Infills are modeled as diagonal struts as per ASCE 41 guidelines and various modes of failure are considered. HAZUS methodology along with nonlinear static analysis is used to compare the seismic vulnerability of bare and infilled frames. The comparative study suggests that URM infills result in a significant increase in the seismic vulnerability of RC frames and their effect needs to be properly incorporated in design codes.
Survey evaluation and design (SED): A case study in Garden Banks, Gulf of Mexico
DOE Office of Scientific and Technical Information (OSTI.GOV)
Johnson, G.; Hannan, A.; Mann, A.D.
1995-12-31
Hydrocarbon exploration in the Gulf of Mexico has reached its mature stages. Exploration objectives such as deep stratigraphic and pre-salt traps are becoming more dominant. As the exploration targets change, earlier 3D seismic surveys, designed for different objectives, become less able to meet the demands of present day exploration. Some areas of the Gulf of Mexico will require reacquisition of new 3D seismic data, redesigned to meet new objectives. Garden Banks is one such area. A major advantage of performing a survey evaluation design (SED) in a mature area is the amount and diversity of available data. Geological profiles, reservoir characterizations, borehole wireline and surface seismic data all serve to aid in the survey design. Given the exploration history and geological objectives, the geophysical analyses of resolution, signal loss, noise, fold, acquisition geometry, migration aperture, velocity anisotropy and others may now be carried out in a much more specific manner. A thorough SED ensures that overall survey objectives will be met and reduces the possibility of overdesign of critical parameters. This generates the highest quality seismic survey for the most reasonable cost.
Seismic risk management solution for nuclear power plants
Coleman, Justin; Sabharwall, Piyush
2014-12-01
Nuclear power plants should operate safely during normal operations and maintain core-cooling capabilities during off-normal events, including external hazards (such as flooding and earthquakes). Management of external hazards to acceptable levels of risk is critical to maintaining nuclear facility and nuclear power plant safety. Seismic risk is determined by convolving the seismic hazard with seismic fragilities (the capacity of systems, structures, and components). Seismic isolation (SI) is one protective measure showing promise to minimize seismic risk. Current SI designs (used in commercial industry) reduce horizontal earthquake loads and protect critical infrastructure from the potentially destructive effects of large earthquakes. The benefit of SI application in the nuclear industry is being recognized, and SI systems have been proposed in American Society of Civil Engineers Standard 4, ASCE-4, to be released in the winter of 2014, for light water reactor facilities using commercially available technology. The intent of ASCE-4 is to provide criteria for seismic analysis of safety-related nuclear structures such that the responses to design basis seismic events, computed in accordance with this standard, will have a small likelihood of being exceeded. The U.S. nuclear industry has not implemented SI to date; a seismic isolation gap analysis meeting was convened on August 19, 2014, to determine progress on implementing SI in the U.S. nuclear industry. The meeting focused on the systems and components that could benefit from SI. As a result, this article highlights the gaps identified at this meeting.
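A small numerical illustration of "convolving the seismic hazard with seismic fragilities": the annual failure frequency is obtained by integrating the fragility curve against the slope of the hazard (annual exceedance) curve. Both curves below are made-up stand-ins (a power-law hazard and a lognormal fragility), not plant-specific data.

```python
import numpy as np
from scipy.stats import lognorm

pga = np.linspace(0.01, 3.0, 300)                   # peak ground acceleration (g)
hazard = 1e-4 * (pga / 0.3) ** -2.0                 # toy annual exceedance curve
fragility = lognorm(s=0.4, scale=0.9).cdf(pga)      # toy P(failure | PGA)

# Annual failure frequency: integrate fragility against the hazard-curve slope
annual_failure_freq = np.trapz(fragility * np.abs(np.gradient(hazard, pga)), pga)
print(f"annual failure frequency = {annual_failure_freq:.2e} per year")
```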
Application of seismic-refraction techniques to hydrologic studies
Haeni, F.P.
1986-01-01
During the past 30 years, seismic-refraction methods have been used extensively in petroleum, mineral, and engineering investigations, and to some extent for hydrologic applications. Recent advances in equipment, sound sources, and computer interpretation techniques make seismic refraction a highly effective and economical means of obtaining subsurface data in hydrologic studies. Aquifers that can be defined by one or more high seismic-velocity surfaces, such as (1) alluvial or glacial deposits in consolidated rock valleys, (2) limestone or sandstone underlain by metamorphic or igneous rock, or (3) saturated unconsolidated deposits overlain by unsaturated unconsolidated deposits, are ideally suited for applying seismic-refraction methods. These methods allow the economical collection of subsurface data, provide the basis for more efficient collection of data by test drilling or aquifer tests, and result in improved hydrologic studies. This manual briefly reviews the basics of seismic-refraction theory and principles. It emphasizes the use of this technique in hydrologic investigations and describes the planning, equipment, field procedures, and interpretation techniques needed for this type of study. Examples of the use of seismic-refraction techniques in a wide variety of hydrologic studies are presented.
Application of seismic-refraction techniques to hydrologic studies
Haeni, F.P.
1988-01-01
During the past 30 years, seismic-refraction methods have been used extensively in petroleum, mineral, and engineering investigations and to some extent for hydrologic applications. Recent advances in equipment, sound sources, and computer interpretation techniques make seismic refraction a highly effective and economical means of obtaining subsurface data in hydrologic studies. Aquifers that can be defined by one or more high-seismic-velocity surfaces, such as (1) alluvial or glacial deposits in consolidated rock valleys, (2) limestone or sandstone underlain by metamorphic or igneous rock, or (3) saturated unconsolidated deposits overlain by unsaturated unconsolidated deposits, are ideally suited for seismic-refraction methods. These methods allow economical collection of subsurface data, provide the basis for more efficient collection of data by test drilling or aquifer tests, and result in improved hydrologic studies. This manual briefly reviews the basics of seismic-refraction theory and principles. It emphasizes the use of these techniques in hydrologic investigations and describes the planning, equipment, field procedures, and interpretation techniques needed for this type of study. Furthermore, examples of the use of seismic-refraction techniques in a wide variety of hydrologic studies are presented.
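As a concrete illustration of the basic refraction interpretation such manuals review, the standard two-layer intercept-time method converts the fitted velocities of the direct and refracted travel-time branches plus the intercept time into refractor depth, z = (t_i/2) * v1*v2 / sqrt(v2^2 - v1^2). The velocities and intercept time below are hypothetical, chosen only to mimic an unsaturated-over-saturated deposit case.

```python
import numpy as np

def refractor_depth(v1, v2, intercept_time):
    """Depth (m) to a flat refractor from the two-layer intercept-time method."""
    return 0.5 * intercept_time * v1 * v2 / np.sqrt(v2**2 - v1**2)

# e.g. unsaturated deposits over saturated deposits: v1=600 m/s, v2=1800 m/s, ti=20 ms
print(refractor_depth(600.0, 1800.0, 0.020))   # about 6.4 m
```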
Probabilistic seismic hazard characterization and design parameters for the Pantex Plant
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bernreuter, D. L.; Foxall, W.; Savy, J. B.
1998-10-19
The Hazards Mitigation Center at Lawrence Livermore National Laboratory (LLNL) updated the seismic hazard and design parameters at the Pantex Plant. The probabilistic seismic hazard (PSH) estimates were first updated using the latest available data and knowledge from LLNL (1993, 1998), Frankel et al. (1996), and other relevant recent studies from several consulting companies. Special attention was given to account for the local seismicity and for the system of potentially active faults associated with the Amarillo-Wichita uplift. Aleatory (random) uncertainty was estimated from the available data and the epistemic (knowledge) uncertainty was taken from results of similar studies. Special attention was given to soil amplification factors for the site. Horizontal Peak Ground Acceleration (PGA) and 5% damped uniform hazard spectra were calculated for six return periods (100 yr., 500 yr., 1000 yr., 2000 yr., 10,000 yr., and 100,000 yr.). The design parameters were calculated following DOE standards (DOE-STD-1022 to 1024). Response spectra for design or evaluation of Performance Category 1 through 4 structures, systems, and components are presented.
NASA Astrophysics Data System (ADS)
Niri, Mohammad Emami; Lumley, David E.
2017-10-01
Integration of 3D and time-lapse 4D seismic data into reservoir modelling and history matching processes poses a significant challenge due to the frequent mismatch between the initial reservoir model, the true reservoir geology, and the pre-production (baseline) seismic data. A fundamental step of a reservoir characterisation and performance study is the preconditioning of the initial reservoir model to equally honour both the geological knowledge and seismic data. In this paper we analyse the issues that have a significant impact on the (mis)match of the initial reservoir model with well logs and inverted 3D seismic data. These issues include the constraining methods for reservoir lithofacies modelling, the sensitivity of the results to the presence of realistic resolution and noise in the seismic data, the geostatistical modelling parameters, and the uncertainties associated with quantitative incorporation of inverted seismic data in reservoir lithofacies modelling. We demonstrate that in a geostatistical lithofacies simulation process, seismic constraining methods based on seismic litho-probability curves and seismic litho-probability cubes yield the best match to the reference model, even when realistic resolution and noise is included in the dataset. In addition, our analyses show that quantitative incorporation of inverted 3D seismic data in static reservoir modelling carries a range of uncertainties and should be cautiously applied in order to minimise the risk of misinterpretation. These uncertainties are due to the limited vertical resolution of the seismic data compared to the scale of the geological heterogeneities, the fundamental instability of the inverse problem, and the non-unique elastic properties of different lithofacies types.
NASA Astrophysics Data System (ADS)
Denli, H.; Huang, L.
2008-12-01
Quantitative monitoring of reservoir property changes is essential for safe geologic carbon sequestration. Time-lapse seismic surveys have the potential to effectively monitor fluid migration in the reservoir that causes geophysical property changes such as density and P- and S-wave velocities. We introduce a novel method for quantitative estimation of seismic velocity changes using time-lapse seismic data. The method employs elastic sensitivity wavefields, which are the derivatives of the elastic wavefield with respect to the density and the P- and S-wave velocities of a target region. We derive the elastic sensitivity equations from analytical differentiation of the elastic-wave equations with respect to the seismic-wave velocities. The sensitivity equations are coupled with the wave equations in such a way that elastic waves arriving in a target reservoir behave as a secondary source for the sensitivity fields. We use a staggered-grid finite-difference scheme with perfectly matched layer absorbing boundary conditions to simultaneously solve the elastic-wave equations and the elastic sensitivity equations. Using the elastic-wave sensitivities, a linear relationship between relative seismic velocity changes in the reservoir and time-lapse seismic data at receiver locations can be derived, which leads to an over-determined system of equations. We solve this system of equations using a least-squares method for each receiver to obtain P- and S-wave velocity changes. We validate the method using both surface and VSP synthetic time-lapse seismic data for a multi-layered model and the elastic Marmousi model. We then apply it to the time-lapse field VSP data acquired at the Aneth oil field in Utah. A total of 10.5K tons of CO2 was injected into the oil reservoir between the two VSP surveys for enhanced oil recovery. The synthetic and field data studies show that our new method can quantitatively estimate changes in seismic velocities within a reservoir due to CO2 injection/migration.
New approach to detect seismic surface waves in 1Hz-sampled GPS time series
Houlié, N.; Occhipinti, G.; Blanchard, T.; Shapiro, N.; Lognonné, P.; Murakami, M.
2011-01-01
Recently, co-seismic source characterization based on GPS measurements has been performed in the near and far field with remarkable results. However, the accuracy of the ground displacement measurement inferred from GPS phase residuals still depends on the distribution of satellites in the sky. We test here a method, based on double difference (DD) computations of the Line of Sight (LOS), that allows detection of 3D co-seismic ground shaking. The DD method is quasi-analytically free of most of the intrinsic errors affecting GPS measurements. The seismic waves presented in this study produced DD amplitudes 4 and 7 times stronger than the background noise. The method is benchmarked using the GEONET GPS stations recording the Hokkaido Earthquake (2003 September 25th, Mw = 8.3). PMID:22355563
Radtke, Robert P; Stokes, Robert H; Glowka, David A
2014-12-02
A method for operating an impulsive type seismic energy source in a firing sequence having at least two actuations for each seismic impulse to be generated by the source. The actuations have a time delay between them related to a selected energy frequency peak of the source output. One example of the method is used for generating seismic signals in a wellbore and includes discharging electric current through a spark gap disposed in the wellbore in at least one firing sequence. The sequence includes at least two actuations of the spark gap separated by an amount of time selected to cause acoustic energy resulting from the actuations to have peak amplitude at a selected frequency.
Waveform Retrieval and Phase Identification for Seismic Data from the CASS Experiment
NASA Astrophysics Data System (ADS)
Li, Zhiwei; You, Qingyu; Ni, Sidao; Hao, Tianyao; Wang, Hongti; Zhuang, Cantao
2013-05-01
The minimal destruction to the deployment site and the high repeatability of the Controlled Accurate Seismic Source (CASS) show its potential for investigating seismic wave velocities in the Earth's crust. However, the difficulty in retrieving impulsive seismic waveforms from CASS data and identifying the seismic phases substantially prevents its wide application. For example, identification of the seismic phases and accurate measurement of travel times are essential for resolving the spatial distribution of seismic velocities in the crust. Until now, it has remained a challenging task to estimate accurate travel times of different seismic phases from CASS data, which feature extended wave trains, unlike waveforms from impulsive events such as earthquakes or explosive sources. In this study, we introduce a time-frequency analysis method to process the CASS data, and try to retrieve the seismic waveforms and identify the major seismic phases traveling through the crust. We adopt the Wigner-Ville Distribution (WVD) approach, which has been used in signal detection and parameter estimation for linear frequency modulation (LFM) signals and features the best time-frequency convergence capability. The Wigner-Hough transform (WHT) is applied to retrieve the impulsive waveforms from multi-component LFM signals, which comprise seismic phases with different arrival times. We processed the seismic data of the 40-ton CASS from the field experiment around the Xinfengjiang reservoir with the WVD and WHT methods. The results demonstrate that these methods are effective in waveform retrieval and phase identification, especially for high frequency seismic phases such as PmP and SmS with strong amplitudes at large epicentral distances of 80-120 km. Further studies are still needed to improve the accuracy of travel time estimation, so as to further promote the applicability of the CASS for imaging seismic velocity structure.
NASA Astrophysics Data System (ADS)
Bouchaala, F.; Ali, M. Y.; Matsushima, J.
2016-06-01
In this study a relationship between the seismic wavelength and the scale of heterogeneity in the propagating medium has been examined. The relationship estimates the size of heterogeneity that significantly affects wave propagation at a specific frequency, and enables a decrease in the calculation time of wave scattering estimation. The relationship was applied in analyzing synthetic and Vertical Seismic Profiling (VSP) data obtained from an onshore oilfield in the Emirate of Abu Dhabi, United Arab Emirates. Prior to estimation of the attenuation, a robust processing workflow was applied to both synthetic and recorded data to increase the Signal-to-Noise Ratio (SNR). Two conventional methods, the spectral ratio and centroid frequency shift methods, were applied to estimate the attenuation from the extracted seismic waveforms, in addition to a new method based on seismic interferometry. The attenuation profiles derived from the three approaches showed similar variations; however, the interferometry method resulted in greater depth resolution and differences in attenuation magnitude. Furthermore, the attenuation profiles revealed a significant contribution of scattering to seismic wave attenuation. The results obtained from the seismic interferometry method show that the estimated scattering attenuation ranges from 0 to 0.1 and the estimated intrinsic attenuation can reach 0.2. The subsurface of the studied zones is known to be highly porous and permeable, which suggests that the mechanism of the intrinsic attenuation is probably the interaction between pore fluids and solids.
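For reference, the spectral ratio method named above rests on the relation ln(A_deep(f)/A_shallow(f)) = -pi*f*dt/Q + const, so a straight-line fit of the log spectral ratio against frequency yields the interval Q. The sketch below assumes the spectra and interval travel time come from already-processed VSP waveforms; it is the conventional method, not the interferometry-based approach of the paper.

```python
import numpy as np

def q_spectral_ratio(freqs, spec_shallow, spec_deep, delta_t):
    """Interval Q between two receivers separated by travel time delta_t (s)."""
    log_ratio = np.log(np.abs(spec_deep) / (np.abs(spec_shallow) + 1e-20))
    slope, _ = np.polyfit(freqs, log_ratio, 1)   # ln ratio ~ -pi*f*delta_t/Q + const
    return -np.pi * delta_t / slope
```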
Effect of a Near Fault on the Seismic Response of a Base-Isolated Structure with a Soft Storey
NASA Astrophysics Data System (ADS)
Athamnia, B.; Ounis, A.; Abdeddaim, M.
2017-12-01
This study focuses on the soft-storey behavior of RC structures with lead core rubber bearing (LRB) isolation systems under near-fault and far-fault motions. Under near-fault ground motions, seismic isolation devices might perform poorly because of large isolator displacements caused by the large velocity and displacement pulses associated with such strong motions. In this study, four different structural models have been designed to study the effect of soft-storey behavior under near-fault and far-fault motions. The seismic analysis of the isolated reinforced concrete buildings is carried out using a nonlinear time history analysis method. Inter-storey drifts, absolute acceleration, displacement, base shear forces, hysteretic loops and the distribution of plastic hinges are examined as results of the analysis. These results show that the performance of a base-isolated RC structure is more affected by increasing the height of a storey under near-fault motion than under far-fault motion.
Report of the Workshop on Extreme Ground Motions at Yucca Mountain, August 23-25, 2004
Hanks, T.C.; Abrahamson, N.A.; Board, M.; Boore, D.M.; Brune, J.N.; Cornell, C.A.
2006-01-01
This Workshop has its origins in the probabilistic seismic hazard analysis (PSHA) for Yucca Mountain, the designated site of the underground repository for the nation's high-level radioactive waste. In 1998 the Nuclear Regulatory Commission's Senior Seismic Hazard Analysis Committee (SSHAC) developed guidelines for PSHA which were published as NUREG/CR-6372, 'Recommendations for probabilistic seismic hazard analysis: guidance on uncertainty and the use of experts,' (SSHAC, 1997). This Level-4 study was the most complicated and complex PSHA ever undertaken at the time. The procedures, methods, and results of this PSHA are described in Stepp et al. (2001), mostly in the context of a probability of exceedance (hazard) of 10-4/yr for ground motion at Site A, a hypothetical, reference rock outcrop site at the elevation of the proposed emplacement drifts within the mountain. Analysis and inclusion of both aleatory and epistemic uncertainty were significant and time-consuming aspects of the study, which took place over three years and involved several dozen scientists, engineers, and analysts.
Seismic signal and noise on Europa
NASA Astrophysics Data System (ADS)
Panning, Mark; Stähler, Simon; Bills, Bruce; Castillo Castellanos, Jorge; Huang, Hsin-Hua; Husker, Allen; Kedar, Sharon; Lorenz, Ralph; Pike, William T.; Schmerr, Nicholas; Tsai, Victor; Vance, Steven
2017-10-01
Seismology is one of our best tools for detailing the interior structure of planetary bodies, and a seismometer is included in the baseline and threshold mission design for the upcoming Europa Lander mission. Guiding mission design and planning for adequate science return, though, requires modeling of both the anticipated signal and noise. Assuming ice seismicity on Europa behaves according to statistical properties observed in Earth catalogs and scaling cumulative seismic moment release to the moon, we can simulate long seismic records and estimate background noise and peak signal amplitudes (Panning et al., 2017). This suggests a sensitive instrument comparable to many broadband terrestrial instruments or the SP instrument from the InSight mission to Mars will be able to record signals, while high frequency geophones are likely inadequate. We extend this analysis to also begin incorporation of spatial and temporal variation due to the tidal cycle, which can help inform landing site selection. We also begin exploration of how chaotic terrain at the bottom of the ice shell and inter-ice heterogeneities (i.e. internal melt structures) may affect anticipated seismic observations using 2D numerical seismic simulations. M. P. Panning, S. C. Stähler, H.-H. Huang, S. D. Vance, S. Kedar, V. C. Tsai, W. T. Pike, R. D. Lorenz, "Expected seismicity and the seismic noise environment of Europa," J. Geophys. Res., in revision, 2017.
NASA Astrophysics Data System (ADS)
Linzer, Lindsay; Mhamdi, Lassaad; Schumacher, Thomas
2015-01-01
A moment tensor inversion (MTI) code originally developed to compute source mechanisms from mining-induced seismicity data is now being used in the laboratory in a civil engineering research environment. Quantitative seismology methods designed for geological environments are being tested with the aim of developing techniques to assess and monitor fracture processes in structural concrete members such as bridge girders. In this paper, we highlight aspects of the MTI_Toolbox programme that make it applicable to performing inversions on acoustic emission (AE) data recorded by networks of uniaxial sensors. The influence of the configuration of a seismic network on the conditioning of the least-squares system and subsequent moment tensor results for a real, 3-D network are compared to a hypothetical 2-D version of the same network. This comparative analysis is undertaken for different cases: for networks consisting entirely of triaxial or uniaxial sensors; for both P and S-waves, and for P-waves only. The aim is to guide the optimal design of sensor configurations where only uniaxial sensors can be installed. Finally, the findings of recent laboratory experiments where the MTI_Toolbox has been applied to a concrete beam test are presented and discussed.
Near-surface structure of the Carpathian Foredeep marginal zone in the Roztocze Hills area
NASA Astrophysics Data System (ADS)
Majdański, M.; Grzyb, J.; Owoc, B.; Krogulec, T.; Wysocka, A.
2018-03-01
A shallow seismic survey was made along a 1280 m profile in the marginal zone of the Carpathian Foredeep. Measurements performed with standalone wireless stations and a specially designed accelerated weight drop system resulted in high-fold (up to 60), long-offset seismic data. The acquisition was designed to gather both high-resolution reflection and wide-angle refraction data at long offsets. Seismic processing was carried out separately along two paths, focusing on the shallow and the deep structures. Data processing for the shallow part combines travel-time tomography and wide-angle reflection imaging. This difficult analysis shows that a careful manual front mute combined with correct statics leads to detailed recognition of structures between 30 and 200 m. For those depths, we recognised several SW-dipping tectonic displacements and a main fault zone that is probably the main fault limiting the Roztocze Hills area and at the same time constitutes the border of the Carpathian Forebulge. The deep interpretation clearly shows a NE-dipping evaporite layer at a depth of about 500-700 m. We also show limitations of our survey that lead to unclear recognition of the first 30 m, concluding with the need for joint interpretation with other geophysical methods.
ON-SITE CAVITY LOCATION-SEISMIC PROFILING AT NEVADA TEST SITE
DOE Office of Scientific and Technical Information (OSTI.GOV)
Forbes, C.B.; Peterson, R.A.; Heald, C.L.
1961-10-25
Experimental seismic studies were conducted at the Nevada Test Site for the purpose of designing and evaluating the most promising seismic techniques for on-site inspection. Post-explosion seismic profiling was done in volcanic tuff in the vicinity of the Rainier and Blanca underground explosions. Pre-explosion seismic profiling was done over granitic rock outcrops in the Climax Stock area, and over tuff at proposed locations for Linen and Orchid. Near-surface velocity profiling techniques based on measurements of seismic time-distance curves gave evidence of disturbances in near-surface rock velocities over the Rainier and Blanca sites. These disturbances appear to be related to near-surface fracturing and spallation effects resulting from the reflection of the original intense compression wave pulse at the near surface as a tension pulse. Large tuned seismometer arrays were used for horizontal seismic ranging in an attempt to record 'back-scattered' or reflected seismic waves from subsurface cavities or zones of rock fracturing around the underground explosions. Some possible seismic events were recorded from the near vicinities of the Rainier and Blanca sites. However, many more similar events were recorded from numerous other locations, presumably originating from naturally occurring underground geological features. No means was found for discriminating between artificial and natural events recorded by horizontal seismic ranging, and the results were, therefore, not immediately useful for inspection purposes. It is concluded that in some instances near-surface velocity profiling methods may provide a useful tool in verifying the presence of spalled zones above underground nuclear explosion sites. In the case of horizontal seismic ranging, it appears that successful application would require development of satisfactory means for recognition of and discrimination against seismic responses to naturally occurring geological features. It is further concluded that, although more sophisticated instrumentation systems can be conceived, the most promising returns for effort expended can be expected to come from increased experience, skill, and human ingenuity in applying existing techniques. The basic problem is in large part a geological one of differentiating seismic response to man-made irregularities from that of natural features of a similar or greater size. It would not appear realistic to consider the seismic tool as a proven routine device for giving clear answers in on-site inspection operations. Application must still be considered largely experimental.
Data and Workflow Management Challenges in Global Adjoint Tomography
NASA Astrophysics Data System (ADS)
Lei, W.; Ruan, Y.; Smith, J. A.; Modrak, R. T.; Orsvuran, R.; Krischer, L.; Chen, Y.; Balasubramanian, V.; Hill, J.; Turilli, M.; Bozdag, E.; Lefebvre, M. P.; Jha, S.; Tromp, J.
2017-12-01
It is crucial to take the complete physics of wave propagation into account in seismic tomography to further improve the resolution of tomographic images. The adjoint method is an efficient way of incorporating 3D wave simulations in seismic tomography. However, global adjoint tomography is computationally expensive, requiring thousands of wavefield simulations and massive data processing. Through our collaboration with the Oak Ridge National Laboratory (ORNL) computing group and an allocation on Titan, ORNL's GPU-accelerated supercomputer, we are now performing our global inversions by assimilating waveform data from over 1,000 earthquakes. The first challenge we encountered is dealing with the sheer amount of seismic data. Data processing based on conventional data formats and processing tools (such as SAC), which are not designed for parallel systems, became our major bottleneck. To facilitate the data processing procedures, we designed the Adaptive Seismic Data Format (ASDF) and developed a set of Python-based processing tools to replace legacy FORTRAN-based software. These tools greatly enhance reproducibility and accountability while taking full advantage of highly parallel systems and showing superior scaling on modern computational platforms. The second challenge is that the data processing workflow contains more than 10 sub-procedures, making it difficult to manage and prone to human error. To reduce human intervention as much as possible, we are developing a framework specifically designed for seismic inversion based on state-of-the-art workflow management research, specifically the Ensemble Toolkit (EnTK), in collaboration with the RADICAL team from Rutgers University. Using the initial developments of the EnTK, we are able to utilize the full computing power of the data processing cluster RHEA at ORNL while keeping human interaction to a minimum and greatly reducing the data processing time. Thanks to all these improvements, we are now able to perform iterations quickly enough on a dataset of more than 1,000 earthquakes. Starting from model GLAD-M15 (Bozdag et al., 2016), an elastic 3D model with a transversely isotropic upper mantle, we have successfully performed 5 iterations. Our goal is to finish 10 iterations, i.e., generating GLAD-M25, by the end of this year.
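A minimal sketch of the kind of Python-based waveform pre-processing step described above, written with ObsPy on a synthetic trace rather than the ASDF toolchain itself; the filter band, sampling rate and trace are all illustrative, and in practice such a function would run in parallel over thousands of station records.

```python
# Hedged sketch: one pre-processing step (demean, taper, band-pass) applied to a
# synthetic long-period record; requires ObsPy. Not the authors' actual pipeline.
import numpy as np
from obspy import Trace, Stream, UTCDateTime

def preprocess(st):
    """Detrend, taper and band-pass a Stream, mimicking one step of such a pipeline."""
    st = st.copy()
    st.detrend("linear")
    st.taper(max_percentage=0.05)
    st.filter("bandpass", freqmin=0.01, freqmax=0.1, corners=4, zerophase=True)
    return st

# synthetic stand-in for one station's hour-long record at 10 samples/s
tr = Trace(data=np.random.default_rng(0).normal(size=36000).astype(np.float64))
tr.stats.sampling_rate = 10.0
tr.stats.starttime = UTCDateTime(2017, 1, 1)
print(preprocess(Stream([tr])))
```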
NASA Astrophysics Data System (ADS)
Huang, Yin-Nan
Nuclear power plants (NPPs) and spent nuclear fuel (SNF) are required by code and regulations to be designed for a family of extreme events, including very rare earthquake shaking, loss of coolant accidents, and tornado-borne missile impacts. Blast loading due to malevolent attack became a design consideration for NPPs and SNF after the terrorist attacks of September 11, 2001. The studies presented in this dissertation assess the performance of sample conventional and base isolated NPP reactor buildings subjected to seismic effects and blast loadings. The response of the sample reactor building to tornado-borne missile impacts and internal events (e.g., loss of coolant accidents) will not change if the building is base isolated and so these hazards were not considered. The sample NPP reactor building studied in this dissertation is composed of containment and internal structures with a total weight of approximately 75,000 tons. Four configurations of the reactor building are studied, including one conventional fixed-base reactor building and three base-isolated reactor buildings using Friction Pendulum(TM), lead rubber and low damping rubber bearings. The seismic assessment of the sample reactor building is performed using a new procedure proposed in this dissertation that builds on the methodology presented in the draft ATC-58 Guidelines and the widely used Zion method, which uses fragility curves defined in terms of ground-motion parameters for NPP seismic probabilistic risk assessment. The new procedure improves the Zion method by using fragility curves that are defined in terms of structural response parameters since damage and failure of NPP components are more closely tied to structural response parameters than to ground motion parameters. Alternate ground motion scaling methods are studied to help establish an optimal procedure for scaling ground motions for the purpose of seismic performance assessment. The proposed performance assessment procedure is used to evaluate the vulnerability of the conventional and base-isolated NPP reactor buildings. The seismic performance assessment confirms the utility of seismic isolation at reducing spectral demands on secondary systems. Procedures to reduce the construction cost of secondary systems in isolated reactor buildings are presented. A blast assessment of the sample reactor building is performed for an assumed threat of 2000 kg of TNT explosive detonated on the surface with a closest distance to the reactor building of 10 m. The air and ground shock waves produced by the design threat are generated and used for performance assessment. The air blast loading to the sample reactor building is computed using a Computational Fluid Dynamics code Air3D and the ground shock time series is generated using an attenuation model for soil/rock response. Response-history analysis of the sample conventional and base isolated reactor buildings to external blast loadings is performed using the hydrocode LS-DYNA. The spectral demands on the secondary systems in the isolated reactor building due to air blast loading are greater than those for the conventional reactor building but much smaller than those spectral demands associated with Safe Shutdown Earthquake shaking. The isolators are extremely effective at filtering out high acceleration, high frequency ground shock loading.
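To illustrate the idea of fragility curves defined on structural response parameters (rather than ground-motion parameters), here is a minimal lognormal fragility sketch; the choice of in-structure spectral acceleration as the demand parameter and the median/dispersion values are illustrative assumptions, not values from the study.

```python
# Minimal sketch of a component fragility curve defined on a structural response
# parameter; median and dispersion are illustrative, not from the dissertation.
import numpy as np
from scipy.stats import norm

def fragility(edp, median, beta):
    """Lognormal fragility: probability of reaching the damage state given the EDP."""
    return norm.cdf(np.log(edp / median) / beta)

sa_floor = np.linspace(0.05, 3.0, 60)                # in-structure spectral accel. (g)
p_fail = fragility(sa_floor, median=1.2, beta=0.4)   # fragility over a range of demands
print(f"P(damage | Sa = 1.2 g) = {fragility(1.2, 1.2, 0.4):.2f}")   # 0.50 at the median
```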
Seismic Hazard Assessment at Esfaraen‒Bojnurd Railway, North‒East of Iran
NASA Astrophysics Data System (ADS)
Haerifard, S.; Jarahi, H.; Pourkermani, M.; Almasian, M.
2018-01-01
The objective of this study is to evaluate the seismic hazard along the Esfarayen-Bojnurd railway using the probabilistic seismic hazard assessment (PSHA) method. The assessment was carried out on a recent data set to take into account the historic seismicity and the updated instrumental seismicity. A homogeneous earthquake catalogue was compiled and a proposed seismic source model was presented. Attenuation equations recently recommended by experts, developed from earthquake data obtained in tectonic environments similar to those in and around the studied area, were weighted and used for the assessment of seismic hazard within a logic-tree framework. Considering a grid of 1.2 × 1.2 km covering the study area, the ground acceleration was calculated for every node. Hazard maps at bedrock conditions were produced for peak ground acceleration for return periods of 74, 475 and 2475 years.
Infrasound Generation from the HH Seismic Hammer.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Jones, Kyle Richard
2014-10-01
The HH Seismic Hammer is a large "weight-drop" source for active-source seismic experiments. This system provides a repetitive source that can be stacked for subsurface imaging and exploration studies. Although the seismic hammer was designed for seismological studies, it was surmised that it might produce energy in the infrasonic frequency range due to the ground motion generated by the 13 metric ton drop mass. This study demonstrates that the seismic hammer generates a consistent acoustic source that could be used for in-situ sensor characterization, array evaluation, and surface-air coupling studies for source characterization.
NASA Astrophysics Data System (ADS)
Seong-hwa, Y.; Wee, S.; Kim, J.
2016-12-01
Observed ground motions are composed of three main factors: seismic source, seismic wave attenuation, and site amplification. Among these, site amplification is an important factor that should be considered in order to estimate soil-structure dynamic interaction more reliably. Although various estimation methods have been suggested, this study used the method of Castro et al. (1997) to estimate site amplification; this method has recently been extended to background noise, coda waves and S waves. This study applied the method of Castro et al. (1997) to three different types of seismic waves: S-wave energy, background noise, and coda waves. More than 200 ground motions (accelerations) from the East Japan earthquake (March 11th, 2011) sequence, recorded at seismic stations on Jeju Island (JJU, SGP, HALB, SSP and GOS), Korea, were analysed. The results showed that most of the seismic stations gave similar results among the three types of seismic energy. Each station showed its own site-amplification characteristics in low, high and specific resonance frequency ranges. Comparison of this study with other studies can provide much information about the dynamic amplification characteristics of domestic sites and about site classification.
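For illustration, a simple spectral-ratio estimate of site amplification can be sketched as follows; this is a generic ratio of smoothed Fourier amplitude spectra against a reference record, not the exact Castro et al. (1997) inversion used in the study, and the traces, window and smoothing width are synthetic assumptions.

```python
# Illustrative spectral-ratio site-amplification estimate on synthetic windows.
import numpy as np

def smoothed_amplitude_spectrum(trace, dt, width=5):
    """Hann-windowed FFT amplitude spectrum, boxcar-smoothed over `width` bins."""
    spec = np.abs(np.fft.rfft(trace * np.hanning(len(trace))))
    kernel = np.ones(width) / width
    return np.convolve(spec, kernel, mode="same"), np.fft.rfftfreq(len(trace), dt)

dt = 0.01                                   # 100 Hz sampling (assumed)
t = np.arange(0, 20.48, dt)
rng = np.random.default_rng(1)
reference = rng.normal(size=t.size)         # stand-in for a windowed reference-rock record
site = 3.0 * rng.normal(size=t.size)        # site record, here simply amplified noise

spec_site, freqs = smoothed_amplitude_spectrum(site, dt)
spec_ref, _ = smoothed_amplitude_spectrum(reference, dt)
amplification = spec_site / spec_ref
band = (freqs > 0.5) & (freqs < 20.0)
print("mean amplification 0.5-20 Hz:", round(float(amplification[band].mean()), 2))
```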
Patton, John M.; Guy, Michelle R.; Benz, Harley M.; Buland, Raymond P.; Erickson, Brian K.; Kragness, David S.
2016-08-18
This report provides an overview of the capabilities and design of Hydra, the global seismic monitoring and analysis system used for earthquake response and catalog production at the U.S. Geological Survey National Earthquake Information Center (NEIC). Hydra supports the NEIC's worldwide earthquake monitoring mission in areas such as seismic event detection, seismic data insertion and storage, seismic data processing and analysis, and seismic data output. The Hydra system automatically identifies seismic phase arrival times and detects the occurrence of earthquakes in near-real time. The system integrates and inserts parametric and waveform seismic data into discrete events in a database for analysis. Hydra computes seismic event parameters, including locations, multiple magnitudes, moment tensors, and depth estimates. Hydra supports the NEIC's 24/7 analyst staff with a suite of seismic analysis graphical user interfaces. In addition to the NEIC's monitoring needs, the system supports the processing of aftershock and temporary deployment data, and supports the NEIC's quality assurance procedures. The Hydra system continues to be developed to expand its seismic analysis and monitoring capabilities.
Mikesell, T. Dylan; Malcolm, Alison E.; Yang, Di; Haney, Matthew M.
2015-01-01
Time-shift estimation between arrivals in two seismic traces before and after a velocity perturbation is a crucial step in many seismic methods. The accuracy of the estimated velocity perturbation location and amplitude depend on this time shift. Windowed cross correlation and trace stretching are two techniques commonly used to estimate local time shifts in seismic signals. In the work presented here, we implement Dynamic Time Warping (DTW) to estimate the warping function – a vector of local time shifts that globally minimizes the misfit between two seismic traces. We illustrate the differences of all three methods compared to one another using acoustic numerical experiments. We show that DTW is comparable to or better than the other two methods when the velocity perturbation is homogeneous and the signal-to-noise ratio is high. When the signal-to-noise ratio is low, we find that DTW and windowed cross correlation are more accurate than the stretching method. Finally, we show that the DTW algorithm has better time resolution when identifying small differences in the seismic traces for a model with an isolated velocity perturbation. These results impact current methods that utilize not only time shifts between (multiply) scattered waves, but also amplitude and decoherence measurements. DTW is a new tool that may find new applications in seismology and other geophysical methods (e.g., as a waveform inversion misfit function).
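For readers unfamiliar with the technique, a minimal dynamic time warping sketch is given below: it fills an accumulated cost matrix, backtracks the optimal path, and converts that path into local time shifts between two synthetic traces. It illustrates the idea only and is not the authors' implementation; the traces, misfit and delay are assumed for illustration.

```python
# Minimal DTW sketch: local time shifts between two 1-D traces.
import numpy as np

def dtw_shifts(a, b):
    n, m = len(a), len(b)
    dist = (a[:, None] - b[None, :]) ** 2          # pointwise misfit
    acc = np.full((n, m), np.inf)
    acc[0, 0] = dist[0, 0]
    for i in range(n):                             # accumulate minimal path cost
        for j in range(m):
            if i == 0 and j == 0:
                continue
            best = np.inf
            if i > 0:
                best = min(best, acc[i - 1, j])
            if j > 0:
                best = min(best, acc[i, j - 1])
            if i > 0 and j > 0:
                best = min(best, acc[i - 1, j - 1])
            acc[i, j] = dist[i, j] + best
    i, j, path = n - 1, m - 1, [(n - 1, m - 1)]    # backtrack to (0, 0)
    while i > 0 or j > 0:
        candidates = []
        if i > 0 and j > 0:
            candidates.append((acc[i - 1, j - 1], i - 1, j - 1))
        if i > 0:
            candidates.append((acc[i - 1, j], i - 1, j))
        if j > 0:
            candidates.append((acc[i, j - 1], i, j - 1))
        _, i, j = min(candidates)
        path.append((i, j))
    path.reverse()
    return np.array([jj - ii for ii, jj in path])  # local shift in samples

t = np.linspace(0, 1, 200)
ref = np.sin(2 * np.pi * 5 * t)
per = np.sin(2 * np.pi * 5 * (t - 0.02))           # perturbed trace, delayed ~4 samples
print("median local shift (samples):", np.median(dtw_shifts(ref, per)))
```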
Quantitative estimation of time-variable earthquake hazard by using fuzzy set theory
NASA Astrophysics Data System (ADS)
Deyi, Feng; Ichikawa, M.
1989-11-01
In this paper, the various methods of fuzzy set theory, called fuzzy mathematics, have been applied to the quantitative estimation of the time-variable earthquake hazard. The results obtained consist of the following. (1) Quantitative estimation of the earthquake hazard on the basis of seismicity data. By using some methods of fuzzy mathematics, seismicity patterns before large earthquakes can be studied more clearly and more quantitatively, highly active periods in a given region and quiet periods of seismic activity before large earthquakes can be recognized, similarities in temporal variation of seismic activity and seismic gaps can be examined and, on the other hand, the time-variable earthquake hazard can be assessed directly on the basis of a series of statistical indices of seismicity. Two methods of fuzzy clustering analysis, the method of fuzzy similarity, and the direct method of fuzzy pattern recognition have been studied in particular. One method of fuzzy clustering analysis is based on fuzzy netting, and another is based on the fuzzy equivalent relation. (2) Quantitative estimation of the earthquake hazard on the basis of observational data for different precursors. The direct method of fuzzy pattern recognition has been applied to research on earthquake precursors of different kinds. On the basis of the temporal and spatial characteristics of recognized precursors, earthquake hazards in different terms can be estimated. This paper mainly deals with medium-short-term precursors observed in Japan and China.
NASA Astrophysics Data System (ADS)
Jurado, Maria Jose; Teixido, Teresa; Martin, Elena; Segarra, Miguel; Segura, Carlos
2013-04-01
In the frame of research conducted to develop efficient strategies for the investigation of rock properties and fluids ahead of tunnel excavations, the seismic interferometry method was applied to analyze data acquired in boreholes instrumented with geophone strings. The results obtained confirmed that seismic interferometry provided an improved resolution of petrophysical properties to identify heterogeneities and geological structures ahead of the excavation. These features are beyond the resolution of other conventional geophysical methods but can cause severe problems in the excavation of tunnels. Geophone strings were used to record different types of seismic noise generated at the tunnel head during excavation with a tunnelling machine and also during the placement of the rings covering the tunnel excavation. In this study we show how tunnel construction activities have been characterized as a source of seismic signal and used in our research as the seismic source for generating a 3D reflection seismic survey. The data were recorded in a vertical water-filled borehole with a borehole seismic string at a distance of 60 m from the tunnel trace. A reference pilot signal was obtained from seismograms acquired close to the tunnel face in order to obtain the best signal-to-noise ratio for the interferometry processing (Poletto et al., 2010). The seismic interferometry method (Claerbout, 1968) was successfully applied to image the subsurface geological structure using the seismic wavefield generated by tunnelling (tunnelling machine and construction activities) recorded with geophone strings. This technique was applied by simulating virtual shot records, related to the number of receivers in the borehole, from the transmitted seismic events and processing the data as a reflection seismic survey. The pseudo-reflective wavefield was obtained by cross-correlation of the transmitted wave data. We applied the relationship between the transmission response and the reflection response for a 1D multilayer structure, and then a 3D approach (Wapenaar, 2004). As a result of this seismic interferometry experiment, the 3D reflectivity model (frequencies and resolution ranges) was obtained. We also proved that the seismic interferometry approach can be applied in asynchronous seismic auscultation. The reflections detected in the virtual seismic sections are in agreement with the geological features encountered during the excavation of the tunnel and also with the petrophysical properties and parameters measured in previous geophysical borehole logging. References: Claerbout, J.F., 1968. Synthesis of a layered medium from its acoustic transmission response. Geophysics, 33, 264-269. Poletto, F., Corubolo, P. and Comelli, P., 2010. Drill-bit seismic interferometry with and without pilot signals. Geophysical Prospecting, 58, 257-265. Wapenaar, K., Thorbecke, J., and Draganov, D., 2004. Relations between reflection and transmission responses of three-dimensional inhomogeneous media. Geophysical Journal International, 156, 179-194.
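The interferometric step itself can be sketched very simply: each geophone recording of the tunnelling noise is cross-correlated with a pilot trace recorded near the tunnel face, turning the borehole string into a virtual shot gather. The sketch below uses synthetic traces and a synthetic moveout; it is a minimal illustration, not the processing flow of the study.

```python
# Sketch of pilot-trace cross-correlation interferometry on synthetic borehole data.
import numpy as np
from scipy.signal import fftconvolve

dt, nt = 0.001, 4000
rng = np.random.default_rng(2)
pilot = rng.normal(size=nt)                     # pilot signal recorded near the tunnel face

n_geophones = 24
delays = np.arange(n_geophones) * 5             # travel-time moveout in samples (synthetic)
records = np.zeros((n_geophones, nt))
for k, d in enumerate(delays):
    records[k, d:] = 0.8 * pilot[: nt - d]      # delayed, attenuated copy of the pilot
    records[k] += 0.2 * rng.normal(size=nt)     # ambient noise

# cross-correlation = convolution with the time-reversed pilot
virtual_gather = np.array([fftconvolve(tr, pilot[::-1], mode="full") for tr in records])
lags = (np.arange(virtual_gather.shape[1]) - (nt - 1)) * dt
peaks = lags[np.argmax(virtual_gather, axis=1)]
print("recovered moveout (s):", np.round(peaks, 3))   # ~ 0.000, 0.005, 0.010, ...
```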
NASA Astrophysics Data System (ADS)
Tian, Jingjing
Low-rise woodframe buildings with disproportionately flexible ground stories represent a significant percentage of the building stock in seismically vulnerable communities in the Western United States. These structures have a readily identifiable structural weakness at the ground level due to an asymmetric distribution of large openings in the perimeter wall lines and to a lack of interior partition walls, resulting in a soft story condition that makes the structure highly susceptible to severe damage or collapse under design-level earthquakes. The conventional approach to retrofitting such structures is to increase the ground story stiffness. An alternate approach is to increase the energy dissipation capacity of the structure via the incorporation of supplemental energy dissipation devices (dampers), thereby relieving the energy dissipation demands on the framing system. Such a retrofit approach is consistent with a Performance-Based Seismic Retrofit (PBSR) philosophy through which multiple performance levels may be targeted. The effectiveness of such a retrofit is presented via examination of the seismic response of a full-scale four-story building that was tested on the outdoor shake table at NEES-UCSD and a full-scale three-story building that was tested using slow pseudo-dynamic hybrid testing at NEES-UB. In addition, a Direct Displacement Design (DDD) methodology was developed as an improvement over current DDD methods by considering torsion, with or without the implementation of damping devices, in an attempt to avoid the computational expense of nonlinear time-history analysis (NLTHA) and thus facilitating widespread application of PBSR in engineering practice.
Seismic data restoration with a fast L1 norm trust region method
NASA Astrophysics Data System (ADS)
Cao, Jingjie; Wang, Yanfei
2014-08-01
Seismic data restoration is a major strategy to provide a reliable wavefield when field data do not satisfy the Shannon sampling theorem. Recovery by sparsity-promoting inversion often yields sparse solutions of seismic data in a transformed domain; however, most methods for sparsity-promoting inversion are line-search methods, which are efficient but inclined to converge to local solutions. Using a trust-region method, which can provide globally convergent solutions, is a good choice to overcome this shortcoming. A trust-region method for sparse inversion has been proposed; however, its efficiency should be improved to make it suitable for large-scale computation. In this paper, a new L1 norm trust-region model is proposed for seismic data restoration, and a robust gradient projection method for solving the sub-problem is utilized. Numerical results on synthetic and field data demonstrate that the proposed trust-region method achieves excellent computation speed and is a viable alternative for large-scale computation.
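The gradient-projection idea for an L1-constrained sub-problem can be illustrated with a small sketch: minimize ||Ax - b||² subject to ||x||₁ ≤ τ, projecting onto the L1 ball with the standard sorting algorithm. The operator A, data b and the use of the true L1 budget are synthetic assumptions; this is not the authors' trust-region code.

```python
# Hedged sketch of projected-gradient iterations for an L1-ball-constrained misfit.
import numpy as np

def project_l1_ball(v, tau):
    """Euclidean projection of v onto {x : ||x||_1 <= tau} (sorting algorithm)."""
    if np.abs(v).sum() <= tau:
        return v
    u = np.sort(np.abs(v))[::-1]
    cssv = np.cumsum(u) - tau
    idx = np.arange(1, len(u) + 1)
    rho = np.nonzero(u - cssv / idx > 0)[0][-1]
    theta = cssv[rho] / (rho + 1.0)
    return np.sign(v) * np.maximum(np.abs(v) - theta, 0.0)

rng = np.random.default_rng(3)
n, m = 200, 80                                  # sparse model, incomplete data
x_true = np.zeros(n)
x_true[rng.choice(n, 5, replace=False)] = rng.normal(size=5)
A = rng.normal(size=(m, n)) / np.sqrt(m)        # stand-in for the sampling/transform operator
b = A @ x_true

x = np.zeros(n)
step = 1.0 / np.linalg.norm(A, 2) ** 2          # safe step length (inverse Lipschitz constant)
for _ in range(500):
    x = project_l1_ball(x - step * A.T @ (A @ x - b), tau=np.abs(x_true).sum())
print("relative model error:", np.linalg.norm(x - x_true) / np.linalg.norm(x_true))
```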
FROM THE HISTORY OF PHYSICS: Georgii L'vovich Shnirman: designer of fast-response instruments
NASA Astrophysics Data System (ADS)
Bashilov, I. P.
1994-07-01
A biography is given of the outstanding Russian scientist Georgii L'vovich Shnirman, whose scientific life had been 'top secret'. He was an experimental physicist and instrument designer, the founder of many branches of the Soviet instrument-making industry, the originator of a theory of electric methods of integration and differentiation, a theory of astasisation of pendulums, and also of original measurement methods. He was the originator and designer of automatic systems for the control of the measuring apparatus used at nuclear test sites and of automatic seismic station systems employed in monitoring nuclear tests. He also designed the first loop oscilloscopes in the Soviet Union, high-speed photographic and cine cameras (streak cameras, etc.), and many other unique instruments, including some mounted on moving objects.
NASA Astrophysics Data System (ADS)
Gao, Shanghua; Xue, Bing
2017-04-01
The dynamic range of the currently most widely used 24-bit seismic data acquisition devices is 10-20 dB lower than that of broadband seismometers, and this can affect the completeness of seismic waveform recordings under certain conditions. However, this problem is not easy to solve because of the lack of analog-to-digital converter (ADC) chips with more than 24 bits on the market, so the key difficulty for higher-resolution data acquisition devices lies in achieving an ADC circuit with more than 24 bits. In this paper, we propose a method in which an adder, an integrator, a digital-to-analog converter chip, a field-programmable gate array, and an existing low-resolution ADC chip are used to build a third-order 16-bit oversampling delta-sigma modulator. This modulator is equipped with a digital decimation filter, thus forming a complete analog-to-digital converting circuit. Experimental results show that, within the 0.1-40 Hz frequency range, the circuit board's dynamic range reaches 158.2 dB, its resolution reaches 25.99 bits, and its linearity error is below 2.5 ppm, which is better than what is achieved by the commercial 24-bit ADC chips ADS1281 and CS5371. This demonstrates that the proposed method may alleviate or even solve the amplitude-limitation problem that broadband observation systems so commonly have to face during strong earthquakes.
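To illustrate the oversampling and noise-shaping principle behind such a circuit, here is a minimal first-order, one-bit delta-sigma modulator with a crude averaging decimation filter; the actual design in the paper is a third-order modulator built around a 16-bit DAC and an FPGA, so the rates, oversampling ratio and test tone below are purely illustrative.

```python
# First-order delta-sigma modulator + block-average decimation (illustration only).
import numpy as np

def delta_sigma_first_order(x):
    """1-bit, first-order delta-sigma modulation of a signal scaled to [-1, 1]."""
    integ, fb = 0.0, 0.0
    y = np.empty_like(x)
    for n, s in enumerate(x):
        integ += s - fb                      # integrate input minus fed-back 1-bit output
        y[n] = 1.0 if integ >= 0.0 else -1.0
        fb = y[n]
    return y

osr = 64                                     # oversampling ratio (assumed)
fs_out = 1000.0                              # decimated output rate, samples/s (assumed)
fs_mod = fs_out * osr                        # modulator clock
t = np.arange(0, 1.0, 1.0 / fs_mod)
x = 0.5 * np.sin(2 * np.pi * 5.0 * t)        # 5 Hz test tone

bitstream = delta_sigma_first_order(x)

# crude decimation filter: average each block of `osr` one-bit samples
decimated = bitstream.reshape(-1, osr).mean(axis=1)
reference = x.reshape(-1, osr).mean(axis=1)
print("RMS reconstruction error:",
      round(float(np.sqrt(np.mean((decimated - reference) ** 2))), 4))
```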
Seismic evaluation of vulnerability for SAMA educational buildings in Tehran
DOE Office of Scientific and Technical Information (OSTI.GOV)
Amini, Omid Nassiri; Amiri, Javad Vaseghi
2008-07-08
Earthquakes are destructive phenomena that shake different parts of the earth every year and cause great destruction. Iran is one of the high-seismicity, quake-prone parts of the world and suffers substantial financial damage and loss of life each year; schools are among the most important places to be protected during such crises. There was no special surveillance of the design and construction of school buildings in Tehran until the late 70's, and as Tehran lies on faults, the instability of such buildings may cause irrecoverable financial damage and, especially, loss of life; preventing this is therefore an urgent need. For this purpose, some of the schools built during 67-78, mostly with steel braced-frame structures, were selected. First, by evaluating the selected samples, gathering information and carrying out a visual survey, the prepared questionnaires were filled out. Using the ARIA and SABA (Venezuela) methods, a new modified combined method for qualitative evaluation was developed and used. Then, for quantitative evaluation, using 3D computer models and nonlinear static analysis methods, a number of the buildings selected in the qualitative evaluation were re-evaluated, and finally the real behavior of the structures under earthquakes was studied with the nonlinear dynamic analysis method. The results of the qualitative and quantitative evaluations were compared, and a proper pattern for the seismic evaluation of educational buildings is presented. The results can also provide guidance for those in charge of retrofitting or, if necessary, rebuilding the schools.
Bojórquez, Edén; Reyes-Salazar, Alfredo; Ruiz, Sonia E; Terán-Gilmore, Amador
2014-01-01
Several studies have been devoted to calibrate damage indices for steel and reinforced concrete members with the purpose of overcoming some of the shortcomings of the parameters currently used during seismic design. Nevertheless, there is a challenge to study and calibrate the use of such indices for the practical structural evaluation of complex structures. In this paper, an energy-based damage model for multidegree-of-freedom (MDOF) steel framed structures that accounts explicitly for the effects of cumulative plastic deformation demands is used to estimate the cyclic drift capacity of steel structures. To achieve this, seismic hazard curves are used to discuss the limitations of the maximum interstory drift demand as a performance parameter to achieve adequate damage control. Then the concept of cyclic drift capacity, which incorporates information of the influence of cumulative plastic deformation demands, is introduced as an alternative for future applications of seismic design of structures subjected to long duration ground motions.
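As a concrete illustration of an energy-based damage measure of this general kind, the sketch below combines a peak-drift term with a normalized cumulative hysteretic-energy term in the spirit of Park-Ang; it is not the specific MDOF model calibrated in the paper, and the elastoplastic storey, capacities and load history are all assumed.

```python
# Generic energy-based damage index on a synthetic elastoplastic storey response.
import numpy as np

def epp_force(drift, k, fy):
    """Elastic-perfectly-plastic restoring force (state-based, so loops dissipate energy)."""
    f = np.zeros_like(drift)
    for n in range(1, len(drift)):
        f[n] = np.clip(f[n - 1] + k * (drift[n] - drift[n - 1]), -fy, fy)
    return f

def hysteretic_energy(force, disp):
    """Cumulative energy dissipated over a force-displacement history (trapezoid rule)."""
    return np.sum(0.5 * (force[1:] + force[:-1]) * np.diff(disp))

def damage_index(disp, force, disp_ult, f_yield, beta=0.15):
    """Peak-deformation term plus a normalized cumulative hysteretic-energy term."""
    return (np.max(np.abs(disp)) / disp_ult
            + beta * hysteretic_energy(force, disp) / (f_yield * disp_ult))

# growing cyclic drift history (synthetic) applied to an idealized storey
drift = 0.02 * np.sin(np.linspace(0, 6 * np.pi, 1200)) * np.linspace(0.1, 1.0, 1200)
k, fy = 5.0e4, 8.0e2                                  # stiffness (kN/m), yield force (kN)
force = epp_force(drift, k, fy)
print("damage index:", round(damage_index(drift, force, disp_ult=0.03, f_yield=fy), 3))
```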
Convolutional neural network for earthquake detection and location
Perol, Thibaut; Gharbi, Michaël; Denolle, Marine
2018-01-01
The recent evolution of induced seismicity in Central United States calls for exhaustive catalogs to improve seismic hazard assessment. Over the last decades, the volume of seismic data has increased exponentially, creating a need for efficient algorithms to reliably detect and locate earthquakes. Today’s most elaborate methods scan through the plethora of continuous seismic records, searching for repeating seismic signals. We leverage the recent advances in artificial intelligence and present ConvNetQuake, a highly scalable convolutional neural network for earthquake detection and location from a single waveform. We apply our technique to study the induced seismicity in Oklahoma, USA. We detect more than 17 times more earthquakes than previously cataloged by the Oklahoma Geological Survey. Our algorithm is orders of magnitude faster than established methods. PMID:29487899
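For readers who want a feel for the kind of model involved, the sketch below is a small 1-D convolutional classifier that maps a fixed-length three-component waveform window to "noise" versus one of several source-region classes; the layer sizes, window length and class count are illustrative and much smaller than ConvNetQuake itself.

```python
# Hedged sketch of a 1-D CNN waveform classifier (PyTorch); not ConvNetQuake itself.
import torch
import torch.nn as nn

class WaveformClassifier(nn.Module):
    def __init__(self, n_classes=7, window=1000):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv1d(3, 32, kernel_size=3, stride=2, padding=1), nn.ReLU(),
            nn.Conv1d(32, 32, kernel_size=3, stride=2, padding=1), nn.ReLU(),
            nn.Conv1d(32, 32, kernel_size=3, stride=2, padding=1), nn.ReLU(),
            nn.Conv1d(32, 32, kernel_size=3, stride=2, padding=1), nn.ReLU(),
        )
        with torch.no_grad():                            # infer flattened feature size
            n_feat = self.features(torch.zeros(1, 3, window)).numel()
        self.classifier = nn.Linear(n_feat, n_classes)   # class 0 = noise, 1..6 = regions

    def forward(self, x):                                # x: (batch, 3, window)
        return self.classifier(self.features(x).flatten(1))

model = WaveformClassifier()
logits = model(torch.randn(4, 3, 1000))                  # four random 10-s windows at 100 Hz
print(logits.argmax(dim=1))                              # predicted class per window
```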
NASA Astrophysics Data System (ADS)
Grunthal, Gottfried; Stromeyer, Dietrich; Bosse, Christian; Cotton, Fabrice; Bindi, Dino
2017-04-01
The seismic load parameters for the upcoming National Annex to Eurocode 8 result from the reassessment of the seismic hazard supported by the German Institution for Civil Engineering. This 2016 version of the hazard assessment for Germany as the target area was based on a comprehensive inclusion of all accessible uncertainties in models and parameters and on the provision of a rational framework for treating the uncertainties in a transparent way. The developed seismic hazard model represents significant improvements: it is based on updated and extended databases, comprehensive ranges of models, robust methods and a selection of a set of ground-motion prediction equations of the latest generation. The output specifications were designed according to user-oriented needs as suggested by two review teams supervising the entire project. In particular, seismic load parameters were calculated for rock conditions with a vS30 of 800 m/s for three hazard levels (10%, 5% and 2% probability of occurrence or exceedance within 50 years) in the form of, e.g., uniform hazard spectra (UHS) based on 19 spectral periods in the range of 0.01-3 s, and seismic hazard maps of spectral response accelerations for different spectral periods or of macroseismic intensities. The developed hazard model consists of a logic tree with 4040 end branches and essential innovations employed to capture epistemic uncertainties and aleatory variabilities. The computation scheme enables the sound calculation of the mean and any quantile of the required seismic load parameters. Mean, median and 84th-percentile load parameters were provided together with the full calculation model to clearly illustrate the uncertainties of such a probabilistic assessment for a region of a low-to-moderate level of seismicity. The regional variations of these uncertainties (e.g. ratios between the mean and median hazard estimates) were analyzed and discussed.
High Temporal Resolution Mapping of Seismic Noise Sources Using Heterogeneous Supercomputers
NASA Astrophysics Data System (ADS)
Paitz, P.; Gokhberg, A.; Ermert, L. A.; Fichtner, A.
2017-12-01
The time- and space-dependent distribution of seismic noise sources is becoming a key ingredient of modern real-time monitoring of various geo-systems like earthquake fault zones, volcanoes, geothermal and hydrocarbon reservoirs. We present results of an ongoing research project conducted in collaboration with the Swiss National Supercomputing Centre (CSCS). The project aims at building a service providing seismic noise source maps for Central Europe with high temporal resolution. We use source imaging methods based on the cross-correlation of seismic noise records from all seismic stations available in the region of interest. The service is hosted on the CSCS computing infrastructure; all computationally intensive processing is performed on the massively parallel heterogeneous supercomputer "Piz Daint". The solution architecture is based on the Application-as-a-Service concept to provide the interested researchers worldwide with regular access to the noise source maps. The solution architecture includes the following sub-systems: (1) data acquisition responsible for collecting, on a periodic basis, raw seismic records from the European seismic networks, (2) high-performance noise source mapping application responsible for the generation of source maps using cross-correlation of seismic records, (3) back-end infrastructure for the coordination of various tasks and computations, (4) front-end Web interface providing the service to the end-users and (5) data repository. The noise source mapping itself rests on the measurement of logarithmic amplitude ratios in suitably pre-processed noise correlations, and the use of simplified sensitivity kernels. During the implementation we addressed various challenges, in particular, selection of data sources and transfer protocols, automation and monitoring of daily data downloads, ensuring the required data processing performance, design of a general service-oriented architecture for coordination of various sub-systems, and engineering an appropriate data storage solution. The present pilot version of the service implements noise source maps for Switzerland. Extension of the solution to Central Europe is planned for the next project phase.
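The basic measurement the mapping rests on, the logarithmic amplitude ratio of a noise cross-correlation, can be sketched as the ratio of energy in the causal and acausal branches, which is sensitive to the asymmetry of the noise-source distribution. The correlation below is synthetic; real use would apply it to pre-processed station-pair correlations, and the sensitivity-kernel step of the method is not shown.

```python
# Minimal log-amplitude-ratio measurement on a synthetic noise cross-correlation.
import numpy as np

def log_asymmetry(correlation, lags):
    """log of (causal energy / acausal energy) of a cross-correlation function."""
    causal = correlation[lags > 0]
    acausal = correlation[lags < 0]
    return np.log(np.sum(causal ** 2) / np.sum(acausal ** 2))

lags = np.arange(-300, 301)                      # lag samples
rng = np.random.default_rng(4)
corr = np.exp(-0.5 * ((lags - 80) / 10.0) ** 2)  # strong causal arrival: sources mostly on one side
corr += 0.2 * np.exp(-0.5 * ((lags + 80) / 10.0) ** 2) + 0.01 * rng.normal(size=lags.size)
print("log amplitude ratio:", round(log_asymmetry(corr, lags), 2))
```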
United States National Seismic Hazard Maps
Petersen, M.D.
2008-01-01
The U.S. Geological Survey's maps of earthquake shaking hazards provide information essential to creating and updating the seismic design provisions of building codes and insurance rates used in the United States. Periodic revisions of these maps incorporate the results of new research. Buildings, bridges, highways, and utilities built to meet modern seismic design provisions are better able to withstand earthquakes, not only saving lives but also enabling critical activities to continue with less disruption. These maps can also help people assess the hazard to their homes or places of work and can also inform insurance rates.
Interpretation of Data from Uphole Refraction Surveys
1980-06-01
Keywords: seismic refraction; seismic refraction method; seismic surveys; subsurface exploration. Abstract (partial): ... by the presence of subsurface cavities and large cavities are identifiable, the sensitivity of the method is marginal for practical use in cavity detection. Some cavities large enough to be of engineering significance (e.g., a tunnel of h-m diameter) may be practically undetectable by this method.
24 CFR 200.926 - Minimum property standards for one and two family dwellings.
Code of Federal Regulations, 2011 CFR
2011-04-01
... units in a structure where the units are located side-by-side in town house fashion. Section 200.926d(c... the subarea for seismic design (see § 200.926a(c)(5)), or if it fails to regulate subareas in more..., structural loads and seismic design, foundation systems, materials standards, construction components, glass...
24 CFR 200.926 - Minimum property standards for one and two family dwellings.
Code of Federal Regulations, 2010 CFR
2010-04-01
... units in a structure where the units are located side-by-side in town house fashion. Section 200.926d(c... the subarea for seismic design (see § 200.926a(c)(5)), or if it fails to regulate subareas in more..., structural loads and seismic design, foundation systems, materials standards, construction components, glass...
Seismic signal and noise on Europa and how to use it
NASA Astrophysics Data System (ADS)
Panning, M. P.; Stähler, S. C.; Bills, B. G.; Castillo, J.; Huang, H. H.; Husker, A. L.; Kedar, S.; Lorenz, R. D.; Pike, W. T.; Schmerr, N. C.; Tsai, V. C.; Vance, S.
2017-12-01
Seismology is one of our best tools for detailing interior structure of planetary bodies, and a seismometer is included in the baseline and threshold mission design for a potential Europa lander mission. Guiding mission design and planning for adequate science return, though, requires modeling of both the anticipated signal and noise. Assuming ice seismicity on Europa behaves according to statistical properties observed in Earth catalogs and scaling cumulative seismic moment release to the moon, we simulate long seismic records and estimate background noise and peak signal amplitudes (Panning et al., 2017). This suggests a sensitive instrument comparable to many broadband terrestrial instruments or the SP instrument from the InSight mission to Mars will be able to record signals, while high frequency geophones are likely inadequate. We extend this analysis to also begin incorporation of spatial and temporal variation due to the tidal cycle, which can help inform landing site selection. We also begin exploration of how chaotic terrane at the bottom of the ice shell and inter-ice heterogeneities (i.e. internal melt structures) may affect predicted seismic observations using 2D numerical seismic simulations. We also show some of the key seismic observations to determine interior properties of Europa (Stähler et al., 2017). M. P. Panning, S. C. Stähler, H.-H. Huang, S. D. Vance, S. Kedar, V. C. Tsai, W. T. Pike, R. D. Lorenz, "Expected seismicity and the seismic noise environment of Europa," J. Geophys. Res., in revision, 2017. S. C. Stähler, M. P. Panning, S. D. Vance, R. D. Lorenz, M. van Driel, T. Nissen-Meyer, S. Kedar, "Seismic wave propagation in icy ocean worlds," J. Geophys. Res., in revision, 2017.
Elaina Jennings; John W. van de Lindt; Ershad Ziaei; Pouria Bahmani; Sangki Park; Xiaoyun Shao; Weichiang Pang; Douglas Rammer; Gary Mochizuki; Mikhail Gershfeld
2015-01-01
The FEMA P-807 Guidelines were developed for retrofitting soft-story wood-frame buildings based on existing data, and the method had not been verified through full-scale experimental testing. This article presents two different retrofit designs based directly on the FEMA P-807 Guidelines that were examined at several different seismic intensity levels. The...
Abadi, Shima H; Tolstoy, Maya; Wilcock, William S D
2017-01-01
In order to mitigate against possible impacts of seismic surveys on baleen whales it is important to know as much as possible about the presence of whales within the vicinity of seismic operations. This study expands on previous work that analyzes single seismic streamer data to locate nearby calling baleen whales with a grid search method that utilizes the propagation angles and relative arrival times of received signals along the streamer. Three-dimensional seismic reflection surveys use multiple towed hydrophone arrays for imaging the structure beneath the seafloor, providing an opportunity to significantly improve the uncertainty associated with streamer-generated call locations. All seismic surveys utilizing airguns conduct visual marine mammal monitoring surveys concurrent with the experiment, with powering-down of the seismic source if a marine mammal is observed within the exposure zone. This study utilizes data from power-down periods of a seismic experiment conducted with two 8-km long seismic hydrophone arrays by the R/V Marcus G. Langseth near Alaska in summer 2011. Simulated and experiment data demonstrate that a single streamer can be utilized to resolve left-right ambiguity because the streamer is rarely perfectly straight in a field setting, but dual streamers provide significantly improved locations. Both methods represent a dramatic improvement over the existing Passive Acoustic Monitoring (PAM) system for detecting low frequency baleen whale calls, with ~60 calls detected utilizing the seismic streamers, zero of which were detected using the current R/V Langseth PAM system. Furthermore, this method has the potential to be utilized not only for improving mitigation processes, but also for studying baleen whale behavior within the vicinity of seismic operations.
Detection capability of the IMS seismic network based on ambient seismic noise measurements
NASA Astrophysics Data System (ADS)
Gaebler, Peter J.; Ceranna, Lars
2016-04-01
All nuclear explosions - on the Earth's surface, underground, underwater or in the atmosphere - are banned by the Comprehensive Nuclear-Test-Ban Treaty (CTBT). As part of this treaty, a verification regime was put into place to detect, locate and characterize nuclear explosion tests at any time, by anyone and anywhere on the Earth. The International Monitoring System (IMS) plays a key role in the verification regime of the CTBT. Of the different monitoring techniques used in the IMS, the seismic waveform approach is the most effective technology for monitoring underground nuclear testing and for identifying and characterizing potential nuclear events. This study introduces a method of seismic threshold monitoring to assess an upper magnitude limit of a potential seismic event in a given geographical region. The method is based on ambient seismic background noise measurements at the individual IMS seismic stations as well as on global distance correction terms for body-wave magnitudes, which are calculated using the seismic reflectivity method. From our investigations we conclude that a global detection threshold of around mb 4.0 can be achieved using only stations from the primary seismic network; a clear latitudinal dependence of the detection threshold can be observed between the northern and southern hemispheres. Including the seismic stations of the auxiliary seismic IMS network results in a slight improvement of the global detection capability. However, including wave arrivals from distances greater than 120 degrees, mainly PKP-wave arrivals, leads to a significant improvement in average global detection capability; in particular, this improves the detection threshold in the southern hemisphere. We further investigate the dependence of the detection capability on spatial (latitude and longitude) and temporal (time) parameters, as well as on parameters such as source type and the percentage of operational IMS stations.
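The threshold-monitoring idea can be sketched for a single grid point as follows: the smallest body-wave magnitude detectable at each station is estimated from that station's noise amplitude and a distance correction, and the network threshold is the value reached at the required number of stations. The station names, noise levels and the simplified distance correction are all illustrative assumptions, not IMS values or the reflectivity-based corrections used in the study.

```python
# Hedged sketch of noise-based network detection-threshold estimation at one grid point.
import numpy as np

def station_threshold(noise_amp_nm, snr, distance_deg):
    """Smallest mb detectable at one station: mb = log10(A/T) + Q(delta), with A in
    micrometres at T = 1 s and a crude placeholder distance correction Q."""
    q = 5.9 + 0.6 * np.log10(max(distance_deg, 1.0))       # placeholder attenuation term
    return np.log10(snr * noise_amp_nm * 1.0e-3) + q

noise_nm = {"ARCES": 0.5, "WRA": 0.8, "CMAR": 1.2, "TXAR": 0.9}      # hypothetical noise (nm)
distance_deg = {"ARCES": 42.0, "WRA": 75.0, "CMAR": 61.0, "TXAR": 88.0}

thresholds = sorted(
    station_threshold(noise_nm[s], snr=3.0, distance_deg=distance_deg[s]) for s in noise_nm
)
n_required = 3                                   # detecting stations needed to define an event
print("network mb threshold at this grid point:", round(thresholds[n_required - 1], 2))
```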
Development of a time synchronization methodology for a wireless seismic array
NASA Astrophysics Data System (ADS)
Moure-García, David; Torres-González, Pedro; del Río, Joaquín; Mihai, Daniel; Domínguez Cerdeña, Itahiza
2017-04-01
Seismic arrays have multiple applications. Historically, their main use was the monitoring of nuclear tests, which began in the mid-twentieth century. The major difference from a seismic network lies in the hypocenter location procedure: with a seismic network the hypocenter's 3D coordinates are calculated, whereas with an array the source direction of the seismic signal is determined. Seismic arrays are used in volcanology to obtain the source azimuth of volcanic signals related to the movement of fluids, magma and/or gases, which do not show a clear onset of seismic phases. A key condition for the operation of a seismic array is the temporal synchronization of all the sensors, to better than 1 microsecond. For this reason, all sensors are usually connected to the acquisition system by cable to ensure an identical sampling time. In this work we present the design of a wireless, low-cost and low-power-consumption volcanic monitoring seismic array in which all nodes (sensors) acquire data synchronously and transmit them to the central node, where a coherent signal is pursued in near real time.
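What the microsecond-level synchronization buys is illustrated below: with a common time base, relative arrival times across the nodes can be fit with a plane wave to recover the slowness vector and hence the back-azimuth of the source. The node layout, apparent velocity and delays are synthetic; this is a sketch of the array principle, not the authors' hardware or processing design.

```python
# Plane-wave least-squares fit of inter-node delays to recover back-azimuth (synthetic).
import numpy as np

# node coordinates (east, north) in km, relative to the array centre (hypothetical layout)
coords = np.array([[0.0, 0.0], [0.3, 0.05], [-0.25, 0.2], [0.1, -0.3], [-0.15, -0.2]])

# synthetic plane wave: back-azimuth 135 deg, apparent velocity 3 km/s
baz_true = np.radians(135.0)
s_true = -np.array([np.sin(baz_true), np.cos(baz_true)]) / 3.0   # slowness (s/km), propagation dir.
delays = coords @ s_true + 1e-4 * np.random.default_rng(5).normal(size=len(coords))

# least-squares fit: delays = coords @ s (a constant term is absorbed by referencing node 0)
G = coords - coords[0]
d = delays - delays[0]
s_est, *_ = np.linalg.lstsq(G, d, rcond=None)

baz_est = np.degrees(np.arctan2(-s_est[0], -s_est[1])) % 360.0
v_app = 1.0 / np.linalg.norm(s_est)
print(f"back-azimuth ~ {baz_est:.1f} deg, apparent velocity ~ {v_app:.2f} km/s")
```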
NASA Astrophysics Data System (ADS)
Boué, A.; Lesage, P.; Cortés, G.; Valette, B.; Reyes-Dávila, G.; Arámbula-Mendoza, R.; Budi-Santoso, A.
2016-11-01
Most attempts of deterministic eruption forecasting are based on the material Failure Forecast Method (FFM). This method assumes that a precursory observable, such as the rate of seismic activity, can be described by a simple power law which presents a singularity at a time close to the eruption onset. Until now, this method has been applied only in a small number of cases, generally for forecasts in hindsight. In this paper, a rigorous Bayesian approach of the FFM designed for real-time applications is applied. Using an automatic recognition system, seismo-volcanic events are detected and classified according to their physical mechanism and time series of probability distributions of the rates of events are calculated. At each time of observation, a Bayesian inversion provides estimations of the exponent of the power law and of the time of eruption, together with their probability density functions. Two criteria are defined in order to evaluate the quality and reliability of the forecasts. Our automated procedure has allowed the analysis of long, continuous seismic time series: 13 years from Volcán de Colima, Mexico, 10 years from Piton de la Fournaise, Reunion Island, France, and several months from Merapi volcano, Java, Indonesia. The new forecasting approach has been applied to 64 pre-eruptive sequences which present various types of dominant seismic activity (volcano-tectonic or long-period events) and patterns of seismicity with different level of complexity. This has allowed us to test the FFM assumptions, to determine in which conditions the method can be applied, and to quantify the success rate of the forecasts. 62% of the precursory sequences analysed are suitable for the application of FFM and half of the total number of eruptions are successfully forecast in hindsight. In real-time, the method allows for the successful forecast of 36% of all the eruptions considered. Nevertheless, real-time forecasts are successful for 83% of the cases that fulfil the reliability criteria. Therefore, good confidence on the method is obtained when the reliability criteria are met.
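For orientation, the classical deterministic step of the FFM (for the common exponent alpha = 2) can be sketched in a few lines: the inverse of the event rate is regressed linearly against time, and the forecast failure time is its intercept with zero. The Bayesian, probabilistic formulation actually used in the paper is more involved; the rates below are synthetic.

```python
# Classical inverse-rate failure-forecast sketch on a synthetic accelerating sequence.
import numpy as np

t_f_true, k = 30.0, 200.0                           # synthetic "true" failure time (days)
t = np.arange(0.0, 28.0, 1.0)
rate = k / (t_f_true - t)                           # accelerating event rate ~ (tf - t)^-1
rate *= np.random.default_rng(6).lognormal(0.0, 0.05, size=t.size)   # counting noise

inv_rate = 1.0 / rate
slope, intercept = np.polyfit(t, inv_rate, 1)       # straight-line fit to the inverse rate
t_f_forecast = -intercept / slope                   # where the inverse rate reaches zero
print(f"forecast failure time: day {t_f_forecast:.1f} (true: day {t_f_true})")
```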
Wang, Z.
2007-01-01
Although the causes of large intraplate earthquakes are still not fully understood, they pose certain hazard and risk to societies. Estimating hazard and risk in these regions is difficult because of the lack of earthquake records. The New Madrid seismic zone is one such region where large and rare intraplate earthquakes (M = 7.0 or greater) pose significant hazard and risk. Many different definitions of hazard and risk have been used, and the resulting estimates differ dramatically. In this paper, seismic hazard is defined as the natural phenomenon generated by earthquakes, such as ground motion, and is quantified by two parameters: a level of hazard and its occurrence frequency or mean recurrence interval; seismic risk is defined as the probability of occurrence of a specific level of seismic hazard over a certain time and is quantified by three parameters: probability, a level of hazard, and exposure time. Probabilistic seismic hazard analysis (PSHA), a commonly used method for estimating seismic hazard and risk, derives a relationship between a ground motion parameter and its return period (hazard curve). The return period is not an independent temporal parameter but a mathematical extrapolation of the recurrence interval of earthquakes and the uncertainty of ground motion. Therefore, it is difficult to understand and use PSHA. A new method is proposed and applied here for estimating seismic hazard in the New Madrid seismic zone. This method provides hazard estimates that are consistent with the state of our knowledge and can be easily applied to other intraplate regions. © 2007 The Geological Society of America.
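The hazard/risk distinction drawn above can be made concrete with a minimal sketch: the hazard is a ground-motion level paired with its mean recurrence interval, and the risk is the probability of experiencing that hazard within an exposure time (a Poisson occurrence model is assumed here for simplicity). The numbers are illustrative, not New Madrid estimates.

```python
# Risk of at least one occurrence of a given hazard level within an exposure time.
import numpy as np

def risk(recurrence_interval_yr, exposure_time_yr):
    """Probability of at least one occurrence in the exposure time (Poisson assumption)."""
    rate = 1.0 / recurrence_interval_yr
    return 1.0 - np.exp(-rate * exposure_time_yr)

# e.g. a hazard defined as "a given PGA level with a 500-year mean recurrence interval"
for T in (50, 75, 100):
    print(f"risk of exceeding the 500-yr hazard level in {T} yr: {risk(500, T):.2f}")
```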
Calibration method helps in seismic velocity interpretation
DOE Office of Scientific and Technical Information (OSTI.GOV)
Guzman, C.E.; Davenport, H.A.; Wilhelm, R.
1997-11-03
Acoustic velocities derived from seismic reflection data, when properly calibrated to subsurface measurements, help interpreters make pure velocity predictions. A method of calibrating seismic to measured velocities has improved interpretation of subsurface features in the Gulf of Mexico. In this method, the interpreter in essence creates a kind of gauge. Properly calibrated, the gauge enables the interpreter to match predicted velocities to velocities measured at wells. Slow-velocity zones are of special interest because they sometimes appear near hydrocarbon accumulations. Changes in velocity vary in strength with location; the structural picture is hidden unless the variations are accounted for by mapping in depth instead of time. Preliminary observations suggest that the presence of hydrocarbons alters the lithology in the neighborhood of the trap; this hydrocarbon effect may be reflected in the rock velocity. The effect indicates a direct use of seismic velocity in exploration. This article uses the terms seismic velocity and seismic stacking velocity interchangeably. It uses ground velocity, checkshot average velocity, and well velocity interchangeably. Interval velocities are derived from seismic stacking velocities or well average velocities; they refer to velocities of subsurface intervals or zones. Interval travel time (ITT) is the reciprocal of interval velocity in microseconds per foot.
NASA Astrophysics Data System (ADS)
Zhang, R.; Borgia, A.; Daley, T. M.; Oldenburg, C. M.; Jung, Y.; Lee, K. J.; Doughty, C.; Altundas, B.; Chugunov, N.; Ramakrishnan, T. S.
2017-12-01
Subsurface permeable faults and fracture networks play a critical role for enhanced geothermal systems (EGS) by providing conduits for fluid flow. Characterization of the permeable flow paths before and after stimulation is necessary to evaluate and optimize energy extraction. To provide insight into the feasibility of using CO2 as a contrast agent to enhance fault characterization by seismic methods, we model seismic monitoring of supercritical CO2 (scCO2) injected into a fault. During the CO2 injection, the original brine is replaced by scCO2, which leads to variations in geophysical properties of the formation. To explore the technical feasibility of the approach, we present modeling results for different time-lapse seismic methods including surface seismic, vertical seismic profiling (VSP), and a cross-well survey. We simulate the injection and production of CO2 into a normal fault in a system based on the Brady's geothermal field and model pressure and saturation variations in the fault zone using TOUGH2-ECO2N. The simulation results provide changing fluid properties during the injection, such as saturation and salinity changes, which allow us to estimate corresponding changes in seismic properties of the fault and the formation. We model the response of the system to active seismic monitoring in time-lapse mode using an anisotropic finite difference method with modifications for fracture compliance. Results to date show that even narrow fault and fracture zones filled with CO2 can be better detected using the VSP and cross-well survey geometry, while it would be difficult to image the CO2 plume by using surface seismic methods.
Kernel Smoothing Methods for Non-Poissonian Seismic Hazard Analysis
NASA Astrophysics Data System (ADS)
Woo, Gordon
2017-04-01
For almost fifty years, the mainstay of probabilistic seismic hazard analysis has been the methodology developed by Cornell, which assumes that earthquake occurrence is a Poisson process and that the spatial distribution of epicentres can be represented by a set of polygonal source zones within which seismicity is uniform. Based on Vere-Jones' use of kernel smoothing methods for earthquake forecasting, these methods were adapted in 1994 by the author for application to probabilistic seismic hazard analysis. There is no need for ambiguous boundaries of polygonal source zones, nor for the hypothesis of time independence of earthquake sequences. In Europe, there are many regions where seismotectonic zones are not well delineated and where there is a dynamic stress interaction between events, so that they cannot be described as independent. Following the Amatrice earthquake of 24 August 2016, the damaging earthquakes that struck Central Italy over the following months were not independent events. Removing foreshocks and aftershocks is not only an ill-defined task, it has a material effect on seismic hazard computation. Because of the spatial dispersion of epicentres, and the clustering of magnitudes for the largest events in a sequence, which might all be around magnitude 6, the specific event causing the highest ground motion can vary from one site location to another. Where significant active faults have been clearly identified geologically, they should be modelled as individual seismic sources. The remaining background seismicity should be modelled as non-Poissonian using statistical kernel smoothing methods. This approach was first applied for seismic hazard analysis at a UK nuclear power plant two decades ago, and should be included within logic trees for future probabilistic seismic hazard analyses at critical installations within Europe. In this paper, various salient European applications are given.
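The core of the kernel-smoothing alternative to polygonal source zones can be sketched very simply: each epicentre contributes a Gaussian kernel to a gridded activity-rate map, with no zone boundaries required. The catalogue, fixed bandwidth and grid below are synthetic and ignore map-projection effects; practical implementations typically use magnitude- or nearest-neighbour-dependent bandwidths.

```python
# Fixed-bandwidth Gaussian kernel smoothing of a synthetic epicentre catalogue.
import numpy as np

rng = np.random.default_rng(7)
epicentres = rng.normal(loc=[12.0, 46.0], scale=[0.6, 0.3], size=(300, 2))   # lon, lat
catalogue_years = 100.0
bandwidth_deg = 0.25                                      # smoothing bandwidth (assumed)

lon = np.linspace(10.0, 14.0, 81)
lat = np.linspace(44.5, 47.5, 61)
glon, glat = np.meshgrid(lon, lat)

rate = np.zeros_like(glon)
for x, y in epicentres:                                   # add one kernel per epicentre
    r2 = (glon - x) ** 2 + (glat - y) ** 2
    rate += np.exp(-0.5 * r2 / bandwidth_deg ** 2)
rate /= 2.0 * np.pi * bandwidth_deg ** 2 * catalogue_years   # events / deg^2 / yr

print("peak smoothed activity rate:", round(float(rate.max()), 2), "events per deg^2 per yr")
```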
Time-frequency domain SNR estimation and its application in seismic data processing
NASA Astrophysics Data System (ADS)
Zhao, Yan; Liu, Yang; Li, Xuxuan; Jiang, Nansen
2014-08-01
Based on an approach for estimating the frequency-domain signal-to-noise ratio (FSNR), we propose a method to evaluate the time-frequency-domain signal-to-noise ratio (TFSNR). This method adopts the short-time Fourier transform (STFT) to estimate the instantaneous power spectra of signal and noise, and uses their ratio to compute the TFSNR. Unlike the FSNR, which describes the variation of SNR with frequency only, the TFSNR depicts the variation of SNR with both time and frequency, and thus better handles non-stationary seismic data. By considering the TFSNR, we develop methods to improve the effects of inverse Q filtering and high-frequency noise attenuation in seismic data processing. Inverse Q filtering that considers the TFSNR can better control the problem of amplitude amplification of noise. The high-frequency noise attenuation method considering the TFSNR, unlike other de-noising methods, distinguishes and suppresses noise using an explicit criterion. Examples of synthetic and real seismic data illustrate the correctness and effectiveness of the proposed methods.
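A minimal sketch of the TFSNR idea, assuming the noise power spectrum can be estimated from a noise-only segment of the record; the STFT parameters and the synthetic trace are illustrative, not the authors' implementation.

```python
import numpy as np
from scipy.signal import stft

def tfsnr(trace, noise, fs, nperseg=64):
    """Time-frequency SNR: STFT power of the trace divided by a per-frequency
    noise power spectrum estimated from a noise-only segment."""
    f, t, Z = stft(trace, fs=fs, nperseg=nperseg)
    _, _, N = stft(noise, fs=fs, nperseg=nperseg)
    noise_power = np.mean(np.abs(N) ** 2, axis=1, keepdims=True)
    return f, t, np.abs(Z) ** 2 / (noise_power + 1e-20)

# Synthetic non-stationary example: a 30 Hz burst at 2 s buried in noise
fs, n = 500.0, 2000
time = np.arange(n) / fs
sig = np.exp(-((time - 2.0) ** 2) / 0.01) * np.sin(2 * np.pi * 30 * time)
noise = 0.2 * np.random.default_rng(1).standard_normal(n)
f, t, snr = tfsnr(sig + noise, noise, fs)
print(snr.shape, snr.max())   # SNR is large only near t = 2 s and f = 30 Hz
```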
The Utility of the Extended Images in Ambient Seismic Wavefield Migration
NASA Astrophysics Data System (ADS)
Girard, A. J.; Shragge, J. C.
2015-12-01
Active-source 3D seismic migration and migration velocity analysis (MVA) are robust and highly used methods for imaging Earth structure. One class of migration methods uses extended images constructed by incorporating spatial and/or temporal wavefield correlation lags to the imaging conditions. These extended images allow users to directly assess whether images focus better with different parameters, which leads to MVA techniques that are based on the tenets of adjoint-state theory. Under certain conditions (e.g., geographical, cultural or financial), however, active-source methods can prove impractical. Utilizing ambient seismic energy that naturally propagates through the Earth is an alternate method currently used in the scientific community. Thus, an open question is whether extended images are similarly useful for ambient seismic migration processing and verifying subsurface velocity models, and whether one can similarly apply adjoint-state methods to perform ambient migration velocity analysis (AMVA). Herein, we conduct a number of numerical experiments that construct extended images from ambient seismic recordings. We demonstrate that, similar to active-source methods, there is a sensitivity to velocity in ambient seismic recordings in the migrated extended image domain. In synthetic ambient imaging tests with varying degrees of error introduced to the velocity model, the extended images are sensitive to velocity model errors. To determine the extent of this sensitivity, we utilize acoustic wave-equation propagation and cross-correlation-based migration methods to image weak body-wave signals present in the recordings. Importantly, we have also observed scenarios where non-zero correlation lags show signal while zero-lags show none. This may be a valuable missing piece for ambient migration techniques that have yielded largely inconclusive results, and might be an important piece of information for performing AMVA from ambient seismic recordings.
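A toy illustration of the time-lag extended imaging condition that underlies such extended images: when the velocity model is wrong, correlation energy focuses away from zero lag. The wavefields below are random stand-ins and the sample shift stands in for residual moveout; this is not the ambient-noise migration workflow itself.

```python
import numpy as np

def time_lag_extended_image(src_wf, rec_wf, lags):
    """Time-lag extended imaging condition I(x, tau) = sum_t S(x, t) R(x, t + tau).
    src_wf, rec_wf: arrays of shape (nx, nt); lags: iterable of sample shifts."""
    nx, nt = src_wf.shape
    image = np.zeros((nx, len(lags)))
    for j, tau in enumerate(lags):
        s = src_wf[:, max(0, -tau):nt - max(0, tau)]
        r = rec_wf[:, max(0, tau):nt - max(0, -tau)]
        image[:, j] = np.sum(s * r, axis=1)
    return image

# Toy wavefields: the receiver wavefield is the source wavefield delayed by
# 6 samples, mimicking mis-focusing caused by a velocity error.
rng = np.random.default_rng(2)
nx, nt, shift = 50, 400, 6
src = rng.standard_normal((nx, nt))
rec = np.roll(src, shift, axis=1)
lags = list(range(-10, 11))
gather = time_lag_extended_image(src, rec, lags)
print("energy peaks at lag", lags[int(np.argmax(gather.sum(axis=0)))])   # -> 6
```

A correct velocity model would move the peak back to zero lag, which is the focusing criterion that adjoint-state MVA/AMVA schemes exploit.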
NASA Astrophysics Data System (ADS)
Sudarmaji; Rudianto, Indra; Eka Nurcahya, Budi
2018-04-01
A strong tectonic earthquake with a magnitude of 5.9 on the Richter scale occurred in Yogyakarta and Central Java on May 26, 2006. The earthquake caused severe damage in Yogyakarta and the southern part of Central Java, Indonesia. Understanding the seismic response to the earthquake, in terms of ground shaking and the level of building damage, is important. We present numerical modeling of 3D seismic wave propagation around Yogyakarta and the southern part of Central Java using the spectral-element method on an MPI-GPU (Graphics Processing Unit) computer cluster to observe the region's seismic response to the earthquake. A homogeneous, realistic 3D model is generated with a detailed topographic surface. The influences of the free-surface topography and the layer discontinuities of the 3D model on the seismic response are observed. The seismic wave field is discretized using the spectral-element method, which is solved on a mesh of hexahedral elements adapted to the free-surface topography and the internal discontinuities of the model. To increase data processing capability, the simulation is performed on a GPU cluster with an implementation of MPI (Message Passing Interface).
Lateral testing of glued laminated timber tudor arch
Douglas R. Rammer; Philip Line
2016-01-01
Glued laminated timber Tudor arches have been in wide use in the United States since the 1930s, but detailed knowledge related to seismic design in modern U.S. building codes is lacking. FEMA P-695 (P-695) is a methodology to determine seismic performance factors for a seismic force resisting system. A limited P-695 study for glued laminated timber arch structures...
Wireless acquisition of multi-channel seismic data using the Seismobile system
NASA Astrophysics Data System (ADS)
Isakow, Zbigniew
2017-11-01
This paper describes the wireless acquisition of multi-channel seismic data using a specialized mobile system, Seismobile, designed for subsoil diagnostics for transportation routes. The paper presents examples of multi-channel seismic records obtained during system tests in a configuration with 96 channels (four 24-channel landstreamers) and various seismic sources. Seismic waves were generated at the same point using different sources: a 5-kg hammer, a Gisco source with a 90-kg pile-driver, and two other pile-drivers of 45 and 70 kg. Particular attention is paid to the synchronization of source timing, the measurement of geometry by autonomous GPS systems, and the repeatability of triggering of measurements, constrained by an accelerometer identifying the seismic waveform. The tests were designed to assess the registration, reliability, and range of the wireless transmission of survey signals. The effectiveness of the automatic numbering of measuring modules was tested as the system components were arranged and fixed to the streamers. After measurements were completed, the accuracy and speed of data downloading from the internal memory (SDHC 32GB WiFi) were determined. Additionally, the functionality of automatic battery recharging, the maximum survey duration, and the reliability of battery discharge signalling were assessed.
Seismic gradiometry using ambient seismic noise in an anisotropic Earth
NASA Astrophysics Data System (ADS)
de Ridder, S. A. L.; Curtis, A.
2017-05-01
We introduce a wavefield gradiometry technique to estimate both isotropic and anisotropic local medium characteristics from short recordings of seismic signals by inverting a wave equation. The method exploits the information in the spatial gradients of a seismic wavefield that are calculated using dense deployments of seismic arrays. The application of the method uses the surface wave energy in the ambient seismic field. To estimate isotropic and anisotropic medium properties we invert an elliptically anisotropic wave equation. The spatial derivatives of the recorded wavefield are evaluated by calculating finite differences over nearby recordings, which introduces a systematic anisotropic error. A two-step approach corrects this error: finite difference stencils are first calibrated, then the output of the wave-equation inversion is corrected using the linearized impulse response to the inverted velocity anomaly. We test the procedure on ambient seismic noise recorded in a large and dense ocean bottom cable array installed over Ekofisk field. The estimated azimuthal anisotropy forms a circular geometry around the production-induced subsidence bowl. This conforms with results from studies employing controlled sources, and with interferometry correlating long records of seismic noise. Yet in this example, the results were obtained using only a few minutes of ambient seismic noise.
Demonstration of improved seismic source inversion method of tele-seismic body wave
NASA Astrophysics Data System (ADS)
Yagi, Y.; Okuwaki, R.
2017-12-01
Seismic rupture inversion of tele-seismic body waves has been widely applied to studies of large earthquakes. In general, tele-seismic body waves contain information on the overall rupture process of a large earthquake, but they have been considered inappropriate for analyzing the detailed rupture process of an M6-7 class earthquake. Recently, the quality and quantity of tele-seismic data and the inversion methods have been greatly improved. The improved data and methods enable us to study the detailed rupture process of an M6-7 class earthquake even if we use only tele-seismic body waves. In this study, we demonstrate the ability of the improved data and method through analyses of the 2016 Rieti, Italy earthquake (Mw 6.2) and the 2016 Kumamoto, Japan earthquake (Mw 7.0), which have been well investigated using InSAR data sets and field observations. We assumed rupture occurring on a single fault plane model inferred from the moment tensor solutions and the aftershock distribution. We constructed spatiotemporally discretized slip-rate functions with patches arranged as closely as possible. We performed inversions using several fault models and found that the spatiotemporal location of the large slip-rate area was robust. For the 2016 Kumamoto, Japan earthquake, the slip-rate distribution shows that the rupture propagated to the southwest during the first 5 s. At 5 s after the origin time, the main rupture started to propagate toward the northeast. The first and second episodes correspond to rupture propagation along the Hinagu fault and the Futagawa fault, respectively. For the 2016 Rieti, Italy earthquake, the slip-rate distribution shows that the rupture propagated in the up-dip direction during the first 2 s, and then propagated toward the northwest. From both analyses, we propose that the spatiotemporal slip-rate distribution estimated by the improved inversion method of tele-seismic body waves contains enough information to study the detailed rupture process of an M6-7 class earthquake.
NASA Astrophysics Data System (ADS)
Huamán Bustamante, Samuel G.; Cavalcanti Pacheco, Marco A.; Lazo Lazo, Juan G.
2018-07-01
The method we propose in this paper seeks to estimate interface displacements among strata related to seismic reflection events, relative to the interfaces at other reference points. To do so, we search for reflection events at the reference point of a second seismic trace taken from the same 3D survey and close to a well. However, the nature of the seismic data introduces uncertainty in the results. Therefore, we perform an uncertainty analysis using the standard deviation of results from several experiments with cross-correlation of signals. To estimate the displacements of events in depth between two seismic traces, we create a synthetic seismic trace from an empirical wavelet and the sonic log of the well close to the second seismic trace. Then, we relate the events of the seismic traces to the depth of the sonic log. Finally, we test the method with data from the Namorado Field in Brazil. The results show that the accuracy of the estimated event depth depends on the results of parallel cross-correlations, primarily those from the procedures used in the integration of the seismic data with data from the well. The proposed approach can correctly identify several similar events in two seismic traces without requiring all seismic traces between two distant points of interest in order to correlate strata in the subsurface.
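A minimal sketch of the windowed cross-correlation step with the standard-deviation style uncertainty estimate described above; the Ricker-like wavelet, noise level and imposed shift are assumed test values, not data from the Namorado Field.

```python
import numpy as np
from scipy.signal import correlate, correlation_lags

def event_shift(trace_a, trace_b, dt):
    """Time shift (s) of trace_b relative to trace_a from the peak of
    their cross-correlation."""
    cc = correlate(trace_b, trace_a, mode="full")
    lags = correlation_lags(len(trace_b), len(trace_a), mode="full")
    return lags[np.argmax(cc)] * dt

# Synthetic test: a 25 Hz Ricker-like event shifted by 8 ms, repeated with noise
rng = np.random.default_rng(3)
dt, n = 0.002, 500
t = np.arange(n) * dt - 0.3
wav = (1 - 2 * (np.pi * 25 * t) ** 2) * np.exp(-(np.pi * 25 * t) ** 2)
shifts = []
for _ in range(100):
    a = wav + 0.05 * rng.standard_normal(n)
    b = np.roll(wav, 4) + 0.05 * rng.standard_normal(n)   # 4 samples = 8 ms
    shifts.append(event_shift(a, b, dt))
print(np.mean(shifts), np.std(shifts))   # ~0.008 s and its scatter
```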
NASA Astrophysics Data System (ADS)
Ramírez-Rojas, Alejandro; Telesca, Luciano; Lovallo, Michele; Flores, Leticia
2015-04-01
By using the method of the visibility graph (VG), five magnitude time series extracted from the seismic catalog of the Mexican subduction zone were investigated. The five seismic sequences represent the seismicity which occurred between 2005 and 2012 in five seismic areas: Guerrero, Chiapas, Oaxaca, Jalisco and Michoacan. Among the five seismic sequences, the Jalisco sequence shows VG properties significantly different from those shown by the other four. Such a difference could be inherent in the different tectonic setting of Jalisco with respect to those characterizing the other four areas. The VG properties of the seismic sequences have been related to the more typical seismological characteristics (the a- and b-values of the Gutenberg-Richter law). The present study was supported by the Bilateral Project Italy-Mexico "Experimental Stick-slip models of tectonic faults: innovative statistical approaches applied to synthetic seismic sequences", jointly funded by MAECI (Italy) and AMEXCID (Mexico) in the framework of the Bilateral Agreement for Scientific and Technological Cooperation PE 2014-2016.
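For readers unfamiliar with the natural visibility graph, a small sketch of the construction (Lacasa-type visibility criterion) and its node degrees is given below on a synthetic magnitude series; it is not the catalog analysis performed in the study.

```python
import numpy as np

def visibility_graph_degrees(values):
    """Node degrees of the natural visibility graph of a time series.
    Nodes i and j (i < j) are linked if every intermediate sample lies
    below the straight line joining (i, values[i]) and (j, values[j])."""
    n = len(values)
    degree = np.zeros(n, dtype=int)
    for i in range(n):
        for j in range(i + 1, n):
            k = np.arange(i + 1, j)
            line = values[j] + (values[i] - values[j]) * (j - k) / (j - i)
            if np.all(values[k] < line):
                degree[i] += 1
                degree[j] += 1
    return degree

# Synthetic magnitude sequence standing in for a catalog segment (GR-like)
rng = np.random.default_rng(4)
mags = 2.0 + rng.exponential(0.5, size=300)
deg = visibility_graph_degrees(mags)
print("mean degree:", deg.mean(), "max degree:", deg.max())
```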
DOE Office of Scientific and Technical Information (OSTI.GOV)
Strack, K.M.; Vozoff, K.
The applications of electromagnetics have increased in the past two decades because of an improved understanding of the methods, improved service availability, and the increased focus of exploration on more complex reservoir characterization issues. For electromagnetic methods, surface applications for hydrocarbon exploration and production are still a special case, while applications in borehole and airborne research and for engineering and environmental objectives are routine. In the past, electromagnetic techniques, in particular deep transient electromagnetics, made up a completely different discipline in geophysics, although many of the principles are similar to those of seismics. With an understanding of the specific problems related first to data processing and then to acquisition, the inclusion of principles learned from seismics happened almost naturally. Initially, the data processing was very similar to seismic full-waveform processing. The hardware was also changed to include multichannel acquisition systems, and the field procedures became very similar to seismic surveying. As a consequence, the integration and synergism of the interpretation process is becoming almost automatic. The long-offset transient electromagnetic (LOTEM) technique is summarized from the viewpoint of its similarity to seismics. The complete concept of the method is also reviewed. An interpretation case history that integrates seismic and LOTEM data from a hydrocarbon area in China clearly demonstrates the limitations and benefits of the method.
NASA Astrophysics Data System (ADS)
Luginbuhl, Molly; Rundle, John B.; Hawkins, Angela; Turcotte, Donald L.
2018-01-01
Nowcasting is a new method of statistically classifying seismicity and seismic risk (Rundle et al. 2016). In this paper, the method is applied to the induced seismicity at the Geysers geothermal region in California and to the induced seismicity due to fluid injection in Oklahoma. Nowcasting utilizes the catalogs of seismicity in these regions. Two earthquake magnitudes are selected, one large, say M_λ ≥ 4, and one small, say M_σ ≥ 2. The method utilizes the number of small earthquakes that occur between pairs of large earthquakes. The cumulative probability distribution of these values is obtained. The earthquake potential score (EPS) is defined by the number of small earthquakes that have occurred since the last large earthquake; the point where this number falls on the cumulative probability distribution of interevent counts defines the EPS. A major advantage of nowcasting is that it utilizes "natural time", earthquake counts between events, rather than clock time. Thus, it is not necessary to decluster aftershocks, and the results are applicable even if the level of induced seismicity varies in time. The application of natural time to the accumulation of seismic hazard depends on the applicability of Gutenberg-Richter (GR) scaling. The increasing number of small earthquakes that occur after a large earthquake can be scaled to give the risk of a large earthquake occurring. To illustrate our approach, we utilize the number of M_σ ≥ 2.75 earthquakes in Oklahoma to nowcast the number of M_λ ≥ 4.0 earthquakes in Oklahoma. The applicability of the scaling is illustrated during the rapid build-up of injection-induced seismicity between 2012 and 2016, and the subsequent reduction in seismicity associated with a reduction in fluid injection. The same method is applied to the geothermal-induced seismicity at the Geysers, California, for comparison.
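A compact sketch of the EPS calculation described above, applied to an assumed synthetic catalogue with Gutenberg-Richter distributed magnitudes; the thresholds mirror the paper's M_σ ≥ 2.75 and M_λ ≥ 4.0 choice, but the catalogue itself is invented for illustration.

```python
import numpy as np

def earthquake_potential_score(times, mags, m_small, m_large):
    """EPS: percentile of the current small-event count since the last large
    event on the empirical CDF of interevent small-event counts."""
    order = np.argsort(times)
    mags = np.asarray(mags)[order]
    large_idx = np.flatnonzero(mags >= m_large)
    small = mags >= m_small
    # small-event counts between consecutive pairs of large events
    counts = np.sort([small[a + 1:b].sum()
                      for a, b in zip(large_idx[:-1], large_idx[1:])])
    current = small[large_idx[-1] + 1:].sum()      # count since last large event
    return np.searchsorted(counts, current, side="right") / len(counts)

# Synthetic catalogue with GR-distributed magnitudes (b ~ 1)
rng = np.random.default_rng(5)
n = 20000
t = np.sort(rng.uniform(0.0, 10.0, n))
m = 2.0 + rng.exponential(1.0 / np.log(10), n)
print("EPS =", earthquake_potential_score(t, m, m_small=2.75, m_large=4.0))
```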
Sensing network for electromagnetic fields generated by seismic activities
NASA Astrophysics Data System (ADS)
Gershenzon, Naum I.; Bambakidis, Gust; Ternovskiy, Igor V.
2014-06-01
Sensor networks are becoming prolific and now play an increasingly important role in acquiring and processing information. Cyber-physical systems research focuses on the investigation of integrated systems that include sensing, networking, and computation. The physics of seismic and electromagnetic field measurement requires special consideration of how to design electromagnetic field measurement networks, for both research and the detection of earthquakes and explosions, alongside seismic measurement networks. In addition, although the electromagnetic sensor network itself can be designed and deployed as a research tool with a great deal of flexibility, the placement of the measuring nodes must be designed based on a systematic analysis of the seismic-electromagnetic interaction. In this article, we review the observations of the co-seismic electromagnetic field generated by earthquakes and man-made sources such as vibrations and explosions. The theoretical investigation allows the distribution of sensor nodes to be optimized and could be used to support existing geological networks. The placement of sensor nodes has to be determined based on the physics of the electromagnetic field distribution above ground level. The results of theoretical investigations of seismo-electromagnetic phenomena are considered in Section I. First, we compare the relative contributions of various types of mechano-electromagnetic mechanisms and then analyze in detail the calculation of electromagnetic fields generated by piezomagnetic and electrokinetic effects.
NASA Astrophysics Data System (ADS)
Ciervo, C.; Becker, M.; Cole, M. C.; Coleman, T.; Mondanos, M.
2016-12-01
Measuring hydromechanical behavior in fractured rock is important for hydraulic fracturing and stimulation in petroleum reservoirs, predicting thermal effects in geothermal fields, and monitoring geologic carbon sequestration injection. We present a new method for measuring geomechanical response to fluid pressure in fractures that employs fiber optic Distributed Acoustic Sensing (DAS). DAS was designed to measure acoustic and seismic signals, often in petroleum wells. DAS seismic monitoring has been proposed as a particularly useful tool for performing seismic testing for carbon sequestration and geothermal projects because fiber optic cable is able to withstand high temperatures and pressures. DAS measures seismic vibration in the Hz to kHz frequency range by measuring strain rate in the fiber optic cable. We adapted this technology to measure rock strain in response to periodic hydraulic pulses in the mHz frequency range. A field experiment was conducted in a low-permeability fractured crystalline bedrock to test the ability of DAS to measure hydromechanical response to periodic pumping and injection. The fiber optic cable was coupled to the borehole wall using a flexible liner designed with an air coupled transducer to measure fluid pressure. Both strain and pressure were measured across a known fracture zone hydraulically connected to the pumping/injection well 30 m away. Periodic strain with amplitudes as small as 50 nm were measured in response to head amplitudes of 2 mm. Clean strain signals were detected at all tested periods of hydraulic oscillation ranging from 2 to 18 minutes. A non-linear relationship was found between opening and closing of the fracture (as measured by cable strain) and fluid pressure in the fracture. The response was also sensitive to the fiber optic cable design. This field test suggests potential for measuring hydraulic connectivity and hydromechanical behavior in fractured formations through cementing fiber optic cable in wellbores outside of well casings.
Seismic risk assessment and application in the central United States
Wang, Z.
2011-01-01
Seismic risk is a somewhat subjective, but important, concept in earthquake engineering and other related decision-making. Another important concept that is closely related to seismic risk is seismic hazard. Although seismic hazard and seismic risk have often been used interchangeably, they are fundamentally different: seismic hazard describes the natural phenomenon or physical property of an earthquake, whereas seismic risk describes the probability of loss or damage that could be caused by a seismic hazard. The distinction between seismic hazard and seismic risk is of practical significance because measures for seismic hazard mitigation may differ from those for seismic risk reduction. Seismic risk assessment is a complicated process and starts with seismic hazard assessment. Although probabilistic seismic hazard analysis (PSHA) is the most widely used method for seismic hazard assessment, recent studies have found that PSHA is not scientifically valid. Use of PSHA will lead to (1) artifact estimates of seismic risk, (2) misleading use of the annual probability of exceedance (i.e., the probability of exceedance in one year) as a frequency (per year), and (3) numerical creation of extremely high ground motion. An alternative approach, which is similar to those used for flood and wind hazard assessments, has been proposed. © 2011 ASCE.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Nugraha, Andri Dian; Adisatrio, Philipus Ronnie
2013-09-09
Seismic refraction surveying is a geophysical method useful for imaging the Earth's interior, particularly the near surface. One of the common problems in seismic refraction surveys is weak amplitude due to attenuation at far offsets. This phenomenon makes it difficult to pick the first refraction arrival, and hence challenging to produce a near-surface image. Seismic interferometry is a new technique for manipulating seismic traces to obtain the Green's function from a pair of receivers. One of its uses is improving the quality of first refraction arrivals at far offsets. This research shows that we can estimate physical properties such as seismic velocity and thickness from virtual refraction processing. Virtual refraction can also enhance the far-offset signal amplitude, since a stacking procedure is involved. Our results show that super-virtual refraction processing produces a seismic image with a higher signal-to-noise ratio than the raw seismic image. In the end, the number of reliable first-arrival picks is also increased.
Earthquake Loading Assessment to Evaluate Liquefaction Potential in Emilia-Romagna Region
NASA Astrophysics Data System (ADS)
Daminelli, R.; Marcellini, A.; Tento, A.
2016-12-01
The May-June 2012 seismic sequence that struck Lombardia and Emilia-Romagna consisted of seven main events of magnitude greater than 5 followed by numerous aftershocks. The strongest earthquakes occurred on May 20 (M=5.9) and May 29 (M=5.8). The widespread soil liquefaction, unexpected because of the moderate magnitude of the events, pushed the local authorities to issue research projects aimed at defining the earthquake loading needed to evaluate the liquefaction safety factor. The reasons explained below led us to adopt a deterministic hazard approach to evaluate the seismic parameters relevant to liquefaction assessment, despite the fact that the Italian Seismic Building Code (NTC08) is based on probabilistic hazard analysis. For urban planning and building design, geologists generally adopt the CRR/CSR technique to assess liquefaction potential; therefore we considered PGA and a design magnitude to be representative of the seismic loading. The procedure adopted consists of: a) identification of seismic source zones and characterization of each zone by its maximum magnitude; b) evaluation of the source-to-site distance; and c) adoption of a suitable attenuation law to compute the expected PGA at the site, given the site conditions and the design magnitude. The design magnitude can be the maximum magnitude, the magnitude that causes the largest PGA, or both. The PGA values obtained are larger than the 474-year return period PGA prescribed by NTC08 for the seismic design of ordinary buildings. We conducted a CPTU resistance test intended to define the CRR at the village of Cavezzo, situated in the epicentral area of the 2012 earthquake. The CRR/CSR ratio indicated an elevated liquefaction risk at the analysed site. On the contrary, the adoption of the 474-year return period PGA of NTC08 prescribed for the Cavezzo site led to a negligible liquefaction risk. Note that several liquefaction phenomena were observed very close to the investigated site.
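The abstract does not spell out the CSR side of the CRR/CSR check; as a hedged illustration, the sketch below evaluates the Seed-Idriss simplified cyclic stress ratio for an assumed shallow sandy layer under two PGA values (e.g., a code value versus a deterministic scenario value). The soil column and the depth-reduction relation are illustrative assumptions, not data from the Cavezzo site.

```python
def cyclic_stress_ratio(pga_g, sigma_v, sigma_v_eff, depth_m):
    """Seed-Idriss simplified CSR = 0.65 * (a_max/g) * (sigma_v/sigma_v') * r_d.
    r_d uses the simple Liao-Whitman depth reduction (an assumption)."""
    r_d = 1.0 - 0.00765 * depth_m if depth_m <= 9.15 else 1.174 - 0.0267 * depth_m
    return 0.65 * pga_g * (sigma_v / sigma_v_eff) * r_d

# Illustrative shallow sandy layer: 5 m depth, water table at 1 m
depth, gamma, gamma_w = 5.0, 18.0, 9.81              # unit weights in kN/m3
sigma_v = gamma * depth                              # total vertical stress (kPa)
sigma_v_eff = sigma_v - gamma_w * (depth - 1.0)      # effective stress (kPa)
for pga in (0.10, 0.25):
    csr = cyclic_stress_ratio(pga, sigma_v, sigma_v_eff, depth)
    print(f"PGA {pga:g} g -> CSR = {csr:.3f}")
```

The safety factor would then follow as CRR/CSR, with the CRR taken from the CPTU resistance at the site.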
High temporal resolution mapping of seismic noise sources using heterogeneous supercomputers
NASA Astrophysics Data System (ADS)
Gokhberg, Alexey; Ermert, Laura; Paitz, Patrick; Fichtner, Andreas
2017-04-01
Time- and space-dependent distribution of seismic noise sources is becoming a key ingredient of modern real-time monitoring of various geo-systems. Significant interest in seismic noise source maps with high temporal resolution (days) is expected to come from a number of domains, including natural resources exploration, analysis of active earthquake fault zones and volcanoes, as well as geothermal and hydrocarbon reservoir monitoring. Currently, knowledge of noise sources is insufficient for high-resolution subsurface monitoring applications. Near-real-time seismic data, as well as advanced imaging methods to constrain seismic noise sources, have recently become available. These methods are based on the massive cross-correlation of seismic noise records from all available seismic stations in the region of interest and are therefore very computationally intensive. Heterogeneous massively parallel supercomputing systems introduced in recent years combine conventional multi-core CPUs with GPU accelerators and provide an opportunity for a manifold increase in computing performance. Therefore, these systems represent an efficient platform for implementation of a noise source mapping solution. We present the first results of an ongoing research project conducted in collaboration with the Swiss National Supercomputing Centre (CSCS). The project aims at building a service that provides seismic noise source maps for Central Europe with high temporal resolution (days to a few weeks depending on frequency and data availability). The service is hosted on the CSCS computing infrastructure; all computationally intensive processing is performed on the massively parallel heterogeneous supercomputer "Piz Daint". The solution architecture is based on the Application-as-a-Service concept in order to provide interested external researchers with regular access to the noise source maps. The solution architecture includes the following sub-systems: (1) data acquisition responsible for collecting, on a periodic basis, raw seismic records from the European seismic networks, (2) a high-performance noise source mapping application responsible for generation of source maps using cross-correlation of seismic records, (3) back-end infrastructure for the coordination of various tasks and computations, (4) a front-end Web interface providing the service to the end-users, and (5) a data repository. The noise mapping application is composed of four principal modules: (1) pre-processing of raw data, (2) massive cross-correlation, (3) post-processing of correlation data based on computation of the logarithmic energy ratio, and (4) generation of source maps from post-processed data. Implementation of the solution posed various challenges, in particular the selection of data sources and transfer protocols, automation and monitoring of daily data downloads, ensuring the required data processing performance, design of a general service-oriented architecture for coordination of the various sub-systems, and engineering an appropriate data storage solution. The present pilot version of the service implements noise source maps for Switzerland. Extension of the solution to Central Europe is planned for the next project phase.
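The post-processing module mentioned above measures source asymmetry from correlation functions; a minimal sketch of one such logarithmic energy ratio measurement is given below, applied to a synthetic, deliberately asymmetric correlation. The lag window limits are assumptions, not the project's configuration.

```python
import numpy as np

def log_energy_ratio(cc, dt, window=(5.0, 50.0)):
    """Logarithmic energy ratio between the causal and acausal branches of a
    noise cross-correlation, a simple asymmetry measurement used to infer the
    dominant noise-source side. cc is centred on zero lag; window in seconds."""
    n = len(cc)
    lags = (np.arange(n) - n // 2) * dt
    causal = (lags >= window[0]) & (lags <= window[1])
    acausal = (lags <= -window[0]) & (lags >= -window[1])
    return np.log(np.sum(cc[causal] ** 2) / np.sum(cc[acausal] ** 2))

# Synthetic asymmetric correlation: stronger arrival on the causal side
dt = 0.5
lags = np.arange(-120, 120.5, dt)
cc = 1.0 * np.exp(-((lags - 20) ** 2) / 4.0) + 0.4 * np.exp(-((lags + 20) ** 2) / 4.0)
print("log energy ratio:", log_energy_ratio(cc, dt))   # > 0: source on the causal side
```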
NASA Astrophysics Data System (ADS)
Maity, Debotyam
This study is aimed at an improved understanding of unconventional reservoirs which include tight reservoirs (such as shale oil and gas plays), geothermal developments, etc. We provide a framework for improved fracture zone identification and mapping of the subsurface for a geothermal system by integrating data from different sources. The proposed ideas and methods were tested primarily on data obtained from North Brawley geothermal field and the Geysers geothermal field apart from synthetic datasets which were used to test new algorithms before actual application on the real datasets. The study has resulted in novel or improved algorithms for use at specific stages of data acquisition and analysis including improved phase detection technique for passive seismic (and teleseismic) data as well as optimization of passive seismic surveys for best possible processing results. The proposed workflow makes use of novel integration methods as a means of making best use of the available geophysical data for fracture characterization. The methodology incorporates soft computing tools such as hybrid neural networks (neuro-evolutionary algorithms) as well as geostatistical simulation techniques to improve the property estimates as well as overall characterization efficacy. The basic elements of the proposed characterization workflow involves using seismic and microseismic data to characterize structural and geomechanical features within the subsurface. We use passive seismic data to model geomechanical properties which are combined with other properties evaluated from seismic and well logs to derive both qualitative and quantitative fracture zone identifiers. The study has resulted in a broad framework highlighting a new technique for utilizing geophysical data (seismic and microseismic) for unconventional reservoir characterization. It provides an opportunity to optimally develop the resources in question by incorporating data from different sources and using their temporal and spatial variability as a means to better understand the reservoir behavior. As part of this study, we have developed the following elements which are discussed in the subsequent chapters: 1. An integrated characterization framework for unconventional settings with adaptable workflows for all stages of data processing, interpretation and analysis. 2. A novel autopicking workflow for noisy passive seismic data used for improved accuracy in event picking as well as for improved velocity model building. 3. Improved passive seismic survey design optimization framework for better data collection and improved property estimation. 4. Extensive post-stack seismic attribute studies incorporating robust schemes applicable in complex reservoir settings. 5. Uncertainty quantification and analysis to better quantify property estimates over and above the qualitative interpretations made and to validate observations independently with quantified uncertainties to prevent erroneous interpretations. 6. Property mapping from microseismic data including stress and anisotropic weakness estimates for integrated reservoir characterization and analysis. 7. Integration of results (seismic, microseismic and well logs) from analysis of individual data sets for integrated interpretation using predefined integration framework and soft computing tools.
NASA Astrophysics Data System (ADS)
Melkumyan, Mikayel G.
2011-03-01
It is obvious that the problem of precise assessment and/or analysis of seismic hazard (SHA) is quite a serious issue, and seismic risk reduction considerably depends on it. It is well known that there are two approaches in seismic hazard analysis, namely, deterministic (DSHA) and probabilistic (PSHA). The latter utilizes statistical estimates of earthquake parameters. However, they may not exist in a specific region, and using PSHA it is difficult to take into account local aspects, such as specific regional geology and site effects, with sufficient precision. For this reason, DSHA is preferable in many cases. After the destructive 1988 Spitak earthquake, the SHA of the territory of Armenia has been revised and increased. The distribution pattern for seismic risk in Armenia is given. Maximum seismic risk is concentrated in the region of the capital, the city of Yerevan, where 40% of the republic's population resides. We describe the method used for conducting seismic resistance assessment of the existing reinforced concrete (R/C) buildings. Using this assessment, as well as GIS technology, the coefficients characterizing the seismic risk of destruction were calculated for almost all buildings of Yerevan City. The results of the assessment are presented. It is concluded that, presently, there is a particularly pressing need for strengthening existing buildings. We then describe non-conventional approaches to upgrading the earthquake resistance of existing multistory R/C frame buildings by means of Additional Isolated Upper Floor (AIUF) and of existing stone and frame buildings by means of base isolation. In addition, innovative seismic isolation technologies were developed and implemented in Armenia for construction of new multistory multifunctional buildings. The advantages of these technologies are listed in the paper. It is worth noting that the aforementioned technologies were successfully applied for retrofitting an existing 100-year-old bank building in Irkutsk (Russia), for retrofit design of an existing 177-year-old municipality building in Iasi (Romania) and for construction of a new clinic building in Stepanakert (Nagorno Karabakh). Short descriptions of these projects are presented. Since 1994 the total number of base and roof isolated buildings constructed, retrofitted or under construction in Armenia, has reached 32. Statistics of seismically isolated buildings are given in the paper. The number of base isolated buildings per capita in Armenia is one of the highest in the world. In Armenia, for the first time in history, retrofitting of existing buildings by base isolation was carried out without interruption in the use of the buildings. The description of different base isolated buildings erected in Armenia, as well as the description of the method of retrofitting of existing buildings which is patented in Armenia (M. G. Melkumyan, patent of the Republic of Armenia No. 579), are also given in the paper.
NASA Astrophysics Data System (ADS)
Yang, H.; Sinha, S. K.; Feng, Y.; Jeremic, B.
2016-12-01
The M5.8 earthquake that occurred in Pawnee, Oklahoma on September 3rd, 2016 is the strongest seismic event recorded in Oklahoma. Soil-structure interaction (SSI) played an important role in this event. As a major aspect of SSI analysis, the propagation and dissipation of seismic energy will be studied in depth, with particular focus on the ground motion recorded in this earthquake. Seismic energy propagates from the seismic source to the SSI system and is dissipated within and around the SSI system. Energy dissipation within the SSI system is related to the inelastic behavior of soil, rock, the contact zone (foundation-soil/rock), structural components and energy dissipators. Accurate evaluation of seismic energy can be used to optimize the SSI system for safety and economy. The SSI system can be designed so that the majority of the seismic energy is dissipated within the soil and the soil-foundation contact zone, away from the structure. Accurate and theoretically sound modeling of propagation and dissipation is essential to the use of seismic energy for design and assessment. The rate of plastic work is defined as the product of stress and the rate of plastic strain. On the other hand, plastic dissipation is defined as a form of heat transfer. The difference between these two quantities, which has been neglected in many studies, is the plastic part of the free energy. By considering energy storage and dissipation at both the micro (particle) scale and the macro (continuum) scale, it can be shown that the plastic free energy is an intrinsic attribute at the continuum scale due to particle rearrangement. With proper application of thermodynamics in finite element simulations, plastic dissipation can be modeled correctly. Examples will be used to illustrate the above points at the constitutive, single-element and SSI model scales. In addition, the propagation of seismic energy and its dissipation (timing and location) will be used to illustrate their use in design and assessment.
Detecting aseismic strain transients from seismicity data
Llenos, A.L.; McGuire, J.J.
2011-01-01
Aseismic deformation transients such as fluid flow, magma migration, and slow slip can trigger changes in seismicity rate. We present a method that can detect these seismicity rate variations and utilize these anomalies to constrain the underlying variations in stressing rate. Because ordinary aftershock sequences often obscure changes in the background seismicity caused by aseismic processes, we combine the stochastic Epidemic Type Aftershock Sequence (ETAS) model, which describes aftershock sequences well, and the physically based rate- and state-dependent friction seismicity model into a single seismicity rate model that models both aftershock activity and changes in background seismicity rate. We implement this model in a data assimilation algorithm that inverts seismicity catalogs to estimate space-time variations in stressing rate. We evaluate the method using a synthetic catalog, and then apply it to a catalog of M ≥ 1.5 events that occurred in the Salton Trough from 1990 to 2009. We validate our stressing rate estimates by comparing them to estimates from a geodetically derived slip model for a large creep event on the Obsidian Buttes fault. The results demonstrate that our approach can identify large aseismic deformation transients in a multidecade-long earthquake catalog and roughly constrain the absolute magnitude of the stressing rate transients. Our method can therefore provide a way to detect aseismic transients in regions where geodetic resolution in space or time is poor. Copyright 2011 by the American Geophysical Union.
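The ETAS component of the combined model has a standard conditional-intensity form; a minimal sketch is shown below with illustrative parameters (the rate-and-state component and the data assimilation step are not reproduced here).

```python
import numpy as np

def etas_intensity(t, event_times, event_mags, mu, K, c, p, alpha, m0):
    """ETAS conditional intensity: lambda(t) = mu + sum over past events of
    K * exp(alpha * (m_i - m0)) / (t - t_i + c)**p."""
    past = event_times < t
    dt = t - event_times[past]
    return mu + np.sum(K * np.exp(alpha * (event_mags[past] - m0)) / (dt + c) ** p)

# Toy catalogue: background events plus one M5 that raises the rate
times = np.array([1.0, 3.5, 7.2, 10.0])
mags = np.array([2.1, 2.4, 5.0, 2.2])
params = dict(mu=0.2, K=0.02, c=0.01, p=1.1, alpha=1.0, m0=2.0)
for t in (7.0, 7.5, 12.0):
    print(t, etas_intensity(t, times, mags, **params))
```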
Alternative Energy Sources in Seismic Methods
NASA Astrophysics Data System (ADS)
Tün, Muammer; Pekkan, Emrah; Mutlu, Sunay; Ecevitoğlu, Berkan
2015-04-01
When the suitability of a settlement area is investigated, soil-amplification, liquefaction and fault-related hazards should be defined, and the associated risks should be clarified. For this reason, the soil engineering parameters and subsurface geological structure of a new settlement area should be investigated. In particular, faults covered by Quaternary alluvium, as well as the thicknesses, shear-wave velocities and geometry of subsurface sediments, can lead to soil amplification during an earthquake. Likewise, changes in shear-wave velocities along the basin are also very important. Geophysical methods can be used to determine the local soil properties. In this study, the use of alternative seismic energy sources when implementing seismic reflection, seismic refraction and MASW methods in the residential areas of Eskisehir/Turkey is discussed. Our home-developed seismic energy source, EAPSG (Electrically-Fired-PS-Gun), capable of firing 2x24 magnum shotgun cartridges at once to generate P and S waves, and our home-developed WD-500 (500 kg Weight Drop) seismic energy source, mounted on a truck, were developed under a scientific research project of Anadolu University. We were able to reach penetration depths of up to 1200 m with the EAPSG and 800 m with the WD-500 in our seismic reflection surveys. The WD-500 seismic energy source was also used to perform MASW surveys, using a 24-channel configuration of 4.5 Hz vertical geophones spaced 10 m apart. We were able to reach a penetration depth of 100 m in the MASW surveys.
The Research of Multiple Attenuation Based on Feedback Iteration and Independent Component Analysis
NASA Astrophysics Data System (ADS)
Xu, X.; Tong, S.; Wang, L.
2017-12-01
Multiple suppression is a difficult problem in seismic data processing. The traditional technology for multiple attenuation is based on the principle of minimum output energy of the seismic signal; this criterion relies on second-order statistics and cannot achieve multiple attenuation when the primaries and multiples are non-orthogonal. In order to solve this problem, we combine the feedback iteration method based on the wave equation with an improved independent component analysis (ICA) based on higher-order statistics to suppress the multiples. We first use the iterative feedback method to predict the free-surface multiples of each order. Then, in order to match the predicted multiples to the real multiples in amplitude and phase, we design an expanded pseudo multi-channel matching filtering method to obtain a more accurate matching result. Finally, we present an improved FastICA algorithm, based on the maximum non-Gaussianity criterion for the output signal, applied to the matched multiples, which yields better separation of the primaries and the multiples. The advantage of our method is that we do not need any a priori information for the prediction of the multiples, and we obtain a better separation result. The method has been applied to several synthetic data sets generated by the finite-difference modeling technique and to the Sigsbee2B model multiple data; the primaries and multiples are non-orthogonal in these models. The experiments show that after three to four iterations we can obtain accurate multiple predictions. Using our matching method and FastICA adaptive multiple subtraction, we can not only effectively preserve the useful signal energy in the seismic records, but also effectively suppress the free-surface multiples, especially the multiples related to the middle and deep sections.
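As a hedged illustration of the ICA-based subtraction idea (not the authors' expanded pseudo multi-channel matching filter), the sketch below feeds a recorded trace and an imperfect multiple prediction to scikit-learn's FastICA and checks which separated component recovers the primary; all wavelets and mixing coefficients are synthetic assumptions.

```python
import numpy as np
from sklearn.decomposition import FastICA

rng = np.random.default_rng(7)
n, dt = 2000, 0.002
t = np.arange(n) * dt

def ricker(t0, f=25.0):
    a = (np.pi * f * (t - t0)) ** 2
    return (1 - 2 * a) * np.exp(-a)

primary = ricker(0.8) + 0.6 * ricker(1.6)
multiple = 0.7 * ricker(1.9) + 0.5 * ricker(2.7)

# Two "channels": the recorded trace and an imperfectly matched multiple model
recorded = primary + multiple + 0.01 * rng.standard_normal(n)
predicted = 0.9 * multiple + 0.05 * primary + 0.01 * rng.standard_normal(n)

ica = FastICA(n_components=2, random_state=0)
sources = ica.fit_transform(np.c_[recorded, predicted])   # shape (n, 2)

# Identify the estimated primary as the component best correlated with it
corrs = [abs(np.corrcoef(sources[:, k], primary)[0, 1]) for k in range(2)]
print("correlation of best component with true primary:", max(corrs))
```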
Analysing seismic-source mechanisms by linear-programming methods.
Julian, B.R.
1986-01-01
Linear-programming methods are powerful and efficient tools for objectively analysing seismic focal mechanisms and are applicable to a wide range of problems, including tsunami warning and nuclear explosion identification. The source mechanism is represented as a point in the 6-D space of moment-tensor components. The present method can easily be extended to fit observed seismic-wave amplitudes (either signed or absolute) subject to polarity constraints, and to assess the range of mechanisms consistent with a set of measured amplitudes. -from Author
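A minimal sketch of how such a linear program can be posed with scipy: an L1 (least-absolute-deviation) fit of the six moment-tensor components to observed amplitudes, with optional first-motion polarity constraints expressed as linear inequalities. The Green's-function matrix here is random and purely illustrative, not a real excitation kernel.

```python
import numpy as np
from scipy.optimize import linprog

def l1_moment_tensor(G, amps, G_pol=None, pol=None):
    """L1 fit of a 6-component moment tensor m to amplitudes amps = G @ m,
    optionally subject to polarity constraints sign(G_pol @ m) = pol (+/-1),
    posed as a linear program over x = [m (unbounded), e (n, >= 0)]."""
    n = len(amps)
    c = np.r_[np.zeros(6), np.ones(n)]                  # minimise sum of residual bounds
    A_ub = np.block([[ G, -np.eye(n)],                  #  G m - e <=  a
                     [-G, -np.eye(n)]])                 # -G m - e <= -a
    b_ub = np.r_[amps, -amps]
    if G_pol is not None:
        pol = np.asarray(pol, float)
        A_ub = np.vstack([A_ub, np.c_[-pol[:, None] * G_pol, np.zeros((len(pol), n))]])
        b_ub = np.r_[b_ub, np.zeros(len(pol))]
    bounds = [(None, None)] * 6 + [(0, None)] * n
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=bounds, method="highs")
    return res.x[:6]

# Synthetic test: recover a known tensor from noisy amplitudes
rng = np.random.default_rng(8)
m_true = np.array([1.0, -0.5, -0.5, 0.2, 0.0, 0.1])
G = rng.standard_normal((40, 6))
amps = G @ m_true + 0.05 * rng.standard_normal(40)
print(np.round(l1_moment_tensor(G, amps), 2))
```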
Method for determining formation quality factor from seismic data
Taner, M. Turhan; Treitel, Sven
2005-08-16
A method is disclosed for calculating the quality factor Q from a seismic data trace. The method includes calculating a first and a second minimum phase inverse wavelet at a first and a second time interval along the seismic data trace, synthetically dividing the first wavelet by the second wavelet, Fourier transforming the result of the synthetic division, calculating the logarithm of this quotient of Fourier transforms and determining the slope of a best fit line to the logarithm of the quotient.
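A rough sketch of the spectral-ratio idea behind this kind of Q estimation (logarithm of the ratio of two windowed spectra, slope fitted over a frequency band); the synthetic trace, window positions and bandwidth are assumptions, and this is not the patented minimum-phase-wavelet procedure itself.

```python
import numpy as np

def estimate_q(trace, fs, win1, win2, band=(10.0, 60.0)):
    """Q from the slope of the log spectral ratio of two time windows:
    ln|A2(f)/A1(f)| ~ -pi * f * dt / Q + const, dt = travel-time difference."""
    i1, i2 = (slice(int(w[0] * fs), int(w[1] * fs)) for w in (win1, win2))
    n = max(i1.stop - i1.start, i2.stop - i2.start)
    f = np.fft.rfftfreq(n, d=1.0 / fs)
    a1 = np.abs(np.fft.rfft(trace[i1], n))
    a2 = np.abs(np.fft.rfft(trace[i2], n))
    mask = (f >= band[0]) & (f <= band[1])
    slope, _ = np.polyfit(f[mask], np.log(a2[mask] / a1[mask]), 1)
    dt = win2[0] - win1[0]        # assumes both events sit at the same window offset
    return -np.pi * dt / slope

# Synthetic check: two zero-phase arrivals, the second attenuated with Q = 60
fs, q_true, t_travel, n = 1000.0, 60.0, 0.4, 2048
freq = np.fft.rfftfreq(n, 1.0 / fs)
src_spec = np.exp(-(freq / 80.0) ** 2)
arr1 = src_spec * np.exp(-2j * np.pi * freq * 0.25)                 # event at 0.25 s
arr2 = src_spec * np.exp(-np.pi * freq * t_travel / q_true) \
                * np.exp(-2j * np.pi * freq * 0.65)                  # event at 0.65 s
trace = np.fft.irfft(arr1 + arr2, n)
print(estimate_q(trace, fs, (0.05, 0.45), (0.45, 0.85)))             # ~ 60
```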
NASA Astrophysics Data System (ADS)
Patton, J.; Yeck, W.; Benz, H.
2017-12-01
The U.S. Geological Survey National Earthquake Information Center (USGS NEIC) is implementing and integrating new signal detection methods such as subspace correlation, continuous beamforming, multi-band picking and automatic phase identification into near-real-time monitoring operations. Leveraging the additional information from these techniques helps the NEIC utilize a large and varied network on local to global scales. The NEIC is developing an ordered, rapid, robust, and decentralized framework for distributing seismic detection data, as well as a set of formalized formatting standards. These frameworks and standards enable the NEIC to implement a seismic event detection framework that supports basic tasks, including automatic arrival time picking, social-media-based event detections, and automatic association of different seismic detection data into seismic earthquake events. In addition, this framework enables retrospective detection processing such as automated S-wave arrival time picking given a detected event, discrimination and classification of detected events by type, back-azimuth and slowness calculations, and ensuring aftershock and induced sequence detection completeness. These processes and infrastructure improve the NEIC's capabilities, accuracy, and speed of response. In addition, this same infrastructure provides an improved and convenient structure to support access to automatic detection data for both research and algorithmic development.
Seismic signal processing on heterogeneous supercomputers
NASA Astrophysics Data System (ADS)
Gokhberg, Alexey; Ermert, Laura; Fichtner, Andreas
2015-04-01
The processing of seismic signals - including the correlation of massive ambient noise data sets - represents an important part of a wide range of seismological applications. It is characterized by large data volumes as well as high computational input/output intensity. Development of efficient approaches towards seismic signal processing on emerging high performance computing systems is therefore essential. Heterogeneous supercomputing systems introduced in the recent years provide numerous computing nodes interconnected via high throughput networks, every node containing a mix of processing elements of different architectures, like several sequential processor cores and one or a few graphical processing units (GPU) serving as accelerators. A typical representative of such computing systems is "Piz Daint", a supercomputer of the Cray XC 30 family operated by the Swiss National Supercomputing Center (CSCS), which we used in this research. Heterogeneous supercomputers provide an opportunity for manifold application performance increase and are more energy-efficient, however they have much higher hardware complexity and are therefore much more difficult to program. The programming effort may be substantially reduced by the introduction of modular libraries of software components that can be reused for a wide class of seismology applications. The ultimate goal of this research is design of a prototype for such library suitable for implementing various seismic signal processing applications on heterogeneous systems. As a representative use case we have chosen an ambient noise correlation application. Ambient noise interferometry has developed into one of the most powerful tools to image and monitor the Earth's interior. Future applications will require the extraction of increasingly small details from noise recordings. To meet this demand, more advanced correlation techniques combined with very large data volumes are needed. This poses new computational problems that require dedicated HPC solutions. The chosen application is using a wide range of common signal processing methods, which include various IIR filter designs, amplitude and phase correlation, computing the analytic signal, and discrete Fourier transforms. Furthermore, various processing methods specific for seismology, like rotation of seismic traces, are used. Efficient implementation of all these methods on the GPU-accelerated systems represents several challenges. In particular, it requires a careful distribution of work between the sequential processors and accelerators. Furthermore, since the application is designed to process very large volumes of data, special attention had to be paid to the efficient use of the available memory and networking hardware resources in order to reduce intensity of data input and output. In our contribution we will explain the software architecture as well as principal engineering decisions used to address these challenges. We will also describe the programming model based on C++ and CUDA that we used to develop the software. Finally, we will demonstrate performance improvements achieved by using the heterogeneous computing architecture. This work was supported by a grant from the Swiss National Supercomputing Centre (CSCS) under project ID d26.
NASA Astrophysics Data System (ADS)
Jahangardi, Morteza; Hafezi Moghaddas, Naser; Keivan Hosseini, Sayyed; Garazhian, Omran
2015-04-01
We applied the seismic refraction method at an archaeological site, Tepe Damghani, located in Sabzevar, NE Iran, in order to determine structures of archaeological interest. This pre-historical site has special conditions with respect to geographical location and geomorphological setting: it is an urban archaeological site, and in recent years it has been used as an agricultural field. In spring and summer of 2012, the third season of archaeological excavation was carried out. Test trenches at this site revealed that the cultural layers were often adversely disturbed by human activities such as farming and road construction in recent years. Conditions of the archaeological cultural layers in the southern and eastern parts of the Tepe are slightly better; for instance, in the 3×3 m² test trench 1S03, the third test trench excavated in the southern part of the Tepe, an in situ adobe architectural structure was discovered that likely belongs to the cultural features of a complex with 5 graves. After conclusion of the third season of archaeological excavation, all of the test trenches were backfilled with the same excavated soil. The seismic refraction method was applied with 12 channels of P geophones along three lines, with a geophone interval of 0.5 m and a 1.5 m distance between profiles, over test trench 1S03. The goal of this operation was to evaluate the applicability of the seismic method in identifying archaeological features, especially adobe wall structures. Processing of the seismic data was done with the seismic software SeisImager. Results were presented in the form of a seismic section for every profile, and identification of the adobe wall structures was barely achieved. This could be because the adobe wall had been built with the same material as the natural surrounding earth; the resulting low contrast has an adverse effect on seismic processing and on the identification of archaeological features. Hence the conclusion is that application of the seismic method to determine archaeological features under such conditions is not efficient or worthwhile in comparison to GPR or magnetic methods, which yield more desirable results.
A comparison of Q-factor estimation methods for marine seismic data
NASA Astrophysics Data System (ADS)
Kwon, J.; Ha, J.; Shin, S.; Chung, W.; Lim, C.; Lee, D.
2016-12-01
The seismic imaging technique draws information from inside the earth using seismic reflection and transmission data. This technique is an important method in geophysical exploration. Also, it has been employed widely as a means of locating oil and gas reservoirs because it offers information on geological media. There is much recent and active research into seismic attenuation and how it determines the quality of seismic imaging. Seismic attenuation is determined by various geological characteristics, through the absorption or scattering that occurs when the seismic wave passes through a geological medium. The seismic attenuation can be defined using an attenuation coefficient and represented as a non-dimensional variable known as the Q-factor. Q-factor is a unique characteristic of a geological medium. It is a very important material property for oil and gas resource development. Q-factor can be used to infer other characteristics of a medium, such as porosity, permeability and viscosity, and can directly indicate the presence of hydrocarbons to identify oil and gas bearing areas from the seismic data. There are various ways to estimate Q-factor in three different domains. In the time domain, pulse amplitude decay, pulse rising time, and pulse broadening are representative. Logarithm spectral ratio (LSR), centroid frequency shift (CFS), and peak frequency shift (PFS) are used in the frequency domain. In the time-frequency domain, Wavelet's Envelope Peak Instantaneous Frequency (WEPIF) is most frequently employed. In this study, we estimated and analyzed the Q-factor through the numerical model test and used 4 methods: the LSR, CFS, PFS, and WEPIF. Before we applied these 4 methods to observed data, we experimented with the numerical model test. The numerical model test data is derived from Norsar-2D, which is the basis of the ray-tracing algorithm, and we used reflection and normal incidence surveys to calculate Q-factor according to the array of sources and receivers. After the numerical model test, we chose the most accurate of the 4 methods by comparing Q-factor through reflection and normal incidence surveys. We applied the method to the observed data and proved its accuracy.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bolisetti, Chandrakanth; Yu, Chingching; Coleman, Justin
This report provides a framework for assessing the benefits of seismic isolation and exercises the framework on a Generic Department of Energy Nuclear Facility (GDNF). These benefits are (1) reduction in the risk of unacceptable seismic performance and a dramatic reduction in the probability of unacceptable performance at beyond-design basis shaking, and (2) a reduction in capital cost at sites with moderate to high seismic hazard. The framework includes probabilistic risk assessment and estimates of overnight capital cost for the GDNF.
Aerospace technology can be applied to exploration 'back on earth'. [offshore petroleum resources
NASA Technical Reports Server (NTRS)
Jaffe, L. D.
1977-01-01
Applications of aerospace technology to petroleum exploration are described. Attention is given to seismic reflection techniques, sea-floor mapping, remote geochemical sensing, improved drilling methods and down-hole acoustic concepts, such as down-hole seismic tomography. The seismic reflection techniques include monitoring of swept-frequency explosive or solid-propellant seismic sources, as well as aerial seismic surveys. Telemetry and processing of seismic data may also be performed through the use of aerospace technology. Sea-floor sonar imaging and a computer-aided system of geologic analogies for petroleum exploration are also considered.
Inverting seismic data for rock physical properties; Mathematical background and application
DOE Office of Scientific and Technical Information (OSTI.GOV)
Farfour, Mohammed; Yoon, Wang Jung; Kim, Jinmo
2016-06-08
The basic concept behind seismic inversion is that mathematical assumptions can be established to relate seismic data to the geological formation properties that caused the seismic response. In this presentation we address some widely used seismic inversion methods for hydrocarbon reservoir identification and characterization. A successful use of inversion in a real example from a gas sand reservoir in the Boonsville field, North Central Texas, is presented. The seismic data alone were not an unambiguous indicator of reservoir facies distribution. The use of inversion removed this ambiguity and revealed clear information about the target.
Discrimination of porosity and fluid saturation using seismic velocity analysis
Berryman, James G.
2001-01-01
The method of the invention is employed for determining the state of saturation in a subterranean formation using only seismic velocity measurements (e.g., shear and compressional wave velocity data). Seismic velocity data collected from a region of the formation of like solid material properties can provide relatively accurate partial saturation data derived from a well-defined triangle plotted in a (ρ/μ, λ/μ)-plane. When the seismic velocity data are collected over a large region of a formation having both like and unlike materials, the method first distinguishes the like materials by initially plotting the seismic velocity data in a (ρ/λ, μ/λ)-plane to determine regions of the formation having like solid material properties and porosity.
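Because each coordinate pair used above is a ratio of elastic moduli sharing the same density factor, the coordinates can be formed from the two measured velocities alone, which is why only seismic velocity data are needed. A minimal sketch (variable names are illustrative, not from the patent):

```python
import numpy as np

def modulus_ratio_coordinates(vp, vs):
    """Map P- and S-wave velocities to the moduli-ratio planes.

    With mu = rho*vs**2 and lam = rho*(vp**2 - 2*vs**2), density cancels in
    every ratio:
        rho/mu  = 1/vs**2               lam/mu = (vp/vs)**2 - 2
        rho/lam = 1/(vp**2 - 2*vs**2)   mu/lam = vs**2/(vp**2 - 2*vs**2)
    """
    vp = np.asarray(vp, dtype=float)
    vs = np.asarray(vs, dtype=float)
    lam_over_mu = (vp / vs) ** 2 - 2.0
    rho_over_mu = 1.0 / vs ** 2
    mu_over_lam = vs ** 2 / (vp ** 2 - 2.0 * vs ** 2)
    rho_over_lam = 1.0 / (vp ** 2 - 2.0 * vs ** 2)
    return (rho_over_mu, lam_over_mu), (rho_over_lam, mu_over_lam)
```

Points from regions of like solid material would then cluster in the (ρ/λ, μ/λ)-plane, and the partial-saturation triangle is read off in the (ρ/μ, λ/μ)-plane, following the procedure described above.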
DOE Office of Scientific and Technical Information (OSTI.GOV)
Tibuleac, Ileana
2016-06-30
A new, cost effective and non-invasive exploration method using ambient seismic noise has been tested at Soda Lake, NV, with promising results. The material included in this report demonstrates that, with the advantage of initial S-velocity models estimated from ambient noise surface waves, the seismic reflection survey, although with lower resolution, reproduces the results of the active survey when the ambient seismic noise is not contaminated by strong cultural noise. Ambient noise resolution is less at depth (below 1000 m) compared to the active survey. In general, the results are promising and useful information can be recovered from ambient seismic noise, including dipping features and fault locations.
NASA Astrophysics Data System (ADS)
Lestari, Titik; Nugraha, Andri Dian
2015-04-01
The Southern Sumatra region has a high level of seismicity due to the influence of the subduction system, the Sumatra fault, the Mentawai fault and stretching zone activities. The seismic activity of the Southern Sumatra region is recorded by the Meteorological Climatological and Geophysical Agency (MCGA) seismograph network. In this study, we used an earthquake data catalog compiled by MCGA for 3013 events from 10 seismic stations around the Southern Sumatra region for the time period April 2009 - April 2014 in order to invert for the 3-D seismic velocity structure (Vp, Vs, and Vp/Vs ratio). We applied the double-difference seismic tomography method (tomoDD) to determine Vp, Vs and Vp/Vs ratio with hypocenter adjustment. For the inversion procedure, we started from the initial 1-D seismic velocity model AK135 and a constant Vp/Vs of 1.73. The synthetic travel time from source to receiver was calculated using the pseudo-bending ray-tracing technique, while the main tomographic inversion was performed using the LSQR method. The model resolution was evaluated using a checkerboard test and the Derivative Weight Sum (DWS). Our preliminary results show low Vp and Vs anomalies along Bukit Barisan, which may be associated with the weak zone of the Sumatran fault and the migration of partially melted material. Low velocity anomalies at 30-50 km depth in the fore-arc region may indicate circulation of hydrous material due to slab dehydration. We detected low seismicity in the fore-arc region that may indicate a seismic gap; it coincides with the contact zone of high and low velocity anomalies. Two large earthquakes (Jambi and Mentawai) also occurred at the contact of contrasting velocities.
A Novel Approach to Constrain Near-Surface Seismic Wave Speed Based on Polarization Analysis
NASA Astrophysics Data System (ADS)
Park, S.; Ishii, M.
2016-12-01
Understanding the seismic responses of cities around the world is essential for the risk assessment of earthquake hazards. One of the important parameters is the elastic structure of the sites, in particular the near-surface seismic wave speed, which influences the level of ground shaking. Many methods have been developed to constrain the elastic structure of populated sites or urban basins, and here we introduce a new technique based on analyzing the polarization content, or three-dimensional particle motion, of seismic phases arriving at the sites. Polarization analysis of three-component seismic data was widely used up to about two decades ago to detect signals and identify different types of seismic arrivals. Today, we have a good understanding of the expected polarization direction and ray parameter for seismic wave arrivals, which are calculated based on a reference seismic model. The polarization of a given phase is also strongly sensitive to the elastic wave speed immediately beneath the station. This allows us to compare the observed and predicted polarization directions of incoming body waves and infer the near-surface wave speed. This approach is applied to the High-Sensitivity Seismograph Network in Japan, where we benchmark the results against the well-log data that are available at most stations. There is good agreement between our estimates of seismic wave speeds and those from well logs, confirming the efficacy of the new method. In most urban environments, where well logging is not a practical option for measuring the seismic wave speeds, this method can provide a reliable, non-invasive, and computationally inexpensive estimate of near-surface elastic properties.
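The abstract does not detail how the particle motion is measured; a common way to obtain the polarization direction of a P arrival, which could serve as the observable compared with model predictions here, is a principal-component analysis of the three-component covariance matrix over a short window (a hedged sketch, with illustrative conventions):

```python
import numpy as np

def p_wave_polarization(z, n, e):
    """Dominant particle-motion direction of a windowed 3-component P arrival.

    The eigenvector of the sample covariance matrix with the largest
    eigenvalue gives the polarization direction; its angle from vertical is
    the apparent incidence angle that can be compared with the value
    predicted from a reference model for the known ray parameter.
    """
    data = np.vstack([z, n, e])                 # rows: vertical, north, east
    data = data - data.mean(axis=1, keepdims=True)
    eigvals, eigvecs = np.linalg.eigh(np.cov(data))
    direction = eigvecs[:, -1]                  # unit vector, largest eigenvalue
    apparent_incidence = np.degrees(np.arccos(np.clip(abs(direction[0]), 0.0, 1.0)))
    azimuth = np.degrees(np.arctan2(direction[2], direction[1])) % 360.0
    return apparent_incidence, azimuth
```

Comparing such apparent incidence angles with those predicted from a reference model is what allows the near-surface wave speed beneath each station to be inferred.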
Source Physics Experiment: Research in Support of Verification and Nonproliferation
2011-09-01
designed to provide a carefully controlled seismic and strong motion data set from buried explosions at the Nevada National Security Site (NNSS). The...deposition partitioned into internal (heat and plastic strain) and kinetic (e.g., radiated seismic) energy, giving more confidence in predicted free...ample information to study dry and water-saturated fractures, local lithology and topography on the radiated seismic wavefield. Spallation on
Earthquakes: Risk, Monitoring, Notification, and Research
2007-02-02
Global Seismic Network (GSN). The GSN is a system of broadband digital seismographs arrayed around the globe and designed to collect high-quality...39 states face some risk from earthquakes. Seismic hazards are greatest in the western United States, particularly California, Alaska, Washington...Oregon, and Hawaii. The Rocky Mountain region, a portion of the central United States known as the New Madrid Seismic Zone, and portions of the eastern
SeisCORK Engineering Design Study
2006-05-01
Stephen, R. A., et al. (1994a), The seafloor borehole array seismic system (SEABASS) and VLF ambient noise, Marine Geophysical Researches, 16, 243...286. Stephen, R. A., et al. (1994b), The Seafloor Borehole Array Seismic System (SEABASS) and VLF Ambient Noise, Marine Geophysical Researches, 16, 243...Contents Executive Summary 4 Introduction 5 General Science Goals and Justification for Borehole Seismology in the Seafloor 6 Validating Surface Seismic
NASA Astrophysics Data System (ADS)
Nakata, Mitsuhiko; Tanimoto, Shunsuke; Ishida, Shuichi; Ohsumi, Michio; Hoshikuma, Jun-ichi
2017-10-01
There is a risk that bridge foundations will be damaged by liquefaction-induced lateral spreading of the ground. Once bridge foundations have been damaged, restoration takes a long time. Therefore, it is important to appropriately assess the seismic behavior of foundations on liquefiable ground. In this study, shaking table tests of 1/10-scale models were conducted on the large shaking table at the Public Works Research Institute, Japan, to investigate the seismic behavior of a pile-supported bridge abutment on liquefiable ground. The shaking table tests were conducted for three types of model: two are models of an existing bridge that was built without design provisions for liquefaction, and the other is a model of a bridge designed to the current Japanese design specifications for highway bridges. As a result, the bending strains of the piles of the abutment designed to the current design specifications were less than those of the existing bridge.
Seismic, satellite, and site observations of internal solitary waves in the NE South China Sea.
Tang, Qunshu; Wang, Caixia; Wang, Dongxiao; Pawlowicz, Rich
2014-06-20
Internal solitary waves (ISWs) in the NE South China Sea (SCS) are tidally generated at the Luzon Strait. Their propagation, evolution, and dissipation processes involve numerous issues still poorly understood. Here, a novel method of seismic oceanography capable of capturing oceanic finescale structures is used to study ISWs in the slope region of the NE SCS. Near-simultaneous observations of two ISWs were acquired using seismic and satellite imaging, and water column measurements. The vertical and horizontal length scales of the seismically observed ISWs are around 50 m and 1-2 km, respectively. Wave phase speeds calculated from seismic observations, satellite images, and water column data are consistent with each other. Observed waveforms and vertical velocities also correspond well with those estimated using KdV theory. These results suggest that the seismic method, a new option for oceanographers, can be further applied to resolve other important issues related to ISWs.
Architecture of high-rise buildings as a brand of the modern Kazakhstan
NASA Astrophysics Data System (ADS)
Abdrassilova, Gulnara; Kozbagarova, Nina; Tuyakayeva, Ainagul
2018-03-01
Using practical examples, the article reviews urban-planning and space-planning features of the design and construction of high-rise buildings under Kazakhstan conditions; methods are identified that provide structural stability against wind and seismic loads based on innovative technical and technological solutions. The authors stress the image-making role of high-rise buildings in the new capital of Kazakhstan, the city of Astana.
NASA Astrophysics Data System (ADS)
Bourne, S. J.; Oates, S. J.; van Elk, J.
2018-06-01
Induced seismicity typically arises from the progressive activation of recently inactive geological faults by anthropogenic activity. Faults are mechanically and geometrically heterogeneous, so their extremes of stress and strength govern the initial evolution of induced seismicity. We derive a statistical model of Coulomb stress failures and associated aftershocks within the tail of the distribution of fault stress and strength variations to show initial induced seismicity rates will increase as an exponential function of induced stress. Our model provides operational forecasts consistent with the observed space-time-magnitude distribution of earthquakes induced by gas production from the Groningen field in the Netherlands. These probabilistic forecasts also match the observed changes in seismicity following a significant and sustained decrease in gas production rates designed to reduce seismic hazard and risk. This forecast capability allows reliable assessment of alternative control options to better inform future induced seismic risk management decisions.
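The full statistical model is not reproduced in the abstract; its central statement, that the initial induced event rate grows exponentially with induced Coulomb stress, can be sketched as follows (parameter names and the linear stress proxy for cumulative production are illustrative assumptions, not values from the Groningen study):

```python
import numpy as np

def expected_event_rate(delta_stress_mpa, r0=0.1, beta=2.0):
    """Induced event rate as an exponential function of the induced Coulomb
    stress change: rate = r0 * exp(beta * delta_stress).

    r0 (background-like rate) and beta (stress sensitivity) would be
    calibrated against the observed space-time-magnitude catalogue.
    """
    return r0 * np.exp(beta * np.asarray(delta_stress_mpa, dtype=float))

# Example: a hypothetical proxy in which stress change scales with cumulative
# production, so reducing the production rate slows the forecast rate growth.
production = np.cumsum(np.full(20, 1.0))       # arbitrary production units
stress_change = 0.05 * production              # hypothetical proxy (MPa)
forecast_rates = expected_event_rate(stress_change)
```

Under such a model, a sustained cut in production flattens the stress trajectory and hence the forecast seismicity rate, which is the behaviour the authors report matching after the Groningen production decrease.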
H-fractal seismic metamaterial with broadband low-frequency bandgaps
NASA Astrophysics Data System (ADS)
Du, Qiujiao; Zeng, Yi; Xu, Yang; Yang, Hongwu; Zeng, Zuoxun
2018-03-01
The application of metamaterials in civil engineering to achieve isolation of a building by controlling the propagation of seismic waves is a substantial challenge because seismic waves, a superposition of longitudinal and shear waves, are more complex than electromagnetic and acoustic waves. In this paper, we design a broadband seismic metamaterial based on H-shaped fractal pillars and report numerical simulations of band structures for propagating seismic surface waves. A comparative study of the band structures of H-fractal seismic metamaterials with different levels shows that each new level of the fractal structure creates a new band gap, widens the total band gaps and shifts the same band gap towards lower frequencies. Moreover, the vibration modes of the H-fractal seismic metamaterials are computed and analyzed to clarify the mechanism of the widening band gaps. A numerical investigation of seismic surface wave propagation on a 2D array of fractal unit cells on the surface of a semi-infinite substrate is presented to show the efficiency of earthquake shielding in multiple complete band gaps.
Pick- and waveform-based techniques for real-time detection of induced seismicity
NASA Astrophysics Data System (ADS)
Grigoli, Francesco; Scarabello, Luca; Böse, Maren; Weber, Bernd; Wiemer, Stefan; Clinton, John F.
2018-05-01
The monitoring of induced seismicity is a common operation in many industrial activities, such as conventional and non-conventional hydrocarbon production or mining and geothermal energy exploitation, to cite a few. During such operations, we generally collect very large and strongly noise-contaminated data sets that require robust and automated analysis procedures. Induced seismicity data sets are often characterized by sequences of multiple events with short interevent times or overlapping events; in these cases, pick-based location methods may struggle to correctly assign picks to phases and events, and errors can lead to missed detections and/or reduced location resolution and incorrect magnitudes, which can have significant consequences if real-time seismicity information is used in risk assessment frameworks. To overcome these issues, different waveform-based methods for the detection and location of microseismicity have been proposed. The main advantages of waveform-based methods are that they appear to perform better and can simultaneously detect and locate seismic events, providing high-quality locations in a single step, while the main disadvantage is that they are computationally expensive. Although these methods have been applied to different induced seismicity data sets, an extensive comparison with sophisticated pick-based detection methods is still missing. In this work, we introduce our improved waveform-based detector and we compare its performance with two pick-based detectors implemented within the SeisComP3 software suite. We test the performance of these three approaches with both synthetic and real data sets related to the induced seismicity sequence at the deep geothermal project in the vicinity of the city of St. Gallen, Switzerland.
NASA Astrophysics Data System (ADS)
Jaksch, Katrin; Giese, Rüdiger; Kopf, Matthias
2010-05-01
In the case of drilling for deep reservoirs, prior exploration is indispensable. In recent years the focus has shifted more towards geological structures such as thin layers or hydrothermal fault systems. Surface 2D or 3D seismics and borehole seismic measurements such as Vertical Seismic Profiling (VSP) or Seismic While Drilling (SWD) cannot always resolve these structures; the resolution worsens the deeper and smaller the sought-after structures are. Potential horizons, such as thin layers in oil exploration or fault zones usable for geothermal energy production, could therefore be missed or not identified while drilling. A device to explore the geology with high resolution ahead of the drill bit, in the direction of drilling, would be of high importance: it would allow the drilling path to be adjusted according to the actual geology and would minimize the discovery risk and hence the drilling costs. Within the SPWD project, a device for seismic exploration ahead of the drill bit is being developed. This device should allow seismic prediction of areas about 50 to 100 meters ahead of the drill bit with a resolution of one meter. At the GFZ, a first prototype consisting of units for seismic sources, receivers and data loggers has been designed and manufactured. Four standard magnetostrictive actuators are used as seismic sources and four 3-component geophones as receivers. Every unit, actuator or geophone, can be rotated in steps of 15° around the longitudinal axis of the prototype to test different measurement configurations. The SPWD prototype emits signal frequencies of about 500 up to 5000 Hz, significantly higher than in VSP and SWD. An increased radiation of seismic wave energy in the direction of the borehole axis allows a view into the areas to be drilled. Therefore, every actuator must be controlled independently with regard to the amplitude and phase of the source signal in order to maximize the energy of the seismic source and reach a sufficient exploration range. The next step for focusing is to use the phased array method: depending on the seismic wave velocities of the surrounding rock, the spacing of the actuators and the frequencies used, the signal phase for each actuator can be determined (a minimal sketch follows below). For one year, several measurements with the prototype have been carried out under defined conditions at a test site in a mine. The test site consists of a rock block, about 100 by 200 meters in size, surrounded by three galleries. For testing the prototype, two horizontal boreholes were drilled, directed towards one of the galleries to obtain a strong reflector. The borehole seismic data show overall a good signal-to-noise ratio in amplitude and frequency spectra; intervals of high fracture density along the borehole are associated with a lower signal-to-noise ratio. Additionally, the geophones of the prototype record reflections from ahead and from behind in the seismic data; in particular, the reflections from the gallery ahead are used for the calibration of the focusing. The direct seismic wave field shows distinct compressional and shear waves. The analysis of several seismic measurements with a focus on the direct seismic waves shows that the phased array technique can explicitly influence the directional characteristics of the radiated seismic waves.
The amplitudes of the seismic waves can be enhanced by up to a factor of three in the desired direction and simultaneously attenuated in the reverse direction. A major step towards directional investigation in boreholes has been accomplished, but the focusing of the seismic waves has to be improved in further measurements, by calibrating the initiating seismic signals of the sources, to maximize the energy in the desired direction. A next step this year is the development of a wireline prototype for application in vertical boreholes with depths of up to 2000 meters. The prototype must be modified and adapted to the conditions in deep boreholes with respect to pressure and temperature. This project is funded by the German Federal Environment Ministry.
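For the phased-array focusing step mentioned above, the required firing delays follow from simple geometry; a minimal sketch, assuming equally spaced sources along the tool axis, a homogeneous surrounding rock velocity and a chosen steering angle (all illustrative parameters, not values from the SPWD project):

```python
import numpy as np

def phased_array_delays(n_sources, spacing_m, velocity_mps, steer_angle_deg):
    """Firing delays (s) that steer the summed wavefront of a linear source
    array by `steer_angle_deg` away from the array axis.

    Each source fires later than its neighbour by spacing*sin(angle)/velocity,
    so the individual wavefronts interfere constructively in that direction.
    """
    angle = np.radians(steer_angle_deg)
    step = spacing_m * np.sin(angle) / velocity_mps
    return np.arange(n_sources) * step

def delays_to_phases(delays_s, frequency_hz):
    """Equivalent phase shifts (radians) for a monochromatic drive signal."""
    return 2.0 * np.pi * frequency_hz * np.asarray(delays_s, dtype=float)

# Example: four actuators, 0.5 m apart, 3000 m/s rock, 2 kHz drive signal.
delays = phased_array_delays(4, 0.5, 3000.0, steer_angle_deg=20.0)
phases = delays_to_phases(delays, 2000.0)
```

In practice the delays (or phases) would be recalibrated whenever the assumed rock velocity or the drive frequency changes, which is the calibration task described above.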
Geophysical Monitoring Methods Evaluation for the FutureGen 2.0 Project
Strickland, Chris E.; USA, Richland Washington; Vermeul, Vince R.; ...
2014-12-31
A comprehensive monitoring program will be needed in order to assess the effectiveness of carbon sequestration at the FutureGen 2.0 carbon capture and storage (CCS) field-site. Geophysical monitoring methods are sensitive to subsurface changes that result from injection of CO2 and will be used for: (1) tracking the spatial extent of the free phase CO2 plume, (2) monitoring advancement of the pressure front, (3) identifying or mapping areas where induced seismicity occurs, and (4) identifying and mapping regions of increased risk for brine or CO2 leakage from the reservoir. Site-specific suitability and cost effectiveness were evaluated for a number of geophysical monitoring methods including: passive seismic monitoring, reflection seismic imaging, integrated surface deformation, time-lapse gravity, pulsed neutron capture logging, cross-borehole seismic, electrical resistivity tomography, magnetotellurics and controlled source electromagnetics. The results of this evaluation indicate that CO2 injection monitoring using reflection seismic methods would likely be difficult at the FutureGen 2.0 site. Electrical methods also exhibited low sensitivity to the expected CO2 saturation changes and would be affected by metallic infrastructure at the field site. Passive seismic, integrated surface deformation, time-lapse gravity, and pulsed neutron capture monitoring were selected for implementation as part of the FutureGen 2.0 storage site monitoring program.
New "Risk-Targeted" Seismic Maps Introduced into Building Codes
Luco, Nicholas; Garrett, B.; Hayes, J.
2012-01-01
Throughout most municipalities of the United States, structural engineers design new buildings using the U.S.-focused International Building Code (IBC). Updated editions of the IBC are published every 3 years. The latest edition (2012) contains new "risk-targeted maximum considered earthquake" (MCER) ground motion maps, which are enabling engineers to incorporate a more consistent and better defined level of seismic safety into their building designs.
Evaluating geophysical lithology determination methods in the central offshore Nile Delta, Egypt
DOE Office of Scientific and Technical Information (OSTI.GOV)
Nada, H.; Shrallow, J.
1994-12-31
Two post-stack and one prestack geophysical techniques were used to extract lithology and fluid information from seismic data. The purpose of this work was to evaluate the effectiveness of such methods in helping to find more hydrocarbons and reduce exploration risk in Egypt's Nile Delta. Amplitude Variation with Offset (AVO) was used as a direct hydrocarbon indicator. CDP gathers were sorted into common angle gathers. The angle traces from 0-10 degrees were stacked to form a near angle stack and those from 30-40 degrees were stacked to form a far angle stack. Comparison of the far and near angle stacks indicates areas with seismic responses that match gas-bearing sand models in the Pliocene and Messinian. Seismic Sequence Attribute mapping was used to measure the reflectivity of a seismic sequence; the specific sequence attribute measured in this study was the maximum absolute amplitude of the seismic reflections within a sequence. Post-stack seismic inversion was used to convert zero-phase final migrated data to pseudo acoustic impedance data, allowing lithology to be interpreted from the seismic data. All three methods are useful in the Nile Delta for identifying sand-prone areas, but only AVO can be used to detect fluid content.
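As a sketch of the angle-stacking step described above, assuming common-angle gathers have already been formed with one trace per incidence angle (array shapes and angle ranges are illustrative):

```python
import numpy as np

def partial_angle_stacks(gather, angles_deg, near=(0.0, 10.0), far=(30.0, 40.0)):
    """Form near- and far-angle partial stacks from a common-angle gather.

    gather     : 2-D array (n_angles, n_samples) of angle traces
    angles_deg : incidence angle of each trace

    A brightening of the far stack relative to the near stack over a target
    interval is the kind of AVO response compared with gas-sand models.
    """
    angles = np.asarray(angles_deg, dtype=float)
    near_mask = (angles >= near[0]) & (angles <= near[1])
    far_mask = (angles >= far[0]) & (angles <= far[1])
    return gather[near_mask].mean(axis=0), gather[far_mask].mean(axis=0)
```

The two partial stacks can then be mapped or differenced along the interpreted horizons, mirroring the near/far comparison used in the study.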
40 CFR 146.95 - Class VI injection depth waiver requirements.
Code of Federal Regulations, 2013 CFR
2013-07-01
... methods (e.g., seismic, electrical, gravity, or electromagnetic surveys and/or down-hole carbon dioxide... injection zone(s); and indirect methods (e.g., seismic, electrical, gravity, or electromagnetic surveys and...
40 CFR 146.95 - Class VI injection depth waiver requirements.
Code of Federal Regulations, 2011 CFR
2011-07-01
... methods (e.g., seismic, electrical, gravity, or electromagnetic surveys and/or down-hole carbon dioxide... injection zone(s); and indirect methods (e.g., seismic, electrical, gravity, or electromagnetic surveys and...
40 CFR 146.95 - Class VI injection depth waiver requirements.
Code of Federal Regulations, 2012 CFR
2012-07-01
... methods (e.g., seismic, electrical, gravity, or electromagnetic surveys and/or down-hole carbon dioxide... injection zone(s); and indirect methods (e.g., seismic, electrical, gravity, or electromagnetic surveys and...
The use of Graphic User Interface for development of a user-friendly CRS-Stack software
NASA Astrophysics Data System (ADS)
Sule, Rachmat; Prayudhatama, Dythia; Perkasa, Muhammad D.; Hendriyana, Andri; Fatkhan; Sardjito; Adriansyah
2017-04-01
The development of user-friendly Common Reflection Surface (CRS) Stack software built with a Graphical User Interface (GUI) is described in this paper. The original CRS-Stack software developed by the WIT Consortium is compiled in the unix/linux environment and is not user-friendly: a user must write the commands and parameters manually in a script file. Due to this limitation, the CRS-Stack became an unpopular method, although applying it is actually a promising way to obtain better seismic sections, with better reflector continuity and S/N ratio. After obtaining successful results, tested on several seismic data sets belonging to oil companies in Indonesia, the idea arose to develop user-friendly software in our own laboratory. A Graphical User Interface (GUI) is a type of user interface that allows people to interact with computer programs in a better way. Rather than typing commands and module parameters, a GUI allows users to operate computer programs much more simply and easily, transforming the text-based interface into graphical icons and visual indicators; the use of complicated seismic unix shell scripts can be avoided. The Java Swing GUI library is used to develop this CRS-Stack GUI, and every shell script that represents a seismic process is invoked from the Java environment. Besides providing an interactive GUI to perform CRS-Stack processing, this CRS-Stack GUI is designed to help geophysicists manage projects with complex seismic processing procedures. The CRS-Stack GUI software is composed of input directories, operators, and output directories, which together define a seismic data processing workflow. The CRS-Stack processing workflow involves four steps: automatic CMP stack, initial CRS-Stack, optimized CRS-Stack, and CRS-Stack Supergather. These operations are visualized in an informative flowchart with a self-explanatory system to guide the user in entering the parameter values for each operation. The knowledge of the CRS-Stack processing procedure is thus preserved in the software, which is easy and efficient to learn. The software will continue to be developed in the future, and any new innovative seismic processing workflow can also be added to this GUI software.
1999-01-01
This report assesses the status, needs, and associated costs of seismic monitoring in the United States. It sets down the requirement for an effective, national seismic monitoring strategy and an advanced system linking national, regional, and urban monitoring networks. Modernized seismic monitoring can provide alerts of imminent strong earthquake shaking; rapid assessment of distribution and severity of earthquake shaking (for use in emergency response); warnings of a possible tsunami from an offshore earthquake; warnings of volcanic eruptions; information for correctly characterizing earthquake hazards and for improving building codes; and data on response of buildings and structures during earthquakes, for safe, cost-effective design, engineering, and construction practices in earthquake-prone regions.
Seismic Performance of Self-Consolidating Concrete Bridge Columns
DOT National Transportation Integrated Search
2017-09-01
The high amount of confining lateral steel required by seismic design provisions for rectangular bridge columns can cause steel congestion. The high amount of confining steel may hinder the placement of conventional concrete (CC). Self-consolidating ...
Seismic Methods of Identifying Explosions and Estimating Their Yield
NASA Astrophysics Data System (ADS)
Walter, W. R.; Ford, S. R.; Pasyanos, M.; Pyle, M. L.; Myers, S. C.; Mellors, R. J.; Pitarka, A.; Rodgers, A. J.; Hauk, T. F.
2014-12-01
Seismology plays a key national security role in detecting, locating, identifying and determining the yield of explosions from a variety of causes, including accidents, terrorist attacks and nuclear testing treaty violations (e.g. Koper et al., 2003, 1999; Walter et al. 1995). A collection of mainly empirical forensic techniques has been successfully developed over many years to obtain source information on explosions from their seismic signatures (e.g. Bowers and Selby, 2009). However, a lesson from the three declared DPRK nuclear explosions since 2006 is that our historic collection of data may not be representative of future nuclear test signatures (e.g. Selby et al., 2012). To have confidence in identifying future explosions amongst the background of other seismic signals, and in accurately estimating their yield, we need to put our empirical methods on a firmer physical footing. Goals of current research are to improve our physical understanding of the mechanisms of explosion generation of S- and surface-waves, and to advance our ability to numerically model and predict them. As part of that process we are re-examining regional seismic data from a variety of nuclear test sites including the DPRK and the former Nevada Test Site (now the Nevada National Security Site, NNSS). Newer relative location and amplitude techniques can be employed to better quantify differences between explosions and to understand those differences in terms of depth, media and other properties. We are also making use of the Source Physics Experiments (SPE) at NNSS. The SPE chemical explosions are explicitly designed to improve our understanding of emplacement and source-material effects on the generation of shear and surface waves (e.g. Snelson et al., 2013). Finally, we are also exploring the value of combining seismic information with other technologies, including acoustic and InSAR techniques, to better understand the source characteristics. Our goal is to improve our explosion models and our ability to understand and predict where methods of identifying explosions and estimating their yield work well, and any circumstances where they may not.
Strong Motion Instrumentation of Seismically-Strengthened Port Structures in California by CSMIP
Huang, M.J.; Shakal, A.F.
2009-01-01
The California Strong Motion Instrumentation Program (CSMIP) has instrumented five port structures. Instrumentation of two more port structures is underway and another one is in planning. Two of the port structures have been seismically strengthened. The primary goals of the strong motion instrumentation are to obtain strong earthquake shaking data for verifying seismic analysis procedures and strengthening schemes, and for post-earthquake evaluations of port structures. The wharves instrumented by CSMIP were recommended by the Strong Motion Instrumentation Advisory Committee, a committee of the California Seismic Safety Commission. Extensive instrumentation of a wharf is difficult and would be impossible without the cooperation of the owners and the involvement of the design engineers. The instrumentation plan for a wharf is developed through study of the retrofit plans of the wharf, and the strong-motion sensors are installed at locations where specific instrumentation objectives can be achieved and access is possible. Some sensor locations have to be planned during design; otherwise they are not possible to install after construction. This paper summarizes the two seismically-strengthened wharves and discusses the instrumentation schemes and objectives. © 2009 ASCE.
Analytical Prediction of the Seismic Response of a Reinforced Concrete Containment Vessel
DOE Office of Scientific and Technical Information (OSTI.GOV)
James, R.J.; Rashid, Y.R.; Cherry, J.L.
Under the sponsorship of the Ministry of International Trade and Industry (MITI) of Japan, the Nuclear Power Engineering Corporation (NUPEC) is investigating the seismic behavior of a Reinforced Concrete Containment Vessel (RCCV) through scale-model testing using the high-performance shaking table at the Tadotsu Engineering Laboratory. A series of tests representing design-level seismic ground motions was initially conducted to gather valuable experimental measurements for use in design verification. Additional tests will be conducted with increasing amplifications of the seismic input until a structural failure of the test model occurs. In a cooperative program with NUPEC, the US Nuclear Regulatory Commission (USNRC), through Sandia National Laboratories (SNL), is conducting analytical research on the seismic behavior of RCCV structures. As part of this program, pretest analytical predictions of the model tests are being performed. The dynamic time-history analysis utilizes a highly detailed concrete constitutive model applied to a three-dimensional finite element representation of the test structure. This paper describes the details of the analysis model and provides analysis results.
Seismic design parameters - A user guide
Leyendecker, E.V.; Frankel, A.D.; Rukstales, K.S.
2001-01-01
The 1997 NEHRP Recommended Provisions for Seismic Regulations for New Buildings (1997 NEHRP Provisions) introduced a seismic design procedure that is based on the explicit use of spectral response acceleration rather than the traditional peak ground acceleration and/or peak ground velocity or zone factors. The spectral response accelerations are obtained from spectral response acceleration maps accompanying the report. Maps are available for the United States and a number of U.S. territories. Since 1997, additional codes and standards have also adopted seismic design approaches based on the same procedure used in the NEHRP Provisions and the accompanying maps. The design documents using the 1997 NEHRP Provisions procedure may be divided into three categories: (1) Design of New Construction, (2) Design and Evaluation of Existing Construction, and (3) Design of Residential Construction. A CD-ROM has been prepared for use in conjunction with the design documents in each of these three categories. The spectral accelerations obtained using the software on the CD are the same as those that would be obtained by using the maps accompanying the design documents. The software has been prepared to operate on a personal computer using a Windows (Microsoft Corporation) operating environment and a point-and-click type of interface. The user can obtain the spectral acceleration values that would be obtained by use of the maps accompanying the design documents, include site factors appropriate for the Site Class provided by the user, calculate a response spectrum that includes the site factor, and plot the response spectrum. Sites may be located by providing the latitude-longitude or zip code for all areas covered by the maps. All of the maps used in the various documents are also included on the CD-ROM.
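The mapped spectral accelerations are converted into a design spectrum by a fixed procedure; a minimal sketch of the NEHRP-type general design spectrum shape, taking the short-period and 1-second design values (already adjusted for Site Class) as inputs, is given below. Exact factors, long-period transitions and exceptions should be taken from the governing document.

```python
def design_spectral_acceleration(t, sds, sd1):
    """Design spectrum ordinate Sa(T) in g from the design short-period and
    1-second spectral accelerations SDS and SD1.

    Shape: linear ramp up to SDS for T < T0, plateau at SDS for T0 <= T <= Ts,
    and a 1/T decay beyond Ts, with Ts = SD1/SDS and T0 = 0.2*Ts.
    """
    ts = sd1 / sds
    t0 = 0.2 * ts
    if t < t0:
        return sds * (0.4 + 0.6 * t / t0)
    if t <= ts:
        return sds
    return sd1 / t

# Example: SDS = 1.0 g, SD1 = 0.4 g, sampled at a few periods (illustrative).
periods = [0.05, 0.2, 0.5, 1.0, 2.0]
spectrum = [design_spectral_acceleration(t, 1.0, 0.4) for t in periods]
```

This is the kind of site-adjusted spectrum the CD-ROM software computes and plots from the mapped values and the user-supplied Site Class.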
NASA Astrophysics Data System (ADS)
An, L.; Zhang, J.; Gong, L.
2018-04-01
Playing an important role in gathering information on infrastructure damage, Synthetic Aperture Radar (SAR) remote sensing is a useful tool for monitoring earthquake disasters. With the wide application of this technique, a standard method of comparing post-seismic to pre-seismic data has become common. However, multi-temporal SAR processing is not always achievable, so developing a method for building damage detection that uses post-seismic data only is of great importance. In this paper, the authors initiate an experimental investigation to establish an object-based, feature-analysing classification method for building damage recognition.
Study on vulnerability matrices of masonry buildings of mainland China
NASA Astrophysics Data System (ADS)
Sun, Baitao; Zhang, Guixin
2018-04-01
The degree and distribution of damage to buildings subjected to earthquakes is a concern of the Chinese Government and the public. Seismic damage data indicates that seismic capacities of different types of building structures in various regions throughout mainland China are different. Furthermore, the seismic capacities of the same type of structure in different regions may vary. The contributions of this research are summarized as follows: 1) Vulnerability matrices and earthquake damage matrices of masonry structures in mainland China were chosen as research samples. The aim was to analyze the differences in seismic capacities of sample matrices and to present general rules for categorizing seismic resistance. 2) Curves relating the percentage of damaged masonry structures with different seismic resistances subjected to seismic demand in different regions of seismic intensity (VI to X) have been developed. 3) A method has been proposed to build vulnerability matrices of masonry structures. The damage ratio for masonry structures under high-intensity events such as the Ms 6.1 Panzhihua earthquake in Sichuan province on 30 August 2008, was calculated to verify the applicability of this method. This research offers a significant theoretical basis for predicting seismic damage and direct loss assessment of groups of buildings, as well as for earthquake disaster insurance.
a Comparative Case Study of Reflection Seismic Imaging Method
NASA Astrophysics Data System (ADS)
Alamooti, M.; Aydin, A.
2017-12-01
Seismic imaging is the most common means of gathering information about subsurface structural features. The accuracy of seismic images may be highly variable depending on the complexity of the subsurface and on how the seismic data are processed. One of the crucial steps in this process, especially in layered sequences with complicated structure, is the time and/or depth migration of the seismic data. The primary purpose of migration is to increase the spatial resolution of seismic images by repositioning the recorded seismic signal back to its original point of reflection in time/space, which enhances information about complex structure. In this study, our objective is to process a seismic data set (courtesy of the University of South Carolina) to generate an image on which the Magruder fault near Allendale, SC, can be clearly distinguished and its attitude can be accurately depicted. The data were gathered by the common-midpoint method with 60 geophones equally spaced along an approximately 550 m long traverse over nearly flat ground. The results obtained from the application of different migration algorithms (including finite-difference and Kirchhoff) are compared in the time and depth domains to investigate the efficiency of each algorithm in reducing the processing time and improving the accuracy of seismic images in reflecting the correct position of the Magruder fault.
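To make the migration step concrete, here is a deliberately simplified, constant-velocity, zero-offset Kirchhoff time migration (a diffraction-hyperbola summation); it omits obliquity and amplitude weighting and is only a sketch of the principle, not the algorithms used in the study:

```python
import numpy as np

def kirchhoff_time_migration(section, dt, dx, velocity):
    """Naive constant-velocity zero-offset Kirchhoff time migration.

    For every output sample (ix, it) with apex time t0 = it*dt, amplitudes
    are summed along the diffraction hyperbola
        t(x) = sqrt(t0**2 + (2*offset/velocity)**2),
    which repositions reflected energy back to its apex.
    section : 2-D array (n_traces, n_samples) of stacked (zero-offset) data
    """
    n_traces, n_samples = section.shape
    image = np.zeros_like(section)
    for ix in range(n_traces):                    # output trace
        for it in range(n_samples):               # output time sample
            t0 = it * dt
            total, count = 0.0, 0
            for jx in range(n_traces):            # input traces on hyperbola
                offset = (jx - ix) * dx
                t = np.sqrt(t0 ** 2 + (2.0 * offset / velocity) ** 2)
                jt = int(round(t / dt))
                if jt < n_samples:
                    total += section[jx, jt]
                    count += 1
            image[ix, it] = total / max(count, 1)
    return image
```

Practical Kirchhoff and finite-difference migrations add aperture limits, anti-aliasing and laterally varying velocities, which is where the algorithms compared in this study differ.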
MASW on the standard seismic prospective scale using full spread recording
NASA Astrophysics Data System (ADS)
Białas, Sebastian; Majdański, Mariusz; Trzeciak, Maciej; Gałczyński, Edward; Maksym, Andrzej
2015-04-01
The Multichannel Analysis of Surface Waves (MASW) is one of the seismic survey methods that use the dispersion curve of surface waves to describe the stiffness of the near surface. It is used mainly at the geotechnical engineering scale, with a total spread length of 5-450 m and spread offsets of 1-100 m, and a hammer as the seismic source. The standard MASW procedure is: data acquisition, dispersion analysis, and inversion of the extracted dispersion curve to obtain the closest theoretical curve. The final result includes shear-wave velocity (Vs) values at different depths along the surveyed lines. The main goal of this work is to expand this engineering method to a bigger scale, with a standard prospecting spread length of 20 km, using 4.5 Hz vertical-component geophones. Standard vibroseis and explosive sources are used. Acquisition was conducted on the full spread during every single shot. The seismic data used for this analysis were acquired in the Braniewo 2014 project in the north of Poland. The results achieved with the standard MASW procedure show that this method can be used at a much bigger scale as well; the different methodology of this analysis requires only a much stronger seismic source.
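The dispersion-analysis step mentioned above is commonly done with a phase-shift (frequency-domain slant-stack) transform; a minimal sketch, assuming a shot gather with known source-receiver offsets (the scanning ranges are illustrative choices):

```python
import numpy as np

def dispersion_image(gather, offsets_m, dt, freqs_hz, velocities_mps):
    """Phase-shift dispersion image |E(f, v)| of a shot gather.

    For each frequency, every trace is reduced to its phase and shifted by
    exp(+i*2*pi*f*x/v) before summation; maxima along the velocity axis trace
    out the surface-wave dispersion curve used for the inversion step.
    gather : 2-D array (n_traces, n_samples)
    """
    n_traces, n_samples = gather.shape
    spectra = np.fft.rfft(gather, axis=1)
    fft_freqs = np.fft.rfftfreq(n_samples, d=dt)
    offsets = np.asarray(offsets_m, dtype=float)
    image = np.zeros((len(freqs_hz), len(velocities_mps)))
    for i, f in enumerate(freqs_hz):
        k = np.argmin(np.abs(fft_freqs - f))
        trace_phase = spectra[:, k] / (np.abs(spectra[:, k]) + 1e-12)
        for j, v in enumerate(velocities_mps):
            shifts = np.exp(1j * 2.0 * np.pi * f * offsets / v)
            image[i, j] = np.abs(np.sum(trace_phase * shifts)) / n_traces
    return image
```

On a 20 km spread the same transform applies; only the offset range, the frequency band of the 4.5 Hz geophones and the scanned velocities change, along with the stronger source noted above.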
Evaluating the Reverse Time Migration Method on the dense Lapnet / Polenet seismic array in Europe
NASA Astrophysics Data System (ADS)
Dupont, Aurélien; Le Pichon, Alexis
2013-04-01
In this study, results are obtained using the reverse time migration method, used as a benchmark to evaluate the method implemented by Walker et al. (2010, 2011). Explosion signals recorded by the USArray and extracted from the TAIRED catalogue (TA Infrasound Reference Event Database user community / Vernon et al., 2012) are investigated. The first is an explosion at Camp Minden, Louisiana (2012-10-16 04:25:00 UTC) and the second is a natural gas explosion near Price, Utah (2012-11-20 15:20:00 UTC). We compare our results to automatic solutions (www.iris.edu/spud/infrasoundevent). The good agreement between both solutions validates our detection method. Next, we analyse data from the dense Lapnet / Polenet seismic network (Kozlovskaya et al., 2008). Detection and location in two-dimensional space and time of infrasound events, presumably due to acoustic-to-seismic coupling, during the 2007-2009 period in Europe are presented. The aim of this work is to integrate near-real-time network performance predictions at regional scales to improve the automatic detection of infrasonic sources. The use of dense seismic networks provides a valuable tool to monitor infrasonic phenomena, since seismic location has recently proved to be more accurate than infrasound locations due to the large number of seismic sensors.
The New Italian Seismic Hazard Model
NASA Astrophysics Data System (ADS)
Marzocchi, W.; Meletti, C.; Albarello, D.; D'Amico, V.; Luzi, L.; Martinelli, F.; Pace, B.; Pignone, M.; Rovida, A.; Visini, F.
2017-12-01
In 2015 the Seismic Hazard Center (Centro Pericolosità Sismica - CPS) of the National Institute of Geophysics and Volcanology was commissioned to coordinate the national scientific community with the aim of elaborating a new reference seismic hazard model, mainly aimed at the update of the seismic building code. The CPS designed a roadmap for releasing within three years a significantly renewed PSHA model, with regard both to the updated input elements and to the strategies to be followed. The main requirements of the model were discussed in meetings with the experts on earthquake engineering who will then participate in the revision of the building code. The activities were organized in 6 tasks: program coordination, input data, seismicity models, ground motion predictive equations (GMPEs), computation and rendering, and testing. The input data task has been selecting the most up-to-date information about seismicity (historical and instrumental), seismogenic faults, and deformation (both from seismicity and geodetic data). The seismicity models have been elaborated in terms of classic source areas, fault sources and gridded seismicity based on different approaches. The GMPEs task has selected the most recent models, accounting for their tectonic suitability and forecasting performance. The testing phase has been planned to design statistical procedures to test, with the available data, the whole seismic hazard model and single components such as the seismicity models and the GMPEs. In this talk we show some preliminary results, summarize the overall strategy for building the new Italian PSHA model, and discuss in detail important novelties that we put forward. Specifically, we adopt a new formal probabilistic framework to interpret the outcomes of the model and to test it meaningfully; this requires a proper definition and characterization of both aleatory variability and epistemic uncertainty that we accomplish through an ensemble modeling strategy. We use a weighting scheme of the different components of the PSHA model built through three independent steps: a formal experts' elicitation, the outcomes of the testing phase, and the correlation between the outcomes. Finally, we explore through different techniques the influence of the declustering procedure on seismic hazard.
NASA Astrophysics Data System (ADS)
Bachura, Martin; Fischer, Tomas
2014-05-01
Seismic waves are attenuated by a number of factors, including geometrical spreading, scattering on heterogeneities and intrinsic loss due to the anelasticity of the medium. The contribution of the latter two processes can be derived from the tail part of the seismogram - the coda (strictly speaking, the S-wave coda) - as these factors influence the shape and amplitudes of the coda. Numerous methods have been developed for estimating attenuation properties from the decay rate of coda amplitudes. Most of them work with the S-wave coda; some are designed for the P-wave coda (only at teleseismic distances) or for whole waveforms. We used methods to estimate 1/Qc (the attenuation of coda waves), methods to separate scattering and intrinsic loss (1/Qsc, 1/Qi), and methods to estimate the attenuation of direct P and S waves (1/Qp, 1/Qs). In this study, we analyzed the S-wave coda of local earthquake data recorded in the West Bohemia/Vogtland area. This region is well known for the repeated occurrence of earthquake swarms. We worked with data from the 2011 earthquake swarm, which started in late August and lasted with decreasing intensity for another 4 months. During the first week of the swarm, thousands of events were detected, with maximum magnitude ML = 3.6. A large amount of high-quality data (including continuous datasets and catalogues with an abundance of well-located events) is available thanks to the WEBNET seismic network (13 permanent and 9 temporary stations) monitoring seismic activity in the area. Results of the single-scattering model show seismic attenuation decreasing with frequency, which is in agreement with observations worldwide. We also found a decrease of attenuation with increasing hypocentral distance and increasing lapse time, which was interpreted as a decrease of attenuation with depth (coda waves at later lapse times are generated at greater depths - in our case in the upper lithosphere, where attenuation is small). We also noticed a decrease of the frequency dependence of 1/Qc with depth, where 1/Qc seems to be frequency independent in the depth range of the upper lithosphere. Lateral changes of 1/Qc were also found: it decreases in the south-west direction from the Novy Kostel focal zone, where the attenuation is highest. Results from more advanced methods that allow separation of scattering and intrinsic loss show that intrinsic loss is the dominant factor attenuating seismic waves in the region. Determination of the attenuation due to scattering appears ambiguous because of the small hypocentral distances available for the analysis, where the effects of scattering in the frequency range from 1 to 24 Hz are not significant.
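A minimal sketch of the single-scattering (Aki-Chouet type) coda-Q estimate referred to above, assuming a smoothed, band-pass filtered coda envelope sampled from a chosen lapse time onward (the t**-1 geometrical-spreading factor is the standard single-scattering assumption; window choices are illustrative):

```python
import numpy as np

def coda_qc(envelope, lapse_times_s, centre_freq_hz):
    """Estimate coda Qc from a smoothed, band-passed coda envelope A(t).

    The single-scattering model predicts A(t) ~ t**-1 * exp(-pi*f*t/Qc),
    so ln(A(t)*t) is linear in lapse time t with slope -pi*f/Qc.
    """
    t = np.asarray(lapse_times_s, dtype=float)
    a = np.asarray(envelope, dtype=float)
    slope, _ = np.polyfit(t, np.log(a * t), 1)
    return -np.pi * centre_freq_hz / slope
```

Repeating the fit for several frequency bands, lapse-time windows and stations gives the frequency, depth and lateral dependences of 1/Qc discussed above.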
NASA Astrophysics Data System (ADS)
Ivanova, Alexandra; Kempka, Thomas; Huang, Fei; Diersch [Gil], Magdalena; Lüth, Stefan
2016-04-01
3D time-lapse seismic surveys (4D seismic) have proven to be a suitable technique for monitoring injected CO2, because when CO2 replaces brine as a free gas it considerably affects the elastic properties of the porous medium. Forward modeling of the 4D seismic response to the CO2-fluid substitution in a storage reservoir is an essential step in such studies. At the Ketzin pilot site (CO2 storage), 67 kilotons of CO2 were injected into a saline aquifer between 2008 and 2013. In order to track the migration of CO2 at Ketzin, 3D time-lapse seismic data were acquired by means of a baseline pre-injection survey in 2005 and 3 monitor surveys: in 2009, 2012 and 2015 (the first post-injection survey). Results of the 4D seismic forward modeling with the reflectivity method suggest that the effects of the injected CO2 on the 4D seismic data at Ketzin are significant with regard to both seismic amplitudes and time delays. These results confirm the corresponding observations in the real 4D seismic data at the Ketzin pilot site, although reservoir heterogeneity, seismic resolution, and random and coherent seismic noise are negative factors to be considered in the interpretation. The results of the 4D seismic forward modeling with the reflectivity method support the conclusion that even small amounts of injected CO2 can be monitored, both qualitatively and quantitatively, with considerable uncertainties, in a post-injection saline aquifer such as the CO2 storage reservoir at the Ketzin pilot site (Lüth et al., 2015). Reference: Lüth, S., Ivanova, A., Kempka, T. (2015): Conformity assessment of monitoring and simulation of CO2 storage: A case study from the Ketzin pilot site. International Journal of Greenhouse Gas Control, 42, p. 329-339.
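The abstract does not spell out the rock-physics step behind the CO2-fluid substitution; a common choice for that step is Gassmann's equation, sketched here for a brine-to-CO2 substitution (the moduli, porosity and fluid values are illustrative, not Ketzin site parameters):

```python
def gassmann_saturated_bulk_modulus(k_dry, k_mineral, k_fluid, porosity):
    """Saturated bulk modulus from Gassmann's equation (all moduli in GPa):

        K_sat = K_dry + (1 - K_dry/K_min)**2
                / (phi/K_fl + (1 - phi)/K_min - K_dry/K_min**2)

    The shear modulus is unchanged by the fluid substitution.
    """
    num = (1.0 - k_dry / k_mineral) ** 2
    den = (porosity / k_fluid
           + (1.0 - porosity) / k_mineral
           - k_dry / k_mineral ** 2)
    return k_dry + num / den

# Example: replace brine (K ~ 2.5 GPa) by CO2 (K ~ 0.05 GPa) in a sandstone
# with K_dry = 8 GPa, K_mineral = 37 GPa and porosity 0.25.
k_brine = gassmann_saturated_bulk_modulus(8.0, 37.0, 2.5, 0.25)
k_co2 = gassmann_saturated_bulk_modulus(8.0, 37.0, 0.05, 0.25)
```

The resulting drop in bulk modulus, and hence in P-wave velocity and impedance, is what produces the amplitude and time-delay changes that are then propagated through the reflectivity forward modeling.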
NASA Astrophysics Data System (ADS)
Gaebler, P. J.; Ceranna, L.
2016-12-01
All nuclear explosions - on the Earth's surface, underground, underwater or in the atmosphere - are banned by the Comprehensive Nuclear-Test-Ban Treaty (CTBT). As part of this treaty, a verification regime was put into place to detect, locate and characterize nuclear explosion tests at any time, by anyone and everywhere on the Earth. The International Monitoring System (IMS) plays a key role in the verification regime of the CTBT. Of the different monitoring techniques used in the IMS, the seismic waveform approach is the most effective technology for monitoring underground nuclear testing and for identifying and characterizing potential nuclear events. This study introduces a method of seismic threshold monitoring to assess an upper magnitude limit of a potential seismic event in a given geographical region. The method is based on ambient seismic background noise measurements at the individual IMS seismic stations as well as on global distance correction terms for body wave magnitudes, which are calculated using the seismic reflectivity method. From our investigations we conclude that a global detection threshold of around mb 4.0 can be achieved using only stations from the primary seismic network; a clear latitudinal dependence of the detection threshold can be observed between the northern and southern hemispheres. Including the seismic stations of the auxiliary seismic IMS network results in a slight improvement of the global detection capability. However, including wave arrivals from distances greater than 120 degrees, mainly PKP-wave arrivals, leads to a significant improvement in average global detection capability; in particular, this improves the detection threshold in the southern hemisphere. We further investigate the dependence of the detection capability on spatial (latitude and longitude) and temporal (time) parameters, as well as on parameters such as source type and the percentage of operational IMS stations.
Focal mechanism determination for induced seismicity using the neighbourhood algorithm
NASA Astrophysics Data System (ADS)
Tan, Yuyang; Zhang, Haijiang; Li, Junlun; Yin, Chen; Wu, Furong
2018-06-01
Induced seismicity is widely detected during hydraulic fracture stimulation. To better understand the fracturing process, a thorough knowledge of the source mechanism is required. In this study, we develop a new method to determine the focal mechanism for induced seismicity. Three misfit functions are used in our method to measure the differences between observed and modeled data from different aspects, including the waveform, P wave polarity and S/P amplitude ratio. We minimize these misfit functions simultaneously using the neighbourhood algorithm. Through synthetic data tests, we show the ability of our method to yield reliable focal mechanism solutions and study the effect of velocity inaccuracy and location error on the solutions. To mitigate the impact of the uncertainties, we develop a joint inversion method to find the optimal source depth and focal mechanism simultaneously. Using the proposed method, we determine the focal mechanisms of 40 stimulation induced seismic events in an oil/gas field in Oman. By investigating the results, we find that the reactivation of pre-existing faults is the main cause of the induced seismicity in the monitored area. Other observations obtained from the focal mechanism solutions are also consistent with earlier studies in the same area.
Joint seismic data denoising and interpolation with double-sparsity dictionary learning
NASA Astrophysics Data System (ADS)
Zhu, Lingchen; Liu, Entao; McClellan, James H.
2017-08-01
Seismic data quality is vital to geophysical applications, so that methods of data recovery, including denoising and interpolation, are common initial steps in the seismic data processing flow. We present a method to perform simultaneous interpolation and denoising, which is based on double-sparsity dictionary learning. This extends previous work that was for denoising only. The original double-sparsity dictionary learning algorithm is modified to track the traces with missing data by defining a masking operator that is integrated into the sparse representation of the dictionary. A weighted low-rank approximation algorithm is adopted to handle the dictionary updating as a sparse recovery optimization problem constrained by the masking operator. Compared to traditional sparse transforms with fixed dictionaries that lack the ability to adapt to complex data structures, the double-sparsity dictionary learning method learns the signal adaptively from selected patches of the corrupted seismic data, while preserving compact forward and inverse transform operators. Numerical experiments on synthetic seismic data indicate that this new method preserves more subtle features in the data set without introducing pseudo-Gibbs artifacts when compared to other directional multi-scale transform methods such as curvelets.
Developments in seismic monitoring for risk reduction
Celebi, M.
2007-01-01
This paper presents recent state-of-the-art developments to obtain displacements and drift ratios for seismic monitoring and damage assessment of buildings. In most cases, decisions on the safety of buildings following seismic events are based on visual inspections of the structures. Real-time instrumental measurements using GPS or double integration of accelerations, however, offer a viable alternative. Relevant parameters, such as the type of connections and structural characteristics (including storey geometry), can be estimated to compute drifts corresponding to several pre-selected threshold stages of damage. Drift ratios determined from real-time monitoring can then be compared to these thresholds in order to estimate damage conditions. This approach is demonstrated for three steel frame buildings in San Francisco, California. Recently recorded strong-shaking data from these buildings indicate that the monitoring system can be a useful tool for rapid assessment of buildings and other structures following an earthquake. Such systems can also be used for risk monitoring, as a method to assess performance-based design and analysis procedures, for long-term assessment of the structural characteristics of a building, and as a possible long-term damage detection tool.
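A minimal sketch of the displacement and drift computation described above is given below (Python): synthetic floor accelerations are double-integrated with simple detrending, an inter-storey drift ratio is formed, and it is compared against hypothetical damage-threshold drifts. Real monitoring systems apply more careful baseline correction and filtering.

```python
import numpy as np
from scipy.integrate import cumulative_trapezoid
from scipy.signal import detrend

dt = 0.01
t = np.arange(0, 20, dt)

# Synthetic absolute accelerations (m/s^2) at two adjacent floors.
acc_floor1 = 0.8 * np.sin(2 * np.pi * 1.0 * t) * np.exp(-0.1 * t)
acc_floor2 = 1.1 * np.sin(2 * np.pi * 1.0 * t + 0.2) * np.exp(-0.1 * t)

def displacement(acc, dt):
    """Double integration with detrending to suppress baseline drift."""
    vel = detrend(cumulative_trapezoid(acc, dx=dt, initial=0.0))
    disp = detrend(cumulative_trapezoid(vel, dx=dt, initial=0.0))
    return disp

storey_height = 3.5                                  # m
drift = displacement(acc_floor2, dt) - displacement(acc_floor1, dt)
drift_ratio = np.max(np.abs(drift)) / storey_height

# Hypothetical pre-selected drift-ratio thresholds for damage states.
thresholds = {"slight": 0.002, "moderate": 0.005, "extensive": 0.015}
state = max((s for s, th in thresholds.items() if drift_ratio >= th),
            key=lambda s: thresholds[s], default="none")
print(f"peak drift ratio = {drift_ratio:.4f} -> damage state: {state}")
```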
Evaluation of Seismic Risk of Siberia Territory
NASA Astrophysics Data System (ADS)
Seleznev, V. S.; Soloviev, V. M.; Emanov, A. F.
The paper presents the outcomes of recent geophysical research by the Geophysical Survey SB RAS aimed at studying the geodynamic situation in large industrial and civil centres across Siberia, with the purpose of evaluating the seismic risk of these territories and predicting the onset of extreme natural and man-made situations. First of all, this concerns the testing and updating of a geoinformation system developed by the Russian Emergency Ministry for seismic hazard calculations and for planning the response to destructive earthquakes. The GIS database contains catalogues of earthquakes and faults, seismic zonation maps, vectorized city maps, information on industrial and housing stock, and data on the character of buildings and population in inhabited places. The geoinformation system allows the following problems to be solved on the basis of probabilistic approaches: estimating the earthquake impact and the forces, facilities and supplies required for life-support of the injured population; determining the consequences of failures at chemical and explosion-hazardous facilities; and optimizing the technology and conduct of salvage operations. Using this computer program, earthquake risk maps have been constructed for several seismically dangerous regions of Siberia. These maps display the probable number of injured people and the relative economic damage from earthquakes that can occur at various sites of the territory according to the seismic zonation map. The resulting maps have made it possible to determine where detailed seismological observations should be arranged. In parallel, wide-ranging investigations are being carried out across Siberia, including new methods for evaluating the physical state of industrial and civil structures (buildings, hydroelectric power stations, bridges, dams, etc.), high-performance detailed electromagnetic surveys of ground conditions in city territories, roads and runways, and studies of the seismic conditions in large industrial and civil centres.
Hazard Assessment in a Big Data World
NASA Astrophysics Data System (ADS)
Kossobokov, Vladimir; Nekrasova, Anastasia
2017-04-01
Open data in a Big Data World provide unprecedented opportunities for enhancing scientific studies and for a better understanding of the Earth System. At the same time, they open wide avenues for deceptive associations in inter- and transdisciplinary data that mislead towards erroneous predictions, which are unacceptable for implementation. Even advanced tools of data analysis may lead to wrong assessments when inappropriately used to describe the phenomenon under consideration. A (self-)deceptive conclusion can be avoided only by verification of candidate models in experiments on empirical data, and in no other way. Seismology is not an exception. Moreover, the seismic evidence accumulated to date demonstrates clearly that most of the empirical relations commonly accepted in the early history of instrumental seismology prove erroneous when subjected to objective hypothesis testing. In many cases of seismic hazard assessment (SHA), either probabilistic or deterministic, term-less or short-term, claims of a high forecasting potential of a model are based on a flawed application of statistics and are therefore hardly suitable for communication to decision makers; this situation creates numerous points of deception and resulting controversies. So far, most, if not all, standard probabilistic methods to assess seismic hazard and the associated risks are based on subjective, commonly unrealistic, and even erroneous assumptions about seismic recurrence, and none of the proposed short-term precursory signals has shown sufficient evidence to be used as a reliable precursor of catastrophic earthquakes. Accurate testing against real observations must be done before claiming areas and/or times to be seismically hazardous. The set of errors of the first and second kind in such a comparison permits evaluating the effectiveness of an SHA method and determining the optimal choice of parameters with regard to a user-defined cost-benefit function. The information obtained in testing experiments may supply us with realistic estimates of the confidence and accuracy of SHA predictions. If proved reliable, though not necessarily perfect, forecast/prediction-related recommendations on the level of risk can then be used for efficient decision making in engineering design, insurance, and emergency management.
Modifications to risk-targeted seismic design maps for subduction and near-fault hazards
Liel, Abbie B.; Luco, Nicolas; Raghunandan, Meera; Champion, C.; Haukaas, Terje
2015-01-01
ASCE 7-10 introduced new seismic design maps that define risk-targeted ground motions such that buildings designed according to these maps will have a 1% chance of collapse in 50 years. These maps were developed by an iterative risk calculation, wherein a generic building collapse fragility curve is convolved with the U.S. Geological Survey hazard curve until the target risk criteria are met. Recent research shows that this approach may be unconservative at locations where the tectonic environment is much different from that used to develop the generic fragility curve. This study illustrates how risk-targeted ground motions at selected sites would change if the generic building fragility curve and the hazard assessment were modified to account for seismic risk from subduction earthquakes and near-fault pulses. The paper also explores the difficulties in implementing these changes.
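The iterative risk calculation described above can be sketched as follows (Python). The hazard curve, fragility dispersion and target are hypothetical numbers rather than USGS products, but the structure of the loop (adjust the fragility median until the 50-year collapse probability meets the 1% target) follows the description in the abstract.

```python
import numpy as np
from scipy.stats import norm

# Hypothetical hazard curve: annual rate of exceeding spectral acceleration sa (g).
sa = np.logspace(-2, 0.5, 200)
hazard_rate = 4e-4 * (sa / 0.1) ** -2.2          # illustrative power-law hazard

beta = 0.6                                       # fragility dispersion (lognormal)

def collapse_prob_50yr(theta):
    """Annual collapse rate = integral of fragility times the hazard rate density."""
    fragility = norm.cdf(np.log(sa / theta) / beta)
    d_rate = -np.gradient(hazard_rate, sa)       # rate density (positive)
    annual_rate = np.trapz(fragility * d_rate, sa)
    return 1.0 - np.exp(-annual_rate * 50.0)

# Iterate the fragility median (tied here to the design ground motion)
# until the 50-year collapse probability meets the 1% target.
target = 0.01
lo, hi = 0.05, 5.0
for _ in range(60):                              # simple bisection
    theta = 0.5 * (lo + hi)
    if collapse_prob_50yr(theta) > target:
        lo = theta                               # too risky -> raise design motion
    else:
        hi = theta
print(f"fragility median meeting 1%/50yr target: {theta:.3f} g")
```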
Development of Towed Marine Seismic Vibrator as an Alternative Seismic Source
NASA Astrophysics Data System (ADS)
Ozasa, H.; Mikada, H.; Murakami, F.; Jamali Hondori, E.; Takekawa, J.; Asakawa, E.; Sato, F.
2015-12-01
The principal issue with marine impulsive sources for seismic data acquisition is whether the emitted acoustic energy harms marine mammals, since the source signal released into the marine environment can be very large compared with the hearing range of the mammals. We propose a marine seismic vibrator as an alternative to impulsive sources, to mitigate the risk of impact on the marine environment while satisfying the necessary conditions for seismic surveys. These conditions include the repeatability and the controllability of source signals in both amplitude and phase for high-quality measurements. We therefore designed a towed marine seismic vibrator (MSV), a new type of marine vibratory seismic source that employs a hydraulic servo system to provide controllability in phase and amplitude, which also assures repeatability. After fabricating a downsized MSV that requires 30 kVA of power at a water depth of about 250 m, several sea trials were conducted to test the source characteristics of the downsized MSV in terms of amplitude, frequency, and horizontal and vertical directivities of the generated field. The maximum sound level satisfied the design specification in the frequency range from 3 to 300 Hz, almost omnidirectionally. After checking the source characteristics, we conducted a trial seismic survey, using both the downsized MSV and a 480 cubic-inch airgun for comparison, with a 2,000 m-long streamer cable right above a cabled earthquake observatory in the Japan Sea. The result showed that the penetration of seismic signals generated by the downsized MSV was comparable to that of the airgun, although there was a slight difference in the signal-to-noise ratio. The MSV could therefore become a versatile alternative to existing impulsive seismic sources such as airguns, one that does not harm marine mammals.
Non-Invasive Seismic Methods for Earthquake Site Classification Applied to Ontario Bridge Sites
NASA Astrophysics Data System (ADS)
Bilson Darko, A.; Molnar, S.; Sadrekarimi, A.
2017-12-01
How a site responds to earthquake shaking, and the damage that results, is largely influenced by the underlying ground conditions through which the seismic waves propagate. The effects of site conditions on propagating seismic waves can be predicted from measurements of the shear-wave velocity (Vs) of the soil layer(s) and the impedance ratio between bedrock and soil. Currently, the seismic design of new buildings and bridges (2015 Canadian building and bridge codes) requires determination of the time-averaged shear-wave velocity of the upper 30 metres (Vs30) of a given site. In this study, two in situ Vs profiling methods, Multichannel Analysis of Surface Waves (MASW) and the Ambient Vibration Array (AVA) method, are used to determine Vs30 at chosen bridge sites in Ontario, Canada. Both the active-source (MASW) and passive-source (AVA) surface wave methods are used at each bridge site to obtain Rayleigh-wave phase velocities over a wide frequency bandwidth. The dispersion curve is jointly inverted with each site's amplification function (microtremor horizontal-to-vertical spectral ratio) to obtain shear-wave velocity profile(s). We apply our non-invasive testing at three major infrastructure projects, e.g., five bridge sites along the Rt. Hon. Herb Gray Parkway in Windsor, Ontario. Our non-invasive testing is co-located with previous invasive testing, including Standard Penetration Test (SPT), Cone Penetration Test and downhole Vs data. Correlations between SPT blowcount and Vs are developed for the different soil types sampled at our Ontario bridge sites. A robust earthquake site classification procedure (reliable Vs30 estimates) for bridge sites across Ontario is evaluated from the available combinations of invasive and non-invasive site characterization methods.
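The Vs30 quantity referred to above is simply the travel-time average of Vs over the top 30 m. A minimal sketch with an invented layered profile is shown below (Python); the site-class remark in the comment follows the usual NEHRP-style boundaries and is included only for orientation.

```python
def vs30(thicknesses_m, vs_m_s):
    """Time-averaged shear-wave velocity of the upper 30 m:
    Vs30 = 30 / sum(h_i / Vs_i), truncating the profile at 30 m depth."""
    depth, travel_time = 0.0, 0.0
    for h, vs in zip(thicknesses_m, vs_m_s):
        use = min(h, 30.0 - depth)
        if use <= 0:
            break
        travel_time += use / vs
        depth += use
    if depth < 30.0:                        # extend the last layer if profile is short
        travel_time += (30.0 - depth) / vs_m_s[-1]
    return 30.0 / travel_time

# Hypothetical Vs profile, e.g. inverted from a dispersion curve.
thicknesses = [4.0, 8.0, 12.0, 20.0]          # m
velocities  = [180.0, 260.0, 420.0, 760.0]    # m/s

v = vs30(thicknesses, velocities)
print(f"Vs30 = {v:.0f} m/s")    # ~335 m/s here, typically NEHRP site class D
```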
Seismic Design of ITER Component Cooling Water System-1 Piping
NASA Astrophysics Data System (ADS)
Singh, Aditya P.; Jadhav, Mahesh; Sharma, Lalit K.; Gupta, Dinesh K.; Patel, Nirav; Ranjan, Rakesh; Gohil, Guman; Patel, Hiren; Dangi, Jinendra; Kumar, Mohit; Kumar, A. G. A.
2017-04-01
The successful performance of the ITER machine depends very much upon the effective removal of heat from the in-vessel components and other auxiliary systems during Tokamak operation. This objective will be accomplished by the design of an effective Cooling Water System (CWS). An optimized piping layout is an important element of the CWS design and is one of the major design challenges, owing to large thermal expansions and seismic accelerations and to safety, accessibility and maintainability considerations. An important sub-system of the ITER CWS, the Component Cooling Water System-1 (CCWS-1), has very large pipe diameters, up to DN1600, with many intersections to fulfil the process flow requirements of its clients for heat removal. Pipe intersections are the weakest links in the layout because of their high stress intensification factors. CCWS-1 piping up to the secondary confinement isolation valves, as well as in between these isolation valves, needs to survive a Seismic Level-2 (SL-2) earthquake during the Tokamak operation period to ensure the structural stability of the system in a Safe Shutdown Earthquake (SSE) event. This paper presents the design, qualification and layout optimization of the ITER CCWS-1 loop to withstand an SSE event combined with sustained and thermal loads, according to the load combinations defined by ITER and the allowable limits of ASME B31.3. The paper also highlights the modal and response spectrum analyses performed to determine the natural frequencies and the system behaviour during the seismic event.
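The modal and response spectrum analysis mentioned above can be illustrated very roughly as follows (Python). The design spectrum, modal frequencies, participation factors and mode-shape ordinates are all invented, and real ITER piping qualification is done with full finite element models and project-specific spectra; the sketch only shows how spectral readings at the modal frequencies are turned into peak modal responses and combined with the SRSS rule.

```python
import numpy as np

# Hypothetical design acceleration response spectrum (5% damping): Sa(f) in g.
def sa_of_f(f_hz):
    f_hz = np.asarray(f_hz, dtype=float)
    return np.where(f_hz < 2.0, 0.4 * f_hz / 2.0,          # rising branch
           np.where(f_hz < 8.0, 0.4,                        # plateau
                    0.4 * (8.0 / f_hz) ** 0.8))             # descending branch

# Hypothetical modal properties from a modal analysis of the piping model.
freqs_hz  = np.array([3.1, 6.4, 11.8, 19.5])    # natural frequencies
gamma     = np.array([1.35, 0.80, 0.45, 0.20])  # modal participation factors
phi_at_pt = np.array([1.00, 0.60, 0.35, 0.15])  # mode shapes at the point of interest

# Peak modal displacement at the point: Gamma * phi * Sa / omega^2.
omega = 2.0 * np.pi * freqs_hz
sa_ms2 = sa_of_f(freqs_hz) * 9.81
modal_disp = gamma * phi_at_pt * sa_ms2 / omega**2

# SRSS combination, appropriate for well-separated modes.
srss_disp = np.sqrt(np.sum(modal_disp**2))
print("modal displacements (m):", np.round(modal_disp, 4))
print(f"SRSS combined displacement: {srss_disp * 1000:.1f} mm")
```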
Towards the Implementation of Semi-Dynamic Datum for Malaysia
NASA Astrophysics Data System (ADS)
Shariff, N. S.; Gill, J.; Amin, Z. M.; Omar, K. M.
2017-10-01
A semi-dynamic datum provides positions with respect to time while taking into account both secular and non-secular deformation, making it the best approach for adapting to the dynamic processes of the Earth. Malaysia still employs a static datum, GDM2000, at epoch 2000, even though it has evidently been affected by seismic activity over the past decade. This paper therefore proposes a design for implementing a semi-dynamic datum for Malaysia. Methodologically, GPS time series analyses are carried out to investigate the seismic activity affecting Malaysia, which contributes directly to the proposed design of the semi-dynamic datum. The implications of implementing a semi-dynamic datum for Malaysia are discussed as well. The results indicate that Malaysia undergoes complex deformation, whereby the earthquakes - primarily the 2004 Sumatra-Andaman, 2005 Nias and 2012 Northern Sumatra earthquakes - have affected the underlying secular velocities of Malaysia. From this information, the proposed design, particularly the secular and non-secular deformation models, is described in detail. The proposed semi-dynamic datum comprises transformation, temporal and spatial modules, and utilizes a bilinear interpolation method. Overall, this paper aims to demonstrate the feasibility of a semi-dynamic datum approach for Malaysia.
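The bilinear interpolation used by the proposed deformation-model module can be sketched as follows (Python). The velocity grid and query point are invented, but the interpolation formula is the standard one.

```python
import numpy as np

def bilinear(x, y, grid_x, grid_y, values):
    """Bilinear interpolation of a regular grid of deformation values
    (e.g. east-component secular velocities in mm/yr) at point (x, y)."""
    i = np.searchsorted(grid_x, x) - 1
    j = np.searchsorted(grid_y, y) - 1
    x0, x1 = grid_x[i], grid_x[i + 1]
    y0, y1 = grid_y[j], grid_y[j + 1]
    tx = (x - x0) / (x1 - x0)
    ty = (y - y0) / (y1 - y0)
    v00, v10 = values[j, i], values[j, i + 1]
    v01, v11 = values[j + 1, i], values[j + 1, i + 1]
    return (v00 * (1 - tx) * (1 - ty) + v10 * tx * (1 - ty)
            + v01 * (1 - tx) * ty + v11 * tx * ty)

# Hypothetical 0.5-degree grid of east velocities (mm/yr) around a site.
lons = np.array([101.0, 101.5, 102.0])
lats = np.array([2.0, 2.5, 3.0])
ve = np.array([[28.1, 28.4, 28.9],
               [27.6, 27.9, 28.3],
               [27.0, 27.4, 27.8]])   # rows follow lats, columns follow lons

print(f"interpolated east velocity: {bilinear(101.3, 2.2, lons, lats, ve):.2f} mm/yr")
```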
NASA Astrophysics Data System (ADS)
Busby, Robert; Frassetto, Andy; Hafner, Katrin; Woodward, Robert; Sauter, Allan
2013-04-01
In preparation for deployment of EarthScope's USArray Transportable Array (TA) in Alaska beginning in 2014, the National Science Foundation (NSF) is supporting exploratory work on seismic station design, sensor emplacement and communication concepts appropriate for the challenging high-latitude environment proposed for deployment. IRIS has installed several experimental stations to evaluate different sensor emplacement schemes, both in Alaska and in the lower-48 U.S. The goal of these tests is to maintain or enhance a station's noise performance while minimizing its footprint and the equipment, materials, and overall expense required for its construction. Motivating this approach are recent developments in posthole broadband seismometer design and the unique conditions for operating in Alaska, where there are few roads, cellular communications are scarce, most areas are only accessible by small plane or helicopter, and permafrost underlies much of the northern tundra. In this study we review our methods for directly emplacing broadband seismometers and compare them with the current methods used to deploy TA stations. These primarily involve using an auger to drill three to five meters, beneath the active layer of the permafrost, or coring directly into surface bedrock to one meter depth using a portable drill. Both methods have proven logistically effective in trials. Subsequent station performance can be quantitatively assessed using probability density functions summed from power spectral density estimates, calculated for the continuous time series of seismic data recorded on each channel of the seismometer. There are five test stations currently operating in Alaska: one was deployed in August 2011 and the remaining four in October 2012. Our results show that the performance of seismometers in Alaska with auger-hole or core-hole installations equals or exceeds that of the quietest TA stations in the lower-48, particularly at long periods, and in exceptional cases approaches the performance of the GSN low-noise model. The station at Poker Flat Research Range, Alaska, co-locates a sensor in a 5-meter-deep auger hole with a 2-meter-deep TA tank installation typical of the lower-48. The augered seismometer is currently over 20 dB quieter than the TA tank installation at periods over 40 seconds. Similar performance has been observed at other TA stations, which also compare favorably to co-located permanent stations.
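Station performance comparisons of the kind described above reduce to power spectral density estimates of the continuous records. A minimal sketch using Welch's method on synthetic traces is shown below (Python); real evaluations accumulate PSDs into probability density functions and remove the instrument response, both of which are omitted here.

```python
import numpy as np
from scipy.signal import welch

fs = 40.0                                    # samples per second
t = np.arange(0, 3600, 1 / fs)               # one hour of data
rng = np.random.default_rng(42)

# Synthetic "ground acceleration" records for two emplacement types:
# the vault-style record carries extra long-period (0.02 Hz) noise.
posthole = 1e-8 * rng.standard_normal(t.size)
vault = (posthole + 5e-8 * np.sin(2 * np.pi * 0.02 * t)
         + 2e-8 * rng.standard_normal(t.size))

def psd_db(x, fs):
    """Welch PSD in dB, with long segments to resolve long periods."""
    f, pxx = welch(x, fs=fs, nperseg=int(fs * 600))   # 10-minute segments
    return f[1:], 10.0 * np.log10(pxx[1:])            # drop the zero-frequency bin

f, db_posthole = psd_db(posthole, fs)
_, db_vault = psd_db(vault, fs)

band = (f >= 1 / 80) & (f <= 1 / 40)          # 40-80 s period band
improvement = np.mean(db_vault[band] - db_posthole[band])
print(f"posthole quieter by ~{improvement:.1f} dB at 40-80 s periods")
```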
NASA Astrophysics Data System (ADS)
Anggraeni, Novia Antika
2015-04-01
Testing eruption time prediction is part of the effort to prepare volcanic disaster mitigation, especially for inhabited volcano slopes such as those of Merapi Volcano. The test can be conducted by observing increases in volcanic activity, such as the degree of seismicity, deformation and SO2 gas emission. One of the methods that can be used to predict the time of eruption is the Materials Failure Forecast Method (FFM), a predictive method for determining the time of a volcanic eruption introduced by Voight (1988). The method requires an increase in the rate of change, or an acceleration, of the observed volcanic activity parameters. The parameter used in this study is the seismic energy value of Merapi Volcano from 1990 to 2012. The data were plotted as graphs of the inverse seismic energy rate versus time, and the FFM graphical technique was applied using simple linear regression. Data quality control, used to increase the timing precision, employs the correlation coefficient of the inverse seismic energy rate versus time. From the graph analysis, the predicted eruption times differ from the actual eruption times by between -2.86 and 5.49 days.
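The graphical FFM step described above amounts to fitting a straight line to the inverse rate and extrapolating it to zero: with 1/rate ≈ a + b·t and b < 0, the predicted eruption time is t = -a/b. A minimal sketch with synthetic data is given below (Python); the values are invented and only illustrate the fitting logic.

```python
import numpy as np

rng = np.random.default_rng(7)

# Synthetic accelerating precursor: the seismic energy rate grows as 1/(t_f - t),
# so its inverse decays linearly towards zero at the eruption time t_f.
t_f_true = 100.0                                   # days (unknown in practice)
t = np.arange(0.0, 80.0, 1.0)                      # observation window (days)
rate = 1.0 / (t_f_true - t)                        # energy-rate proxy
inv_rate = (1.0 / rate) * (1.0 + 0.05 * rng.standard_normal(t.size))  # add noise

# Straight-line fit of inverse rate versus time (the FFM graphical technique).
b, a = np.polyfit(t, inv_rate, 1)                  # inv_rate ~= a + b * t
t_f_pred = -a / b                                  # zero crossing of the fit

# Correlation coefficient used as a data-quality check on the linear trend.
r = np.corrcoef(t, inv_rate)[0, 1]

print(f"predicted eruption day: {t_f_pred:.1f} (true {t_f_true}), r = {r:.3f}")
```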
Petersen, Mark D.; Frankel, Arthur D.; Harmsen, Stephen C.; Mueller, Charles S.; Boyd, Oliver S.; Luco, Nicolas; Wheeler, Russell L.; Rukstales, Kenneth S.; Haller, Kathleen M.
2012-01-01
In this paper, we describe the scientific basis for the source and ground-motion models applied in the 2008 National Seismic Hazard Maps, the development of new products that are used for building design and risk analyses, relationships between the hazard maps and design maps used in building codes, and potential future improvements to the hazard maps.
Seismic performance evaluation of RC frame-shear wall structures using nonlinear analysis methods
NASA Astrophysics Data System (ADS)
Shi, Jialiang; Wang, Qiuwei
To further understand the seismic performance of reinforced concrete (RC) frame-shear wall structures, a 1/8-scale model structure was derived from a main factory structure with seven storeys and seven bays. The four-storey, two-bay model was pseudo-dynamically tested under six earthquake actions with peak ground accelerations (PGA) ranging from 50 gal to 400 gal. The damage process and failure patterns were investigated. Furthermore, nonlinear dynamic analysis (NDA) and the capacity spectrum method (CSM) were adopted to evaluate the seismic behaviour of the model structure. The top displacement curve, storey drift curve and distribution of hinges were obtained and discussed. It is shown that the model structure exhibited a beam-hinge failure mechanism. Both methods can be used to evaluate the seismic behaviour of RC frame-shear wall structures well. Moreover, the CSM can to some extent replace the NDA for the seismic performance evaluation of RC structures.
NASA Astrophysics Data System (ADS)
Zhang, Kai; Ma, Xiaopeng; Li, Yanlai; Wu, Haiyang; Cui, Chenyu; Zhang, Xiaoming; Zhang, Hao; Yao, Jun
Hydraulic fracturing is an important measure for the development of tight reservoirs. In order to describe the distribution of hydraulic fractures, micro-seismic monitoring was introduced into petroleum fields. Micro-seismic events may reveal important information about the static characteristics of hydraulic fracturing. However, the method only indicates the area over which the hydraulic fractures are distributed and fails to provide specific fracture parameters. Therefore, in this paper micro-seismic technology is integrated with history matching to predict the hydraulic fracture parameters. Micro-seismic source locations are used to describe the basic shape of the hydraulic fractures. Secondary modelling is then used to calibrate the fracture parameters by means of a discrete fracture model (DFM) and history matching. In consideration of the fractal features of hydraulic fractures, a fractal fracture network model is established to evaluate the method in numerical experiments. The results clearly show the effectiveness of the proposed approach for estimating the parameters of hydraulic fractures.
Microseismicity of Blawan hydrothermal complex, Bondowoso, East Java, Indonesia
NASA Astrophysics Data System (ADS)
Maryanto, S.
2018-03-01
The Peak Ground Acceleration (PGA), hypocentres, and epicentres of the Blawan hydrothermal complex have been analysed in order to investigate its seismicity. PGA was determined using the Fukushima-Tanaka method, and the source locations of the microseismicity were estimated using the particle motion method. PGA ranged between 0.095 and 0.323 g and tends to be higher in formations containing poorly compacted rocks. The seismic vulnerability index of the region indicated that zones with high PGA also have a high seismic vulnerability index, because the rocks making up these zones tend to be soft and of low density. For the seismic sources around the area, epicentres and hypocentres were estimated based on the single-station seismic particle motion method. The stations used in this study were mobile stations identified as BL01, BL02, BL03, BL05, BL06, BL07 and BL08. The particle motion analysis yielded 44 epicentres, with source depths of about 15 - 110 meters below the ground surface.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Huang, Lianjie; Chen, Ting; Tan, Sirui
Imaging fault zones and fractures is crucial for geothermal operators, providing important information for reservoir evaluation and management strategies. However, there are no existing techniques available for directly and clearly imaging fault zones, particularly for steeply dipping faults and fracture zones. In this project, we developed novel acoustic- and elastic-waveform inversion methods for high-resolution velocity model building. In addition, we developed acoustic and elastic reverse-time migration methods for high-resolution subsurface imaging of complex subsurface structures and steeply dipping fault/fracture zones. We first evaluated and verified the improved capabilities of our newly developed seismic inversion and migration imaging methods using synthetic seismic data. Our numerical tests verified that our new methods directly image subsurface fracture/fault zones using surface seismic reflection data. We then applied our novel seismic inversion and migration imaging methods to a field 3D surface seismic dataset acquired at the Soda Lake geothermal field using Vibroseis sources. Our migration images of the Soda Lake geothermal field, obtained using our seismic inversion and migration imaging algorithms, revealed several possible fault/fracture zones. AltaRock Energy, Inc. is working with Cyrq Energy, Inc. to refine the geologic interpretation at the Soda Lake geothermal field. Trenton Cladouhos, Senior Vice President R&D of AltaRock, was very interested in our imaging results from the 3D surface seismic data of the Soda Lake geothermal field. He planned to perform detailed interpretation of our images in collaboration with James Faulds and Holly McLachlan of the University of Nevada, Reno. Using our high-resolution seismic inversion and migration imaging results can help determine the optimal locations to drill wells for geothermal energy production and reduce the risk of geothermal exploration.
Passive Seismic for Hydrocarbon Indicator : Between Expectation and Reality
NASA Astrophysics Data System (ADS)
Pandito, Riky H. B.
2018-03-01
Over the past 5 to 10 years, the passive seismic method has become more popular in our country for finding hydrocarbons. Low cost, non-destructive acquisition and ease of mobilization are the main reasons for choosing the method. On the other hand, some remain pessimistic about the results. Instrument specifications, data conditions and processing methods are among the factors that influence the character and interpretation of passive seismic results. In 2010, one prospect in the East Java Basin was measured, consisting of 112 objective points and several calibration points. The measurements indicated a positive response. In 2013, exploration drilling was then conducted on the prospect; a drill stem test showed 22 MMCFD in the objective zone, of upper to late Oligocene age. In 2015, re-measurement was carried out in the objective area and showed responses consistent with the previous measurements. Passive seismic is a unique method that can give different results over dry, gas and oil areas, in producing fields, and also in temporarily suspended areas with hydrocarbon content.
Previous laboratory investigations have demonstrated that the seismic methods are sensitive to microbially-induced changes in porous media through the generation of biogenic gases and biomineralization. The seismic signatures associated with microbial growth and biofilm formation...
Prospective testing of neo-deterministic seismic hazard scenarios for the Italian territory
NASA Astrophysics Data System (ADS)
Peresan, Antonella; Magrin, Andrea; Vaccari, Franco; Kossobokov, Vladimir; Panza, Giuliano F.
2013-04-01
A reliable and comprehensive characterization of the expected seismic ground shaking, eventually including the related time information, is essential for developing effective mitigation strategies and increasing earthquake preparedness. Moreover, any effective tool for SHA must demonstrate its capability in anticipating the ground shaking associated with large earthquakes, a result that can be attained only through a rigorous verification and validation process. So far, the major problems of classical probabilistic methods for seismic hazard assessment (PSHA) have consisted in adequately describing the earthquake recurrence, particularly for the largest and sporadic events, and the attenuation models, which may be unable to account for the complexity of the medium and of the seismic sources and are often weakly constrained by the available observations. Current computational resources and physical knowledge of seismic wave generation and propagation processes nowadays allow for viable numerical and analytical alternatives to the use of attenuation relations. Accordingly, a scenario-based neo-deterministic approach to seismic hazard assessment, NDSHA, has been proposed, which allows a wide range of possible seismic sources to be considered as the starting point for deriving scenarios by means of full waveform modelling. The method does not make use of attenuation relations and naturally supplies realistic time series of ground shaking, including reliable estimates of ground displacement readily applicable to seismic isolation techniques. Based on NDSHA, an operational integrated procedure for seismic hazard assessment has been developed that allows for the definition of time-dependent scenarios of ground shaking through the routine updating of formally defined earthquake predictions. The integrated NDSHA procedure for seismic input definition, which is currently applied to the Italian territory, combines different pattern recognition techniques, designed for the space-time identification of strong earthquakes, with algorithms for the realistic modelling of ground motion. Accordingly, a set of deterministic scenarios of ground motion at bedrock, referring to the time interval when a strong event is likely to occur within the alerted area, can be defined by means of full waveform modelling, both at regional and at local scale. CN and M8S predictions, as well as the related time-dependent ground motion scenarios associated with the alarmed areas, have been updated regularly every two months since 2006. The routine application of the time-dependent NDSHA approach provides information that can be useful in assigning priorities for timely mitigation actions and, at the same time, allows for rigorous prospective testing and validation of the proposed methodology. As an example, for sites where ground shaking values greater than 0.2 g are estimated at bedrock, further investigations can be performed taking into account the local soil conditions, to assess the performance of relevant structures such as historical and strategic buildings. The issues related to prospective testing and validation of the time-dependent NDSHA scenarios are discussed, illustrating the results obtained for the recent strong earthquakes in Italy, including the May 20, 2012 Emilia earthquake.
Engineering for Autonomous Seismic Stations at the IRIS PASSCAL Instrument Center
NASA Astrophysics Data System (ADS)
Anderson, K. R.; Carpenter, P.; Beaudoin, B. C.; Parker, T.; Hebert, J.; Childs, D.; Chung, P.; Reusch, A. M.
2015-12-01
The NSF-funded Incorporated Research Institutions for Seismology (IRIS), through New Mexico Tech, operates the PASSCAL Instrument Center (PIC) in Socorro, New Mexico. The engineering effort at the PIC seeks to optimize seismic station operations for all portable experiments, including those in extremely remote and harsh polar environments. Recent advances have resulted in improved station designs, allowing improved operational efficiency and data quality return and a reduction in the station logistics associated with installation, maintenance and decommissioning. These include: Battery and power system designs - incorporating primary Lithium Thionyl Chloride (LTC) technology with rechargeable Lithium Iron Phosphate (LiFePO4) batteries allows systems to operate in areas requiring long-term solar autonomy (high latitudes); development includes charge controller systems to switch efficiently between primary and secondary technologies. Enclosures - engineered solutions to manage waste heat efficiently, maintain the operational environment and provide light-weight, durable housing for seismic instrumentation. Communications - in collaboration with Xeos Technologies Inc., we deliver Iridium-based SOH/command-and-control telemetry as well as full-bandwidth seismic data communications in high-latitude environments at low power. Smaller, lighter instrumentation - through the GEOICE MRI, we are working with Nanometrics on next-generation "all-in-one" seismic systems that can be deployed in polar environments, easing logistics, minimizing installation time and improving data quality return for these expensive deployments. All autonomous station designs are openly and freely available at the IRIS PASSCAL webpage (www.passcal.nmt.edu/polar/design-drawings). More information on GEOICE and data quality from various seismometer emplacements will be presented in other posters at this AGU meeting.
Predicting the seismic performance of typical R/C healthcare facilities: emphasis on hospitals
NASA Astrophysics Data System (ADS)
Bilgin, Huseyin; Frangu, Idlir
2017-09-01
Reinforced concrete (RC) buildings constitute an important part of the current building stock in earthquake-prone countries such as Albania. The seismic response of structures during a severe earthquake plays a vital role in the extent of structural damage and the resulting injuries and losses. In this context, this study evaluates the expected performance of a five-storey RC healthcare facility, representative of common practice in Albania, designed according to older codes. The design was based on the code requirements used in this region during the mid-1980s. Nonlinear static and dynamic time history analyses were conducted on the structural model using the Zeus NL computer program. The dynamic time history analysis was conducted with a set of ground motions from real earthquakes. The building responses were estimated at the global level, and FEMA 356 criteria were used to predict the seismic performance of the building. Structural response measures such as the capacity curve and the inter-storey drift under the set of ground motions were compared with the pushover analysis results, and a detailed seismic performance assessment was carried out. The main aim of this study is to illustrate the application of, and a methodology for, the earthquake performance assessment of existing buildings. The seismic performance of the structural model varied significantly under different ground motions. The results indicate that the case-study building exhibits inadequate seismic performance under different seismic excitations. The reasons for the poor performance of the building are also discussed.
Simultaneous multi-component seismic denoising and reconstruction via K-SVD
NASA Astrophysics Data System (ADS)
Hou, Sian; Zhang, Feng; Li, Xiangyang; Zhao, Qiang; Dai, Hengchang
2018-06-01
Data denoising and reconstruction play an increasingly significant role in seismic prospecting for their value in enhancing effective signals, dealing with surface obstacles and reducing acquisition costs. In this paper, we propose a novel method to denoise and reconstruct multicomponent seismic data simultaneously. The method lies within the framework of machine learning, and its key points are the definition of a suitable weight function and of a modified inner product operator. The purposes of these two elements are to learn from data with missing traces when the random noise deviation is unknown, and to build a mathematical relationship between the components so that all the information in the multi-component data is incorporated. Two examples, using synthetic and real multicomponent data, demonstrate that the new method is a feasible alternative for multi-component seismic data processing.
Continuous Seismic Threshold Monitoring
1992-05-31
Continuous threshold monitoring is a technique for using a seismic network to monitor a geographical area continuously in time. The method provides...area. Two approaches are presented. Site-specific monitoring: By focusing a seismic network on a specific target site, continuous threshold monitoring...recorded events at the site. We define the threshold trace for the network as the continuous time trace of computed upper magnitude limits of seismic
U.S. Seismic Design Maps Web Application
NASA Astrophysics Data System (ADS)
Martinez, E.; Fee, J.
2015-12-01
The application computes earthquake ground motion design parameters compatible with the International Building Code and other seismic design provisions. It is the primary means by which design engineers across the country obtain ground motion parameters for multiple building codes when designing new buildings and other structures. Users specify the design code of interest, the location, and other parameters to obtain the necessary ground motion information, consisting of a high-level executive summary as well as detailed information including maps, data, and graphs. Results are formatted so that they can be included directly in a final engineering report. In addition to single-site analysis, the application supports a batch mode for the simultaneous consideration of multiple locations. Finally, an application programming interface (API) allows other developers to integrate this application's results into larger applications for additional processing. Development of the application has proceeded in an iterative manner, working with engineers through email, meetings, and workshops. Each iteration provided new features, improved performance, and usability enhancements. This development approach has made the application integral to the structural design process, and it is now used to produce over 1,800 reports daily. Recent efforts have turned it into a data-driven, mobile-first, responsive web application. Development is ongoing, and the source code has recently been published to the open-source community on GitHub. Open-sourcing the code facilitates the incorporation of user feedback and the addition of new features, ensuring the application's continued success.
NASA Astrophysics Data System (ADS)
Huerta, F. V.; Granados, I.; Aguirre, J.; Carrera, R. Á.
2017-12-01
Nowadays, in the hydrocarbon industry, there is a need to optimize and reduce exploration costs in different types of reservoirs, motivating the specialized community to search for and develop alternative geophysical exploration methods. This study shows the reflection response obtained from a shale gas/oil deposit through the method of seismic interferometry of ambient vibrations, in combination with Wavelet analysis and conventional seismic reflection techniques (CMP and NMO). The method generates seismic responses from virtual sources through the cross-correlation of records of Ambient Seismic Vibrations (ASV) collected at different receivers. The seismic response obtained is interpreted as the response that would be measured at one of the receivers if a virtual source were placed at the other. The acquisition of ASV records was performed in northern Mexico using semi-rectangular arrays of multi-component geophones with a 10 Hz instrument response. The in-line distance between geophones was 40 m, the cross-line distance was 280 m, the sampling interval was 2 ms, and the total duration of the records was 6 hours. The results show the reflection response of two lines in the in-line direction and two in the cross-line direction, for which the continuity of coherent events has been identified and interpreted as reflectors. We are confident that the identified events correspond to reflections because the time-frequency analysis performed with the Wavelet Transform allowed us to identify the frequency band containing body waves. On the other hand, the CMP and NMO techniques allowed us to emphasize and correct the reflection response obtained during the correlation process in the frequency band of interest. The seismic interferometry processing of the ASV records, in combination with the Wavelet analysis and conventional seismic reflection techniques, yielded encouraging results: it was possible to recover the seismic response for each analysed source-receiver pair and thus to obtain the reflection response of each seismic line.
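The key operation in the method described above is the cross-correlation of long ambient-vibration records between receiver pairs. A bare-bones sketch is shown below (Python): it cross-correlates two synthetic noise traces that share a delayed arrival, stacks over windows, and recovers the inter-receiver delay. Real processing adds band-pass filtering, spectral whitening and much longer stacks.

```python
import numpy as np

fs = 500.0                                   # Hz
nwin, win_len = 200, 4096                    # number of windows, samples per window
true_lag = 40                                # travel time between receivers (samples)
rng = np.random.default_rng(3)

stack = np.zeros(2 * win_len - 1)
for _ in range(nwin):
    source = rng.standard_normal(win_len + true_lag)
    # Receiver A records the noise field first; B records it true_lag samples later.
    rec_a = source[true_lag:true_lag + win_len] + 0.5 * rng.standard_normal(win_len)
    rec_b = source[:win_len] + 0.5 * rng.standard_normal(win_len)
    # Cross-correlate one window pair and stack; A acts as a virtual source for B.
    stack += np.correlate(rec_b, rec_a, mode="full")

lags = np.arange(-win_len + 1, win_len)
best_lag = lags[np.argmax(stack)]
print(f"recovered delay: {best_lag} samples = {best_lag / fs * 1e3:.1f} ms "
      f"(true {true_lag})")
```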
Stockton, S.L.; Balch, Alfred H.
1978-01-01
The Salt Valley anticline, in the Paradox Basin of southeastern Utah, is under investigation for use as a location for storage of solid nuclear waste. Delineation of thin, nonsalt interbeds within the upper reaches of the salt body is extremely important because the nature and character of any such fluid- or gas-saturated horizons would be critical to the mode of emplacement of wastes into the structure. Analysis of 50 km of conventional seismic-reflection data, in the vicinity of the anticline, indicates that mapping of thin beds at shallow depths may well be possible using a specially designed adaptation of state-of-the-art seismic oil-exploration procedures. Computer ray-trace modeling of thin beds in salt reveals that the frequency and spatial resolution required to map the details of interbeds at shallow depths (less than 750 m) may be on the order of 500 Hz, with surface-spread lengths of less than 350 m. Consideration should be given to the burial of sources and receivers in order to attenuate surface noise and to record the desired high frequencies. Correlation of the seismic-reflection data with available well data and surface geology reveals the complex, structurally initiated diapir, whose upward flow was maintained by rapid contemporaneous deposition of continental clastic sediments on its flanks. Severe collapse faulting near the crests of these structures has distorted the seismic response. Evidence exists, however, that intrasalt thin beds of anhydrite, dolomite, and black shale are mappable on seismic record sections either as short, discontinuous reflected events or as amplitude anomalies that result from focusing of the reflected seismic energy by the thin beds; computer modeling of the folded interbeds confirms both of these as possible causes of seismic response from within the salt diapir. Prediction of the seismic signatures of the interbeds can be made from computer-model studies. Petroleum seismic-reflection data are unsatisfactory for mapping the thin beds because of the lack of sufficient resolution to provide direct evidence of the presence of the thin beds. However, indirect evidence, present in these data as discontinuous seismic events, suggests that two geophysical techniques designed for this specific problem would allow direct detection of the interbeds in salt. These techniques are vertical seismic profiling and shallow, short-offset, high-frequency, seismic-reflection recording.
Visualization of volumetric seismic data
NASA Astrophysics Data System (ADS)
Spickermann, Dela; Böttinger, Michael; Ashfaq Ahmed, Khawar; Gajewski, Dirk
2015-04-01
Mostly driven by the demands of high-quality subsurface imaging, highly specialized tools and methods have been developed to support the processing, visualization and interpretation of seismic data. 3D seismic data acquisition and 4D time-lapse seismic monitoring are well-established techniques in academia and industry, producing large amounts of data to be processed, visualized and interpreted. In this context, interactive 3D visualization methods have proved valuable for the analysis of 3D seismic data cubes - especially for sedimentary environments with continuous horizons. In crystalline and hard rock environments, where hydraulic stimulation techniques may be applied to produce geothermal energy, interpretation of the seismic data is a more challenging problem. Instead of continuous reflection horizons, the imaging targets are often steeply dipping faults, causing many diffractions. Without further preprocessing these geological structures are often hidden behind the noise in the data. In this PICO presentation we will present a workflow consisting of data processing steps, which enhance the signal-to-noise ratio, followed by a visualization step based on the use of the commercially available general-purpose 3D visualization system Avizo. Specifically, we have used Avizo Earth, an extension to Avizo, which supports the import of seismic data in SEG-Y format and offers easy access to state-of-the-art 3D visualization methods at interactive frame rates, even for large seismic data cubes. In seismic interpretation using visualization, interactivity is a key requirement for understanding complex 3D structures. In order to enable easy communication of the insights gained during the interactive visualization process, animations of the visualized data were created to support the spatial understanding of the data.
INVESTIGATION OF SEISMIC PERFORMANCE AND DESIGN OF TYPICAL CURVED AND SKEWED BRIDGES IN COLORADO
DOT National Transportation Integrated Search
2018-01-15
This report summarizes the analytical studies on the seismic performance of typical Colorado concrete bridges, particularly those with curved and skewed configurations. A set of bridge models with different geometric configurations derived from a pro...
Continuous micro-earthquake catalogue of the central Southern Alps, New Zealand
NASA Astrophysics Data System (ADS)
Michailos, Konstantinos; Townend, John; Savage, Martha; Chamberlain, Calum
2017-04-01
The Alpine Fault is one of the most prominent tectonic features in the South Island, New Zealand, and is inferred, based on paleoseismological evidence, to be late in its seismic cycle of M 8 earthquakes. Despite this, the Alpine Fault displays low levels of contemporary seismic activity, with little documented on-fault seismicity. This low-magnitude seismicity, often below the completeness level of the GeoNet national seismic catalogue, may inform us of changes in fault character along strike and might be used for rupture simulations and hazard planning. Thus, compiling a micro-earthquake catalogue for the Southern Alps prior to an expected major earthquake is of great interest. Areas of low seismic activity, like the central part of the Alpine Fault, require data recorded over a long duration to reveal temporal and spatial seismicity patterns and to provide a better understanding of the processes controlling seismogenesis. The continuity and density of the Southern Alps Microearthquake Borehole Array (SAMBA; deployed in late 2008) allow us to study seismicity in the Southern Alps over a more extended time period than has ever been done previously. Furthermore, by using data from other temporary networks (e.g. WIZARD, ALFA08, DFDP-10) we are able to extend the region covered. To generate a spatially and temporally continuous catalogue of seismicity in New Zealand's central Southern Alps, we used automatic detection and phase-picking methods for both P- and S-wave arrivals (kPick; Rawles and Thurber, 2015). Using almost 8 years of seismic data we calculated about 9,000 preliminary earthquake locations. The seismicity is both clustered and scattered, and a previously observed seismic gap between the Wanganui and Whataroa rivers is also identified.
NASA Astrophysics Data System (ADS)
Floriane, Provost; Jean-Philippe, Malet; Cécile, Doubre; Julien, Gance; Alessia, Maggi; Agnès, Helmstetter
2015-04-01
Characterizing the micro-seismic activity of landslides is important for a better understanding of the physical processes controlling landslide behaviour. However, locating seismic sources on landslides is a challenging task, mostly because of (a) the geometry of the recording system, (b) the lack of clear P-wave arrivals and clear wave differentiation, and (c) the heterogeneous velocities of the ground. The objective of this work is therefore to test whether the integration of a 3D velocity model into probabilistic seismic source location codes improves the quality of the locations, especially in depth. We studied the clay-rich landslide of Super-Sauze (French Alps). Most of the seismic events (rockfalls, slidequakes, tremors...) are generated in the upper part of the landslide near the main scarp. The seismic recording system is composed of two antennas, each with four vertical seismometers, located on the east and west sides of the seismically active part of the landslide. A refraction seismic campaign was conducted in August 2014 and a 3D P-wave model was estimated using the Quasi-Newton tomography inversion algorithm. The shots of the seismic campaign are used as calibration shots to test the performance of the different location methods and to further update the 3D velocity model. Natural seismic events are detected with a semi-automatic technique using a frequency threshold. The first arrivals are picked using a kurtosis-based method and compared to manual picks. Several location methods were finally tested: we compared a non-linear probabilistic method coupled with the 3D P-wave model against a beam-forming method inverted for an apparent velocity. We found that the Quasi-Newton tomography inversion algorithm provides results coherent with the original underlying topography. The velocity ranges from 500 m.s-1 at the surface to 3000 m.s-1 in the bedrock. For the majority of the calibration shots, the use of a 3D velocity model significantly improves the results of the location procedure using P-wave arrivals. All the shots were made 50 centimeters below the surface, so the vertical error could not be determined from the seismic campaign. We further discriminate between the rockfalls and the slidequakes occurring on the landslide using the depths computed with the 3D velocity model; this could be an additional criterion for automatically classifying the events.
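The kurtosis-based picking mentioned above exploits the fact that the onset of an impulsive arrival makes the amplitude distribution in a sliding window strongly non-Gaussian. A minimal sketch is given below (Python); the trace is synthetic and the pick is taken at the steepest rise of the kurtosis characteristic function, whereas published pickers add smoothing and multi-band logic.

```python
import numpy as np
from scipy.stats import kurtosis

fs = 250.0
rng = np.random.default_rng(11)

# Synthetic trace: pre-event noise followed by an impulsive arrival at 4 s.
n = int(8 * fs)
trace = 0.1 * rng.standard_normal(n)
onset = int(4 * fs)
t_after = np.arange(n - onset) / fs
trace[onset:] += np.exp(-3 * t_after) * np.sin(2 * np.pi * 12 * t_after)

def kurtosis_cf(x, win_samples):
    """Characteristic function: kurtosis of a trailing sliding window."""
    cf = np.zeros(x.size)
    for i in range(win_samples, x.size):
        cf[i] = kurtosis(x[i - win_samples:i])
    return cf

cf = kurtosis_cf(trace, win_samples=int(0.5 * fs))
pick = int(np.argmax(np.diff(cf)))        # steepest rise of the kurtosis
print(f"picked onset at {pick / fs:.2f} s (true 4.00 s)")
```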
Signal Quality and the Reliability of Seismic Observations
NASA Astrophysics Data System (ADS)
Zeiler, C. P.; Velasco, A. A.; Pingitore, N. E.
2009-12-01
The ability to detect, time and measure seismic phases depends on the location, size, and quality of the recorded signals. Additional constraints are an analyst's familiarity with a seismogenic zone and with the seismic stations that record the energy. An analyst's ability to detect, time and measure seismic signals has not been fully quantified or qualified. The fundamental measurement for computing the accuracy of a seismic measurement is the signal quality. Several methods have been proposed to measure signal quality; the signal-to-noise ratio (SNR), computed as a short-term average over a long-term average, is the one most widely adopted. While the standard SNR is an easy and computationally inexpensive measure, its overall statistical significance has not been established for seismic measurement analysis. The prospect of canonizing the process of cataloging seismic arrivals hinges on the ability to repeat measurements made by different methods and analysts. A first step in canonizing phase measurements has been taken by the IASPEI, which established a reference for accepted practices in naming seismic phases. The New Manual for Seismological Observatory Practices (NMSOP, 2002) outlines key observations for seismic phases recorded at different distances and proposes to quantify timing uncertainty with a user-specified windowing technique. However, this added measurement would not completely remove the bias introduced by the different techniques analysts use to time seismic arrivals. The general guideline for timing a seismic arrival is to record the time where a noted change in frequency and/or amplitude begins. This is generally achieved by enhancing the arrivals through filtering or beam forming. However, these enhancements can alter the characteristics of the arrival and how the arrival will be measured. Furthermore, each enhancement has user-specified parameters that can vary between analysts, which reduces the ability to repeat measurements between analysts. The SPEAR project (Zeiler and Velasco, 2009) has started to explore the effects of comparing measurements from the same seismograms. Initial results showed that experience and signal quality are the leading contributors to pick differences. However, the traditional SNR measure of signal quality was replaced by a Wide-band Spectral Ratio (WSR) because it showed less scatter. This observation raises an important question: what is the best way to measure signal quality? We compare various methods that have been proposed to measure signal quality (the traditional SNR, the WSR, power spectral density plots, and the Allan variance) and discuss which provides the best tool for comparing arrival-time uncertainty.
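The two signal-quality measures discussed above can be sketched simply (Python below): the classical SNR as a short-term average over a long-term average around the arrival, and a wide-band spectral ratio formed from the signal and pre-arrival noise spectra. The window lengths and the synthetic trace are illustrative, and the exact WSR definition in the cited work may differ.

```python
import numpy as np

fs = 100.0
rng = np.random.default_rng(5)

# Synthetic trace: noise with an arrival at 30 s.
n = int(60 * fs)
trace = rng.standard_normal(n)
arr = int(30 * fs)
t_sig = np.arange(int(5 * fs)) / fs
trace[arr:arr + t_sig.size] += 6.0 * np.exp(-0.8 * t_sig) * np.sin(2 * np.pi * 3 * t_sig)

def sta_lta_snr(x, i_arr, fs, sta_s=1.0, lta_s=20.0):
    """Classical SNR: short-term average after the pick over long-term average before."""
    sta = np.mean(np.abs(x[i_arr:i_arr + int(sta_s * fs)]))
    lta = np.mean(np.abs(x[i_arr - int(lta_s * fs):i_arr]))
    return sta / lta

def wideband_spectral_ratio(x, i_arr, fs, win_s=10.0):
    """Ratio of signal to pre-arrival noise amplitude spectra, averaged over a wide band."""
    w = int(win_s * fs)
    sig = np.abs(np.fft.rfft(x[i_arr:i_arr + w]))
    noi = np.abs(np.fft.rfft(x[i_arr - w:i_arr]))
    return float(np.mean(sig / (noi + 1e-12)))

print(f"STA/LTA SNR: {sta_lta_snr(trace, arr, fs):.1f}")
print(f"Wide-band spectral ratio: {wideband_spectral_ratio(trace, arr, fs):.1f}")
```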
Neo-Deterministic and Probabilistic Seismic Hazard Assessments: a Comparative Analysis
NASA Astrophysics Data System (ADS)
Peresan, Antonella; Magrin, Andrea; Nekrasova, Anastasia; Kossobokov, Vladimir; Panza, Giuliano F.
2016-04-01
Objective testing is the key issue for any reliable seismic hazard assessment (SHA). Different earthquake hazard maps must demonstrate their capability in anticipating ground shaking from future strong earthquakes before they can be used appropriately for different purposes, such as engineering design, insurance, and emergency management. Quantitative assessment of map performance is also an essential step in the scientific process of their revision and possible improvement. Cross-checking of probabilistic models with available observations and with independent physics-based models is recognized as a major validation procedure. The existing maps from classical probabilistic seismic hazard analysis (PSHA), as well as those from the neo-deterministic analysis (NDSHA) already developed for several regions worldwide (including Italy, India and North Africa), are considered to exemplify the possibilities of cross-comparative analysis in spotting the limits and advantages of the different methods. Where the data permit, a comparative analysis against the documented seismic activity observed in reality is carried out, showing how available observations about past earthquakes can contribute to assessing the performance of the different methods. Neo-deterministic refers to a scenario-based approach, which allows for the consideration of a wide range of possible earthquake sources as the starting point for scenarios constructed via full waveform modelling. The method does not make use of empirical attenuation models (i.e. Ground Motion Prediction Equations, GMPEs) and naturally supplies realistic time series of ground shaking (i.e. complete synthetic seismograms) readily applicable to complete engineering analysis and other mitigation actions. The standard NDSHA maps provide reliable envelope estimates of maximum seismic ground motion from a wide set of possible scenario earthquakes, including the largest deterministically or historically defined credible earthquake. In addition, the flexibility of NDSHA allows the generation of ground shaking maps at specified long-term return times, which permits a straightforward comparison between NDSHA and PSHA maps in terms of average rates of exceedance for specified time windows. The comparison of NDSHA and PSHA maps, particularly for very long recurrence times, may indicate to what extent probabilistic ground shaking estimates are consistent with those from physical models of seismic wave propagation. A systematic comparison over the territory of Italy is carried out exploiting the uniqueness of the Italian earthquake catalogue, a data set covering more than a millennium (a time interval about ten times longer than that available in most regions worldwide) with a satisfactory completeness level for M>5, which warrants the results of the analysis. By analysing in some detail the seismicity of the Vrancea region, we show that well-constrained macroseismic field information for individual earthquakes may provide useful information about the reliability of ground shaking estimates. Finally, in order to generalise the observations, the comparative analysis is extended to further regions where both standard NDSHA and PSHA maps are available (e.g. the State of Gujarat, India). The final Global Seismic Hazard Assessment Program (GSHAP) results and the most recent version of the Seismic Hazard Harmonization in Europe (SHARE) project maps, along with other national-scale probabilistic maps, all obtained by PSHA, are considered for this comparative analysis.
Virtual Seismic Observation (VSO) with Sparsity-Promotion Inversion
NASA Astrophysics Data System (ADS)
Tiezhao, B.; Ning, J.; Jianwei, M.
2017-12-01
Large station intervals lead to low-resolution images and can sometimes prevent imaging of the regions of interest. Sparsity-promotion inversion, a useful method to recover missing data in industrial field acquisition, can be borrowed to interpolate seismic data at non-sampled sites, forming Virtual Seismic Observations (VSOs). Traditional sparsity-promotion inversion suffers when arrival times differ strongly between adjacent sites, which is the case of most concern to us; we use a shift method to improve it. The interpolation proceeds as follows: we first employ a low-pass filter to obtain long-wavelength waveform data and shift the waveforms of the same wave in different seismograms to nearly the same arrival time. Then we use wavelet-transform-based sparsity-promotion inversion to interpolate waveform data at non-sampled sites, filling in a phase for each missing trace. Finally, we shift back the waveforms to their original arrival times. We call our method FSIS (Filtering, Shift, Interpolation, Shift) interpolation. In this way, we can insert different virtually observed seismic phases at non-sampled sites and obtain dense seismic observation data. To test our method, we randomly hide the real data at a site and use the remaining sites to interpolate the observation at that site, using either direct interpolation or the FSIS method. Compared with directly interpolated data, data interpolated with FSIS preserve amplitudes better. Results also show that the arrival times and waveforms of the VSOs reproduce the real data well, which convinces us that our method of forming VSOs is applicable. In this way, we can provide the data needed by advanced seismic techniques such as RTM to illuminate shallow structures.
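The following is a minimal numerical sketch of the FSIS idea. It assumes a gather in which the missing traces are zero-filled, that picked arrival times for the target phase are already available at all trace positions, and it substitutes Fourier-domain POCS-style thresholding for the wavelet-transform-based sparsity promotion of the paper; the function names and the filter cut-off are illustrative.

```python
import numpy as np
from scipy.signal import butter, filtfilt

def fsis_interpolate(gather, known, arrivals, dt, cutoff=0.1, n_iter=100):
    """Filtering-Shift-Interpolation-Shift sketch.
    gather   : (n_traces, n_samples) array with zeros in the missing traces
    known    : boolean array marking traces that were actually recorded
    arrivals : picked arrival time (s) of the target phase at every position
    cutoff   : normalized low-pass cutoff used to keep long wavelengths
    """
    # 1) Filtering: keep the long-wavelength part of each trace.
    b, a = butter(4, cutoff)
    low = filtfilt(b, a, gather, axis=1)

    # 2) Shift: move the target phase to a common reference time.
    shifts = np.rint((arrivals - arrivals[known].mean()) / dt).astype(int)
    aligned = np.array([np.roll(tr, -s) for tr, s in zip(low, shifts)])

    # 3) Interpolation: POCS-style iterative thresholding in the 2-D FFT
    #    domain (a stand-in for wavelet-domain sparsity promotion).
    data = aligned.copy()
    amax = np.abs(np.fft.fft2(aligned)).max()
    for it in range(n_iter):
        spec = np.fft.fft2(data)
        tau = amax * (1.0 - (it + 1) / n_iter)   # linearly decreasing threshold
        spec[np.abs(spec) < tau] = 0.0
        data = np.real(np.fft.ifft2(spec))
        data[known] = aligned[known]             # re-impose the recorded traces

    # 4) Shift back: restore the original arrival times.
    return np.array([np.roll(tr, s) for tr, s in zip(data, shifts)])
```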
NASA Astrophysics Data System (ADS)
Fortin, Will F. J.
The utility and meaning of a geophysical dataset are dependent on good interpretation informed by high-quality data, processing, and attribute examination via technical methodologies. Active-source marine seismic reflection data contain a great deal of information in the location, phase, and amplitude of both pre- and post-stack seismic reflections. Using pre- and post-stack data, this work has extracted useful information from marine reflection seismic data in novel ways in both the oceanic water column and the sub-seafloor geology. In chapter 1 we develop a new method for estimating oceanic turbulence from a seismic image. This method is tested on synthetic seismic data to show its ability to accurately recover both the distribution and the levels of turbulent diffusivity. Then we apply the method to real data offshore Costa Rica where we observe lee waves. Our results find elevated diffusivities near the seafloor as well as above the lee waves, five times greater than in surrounding waters and 50 times greater than open-ocean diffusivities. Chapter 2 investigates subsurface geology in the Cascadia Subduction Zone and outlines a workflow for using pre-stack waveform inversion to produce highly detailed velocity models and seismic images. Using a newly developed inversion code, we achieve better imaging results as compared to the product of a standard, user-intensive method for building a velocity model. Our results image the subduction interface ~30 km farther landward than previous work and better image faults and sedimentary structures above the oceanic plate as well as in the accretionary prism. The resultant velocity model is highly detailed, inverted every 6.25 m with ~20 m vertical resolution, and will be used to examine the role of fluids in the subduction system. These results help us to better understand the natural hazard risks associated with the Cascadia Subduction Zone. Chapter 3 returns to seismic oceanography and examines the dynamics of nonlinear internal wave pulses in the South China Sea. Coupling observations from the seismic images with turbulent patterns, we find no evidence for hydraulic jumps in the Luzon passage. Our data suggest that geometric resonance may be the underlying physics behind the large-amplitude nonlinear internal wave pulses seen in the region. We find increased levels of turbulent diffusivity in deep water below 1000 m, associated with internal tide pulses, and near the steep slopes of both the Heng-Chun and Lan-Yu ridges.
A Hammer-Impact, Aluminum, Shear-Wave Seismic Source
Haines, Seth
2007-01-01
Near-surface seismic surveys often employ hammer impacts to create seismic energy. Shear-wave surveys using horizontally polarized waves require horizontal hammer impacts against a rigid object (the source) that is coupled to the ground surface. I have designed, built, and tested a source made out of aluminum and equipped with spikes to improve coupling. The source is effective in a variety of settings, and it is relatively simple and inexpensive to build.
NASA Astrophysics Data System (ADS)
Hibert, Clement; Stumpf, André; Provost, Floriane; Malet, Jean-Philippe
2017-04-01
In the past decades, the increasing quality of seismic sensors and the capability to transfer large quantities of data remotely have led to a fast densification of local, regional and global seismic networks for near-real-time monitoring of crustal and surface processes. This technological advance permits the use of seismology to document geological and natural/anthropogenic processes (volcanoes, ice-calving, landslides, snow and rock avalanches, geothermal fields), but has also led to an ever-growing quantity of seismic data. This wealth of seismic data makes the construction of complete seismicity catalogs, which include earthquakes but also other sources of seismic waves, more challenging and very time-consuming, as this critical pre-processing stage is classically done by human operators and because hundreds of thousands of seismic signals have to be processed. To overcome this issue, the development of automatic methods for the processing of continuous seismic data appears to be a necessity. The classification algorithm should satisfy the need for a method that is robust, precise and versatile enough to be deployed to monitor seismicity in very different contexts. In this study, we evaluate the ability of machine learning algorithms, namely Random Forest and Deep Neural Network classifiers, to analyse seismic sources at the Piton de la Fournaise volcano. We gather a catalog of more than 20,000 events, belonging to 8 classes of seismic sources. We define 60 attributes, based on the waveform, the frequency content and the polarization of the seismic waves, to parameterize the recorded seismic signals. We show that both algorithms provide similar positive classification rates, exceeding 90% of the events. When trained with a sufficient number of events, the rate of positive identification can reach 99%. These very high rates of positive identification open the prospect of an operational implementation of these algorithms for near-real-time monitoring of mass movements and other environmental sources at the local, regional and even global scale.
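A minimal sketch of the Random Forest branch of such a workflow, using scikit-learn. The attribute matrix and class labels below are random placeholders standing in for a real catalogue of waveform, spectral and polarization attributes, and the hyperparameters are illustrative rather than those used in the study.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import classification_report

# X: one row per event, with waveform, spectral and polarization attributes
# (the paper uses 60 such attributes); y: one of the 8 source classes.
rng = np.random.default_rng(0)
X = rng.normal(size=(2000, 60))
y = rng.integers(0, 8, size=2000)

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.25, random_state=0)

clf = RandomForestClassifier(n_estimators=500, n_jobs=-1, random_state=0)
clf.fit(X_train, y_train)
print(classification_report(y_test, clf.predict(X_test)))
```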
Estimation of bedrock depth using the horizontal‐to‐vertical (H/V) ambient‐noise seismic method
Lane, John W.; White, Eric A.; Steele, Gregory V.; Cannia, James C.
2008-01-01
Estimating sediment thickness and the geometry of the bedrock surface is a key component of many hydrogeologic studies. The horizontal‐to‐vertical (H/V) ambient‐noise seismic method is a novel, non‐invasive technique that can be used to rapidly estimate the depth to bedrock. The H/V method uses a single, broad‐band three‐component seismometer to record ambient seismic noise. The ratio of the averaged horizontal‐to‐vertical frequency spectrum is used to determine the fundamental site resonance frequency, which can be interpreted using regression equations to estimate sediment thickness and depth to bedrock. The U.S. Geological Survey used the H/V seismic method during fall 2007 at 11 sites in Cape Cod, Massachusetts, and 13 sites in eastern Nebraska. In Cape Cod, H/V measurements were acquired along a 60‐kilometer (km) transect between Chatham and Provincetown, where glacial sediments overlie metamorphic rock. In Nebraska, H/V measurements were acquired along approximately 11‐ and 14‐km transects near Firth and Oakland, respectively, where glacial sediments overlie weathered sedimentary rock. The ambient‐noise seismic data from Cape Cod produced clear, easily identified resonance frequency peaks. The interpreted depth and geometry of the bedrock surface correlate well with boring data and previously published seismic refraction surveys. Conversely, the ambient‐noise seismic data from eastern Nebraska produced subtle resonance frequency peaks, and correlation of the interpreted bedrock surface with bedrock depths from borings is poor, which may indicate a low acoustic impedance contrast between the weathered sedimentary rock and overlying sediments and/or the effect of wind noise on the seismic records. Our results indicate the H/V ambient‐noise seismic method can be used effectively to estimate the depth to rock where there is a significant acoustic impedance contrast between the sediments and underlying rock. However, effective use of the method is challenging in the presence of gradational contacts such as gradational weathering or cementation. Further work is needed to optimize interpretation of resonance frequencies in the presence of extreme wind noise. In addition, local estimates of bedrock depth likely could be improved through development of regional or study‐area‐specific regression equations relating resonance frequency to bedrock depth.
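A minimal sketch, assuming evenly sampled three-component noise records, of how an averaged H/V spectral ratio can be formed; the segment length, windowing and the power-law depth regression named in the comments are generic choices, not the exact processing used in this survey.

```python
import numpy as np

def hv_curve(north, east, vert, fs, nseg=4096):
    """Averaged horizontal-to-vertical spectral ratio from three-component
    ambient-noise records (simple segment averaging with a Hann taper)."""
    def avg_spectrum(x):
        segs = [x[i:i + nseg] for i in range(0, len(x) - nseg + 1, nseg)]
        win = np.hanning(nseg)
        return np.mean([np.abs(np.fft.rfft(s * win)) for s in segs], axis=0)

    h = np.sqrt(avg_spectrum(north) * avg_spectrum(east))   # combined horizontals
    v = avg_spectrum(vert) + 1e-20
    freqs = np.fft.rfftfreq(nseg, d=1.0 / fs)
    return freqs, h / v

# The fundamental resonance frequency f0 is the peak of the H/V curve
# (ignoring the zero-frequency bin); a calibrated regression of the form
# depth = a * f0**b then converts f0 into an estimated depth to bedrock.
```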
Ivanov, Julian M.; Johnson, Carole D.; Lane, John W.; Miller, Richard D.; Clemens, Drew
2009-01-01
A limited seismic investigation of Ball Mountain Dam, an earthen dam near Jamaica, Vermont, was conducted using multiple seismic methods including multi‐channel analysis of surface waves (MASW), refraction tomography, and vertical seismic profiling (VSP). The refraction and MASW data were efficiently collected in one survey using a towed land streamer containing vertical‐displacement geophones and two seismic sources, a 9‐kg hammer at the beginning of the spread and a 40‐kg accelerated weight drop one spread length from the geophones, to obtain near‐ and far‐offset data sets. The quality of the seismic data for the purposes of both refraction and MASW analyses was good for near offsets, decreasing in quality at farther offsets, thus limiting the depth of investigation to about 12 m. Refraction tomography and MASW analyses provided 2D compressional (Vp) and shear‐wave (Vs) velocity sections along the dam crest and access road, which are consistent with the corresponding VSP seismic velocity estimates from nearby wells. The velocity sections helped identify zonal variations in both Vp and Vs (rigidity) properties, indicative of material heterogeneity or dynamic processes (e.g. differential settlement) at specific areas of the dam. The results indicate that refraction tomography and MASW methods are tools with significant potential for economical, non‐invasive characterization of construction materials at earthen dam sites.
Method for using global optimization to the estimation of surface-consistent residual statics
Reister, David B.; Barhen, Jacob; Oblow, Edward M.
2001-01-01
An efficient method for generating residual statics corrections to compensate for surface-consistent static time shifts in stacked seismic traces. The method includes a step of framing the residual static corrections as a global optimization problem in a parameter space. The method also includes decoupling the global optimization problem involving all seismic traces into several one-dimensional problems. The method further utilizes a Stochastic Pijavskij Tunneling search to eliminate regions in the parameter space where a global minimum is unlikely to exist so that the global minimum may be quickly discovered. The method finds the residual statics corrections by maximizing the total stack power. The stack power is a measure of seismic energy transferred from energy sources to receivers.
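A minimal sketch of the stack-power objective and its decoupling into one-trace-at-a-time searches, assuming an NMO-corrected gather and integer-sample shifts. The exhaustive 1-D search here is a simple stand-in for the Stochastic Pijavskij Tunneling search of the patented method, and all names and window sizes are illustrative.

```python
import numpy as np

def stack_power(traces, statics):
    """Total stack power after applying per-trace static shifts (in samples).
    traces : (n_traces, n_samples) NMO-corrected gather
    statics: integer shift per trace
    """
    shifted = np.array([np.roll(tr, -s) for tr, s in zip(traces, statics)])
    stack = shifted.sum(axis=0)
    return float(np.sum(stack ** 2))

def coordinate_search(traces, max_shift=10, n_sweeps=3):
    """Greedy one-trace-at-a-time maximization of stack power.
    This mirrors the decoupling into one-dimensional problems, but uses a
    simple exhaustive 1-D search instead of Stochastic Pijavskij Tunneling."""
    statics = np.zeros(len(traces), dtype=int)
    for _ in range(n_sweeps):
        for i in range(len(traces)):
            candidates = range(-max_shift, max_shift + 1)
            statics[i] = max(candidates,
                             key=lambda s: stack_power(traces, np.r_[statics[:i], s, statics[i+1:]]))
    return statics
```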
Reflection imaging of the Moon's interior using deep-moonquake seismic interferometry
NASA Astrophysics Data System (ADS)
Nishitsuji, Yohei; Rowe, C. A.; Wapenaar, Kees; Draganov, Deyan
2016-04-01
The internal structure of the Moon has been investigated over many years using a variety of seismic methods, such as travel time analysis, receiver functions, and tomography. Here we propose to apply body-wave seismic interferometry to deep moonquakes in order to retrieve zero-offset reflection responses (and thus images) beneath the Apollo stations on the nearside of the Moon from virtual sources colocated with the stations. This method is called deep-moonquake seismic interferometry (DMSI). Our results show a laterally coherent acoustic boundary around 50 km depth beneath all four Apollo stations. We interpret this boundary as the lunar seismic Moho. This depth agrees with Japan Aerospace Exploration Agency's (JAXA) SELenological and Engineering Explorer (SELENE) result and previous travel time analysis at the Apollo 12/14 sites. The deeper part of the image we obtain from DMSI shows laterally incoherent structures. Such lateral inhomogeneity we interpret as representing a zone characterized by strong scattering and constant apparent seismic velocity at our resolution scale (0.2-2.0 Hz).
Seismic, satellite, and site observations of internal solitary waves in the NE South China Sea
Tang, Qunshu; Wang, Caixia; Wang, Dongxiao; Pawlowicz, Rich
2014-01-01
Internal solitary waves (ISWs) in the NE South China Sea (SCS) are tidally generated at the Luzon Strait. Their propagation, evolution, and dissipation processes involve numerous issues that are still poorly understood. Here, a novel method of seismic oceanography capable of capturing oceanic finescale structures is used to study ISWs in the slope region of the NE SCS. Near-simultaneous observations of two ISWs were acquired using seismic and satellite imaging, and water column measurements. The vertical and horizontal length scales of the seismically observed ISWs are around 50 m and 1–2 km, respectively. Wave phase speeds calculated from seismic observations, satellite images, and water column data are consistent with each other. Observed waveforms and vertical velocities also correspond well with those estimated using KdV theory. These results suggest that the seismic method, a new option for oceanographers, can be further applied to resolve other important issues related to ISWs. PMID:24948180
NASA Astrophysics Data System (ADS)
Lv, Dongwei; Zhang, Jian; Yu, Xinhai
2018-05-01
In this paper, a fluid-structure interaction dynamic simulation method for a spring-loaded pressure relief valve was established. The dynamic performance of the fluid regions and the stress and strain of the structure regions were calculated at the same time by accurately setting up the contact pairs between the solid parts and the coupling surfaces between the fluid regions and the structure regions. A two-way fluid-structure interaction dynamic simulation of a simplified pressure relief valve model was carried out. The influence of vertical sinusoidal seismic waves on the performance of the pressure relief valve was preliminarily investigated by loading sine waves. Under vertical seismic waves, the pressure relief valve flutters, and the reseating pressure is affected by the amplitude and frequency of the seismic waves. This simulation method for the pressure relief valve under vertical seismic waves can provide an effective means for investigating the seismic performance of such valves, and makes up for the shortcomings of experiments.
NASA Astrophysics Data System (ADS)
mouloud, Hamidatou
2016-04-01
The objective of this paper is to analyze the seismic activity and the statistical treatment of the seismicity catalogue of the Constantine region between 1357 and 2014, comprising 7007 seismic events. Our research is a contribution to improving seismic risk management by evaluating the seismic hazard in north-east Algeria. In the present study, earthquake hazard maps for the Constantine region are calculated. Probabilistic seismic hazard analysis (PSHA) is classically performed through the Cornell approach by using a uniform earthquake distribution over the source area and a given magnitude range. This study aims at extending the PSHA approach to the case of a characteristic earthquake scenario associated with an active fault. The approach integrates PSHA with a high-frequency deterministic technique for the prediction of peak and spectral ground motion parameters in a characteristic earthquake. The method is based on the site-dependent evaluation of the probability of exceedance for the chosen strong-motion parameter. We propose five seismotectonic zones. Five steps are necessary: (i) identification of potential sources of future earthquakes, (ii) assessment of their geological, geophysical and geometric characteristics, (iii) identification of the attenuation pattern of seismic motion, (iv) calculation of the hazard at a site, and finally (v) hazard mapping for the region. In this study, the earthquake hazard evaluation procedure developed by Kijko and Sellevoll (1992) is used to estimate seismic hazard parameters in the northern part of Algeria.
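A minimal sketch of the classical Cornell-type PSHA integration for a single area source, assuming a Gutenberg-Richter recurrence model, a uniform distance distribution and a toy ground-motion model with lognormal scatter; all numerical values are illustrative and are not those of the Constantine study.

```python
import numpy as np
from scipy.stats import norm

# Minimal Cornell-type PSHA for a single area source (illustrative numbers).
mags = np.arange(5.0, 7.6, 0.1)                 # magnitude bins
a_val, b_val = 3.5, 1.0                          # Gutenberg-Richter parameters (assumed)
rates = 10 ** (a_val - b_val * mags)             # annual rate of events >= M
bin_rates = rates[:-1] - rates[1:]               # rate per magnitude bin
dists = np.linspace(5.0, 150.0, 30)              # source-to-site distances (km)
dist_w = np.full(dists.size, 1.0 / dists.size)   # uniform distribution over the source

def gmpe_ln_pga(m, r):
    """Toy ground-motion model: ln PGA (g) as a function of M and R (assumed)."""
    return -3.5 + 0.9 * m - 1.2 * np.log(r + 10.0)

sigma_ln = 0.6
pga_levels = np.logspace(-2, 0, 50)

annual_rate = np.zeros_like(pga_levels)
for m, lam in zip(0.5 * (mags[:-1] + mags[1:]), bin_rates):
    for r, w in zip(dists, dist_w):
        p_exceed = 1.0 - norm.cdf(np.log(pga_levels), gmpe_ln_pga(m, r), sigma_ln)
        annual_rate += lam * w * p_exceed

# annual_rate vs pga_levels is the hazard curve; e.g. the PGA with a 475-yr
# return period is where annual_rate crosses 1/475.
```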
The SISIFO project: Seismic Safety at High Schools
NASA Astrophysics Data System (ADS)
Peruzza, Laura; Barnaba, Carla; Bragato, Pier Luigi; Dusi, Alberto; Grimaz, Stefano; Malisan, Petra; Saraò, Angela; Mucciarelli, Marco
2014-05-01
For many years, the Italian scientific community has addressed the problem of earthquake risk reduction using innovative educational techniques. Recent earthquakes in Italy and around the world have clearly demonstrated that seismic codes alone are not able to guarantee an effective mitigation of risk. After the tragic events of San Giuliano di Puglia (2002), where an earthquake killed 26 school children, special attention was paid in Italy to the seismic safety of schools, but mainly with respect to structural aspects. Little attention has been devoted to the possible and even significant damage to non-structural elements (collapse of ceilings, tipping of cabinets and shelving, obstruction of escape routes, etc.). Students and teachers trained in these aspects can provide very effective preventive vigilance. Since 2002, the project EDURISK (www.edurisk.it) has proposed educational tools and training programs for schools at primary and middle levels. More recently, a nationwide campaign aimed at adults (www.iononrischio.it) was launched with the extensive support of civil protection volunteers. A gap remained for high schools, and Project SISIFO was designed to fill it, in particular for schools with technical/scientific curricula. SISIFO (https://sites.google.com/site/ogssisifo/) is a multidisciplinary initiative aimed at the diffusion of scientific culture for achieving seismic safety in schools; it is replicable and can be structured as training over the next several years. The students, helped by their teachers and by experts from scientific institutions, followed a specialized training course on earthquake safety. The trial began in North-East Italy, with a combination of hands-on activities for the measurement of earthquakes with low-cost instruments and lectures by experts in various disciplines, accompanied by specifically designed teaching materials, in both paper and digital format. We intend to raise teachers' and students' knowledge of seismic hazard, the seismic response of foundation soils, and building dynamics, in order to stimulate awareness of seismic safety, including seismic hazard, seismic site response, the seismic behaviour of structural and non-structural elements, and functional issues (escape routes, emergency systems, etc.). The awareness of seismic safety in places of study, work and life aims at improving the capacity to recognize safety issues and possible solutions.
Seismic risk perception in Italy
NASA Astrophysics Data System (ADS)
Crescimbene, Massimo; La Longa, Federica; Camassi, Romano; Pino, Nicola Alessandro; Peruzza, Laura
2014-05-01
Risk perception is a fundamental element in the definition and the adoption of preventive counter-measures. In order to develop effective information and risk communication strategies, the perception of risks and the influencing factors should be known. This paper presents the results of a survey on seismic risk perception in Italy conducted from January 2013 to the present. The research design combines a psychometric and a cultural-theoretic approach. More than 7,000 on-line tests have been compiled. The data collected show that in Italy seismic risk is strongly underestimated; 86 out of 100 Italian citizens living in the most dangerous zone (namely Zone 1) do not have a correct perception of seismic hazard. From these observations we conclude that extremely urgent measures are required in Italy to communicate seismic risk effectively. Finally, the research presents a comparison between groups on seismic risk perception: a group involved in campaigns of information and education on seismic risk, and a control group.
Parametric Studies for Scenario Earthquakes: Site Effects and Differential Motion
NASA Astrophysics Data System (ADS)
Panza, G. F.; Romanelli, F.
2001-12-01
In the presence of strong lateral heterogeneities, the generation of local surface waves and local resonance can give rise to a complicated pattern in the spatial ground-shaking scenario. For any object of the built environment with dimensions greater than the characteristic length of the ground motion, different parts of its foundations can experience severe non-synchronous seismic input. In order to perform an accurate estimate of the site effects, and of differential motion, in realistic geometries, it is necessary to make a parametric study that takes into account the complex combination of the source and propagation parameters. The computation of a wide set of time histories and spectral information, corresponding to possible seismotectonic scenarios for different source and structural models, allows the construction of damage scenarios that are out of reach of stochastic models. Synthetic signals, to be used as seismic input in a subsequent engineering analysis, e.g. for the design of earthquake-resistant structures or for the estimation of differential motion, can be produced at a very low cost/benefit ratio. We illustrate the work done in the framework of a large international cooperation following the guidelines of the UNESCO IUGS IGCP Project 414 "Realistic Modeling of Seismic Input for Megacities and Large Urban Areas" and show the very recent numerical experiments carried out within the EC project "Advanced methods for assessing the seismic vulnerability of existing motorway bridges" (VAB) to assess the importance of non-synchronous seismic excitation of long structures. http://www.ictp.trieste.it/www_users/sand/projects.html
A first step to compare geodynamical models and seismic observations of the inner core
NASA Astrophysics Data System (ADS)
Lasbleis, M.; Waszek, L.; Day, E. A.
2016-12-01
Seismic observations have revealed a complex inner core, with lateral and radial heterogeneities at all observable scales. The dominant feature is the east-west hemispherical dichotomy in seismic velocity and attenuation. Several geodynamical models have been proposed to explain the observed structure: convective instabilities, external forces, crystallisation processes or the influence of outer core convection. However, interpreting such geodynamical models in terms of the seismic observations is difficult, and has been performed only for very specific models (Geballe 2013, Lincot 2014, 2016). Here, we propose a common framework to make such comparisons. We have developed a Python code that propagates seismic ray paths through kinematic geodynamical models for the inner core, computing a synthetic seismic data set that can be compared to seismic observations. Following the method of Geballe 2013, we start with the simple model of translation. For this, the seismic velocity is proposed to be a function of the age or initial growth rate of the material (since there is no deformation included in our models); the assumption is reasonable when considering translation, growth and super-rotation of the inner core. Using both artificial (random) seismic ray data sets and a real inner core data set (from Waszek et al. 2011), we compare these different models. Our goal is to determine the model which best matches the seismic observations. Preliminary results show that super-rotation successfully creates an eastward shift in properties with depth, as has been observed seismically. Neither the growth rate of inner core material nor the relationship between crystal size and seismic velocity are well constrained. Consequently, our method does not directly compute the seismic travel times. Instead, we use age, growth rate and other parameters as proxies for the seismic properties, which represents a good first step to compare geodynamical and seismic observations. Ultimately, we aim to release our codes to the broader scientific community, allowing researchers from all disciplines to test their models of inner core growth against seismic observations or to create a kinematic model for the evolution of the inner core which matches new geophysical observations.
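A deliberately crude sketch, assuming a rigid translation of the inner core along one axis and a straight ray chord instead of a curved ray path, of how an age proxy can be averaged along a ray to produce a synthetic datum; the constants, the proxy definition and the final mapping to a velocity anomaly are all illustrative assumptions, not the formulation used by the authors.

```python
import numpy as np

R_IC = 1221.5e3           # inner-core radius (m)
V_TRANS = 1e-9            # translation rate along +x (m/s), purely illustrative

def age_proxy(points):
    """Age of inner-core material under rigid translation along +x:
    material crystallises at x = -R and melts at x = +R, so age grows
    linearly with x (a deliberately crude kinematic proxy)."""
    return (points[:, 0] + R_IC) / V_TRANS

def ray_average_age(entry, exit_, n=200):
    """Average the age proxy along a straight ray segment (chord) through
    the inner core; the chord stands in for the true curved ray path."""
    t = np.linspace(0.0, 1.0, n)[:, None]
    pts = entry[None, :] * (1 - t) + exit_[None, :] * t
    return age_proxy(pts).mean()

# Example: a ray entering on the western side and leaving on the eastern side.
entry = np.array([-0.9 * R_IC, 0.2 * R_IC, 0.0])
exit_ = np.array([0.9 * R_IC, -0.1 * R_IC, 0.0])
mean_age = ray_average_age(entry, exit_)
# A (hypothetical) linear mapping from age to velocity anomaly would then give
# a synthetic datum to compare with observed travel-time residuals.
```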
NASA Astrophysics Data System (ADS)
Hibert, C.; Michéa, D.; Provost, F.; Malet, J. P.; Geertsema, M.
2017-12-01
Detection of landslide occurrences and measurement of their dynamic properties during run-out are a high research priority but a logistical and technical challenge. Seismology has started to help in several important ways. Taking advantage of the densification of global, regional and local networks of broadband seismic stations, recent advances now permit the seismic detection and location of landslides in near-real time. This seismic detection could potentially greatly increase the spatio-temporal resolution at which we study landslide triggering, which is critical to better understand the influence of external forcings such as rainfall and earthquakes. However, automatically detecting seismic signals generated by landslides still represents a challenge, especially for events involving small masses. The low signal-to-noise ratio classically observed for landslide-generated seismic signals and the difficulty of discriminating these signals from those generated by regional earthquakes or by anthropogenic and natural noise are some of the obstacles that have to be circumvented. We present a new method for automatically constructing instrumental landslide catalogues from continuous seismic data. We developed a robust and versatile solution, which can be implemented in any context where seismic detection of landslides or other mass movements is relevant. The method is based on a spectral detection of the seismic signals and the identification of the sources with a Random Forest machine learning algorithm. The spectral detection allows detecting signals with a low signal-to-noise ratio, while the Random Forest algorithm achieves a high rate of positive identification of the seismic signals generated by landslides and other seismic sources. The processing chain is implemented on a High Performance Computing facility, which makes it possible to explore years of continuous seismic data rapidly. We present here the preliminary results of applying this processing chain to years of continuous seismic records from the Alaskan permanent seismic network and the Hi-Climb trans-Himalayan seismic network. The processing chain we developed also opens the possibility of near-real-time seismic detection of landslides, in association with automated remote-sensing detection from Sentinel-2 images, for example.
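A minimal sketch of a spectral detector of the kind described above, assuming a single continuous trace: a transient is flagged when the band-limited spectrogram energy exceeds a robust background level. The frequency band, window length and threshold are illustrative, and the detector is a generic stand-in rather than the authors' exact implementation.

```python
import numpy as np
from scipy.signal import spectrogram

def spectral_detector(trace, fs, fmin=1.0, fmax=10.0, win=5.0, thresh=3.0):
    """Flag time windows whose band-limited spectral energy exceeds a
    robust background level (a stand-in for the paper's spectral detection)."""
    f, t, sxx = spectrogram(trace, fs=fs, nperseg=int(win * fs),
                            noverlap=int(win * fs) // 2)
    band = (f >= fmin) & (f <= fmax)
    energy = sxx[band].sum(axis=0)
    background = np.median(energy)
    mad = np.median(np.abs(energy - background)) + 1e-12
    detections = energy > background + thresh * mad
    return t, detections

# t, hits = spectral_detector(continuous_trace, fs=50.0)
# Each detected window would then be characterised by waveform, spectral and
# polarization attributes and passed to a Random Forest classifier.
```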
Seismic Waves in Rocks with Fluids and Fractures
DOE Office of Scientific and Technical Information (OSTI.GOV)
Berryman, J G
2006-02-06
Seismic wave propagation through the earth is often strongly affected by the presence of fractures. When these fractures are filled with fluids (oil, gas, water, CO2, etc.), the type and state of the fluid (liquid or gas) can make a large difference in the response of the seismic waves. This paper will summarize some early work of the author on methods of deconstructing the effects of fractures, and any fluids within these fractures, on seismic wave propagation as observed in reflection seismic data. Methods to be explored here include Thomsen's anisotropy parameters for wave moveout (since fractures often induce elastic anisotropy), and some very convenient fracture parameters introduced by Sayers and Kachanov that permit a relatively simple deconstruction of the elastic behavior in terms of fracture parameters (whenever this is appropriate).
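As a small illustration of the first ingredient mentioned above, the sketch below evaluates Thomsen's weak-anisotropy parameters for a VTI medium from its elastic stiffnesses; the stiffness values in the example are generic illustrative numbers, not data from the report.

```python
def thomsen_parameters(c11, c33, c44, c66, c13):
    """Thomsen's weak-anisotropy parameters for a VTI medium, from the
    elastic stiffnesses (any consistent units)."""
    epsilon = (c11 - c33) / (2.0 * c33)
    gamma = (c66 - c44) / (2.0 * c44)
    delta = ((c13 + c44) ** 2 - (c33 - c44) ** 2) / (2.0 * c33 * (c33 - c44))
    return epsilon, gamma, delta

# Illustrative stiffnesses (GPa) for a weakly anisotropic rock:
print(thomsen_parameters(c11=40.0, c33=36.0, c44=12.0, c66=14.0, c13=10.0))
```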
NASA Astrophysics Data System (ADS)
Kwak, Sangmin; Song, Seok Goo; Kim, Geunyoung; Cho, Chang Soo; Shin, Jin Soo
2017-10-01
Using recordings of a mine collapse event (Mw 4.2) in South Korea in January 2015, we demonstrated that the phase and amplitude information of impulse response functions (IRFs) can be effectively retrieved using seismic interferometry. This event is equivalent to a single downward force at shallow depth. Using quantitative metrics, we compared three different seismic interferometry techniques—deconvolution, coherency, and cross correlation—to extract the IRFs between two distant stations with ambient seismic noise data. The azimuthal dependency of the source distribution of the ambient noise was also evaluated. We found that deconvolution is the best method for extracting IRFs from ambient seismic noise within the period band of 2-10 s. The coherency method is also effective if appropriate spectral normalization or whitening schemes are applied during the data processing.
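A minimal frequency-domain sketch of the three interferometry estimators compared in the study; the water-level regularization of the deconvolution and the small stabilization constant in the coherency are common practical choices assumed here, and windowing, stacking and band-passing are only indicated in the comments.

```python
import numpy as np

def interferometry_irf(u1, u2, method="deconvolution", eps=0.01):
    """Frequency-domain estimate of the impulse response between two stations
    from simultaneous noise records u1 (virtual source) and u2 (receiver)."""
    U1, U2 = np.fft.rfft(u1), np.fft.rfft(u2)
    if method == "crosscorrelation":
        spec = U2 * np.conj(U1)
    elif method == "deconvolution":
        spec = U2 * np.conj(U1) / (np.abs(U1) ** 2 + eps * np.mean(np.abs(U1) ** 2))
    elif method == "coherency":
        spec = U2 * np.conj(U1) / (np.abs(U1) * np.abs(U2) + 1e-12)
    else:
        raise ValueError(method)
    return np.fft.irfft(spec, n=len(u1))

# In practice the records are split into windows, each window is processed as
# above, and the results are stacked over many windows (and band-passed, e.g.
# to the 2-10 s period band used in the study).
```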
Ray Tracing Methods in Seismic Emission Tomography
NASA Astrophysics Data System (ADS)
Chebotareva, I. Ya.
2018-03-01
Highly efficient approximate ray tracing techniques that can be used in seismic emission tomography and in other methods requiring a large number of ray paths are described. The techniques are applicable to gradient and plane-layered velocity models of the medium and to models with a complicated geometry of contrasting boundaries. Empirical results obtained with the use of the discussed ray tracing technologies and seismic emission tomography, as well as the results of numerical modeling, are presented.
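For the plane-layered case, ray tracing reduces to tracking a constant ray parameter through the layer stack. The sketch below, with illustrative layer thicknesses and velocities, accumulates travel time and horizontal offset for a given ray parameter; it is a textbook construction, not the approximate techniques of the paper.

```python
import numpy as np

def trace_ray_layered(p, thicknesses, velocities):
    """Travel time and horizontal offset of a down-going ray with ray
    parameter p (s/m) through a stack of plane layers (Snell's law keeps
    p constant from layer to layer)."""
    t, x = 0.0, 0.0
    for h, v in zip(thicknesses, velocities):
        sin_i = p * v
        if sin_i >= 1.0:           # ray turns (post-critical) above this layer
            break
        cos_i = np.sqrt(1.0 - sin_i ** 2)
        t += h / (v * cos_i)
        x += h * sin_i / cos_i
    return t, x

# Example: three layers, ray parameter chosen for ~30 degrees in the top layer
thick = [200.0, 300.0, 500.0]          # m
vel = [1500.0, 2000.0, 2500.0]         # m/s
p = np.sin(np.radians(30.0)) / vel[0]
print(trace_ray_layered(p, thick, vel))
```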
NASA Astrophysics Data System (ADS)
Hagerty, M. T.; Lomax, A.; Hellman, S. B.; Whitmore, P.; Weinstein, S.; Hirshorn, B. F.; Knight, W. R.
2015-12-01
Tsunami warning centers must rapidly decide whether an earthquake is likely to generate a destructive tsunami in order to issue a tsunami warning quickly after a large event. For very large events (Mw > 8 or so), magnitude and location alone are sufficient to warrant an alert. However, for events of smaller magnitude (e.g., Mw ~ 7.5), particularly for so-called "tsunami earthquakes", magnitude alone is insufficient to issue an alert, and other measurements must be made rapidly and used to assess tsunamigenic potential. The Tsunami Information technology Modernization (TIM) is a National Oceanic and Atmospheric Administration (NOAA) project to update and standardize the earthquake and tsunami monitoring systems currently employed at the U.S. Tsunami Warning Centers in Ewa Beach, Hawaii (PTWC) and Palmer, Alaska (NTWC). We (ISTI) are responsible for implementing the seismic monitoring components of this new system, including real-time seismic data collection and seismic processing. The seismic data processor includes a variety of methods aimed at real-time discrimination of tsunamigenic events, including: Mwp, Me, slowness (Theta), W-phase, mantle magnitude (Mm), array processing and finite-fault inversion. In addition, it contains the ability to designate earthquake scenarios and play the resulting synthetic seismograms through the processing system. Thus, it is also a convenient tool that integrates research and monitoring and may be used to calibrate and tune the real-time monitoring system. Here we show results of the automated processing system for a large dataset of subduction zone earthquakes containing recent tsunami earthquakes, examine the accuracy of the various discrimination methods, and discuss issues related to their successful real-time application.
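One of the listed discriminants, the slowness parameter Theta, is simply the logarithm of the ratio of radiated energy to seismic moment; anomalously low values point to slow ruptures of the "tsunami earthquake" type. A minimal sketch with purely illustrative numbers:

```python
import numpy as np

def theta_discriminant(radiated_energy_j, seismic_moment_nm):
    """Energy-to-moment ratio parameter Theta = log10(E_R / M_0).
    Anomalously low values (slow, long-duration ruptures) flag potential
    'tsunami earthquakes' relative to ordinary events of the same magnitude."""
    return np.log10(radiated_energy_j / seismic_moment_nm)

# Illustrative numbers only (not from a real event):
print(theta_discriminant(radiated_energy_j=1.0e14, seismic_moment_nm=2.0e20))
```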
A new method to estimate location and slip of simulated rock failure events
NASA Astrophysics Data System (ADS)
Heinze, Thomas; Galvan, Boris; Miller, Stephen Andrew
2015-05-01
At the laboratory scale, identifying and locating acoustic emissions (AEs) is a common method for short-term prediction of failure in geomaterials. Above-average AE activity typically precedes the failure process and is easily measured. At larger scales, an increase in micro-seismic activity sometimes precedes large earthquakes (e.g. Tohoku, L'Aquila, oceanic transforms) and can be used to assess seismic risk. The goal of this work is to develop a methodology and numerical algorithms for extracting a measurable quantity analogous to AE from the solution of the equations governing rock deformation. Since there is no physical property quantifying AE that is derivable from the governing equations, an appropriate rock-mechanical analog needs to be found. In this work, we identify a general behavior of the AE generation process preceding rock failure. This behavior includes arbitrary localization of low-magnitude events during the pre-failure stage, followed by an increase in number and amplitude, and finally localization around the incipient failure plane during macroscopic failure. We propose the deviatoric strain rate as the numerical analog that mimics this behavior, and develop two different algorithms designed to detect rapid increases in deviatoric strain using moving averages. The numerical model solves a fully poro-elasto-plastic continuum model and is coupled to a two-phase flow model. We test our model by comparing simulation results with experimental data from drained compression and fluid injection experiments. We find for both cases that the occurrence and amplitude of our AE analog mimic the observed general behavior of the AE generation process. Our technique can be extended to modeling at the field scale, possibly providing a mechanistic basis for seismic hazard assessment from the seismicity that occasionally precedes large earthquakes.
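A minimal sketch of the general idea, assuming a time series of strain-rate tensors sampled from a simulation: the deviatoric strain-rate invariant is computed at each step and an STA/LTA-style ratio of moving averages flags rapid increases. The window lengths, the detection ratio and the STA/LTA form itself are illustrative stand-ins for the two algorithms developed in the paper.

```python
import numpy as np

def deviatoric_strain_rate(strain_rate_tensor):
    """Second-invariant measure of the deviatoric part of a 3x3 strain-rate tensor."""
    dev = strain_rate_tensor - np.trace(strain_rate_tensor) / 3.0 * np.eye(3)
    return np.sqrt(0.5 * np.sum(dev * dev))

def ae_analog_events(series, short_win=5, long_win=50, ratio=3.0):
    """Flag time steps where a short-term moving average of the deviatoric
    strain rate exceeds the long-term moving average by a given factor
    (an STA/LTA-style proxy for the AE detection described above)."""
    def moving_avg(x, n):
        kernel = np.ones(n) / n
        return np.convolve(x, kernel, mode="same")
    sta = moving_avg(series, short_win)
    lta = moving_avg(series, long_win) + 1e-20
    return np.where(sta / lta > ratio)[0]
```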
Finite element analyses for seismic shear wall international standard problem
DOE Office of Scientific and Technical Information (OSTI.GOV)
Park, Y.J.; Hofmayer, C.H.
Two identical reinforced concrete (RC) shear walls, which consist of web, flanges and massive top and bottom slabs, were tested up to ultimate failure under earthquake motions at the Nuclear Power Engineering Corporation's (NUPEC) Tadotsu Engineering Laboratory, Japan. NUPEC provided the dynamic test results to the OECD (Organization for Economic Cooperation and Development), Nuclear Energy Agency (NEA) for use as an International Standard Problem (ISP). The shear walls were intended to be part of a typical reactor building. One of the major objectives of the Seismic Shear Wall ISP (SSWISP) was to evaluate various seismic analysis methods for concrete structures used for design and seismic margin assessment. It also offered a unique opportunity to assess the state-of-the-art in nonlinear dynamic analysis of reinforced concrete shear wall structures under severe earthquake loadings. As a participant of the SSWISP workshops, Brookhaven National Laboratory (BNL) performed finite element analyses under the sponsorship of the U.S. Nuclear Regulatory Commission (USNRC). Three types of analysis were performed, i.e., monotonic static (push-over), cyclic static and dynamic analyses. Additional monotonic static analyses were performed by two consultants, F. Vecchio of the University of Toronto (UT) and F. Filippou of the University of California at Berkeley (UCB). The analysis results by BNL and the consultants were presented during the second workshop in Yokohama, Japan in 1996. A total of 55 analyses were presented during the workshop by 30 participants from 11 different countries. The major findings on the presented analysis methods, as well as engineering insights regarding the applicability and reliability of the FEM codes, are described in detail in this report. 16 refs., 60 figs., 16 tabs.
Military applications and examples of near-surface seismic surface wave methods (Invited)
NASA Astrophysics Data System (ADS)
sloan, S.; Stevens, R.
2013-12-01
Although not always widely known or publicized, the military uses a variety of geophysical methods for a wide range of applications--some that are already common practice in the industry while others are truly novel. Some of those applications include unexploded ordnance detection, general site characterization, anomaly detection, countering improvised explosive devices (IEDs), and security monitoring, to name a few. Techniques used may include, but are not limited to, ground penetrating radar, seismic, electrical, gravity, and electromagnetic methods. Seismic methods employed include surface wave analysis, refraction tomography, and high-resolution reflection methods. Although the military employs geophysical methods, that does not necessarily mean that those methods enable or support combat operations--often they are used for humanitarian applications within the military's area of operations to support local populations. The work presented here will focus on the applied use of seismic surface wave methods, including multichannel analysis of surface waves (MASW) and backscattered surface waves, often in conjunction with other methods such as refraction tomography or body-wave diffraction analysis. Multiple field examples will be shown, including explosives testing, tunnel detection, pre-construction site characterization, and cavity detection.
Design and Implementation of the National Seismic Monitoring Network in the Kingdom of Bhutan
NASA Astrophysics Data System (ADS)
Ohmi, S.; Inoue, H.; Chophel, J.; Pelgay, P.; Drukpa, D.
2017-12-01
The Bhutan-Himalayan region is located along the collision zone between the Indian and Eurasian plates, one of the most seismically active regions in the world. Recent earthquakes, such as the M7.8 Gorkha, Nepal earthquake of April 25, 2015 and the M6.7 Imphal, India earthquake of January 3, 2016, are examples of earthquakes felt in Bhutan. However, no permanent seismic monitoring system had ever been established in Bhutan, whose territory lies in the center of the Bhutan-Himalayan region. We have started establishing a minimum-requirement permanent seismic monitoring network and an intensity meter network across the nation. The former is composed of six (6) observation stations in Bhutan with short-period weak-motion and strong-motion seismometers as well as three (3) broadband seismometers, and the latter is composed of twenty intensity meters located in every provincial government office. The acquired data are transmitted to the central processing system in the DGM office in Thimphu in real time. In this project, DGM will construct the seismic vaults with its own budget, approved as a World Bank project, while the Japan team assists DGM with the site surveys, the design of the observation vaults, and the design of the data telemetry system, as well as providing instruments for the observation such as seismometers and digitizers. We have already started operating the six (6) weak-motion stations as well as the twenty (20) intensity meter stations. Additionally, RIMES (the Regional Integrated Multi-hazard Early Warning System for Africa and Asia) is providing eight (8) weak-motion stations, and we are maintaining close communication to operate them as a single seismic monitoring network composed of fourteen (14) stations. This network will be utilized not only for seismic disaster mitigation in the country but also for studying the seismotectonics of the Bhutan-Himalayan region, which has not yet been precisely revealed due to the lack of observation data in the past.
Development Length for Headed Bars in Slab-Column Joints of RC Slab Bridges
DOT National Transportation Integrated Search
2015-12-04
In accordance with the Caltrans Seismic Design Criteria, the superstructure in a slab bridge should remain essentially elastic and only the pile extensions/columns are permitted to develop inelastic deformations during a seismic event. Hence, the lon...
Synthetic Seismogram Modeling.
1982-11-15
High lateral resolution exploration using surface waves from noise records
NASA Astrophysics Data System (ADS)
Chávez-García, Francisco José; Yokoi, Toshiaki
2016-04-01
Determination of the shear-wave velocity structure at shallow depths is a constant necessity in engineering or environmental projects. Given the sensitivity of Rayleigh waves to shear-wave velocity, subsoil structure exploration using surface waves is frequently used. Methods such as the spectral analysis of surface waves (SASW) or multi-channel analysis of surface waves (MASW) determine phase velocity dispersion from surface waves generated by an active source recorded on a line of geophones. Using MASW, it is important that the receiver array be as long as possible to increase the precision at low frequencies. However, this implies that possible lateral variations are discarded. Hayashi and Suzuki (2004) proposed a different way of stacking shot gathers to increase lateral resolution. They combined strategies used in MASW with the common mid-point (CMP) summation currently used in reflection seismology. In their common mid-point with cross-correlation method (CMPCC), they cross-correlate traces sharing CMP locations before determining phase velocity dispersion. Another recent approach to subsoil structure exploration is based on seismic interferometry. It has been shown that cross-correlation of a diffuse field, such as seismic noise, allows the estimation of the Green's Function between two receivers. Thus, a virtual-source seismic section may be constructed from the cross-correlation of seismic noise records obtained in a line of receivers. In this paper, we use the seismic interferometry method to process seismic noise records obtained in seismic refraction lines of 24 geophones, and analyse the results using CMPCC to increase the lateral resolution of the results. Cross-correlation of the noise records allows reconstructing seismic sections with virtual sources at each receiver location. The Rayleigh wave component of the Green's Functions is obtained with a high signal-to-noise ratio. Using CMPCC analysis of the virtual-source seismic lines, we are able to identify lateral variations of phase velocity inside the seismic line, and increase the lateral resolution compared with results of conventional analysis.
Stephenson, William J.; Shedlock, Kaye M.; Odum, Jack K.
1995-01-01
In the winter of 1811-12, three of the largest historic earthquakes in the United States occurred near New Madrid, Missouri. Seismicity continues to the present day throughout a tightly clustered pattern of epicenters centered on the bootheel of Missouri, including parts of northeastern Arkansas, northwestern Tennessee, western Kentucky, and southern Illinois. In 1990, the New Madrid seismic zone/Central United States became the first seismically active region east of the Rocky Mountains to be designated a priority research area within the National Earthquake Hazards Reduction Program (NEHRP). This Professional Paper is a collection of papers, some published separately, presenting results of the newly intensified research program in this area. Major components of this research program include tectonic framework studies, seismicity and deformation monitoring and modeling, improved seismic hazard and risk assessments, and cooperative hazard mitigation studies.
Seismic imaging of post-glacial sediments - test study before Spitsbergen expedition
NASA Astrophysics Data System (ADS)
Szalas, Joanna; Grzyb, Jaroslaw; Majdanski, Mariusz
2017-04-01
This work presents results of the analysis of reflection seismic data acquired at a test area in central Poland. For this experiment we used a total of 147 vertical-component seismic stations (DATA-CUBE and Reftek "Texan") with an accelerated weight drop (PEG-40). The profile was 350 metres long. It is part of a pilot study for a future research project on Spitsbergen. The purpose of the study is to recognise the characteristics of the seismic response of post-glacial sediments in order to design the most adequate survey acquisition parameters and processing sequence for data from Spitsbergen. Multiple tests and comparisons have been performed to obtain the best possible quality of the seismic image. In this research we examine the influence of receiver interval size, front-mute application and surface wave attenuation attempts. Although seismic imaging is the main technique, we plan to support this analysis with additional data from traveltime tomography, MASW and other a priori information.
Unsupervised seismic facies analysis with spatial constraints using regularized fuzzy c-means
NASA Astrophysics Data System (ADS)
Song, Chengyun; Liu, Zhining; Cai, Hanpeng; Wang, Yaojun; Li, Xingming; Hu, Guangmin
2017-12-01
Seismic facies analysis techniques combine classification algorithms and seismic attributes to generate a map that describes the main reservoir heterogeneities. However, most current classification algorithms view the seismic attributes as isolated data regardless of their spatial locations, and the resulting map is generally sensitive to noise. In this paper, a regularized fuzzy c-means (RegFCM) algorithm is used for unsupervised seismic facies analysis. Due to the regularization term of the RegFCM algorithm, data points whose adjacent locations belong to the same class play a more important role in the iterative process than other data. Therefore, this method can reduce the effect of seismic data noise present in discontinuous regions. Synthetic data with different signal-to-noise ratios are used to demonstrate the noise tolerance of the RegFCM algorithm. Meanwhile, the fuzzy factor, the neighbourhood window size and the regularization weight are tested with various values, to provide guidance on how to set these parameters. The new approach is also applied to a real seismic data set from the F3 block of the Netherlands. The results show improved spatial continuity, with clear facies boundaries and channel morphology, which reveals that the method is an effective seismic facies analysis tool.
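A minimal sketch of a spatially constrained fuzzy c-means clustering of an attribute map. It follows a common "spatial FCM" formulation in which memberships are smoothed over a 3x3 neighbourhood before the centre update, used here only as a stand-in for the paper's RegFCM; the exponents, window size and iteration count are illustrative.

```python
import numpy as np
from scipy.ndimage import uniform_filter

def spatial_fcm(attr, n_clusters=4, m=2.0, p=1.0, q=1.0, n_iter=50, seed=0):
    """Fuzzy c-means with a spatial membership term for seismic facies mapping.
    attr: (n_rows, n_cols, n_attr) seismic attribute slice."""
    nr, nc, na = attr.shape
    X = attr.reshape(-1, na)
    rng = np.random.default_rng(seed)
    centers = X[rng.choice(len(X), n_clusters, replace=False)]
    for _ in range(n_iter):
        # standard FCM memberships from squared distances to the centres
        d2 = ((X[None, :, :] - centers[:, None, :]) ** 2).sum(-1) + 1e-12
        u = d2 ** (-1.0 / (m - 1.0))
        u /= u.sum(axis=0, keepdims=True)
        # spatial term: average membership of each class over a 3x3 window
        h = np.stack([uniform_filter(ui.reshape(nr, nc), size=3).ravel() for ui in u])
        u = (u ** p) * (h ** q)
        u /= u.sum(axis=0, keepdims=True)
        # centre update with fuzzified memberships
        um = u ** m
        centers = (um @ X) / um.sum(axis=1, keepdims=True)
    return u.argmax(axis=0).reshape(nr, nc), centers
```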
Swept Impact Seismic Technique (SIST)
Park, C.B.; Miller, R.D.; Steeples, D.W.; Black, R.A.
1996-01-01
A coded seismic technique is developed that can result in a higher signal-to-noise ratio than a conventional single-pulse method does. The technique is cost-effective and time-efficient and therefore well suited for shallow-reflection surveys where high resolution and cost-effectiveness are critical. A low-power impact source transmits a few to several hundred high-frequency broad-band seismic pulses during several seconds of recording time according to a deterministic coding scheme. The coding scheme consists of a time-encoded impact sequence in which the rate of impact (cycles/s) changes linearly with time providing a broad range of impact rates. Impact times used during the decoding process are recorded on one channel of the seismograph. The coding concept combines the vibroseis swept-frequency and the Mini-Sosie random impact concepts. The swept-frequency concept greatly improves the suppression of correlation noise with much fewer impacts than normally used in the Mini-Sosie technique. The impact concept makes the technique simple and efficient in generating high-resolution seismic data especially in the presence of noise. The transfer function of the impact sequence simulates a low-cut filter with the cutoff frequency the same as the lowest impact rate. This property can be used to attenuate low-frequency ground-roll noise without using an analog low-cut filter or a spatial source (or receiver) array as is necessary with a conventional single-pulse method. Because of the discontinuous coding scheme, the decoding process is accomplished by a "shift-and-stacking" method that is much simpler and quicker than cross-correlation. The simplicity of the coding allows the mechanical design of the source to remain simple. Several different types of mechanical systems could be adapted to generate a linear impact sweep. In addition, the simplicity of the coding also allows the technique to be used with conventional acquisition systems, with only minor modifications.
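A minimal sketch of the shift-and-stack decoding step, assuming the impact times recorded on the auxiliary seismograph channel have already been converted to seconds; trace lengths and normalization are illustrative choices.

```python
import numpy as np

def shift_and_stack_decode(record, impact_times, dt, out_len):
    """Decode a Swept Impact Seismic Technique record by shifting the
    continuous record to each recorded impact time and stacking, which
    collapses the coded impact sequence into an equivalent single-impulse
    seismogram."""
    out = np.zeros(out_len)
    for t0 in impact_times:
        start = int(round(t0 / dt))
        seg = record[start:start + out_len]
        out[:len(seg)] += seg
    return out / len(impact_times)

# impact_times would come from the seismograph channel that records the
# hammer-switch closures during the several-second sweep.
```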
Calibration of Seismic Attributes for Reservoir Characterization
DOE Office of Scientific and Technical Information (OSTI.GOV)
Pennington, Wayne D.; Acevedo, Horacio; Green, Aaron
2002-01-29
This project has completed the initially scheduled third year of the contract, and is beginning a fourth year designed to expand upon the tech transfer aspects of the project. From the Stratton data set, we demonstrated that apparent correlations between attributes derived along 'phantom' horizons are artifacts of isopach changes; only if the interpreter understands that the interpretation is based on this correlation with bed thickening or thinning can reliable interpretations of channel horizons and facies be made. From the Boonsville data set, we developed techniques to use conventional seismic attributes, including seismic facies generated under various neural network procedures, to subdivide regional facies determined from logs into productive and non-productive subfacies, and developed a method involving cross-correlation of seismic waveforms to provide a reliable map of the various facies present in the area. The Teal South data set provided a surprising set of data, leading us to develop a pressure-dependent velocity relationship and to conclude that nearby reservoirs are undergoing a pressure drop in response to the production of the main reservoir, implying that oil is being lost through their spill points, never to be produced. The Wamsutter data set led to the use of unconventional attributes, including lateral incoherence and horizon-dependent impedance variations to indicate regions of former sand bars and current high pressure, respectively, and to the evaluation of various upscaling routines.
Optimizing Seismic Monitoring Networks for EGS and Conventional Geothermal Projects
NASA Astrophysics Data System (ADS)
Kraft, Toni; Herrmann, Marcus; Bethmann, Falko; Stefan, Wiemer
2013-04-01
In the past several years, geological energy technologies have received growing attention and have been initiated in or close to urban areas. Some of these technologies involve injecting fluids into the subsurface (e.g., oil and gas development, waste disposal, and geothermal energy development) and have been found or suspected to cause small to moderate-sized earthquakes. These earthquakes, which may have gone unnoticed in the past when they occurred in remote, sparsely populated areas, are now posing a considerable risk for the public acceptance of these technologies in urban areas. The permanent termination of the EGS project in Basel, Switzerland after a number of induced ML~3 (minor) earthquakes in 2006 is one prominent example. It is therefore essential for the future development and success of these geological energy technologies to develop strategies for managing induced seismicity and keeping the size of induced earthquakes at a level that is acceptable to all stakeholders. Most guidelines and recommendations on induced seismicity published since the 1970s conclude that an indispensable component of such a strategy is the establishment of seismic monitoring at an early stage of a project. This is because appropriate seismic monitoring is the only way to detect and locate induced microearthquakes with sufficient certainty to develop an understanding of the seismic and geomechanical response of the reservoir to the geotechnical operation. In addition, seismic monitoring lays the foundation for the establishment of advanced traffic light systems and is therefore an important confidence-building measure towards the local population and authorities. We have developed an optimization algorithm for seismic monitoring networks in urban areas that allows the design and evaluation of seismic network geometries for arbitrary geotechnical operation layouts. The algorithm is based on the D-optimal experimental design, which aims to minimize the error ellipsoid of the linearized location problem. Optimization for additional criteria (e.g., focal mechanism determination or installation costs) can be included. We consider a 3D seismic velocity model, a European ambient seismic noise model derived from high-resolution land-use data, and existing seismic stations in the vicinity of the geotechnical site. Additionally, we account for the attenuation of the seismic signal with travel time and the variation of ambient seismic noise with depth, so as to deal correctly with borehole station networks. Using this algorithm we are able to find the optimal geometry and size of the seismic monitoring network that meets the predefined application-oriented performance criteria. This talk will focus on optimal network geometries for deep geothermal projects of the EGS and hydrothermal type, and discuss the requirements for basic seismic surveillance and for high-resolution reservoir monitoring and characterization.
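A minimal sketch of the D-optimal idea: for a trial hypocentre, choose the subset of candidate sites that maximises det(A^T A) of the linearized location problem, i.e. minimises the volume of the error ellipsoid. It assumes a homogeneous velocity model and ignores the noise model, attenuation and cost criteria that the full algorithm accounts for; names and numbers are illustrative.

```python
import numpy as np
from itertools import combinations

def jacobian(stations, hypo, v=3500.0):
    """Partial derivatives of homogeneous-medium travel times with respect
    to the hypocentre coordinates (x, y, z) and origin time."""
    d = stations - hypo
    r = np.linalg.norm(d, axis=1, keepdims=True)
    return np.hstack([-d / (v * r), np.ones((len(stations), 1))])

def d_optimal_subset(candidates, hypo, n_select=6):
    """Pick the n_select candidate sites that maximise det(A^T A) for the
    linearized location problem (brute force over subsets; a greedy
    selection scales better for large candidate sets)."""
    best, best_det = None, -np.inf
    for subset in combinations(range(len(candidates)), n_select):
        A = jacobian(candidates[list(subset)], hypo)
        det = np.linalg.det(A.T @ A)
        if det > best_det:
            best, best_det = subset, det
    return best

# Candidate surface sites on a grid above a reservoir at 4 km depth:
xs, ys = np.meshgrid(np.linspace(-3000, 3000, 4), np.linspace(-3000, 3000, 4))
cand = np.column_stack([xs.ravel(), ys.ravel(), np.zeros(xs.size)])
print(d_optimal_subset(cand, hypo=np.array([0.0, 0.0, 4000.0])))
```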
NASA Astrophysics Data System (ADS)
Sherman, Christopher Scott
Naturally occurring geologic heterogeneity is an important, but often overlooked, aspect of seismic wave propagation. This dissertation presents a strategy for modeling the effects of heterogeneity using a combination of geostatistics and Finite Difference simulation. In the first chapter, I discuss my motivations for studying geologic heterogeneity and seismic wave propagation. Models based upon fractal statistics are powerful tools in geophysics for modeling heterogeneity. The important features of these fractal models are illustrated using borehole log data from an oil well and geomorphological observations from a site in Death Valley, California. A large part of the computational work presented in this dissertation was completed using the Finite Difference Code E3D. I discuss the Python-based user interface for E3D and the computational strategies for working with heterogeneous models developed over the course of this research. The second chapter explores a phenomenon observed for wave propagation in heterogeneous media - the generation of unexpected shear wave phases in the near-source region. In spite of their popularity amongst seismic researchers, approximate methods for modeling wave propagation in these media, such as the Born and Rytov methods or Radiative Transfer Theory, are incapable of explaining these shear waves. This is primarily due to these methods' assumptions regarding the coupling of near-source terms with the heterogeneities and mode conversion. To determine the source of these shear waves, I generate a suite of 3D synthetic heterogeneous fractal geologic models and use E3D to simulate the wave propagation for a vertical point force on the surface of the models. I also present a methodology for calculating the effective source radiation patterns from the models. The numerical results show that, due to a combination of mode conversion and coupling with near-source heterogeneity, shear wave energy on the order of 10% of the compressional wave energy may be generated within the shear radiation node of the source. Interestingly, in some cases this shear wave may arise as a coherent pulse, which may be used to improve seismic imaging efforts. In the third and fourth chapters, I discuss the results of a numerical analysis and field study of seismic near-surface tunnel detection methods. Detecting unknown tunnels and voids, such as old mine workings or solution cavities in karst terrain, is a challenging problem in geophysics and has implications for geotechnical design, public safety, and domestic security. Over the years, a number of different geophysical methods have been developed to locate these objects (microgravity, resistivity, seismic diffraction, etc.), each with varying results. One of the major challenges facing these methods is understanding the influence of geologic heterogeneity on their results, which makes this problem a natural extension of the modeling work discussed in previous chapters. In the third chapter, I present the results of a numerical study of surface-wave based tunnel detection methods. The results of this analysis show that these methods are capable of detecting a void buried within one wavelength of the surface, with size potentially much less than one wavelength. In addition, seismic surface-wave based detection methods are effective in media with moderate heterogeneity (epsilon < 5%), and in fact, this heterogeneity may serve to increase the resolution of these methods.
In the fourth chapter, I discuss the results of a field study of tunnel detection methods at a site within the Black Diamond Mines Regional Preserve, near Antioch, California. I use a combination of surface wave backscattering, 1D surface wave attenuation, and 2D attenuation tomography to locate and determine the condition of two tunnels at this site. These results complement the numerical study in chapter 3 and highlight the usefulness of these methods for detecting tunnels at other sites.
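For illustration, the kind of fractal (power-law spectrum) heterogeneity model described above can be generated with an FFT-based spectral synthesis; the function name, grid sizes, and spectral exponent below are illustrative assumptions and do not reproduce the dissertation's E3D model-building workflow.

import numpy as np

def fractal_field_2d(nx, nz, dx, beta=3.0, epsilon=0.05, seed=0):
    """Generate a 2-D random field with a power-law (fractal) spectrum,
    P(k) ~ k**(-beta), scaled to a fractional perturbation level epsilon."""
    rng = np.random.default_rng(seed)
    noise = rng.standard_normal((nz, nx))

    kx = np.fft.fftfreq(nx, d=dx)
    kz = np.fft.fftfreq(nz, d=dx)
    kzz, kxx = np.meshgrid(kz, kx, indexing="ij")
    k = np.sqrt(kxx ** 2 + kzz ** 2)
    k[0, 0] = k[k > 0].min()              # avoid division by zero at the DC term

    spectrum = np.fft.fft2(noise) * k ** (-beta / 2.0)
    field = np.real(np.fft.ifft2(spectrum))
    field -= field.mean()
    field *= epsilon / field.std()        # scale to the desired standard deviation
    return field

# Example: 5% velocity perturbations superimposed on a 3000 m/s background
dv = fractal_field_2d(nx=400, nz=200, dx=10.0, beta=3.0, epsilon=0.05)
vp = 3000.0 * (1.0 + dv)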
Adding seismic broadband analysis to characterize Andean backarc seismicity in Argentina
NASA Astrophysics Data System (ADS)
Alvarado, P.; Giuliano, A.; Beck, S.; Zandt, G.
2007-05-01
Characterization of the highly seismically active Andean backarc is crucial for assessment of earthquake hazards in western Argentina. Moderate-to-large crustal earthquakes have caused several deaths, damage and drastic economic consequences in Argentinean history. We have studied the Andean backarc crust between 30°S and 36°S using broadband seismic data available from a previous IRIS-PASSCAL deployment (the CHARGE experiment). We collected more than 12 terabytes of continuous seismic data from 22 broadband instruments deployed across Chile and Argentina during 1.5 years. Using free software, we modeled full regional broadband waveforms and obtained seismic moment tensor inversions of crustal earthquakes, testing for the best focal depth for each event. We also mapped differences in the Andean backarc crustal structure and found a clear correlation between different types of crustal seismicity (i.e. focal depths, focal mechanisms, magnitudes and frequencies of occurrence) and previously mapped terrane boundaries. We now plan to use the same methodology to study other regions in Argentina using near-real-time broadband data available from the national seismic (INPRES) network and global seismic networks operating in the region. We will re-design the national seismic network to optimize short-period and broadband seismic station coverage for different network purposes. This work is an international effort that involves researchers and students from universities and national government agencies with the goal of providing more information about earthquake hazards in western Argentina.
Evaluation of the site effect with Heuristic Methods
NASA Astrophysics Data System (ADS)
Torres, N. N.; Ortiz-Aleman, C.
2017-12-01
The seismic site response in an area depends mainly on the local geological and topographical conditions. Estimation of variations in ground motion can contribute significantly to seismic hazard assessment and thus help reduce human and economic losses. Site response estimation can be posed as a parameterized inversion problem that allows separating source and path effects. The generalized inversion (Field and Jacob, 1995) is one of the alternative methods to estimate the local seismic response, and it involves solving a strongly non-linear multiparametric problem. In this work, local seismic response was estimated using global optimization methods (Genetic Algorithms and Simulated Annealing), which allowed us to increase the range of explored solutions in a nonlinear search, as compared to other conventional linear methods. Using velocity records from the VEOX network collected from August 2007 to March 2009, we estimate the source, path and site parameters corresponding to the S-wave amplitude spectra of the records. The parameters resulting from this simultaneous inversion approach show excellent agreement, not only in terms of the fit between observed and calculated spectra, but also when compared to previous work by several authors.
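As an illustration of the simulated-annealing search described above, the following sketch minimizes a spectral misfit with a Metropolis acceptance rule and geometric cooling. The forward model here (a Brune-type source spectrum with a simple t*/Q attenuation term) and all parameter names are placeholder assumptions, not the authors' generalized-inversion parameterization.

import numpy as np

def misfit(params, freqs, observed, t_travel=1.0):
    """Placeholder forward model: Brune-type source spectrum times a simple
    attenuation term; not the generalized-inversion parameterization."""
    omega0, fc, q = params
    predicted = omega0 / (1.0 + (freqs / fc) ** 2) * np.exp(-np.pi * freqs * t_travel / q)
    return np.sum((np.log(observed) - np.log(predicted)) ** 2)

def simulated_annealing(freqs, observed, x0, bounds, n_iter=5000, t0=1.0, seed=0):
    """Metropolis-style simulated annealing with a geometric cooling schedule."""
    rng = np.random.default_rng(seed)
    bounds = np.asarray(bounds, float)
    x = np.array(x0, float)
    fx = misfit(x, freqs, observed)
    best, fbest = x.copy(), fx
    for i in range(n_iter):
        temp = t0 * 0.999 ** i                              # cooling schedule
        step = 0.05 * (bounds[:, 1] - bounds[:, 0])
        cand = np.clip(x + rng.normal(scale=step), bounds[:, 0], bounds[:, 1])
        fcand = misfit(cand, freqs, observed)
        if fcand < fx or rng.random() < np.exp((fx - fcand) / temp):
            x, fx = cand, fcand                             # accept the move
            if fx < fbest:
                best, fbest = x.copy(), fx
    return best, fbest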
Application of the Radon-FCL approach to seismic random noise suppression and signal preservation
NASA Astrophysics Data System (ADS)
Meng, Fanlei; Li, Yue; Liu, Yanping; Tian, Yanan; Wu, Ning
2016-08-01
The fractal conservation law (FCL) is a linear partial differential equation that is modified by an anti-diffusive term of lower order. Analysis indicates that this algorithm can eliminate high frequencies while preserving or amplifying low and medium frequencies. Thus, the method is well suited to simultaneous noise suppression and enhancement or preservation of seismic signals. However, the conventional FCL filters seismic data only along the time direction, thereby ignoring the spatial coherence between neighbouring traces, which leads to the loss of directional information. Therefore, we extend the conventional FCL into the time-space domain and propose a Radon-FCL approach. In this article we apply a Radon transform to implement the FCL method; performing FCL filtering in the Radon domain achieves a higher level of noise attenuation. Using this method, seismic reflection events can be recovered with the sacrifice of fewer frequency components while attenuating more random noise than conventional FCL filtering. Experiments using both synthetic and common shot point data demonstrate the advantages of the Radon-FCL approach over the conventional FCL method with regard to both random noise attenuation and seismic signal preservation.
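A generic way to move filtering into the Radon domain, as described above, is to slant-stack the gather, filter the tau-p panel, and map back with the adjoint operator. The sketch below implements only a plain linear Radon pair, not the FCL filter itself, and is an illustrative assumption about the workflow rather than the authors' implementation.

import numpy as np

def slant_stack(data, dt, offsets, slownesses):
    """Forward linear Radon (tau-p) transform of a gather by slant stacking.
    data: (n_t, n_x); returns a (n_t, n_p) Radon panel."""
    n_t = data.shape[0]
    t = np.arange(n_t) * dt
    panel = np.zeros((n_t, len(slownesses)))
    for ip, p in enumerate(slownesses):
        for ix, x in enumerate(offsets):
            panel[:, ip] += np.interp(t + p * x, t, data[:, ix], left=0.0, right=0.0)
    return panel

def adjoint_slant_stack(panel, dt, offsets, slownesses, n_t):
    """Map a (filtered) Radon panel back to the t-x domain
    (crudely scaled adjoint, adequate for illustration)."""
    t = np.arange(n_t) * dt
    gather = np.zeros((n_t, len(offsets)))
    for ix, x in enumerate(offsets):
        for ip, p in enumerate(slownesses):
            gather[:, ix] += np.interp(t - p * x, t, panel[:, ip], left=0.0, right=0.0)
    return gather / len(slownesses)

Any one-dimensional filter, for example the FCL filter of the paper, can then be applied to each slowness trace of the panel before the adjoint transform.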
Network Optimization for Induced Seismicity Monitoring in Urban Areas
NASA Astrophysics Data System (ADS)
Kraft, T.; Husen, S.; Wiemer, S.
2012-12-01
With the global challenge to satisfy an increasing demand for energy, geological energy technologies receive growing attention and have been initiated in or close to urban areas in the past several years. Some of these technologies involve injecting fluids into the subsurface (e.g., oil and gas development, waste disposal, and geothermal energy development) and have been found or suspected to cause small- to moderate-sized earthquakes. These earthquakes, which may have gone unnoticed in the past when they occurred in remote, sparsely populated areas, now pose a considerable risk to the public acceptance of these technologies in urban areas. The permanent termination of the EGS project in Basel, Switzerland after a number of induced ML~3 (minor) earthquakes in 2006 is one prominent example. It is therefore essential to the future development and success of these geological energy technologies to develop strategies for managing induced seismicity and keeping the size of induced earthquakes at a level that is acceptable to all stakeholders. Most guidelines and recommendations on induced seismicity published since the 1970s conclude that an indispensable component of such a strategy is the establishment of seismic monitoring at an early stage of a project. This is because appropriate seismic monitoring is the only way to detect and locate induced microearthquakes with sufficient certainty to develop an understanding of the seismic and geomechanical response of the reservoir to the geotechnical operation. In addition, seismic monitoring lays the foundation for the establishment of advanced traffic-light systems and is therefore an important confidence-building measure towards the local population and authorities. We have developed an optimization algorithm for seismic monitoring networks in urban areas that allows us to design and evaluate seismic network geometries for arbitrary geotechnical operation layouts. The algorithm is based on D-optimal experimental design, which aims to minimize the error ellipsoid of the linearized location problem. Optimization for additional criteria (e.g., focal mechanism determination or installation costs) can be included. We consider a 3D seismic velocity model, a European ambient seismic noise model derived from high-resolution land-use data, and existing seismic stations in the vicinity of the geotechnical site. Using this algorithm we are able to find the optimal geometry and size of the seismic monitoring network that meets predefined, application-oriented performance criteria. In this talk we will focus on optimal network geometries for deep geothermal projects of the EGS and hydrothermal type. We will discuss the requirements for basic seismic surveillance and for high-resolution reservoir monitoring and characterization.
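The D-optimal design criterion mentioned above can be illustrated with a toy version that scores candidate networks by the determinant of the Fisher information matrix of the linearized location problem. The homogeneous velocity, straight rays, and exhaustive subset search below are simplifying assumptions; the published algorithm uses a 3-D velocity model, an ambient noise model, and additional criteria.

import numpy as np
from itertools import combinations

V_P = 5000.0  # assumed homogeneous P velocity (m/s); the real algorithm uses a 3-D model

def location_jacobian(stations, source):
    """Derivatives of P arrival time w.r.t. source (x, y, z, t0),
    assuming straight rays in a homogeneous medium."""
    d = stations - source                                   # (n_sta, 3)
    r = np.linalg.norm(d, axis=1, keepdims=True)
    return np.hstack([-d / (r * V_P), np.ones((len(stations), 1))])

def d_optimality(stations, source):
    """D-optimality score: det(G^T G), inversely related to the volume of the
    location error ellipsoid. Needs at least 4 stations to be non-singular."""
    G = location_jacobian(stations, source)
    return np.linalg.det(G.T @ G)

def best_network(candidates, source, n_select):
    """Exhaustive search over candidate station subsets (small problems only).
    candidates: (n, 3) array of x, y, z coordinates in metres."""
    best, best_score = None, -np.inf
    for idx in combinations(range(len(candidates)), n_select):
        score = d_optimality(candidates[list(idx)], source)
        if score > best_score:
            best, best_score = idx, score
    return best, best_score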
NASA Astrophysics Data System (ADS)
Heincke, B.; Moorkamp, M.; Jegen, M.; Hobbs, R. W.
2012-12-01
Imaging of sub-basalt sediments with reflection seismic techniques is limited by absorption, scattering and transmission effects and by the presence of peg-leg multiples. Although many of the difficulties facing conventional seismic profiles can be overcome by recording long-offset data, the resolution of sub-basalt sediments in seismic sections typically remains largely restricted. Therefore, multi-parametric approaches in general, and joint inversion strategies in particular (e.g. Colombo et al., 2008; Jordan et al., 2012), are considered as alternatives to gain additional information from sub-basalt structures. Here, we combine first-arrival time tomography, FTG gravity and MT data in a 3-D joint inversion to identify the base of the basalt and resolve potential sediments underneath. For sub-basalt exploration the three methods complement each other such that the null space is reduced and significantly better resolved models can be obtained than would be possible with the individual methods: the seismic data give a robust model for the supra-basalt sediments, whilst the gravity field is dominated by the high-density basalt and basement features. The MT data, on the other hand, are sensitive to the conductivity of both the supra- and sub-basalt sediments. We will present preliminary individual and joint inversion results for an FTG, seismic and MT data set located in the Faroe-Shetland basin. Because the investigated area is rather large (~75 x 40 km) and the individual data sets are voluminous, we use a joint inversion framework (see Moorkamp et al., 2011) that is designed to handle large amounts of data and model parameters. This program moreover has options to link the individual parameter models either petrophysically, using fixed parameter relationships, or structurally, using the cross-gradient approach. The seismic data set consists of a pattern of 8 intersecting wide-angle seismic profiles with maximum offsets of up to ~24 km. The 3-D gravity data set (size: ~30 x 30 km) was collected along parallel lines by a shipborne gradiometer, and the marine MT data set is composed of 41 stations that are distributed over the whole investigation area. Logging results from a borehole located in the central part of the investigation area enable us to derive parameter relationships between seismic velocities, resistivities and densities that adequately describe the rock-property behaviour of both the basaltic lava flows and the sedimentary layers in this region. In addition, a 3-D reflection seismic survey covering the central part allows us to incorporate the top of the basalt and other features as constraints in the joint inversions and to evaluate the quality of the final results. Literature: D. Colombo, M. Mantovani, S. Hallinan, M. Virgilio, 2008. Sub-basalt depth imaging using simultaneous joint inversion of seismic and electromagnetic (MT) data: a CRB field study. SEG Expanded Abstracts, Las Vegas, USA, 2674-2678. M. Jordan, J. Ebbing, M. Brönner, J. Kamm, Z. Du, P. Eliasson, 2012. Joint Inversion for Improved Sub-salt and Sub-basalt Imaging with Application to the Møre Margin. EAGE Expanded Abstracts, Copenhagen, DK. M. Moorkamp, B. Heincke, M. Jegen, A. W. Roberts, R. W. Hobbs, 2011. A framework for 3-D joint inversion of MT, gravity and seismic refraction data. Geophysical Journal International, 184, 477-493.
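The structural (cross-gradient) coupling mentioned above can be sketched as follows: the cross-gradient vector t = grad(m1) x grad(m2) vanishes where the two models change in parallel directions, so a joint inversion penalizes its magnitude. The finite-difference implementation below is a generic illustration on co-registered grids, not code from the cited framework.

import numpy as np

def cross_gradient(m1, m2, spacing=(1.0, 1.0, 1.0)):
    """Cross-gradient vector t = grad(m1) x grad(m2) on a regular 3-D grid.
    A small |t| everywhere means the two models share structural boundaries."""
    g1 = np.stack(np.gradient(m1, *spacing), axis=-1)
    g2 = np.stack(np.gradient(m2, *spacing), axis=-1)
    return np.cross(g1, g2)                  # shape (nz, ny, nx, 3)

def cross_gradient_penalty(m1, m2, spacing=(1.0, 1.0, 1.0)):
    """Scalar structural-coupling term that a joint inversion would minimize."""
    t = cross_gradient(m1, m2, spacing)
    return np.sum(t ** 2)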
Study on the effect of the infill walls on the seismic performance of a reinforced concrete frame
NASA Astrophysics Data System (ADS)
Zhang, Cuiqiang; Zhou, Ying; Zhou, Deyuan; Lu, Xilin
2011-12-01
Motivated by the seismic damage observed in reinforced concrete (RC) frame structures during the Wenchuan earthquake, the effect of infill walls on the seismic performance of an RC frame is studied in this paper. Infill walls, especially those made of masonry, offer some amount of stiffness and strength. Therefore, the effect of infill walls should be considered during the design of RC frames. In this study, an analysis of the ground motion recorded in the Wenchuan earthquake is performed first. Then, a numerical model is developed to simulate the infill walls. Finally, nonlinear dynamic analysis is carried out on the RC frame with and without infill walls using the CANNY software. Through a comparative analysis, the following conclusions can be drawn. The failure mode of the frame with infill walls is in accordance with the observed seismic damage pattern, which is a strong-beam, weak-column mode. This indicates that the infill walls change the failure pattern of the frame, and it is necessary to consider them in the seismic design of RC frames. The numerical model presented in this paper can effectively simulate the effect of infill walls on the RC frame.
NASA Astrophysics Data System (ADS)
Zhou, Fulin; Tan, Ping
2018-01-01
China is a country whose entire territory lies within a seismic zone, and most strong earthquakes exceed predicted levels. Most fatalities are caused by structural collapse. Earthquakes not only cause severe damage to structures, but can also damage non-structural elements on and inside facilities. This can halt city life and disrupt hospitals, airports, bridges, power plants, and other infrastructure. Designers need to use new techniques to protect structures and the facilities inside them. Isolation, energy dissipation, and control systems have been used more and more widely in China in recent years. Currently, there are nearly 6,500 structures with isolation and about 3,000 structures with passive energy dissipation or hybrid control in China. These mitigation techniques are applied to structures such as residential buildings, large or complex structures, bridges, underwater tunnels, historical or cultural relic sites, and industrial facilities, and are used for retrofitting existing structures. This paper introduces design rules and some new and innovative devices for seismic isolation, energy dissipation and hybrid control for civil and industrial structures. This paper also discusses the development trends for seismic resistance, seismic isolation, and passive and active control techniques for the future in China and in the world.
NASA Astrophysics Data System (ADS)
Zolfaghari, M. R.; Ajamy, A.; Asgarian, B.
2015-12-01
The primary goal of seismic reassessment procedures in oil platform codes is to determine the reliability of a platform under extreme earthquake loading. Therefore, in this paper, a simplified method is proposed to assess the seismic performance of existing jacket-type offshore platforms (JTOP) in regimes ranging from near-elastic to global collapse. The simplified method exploits the good agreement between the static pushover (SPO) curve and the summarized interaction incremental dynamic analysis (CI-IDA) curve of the platform. Although the CI-IDA method offers better understanding and better modelling of the phenomenon, it is a time-consuming and challenging task. To overcome these challenges, the simplified procedure, a fast and accurate approach, is introduced based on SPO analysis. An existing JTOP in the Persian Gulf is then presented to illustrate the procedure, and finally a comparison is made between the simplified method and the CI-IDA results. The simplified method is very informative and practical for current engineering purposes. It is able to predict seismic performance, from elasticity to global dynamic instability, with reasonable accuracy and little computational effort.
Wavelet transform analysis of transient signals: the seismogram and the electrocardiogram
DOE Office of Scientific and Technical Information (OSTI.GOV)
Anant, K.S.
1997-06-01
In this dissertation I quantitatively demonstrate how the wavelet transform can be an effective mathematical tool for the analysis of transient signals. The two key signal processing applications of the wavelet transform, namely feature identification and representation (i.e., compression), are shown by solving important problems involving the seismogram and the electrocardiogram. The seismic feature identification problem involved locating in time the P and S phase arrivals. Locating these arrivals accurately (particularly the S phase) has been a constant issue in seismic signal processing. In Chapter 3, I show that the wavelet transform can be used to locate both the P as well as the S phase using only information from single-station three-component seismograms. This is accomplished by using the basis function (wavelet) of the wavelet transform as a matching filter and by processing information across scales of the wavelet domain decomposition. The 'pick' time results are quite promising as compared to analyst picks. The representation application involved the compression of the electrocardiogram, which is a recording of the electrical activity of the heart. Compression of the electrocardiogram is an important problem in biomedical signal processing due to transmission and storage limitations. In Chapter 4, I develop an electrocardiogram compression method that applies vector quantization to the wavelet transform coefficients. The best compression results were obtained by using orthogonal wavelets, due to their ability to represent a signal efficiently. Throughout this thesis the importance of choosing wavelets based on the problem at hand is stressed. In Chapter 5, I introduce a wavelet design method that uses linear prediction in order to design wavelets that are geared to the signal or feature being analyzed. The use of these designed wavelets in a test feature identification application led to positive results. The methods developed in this thesis (the feature identification methods of Chapter 3, the compression methods of Chapter 4, and the wavelet design methods of Chapter 5) are general enough to be easily applied to other transient signals.
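A simplified version of wavelet-based arrival picking, in the spirit of the matched-filter idea described above, is sketched below: a continuous wavelet transform is approximated by convolving the trace with Ricker wavelets at several scales, the energy is summed across scales, and an onset is picked where it exceeds the pre-event noise level. The wavelet choice, scales, and threshold are illustrative assumptions, not the dissertation's cross-scale processing scheme.

import numpy as np

def ricker(points, a):
    """Ricker (Mexican-hat) wavelet of characteristic width a (samples)."""
    t = np.arange(points) - (points - 1) / 2.0
    return (1 - (t / a) ** 2) * np.exp(-0.5 * (t / a) ** 2)

def cwt_energy(trace, scales):
    """Sum of squared wavelet coefficients across scales (simple matched filtering)."""
    energy = np.zeros(len(trace))
    for a in scales:
        w = ricker(int(10 * a) | 1, a)           # odd length, roughly ten widths long
        c = np.convolve(trace, w, mode="same")
        energy += c ** 2
    return energy

def pick_onset(trace, dt, scales, noise_window=200, k=8.0):
    """Return the first time where the cross-scale energy exceeds k times
    the pre-event (noise) level; None if no pick is made."""
    e = cwt_energy(np.asarray(trace, float), scales)
    threshold = k * np.median(e[:noise_window])
    idx = np.argmax(e > threshold)
    return idx * dt if e[idx] > threshold else None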
A FORTRAN program for calculating nonlinear seismic ground response
Joyner, William B.
1977-01-01
The program described here was designed for calculating the nonlinear seismic response of a system of horizontal soil layers underlain by a semi-infinite elastic medium representing bedrock. Excitation is a vertically incident shear wave in the underlying medium. The nonlinear hysteretic behavior of the soil is represented by a model consisting of simple linear springs and Coulomb friction elements. A boundary condition is used which takes account of finite rigidity in the elastic substratum. The computations are performed by an explicit finite-difference scheme that proceeds step by step in space and time. A brief program description is provided here with instructions for preparing the input and a source listing. A more detailed discussion of the method is presented elsewhere, as is the description of a different program employing implicit integration.
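A minimal, linear-elastic sketch of an explicit finite-difference scheme of this kind is shown below (in Python rather than FORTRAN, for brevity). It omits the hysteretic spring-and-friction soil model and the transmitting base boundary of the original program, replacing the latter with a driven rigid base; the time step must satisfy dt < dz divided by the maximum shear-wave velocity for stability.

import numpy as np

def sh_response_linear(rho, mu, dz, dt, nt, base_velocity):
    """Explicit 1-D finite-difference propagation of a vertically incident
    shear wave through a soil column (linear elastic sketch only).
    rho, mu       : arrays of density and shear modulus at the grid nodes
    base_velocity : particle-velocity time series prescribed at the base node
    Returns the surface particle-velocity history."""
    n = len(rho)
    v = np.zeros(n)            # particle velocity at nodes (node 0 = free surface)
    tau = np.zeros(n - 1)      # shear stress between adjacent nodes
    surface = np.zeros(nt)
    for it in range(nt):
        # stress update from the velocity gradient
        tau += dt * mu[:-1] * (v[1:] - v[:-1]) / dz
        # velocity update from the stress gradient (zero traction above node 0)
        v[0] += dt * tau[0] / (rho[0] * dz)
        v[1:-1] += dt * (tau[1:] - tau[:-1]) / (rho[1:-1] * dz)
        v[-1] = base_velocity[it]          # driven base (rigid-base simplification)
        surface[it] = v[0]
    return surface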
NASA Astrophysics Data System (ADS)
Solakov, Dimcho; Dimitrova, Liliya; Simeonova, Stela; Aleksandrova, Irena; Stoyanov, Stoyan; Metodiev, Metodi
2013-04-01
The prevention of natural disasters and the management of crisis response are common problems for many countries. The Romania-Bulgaria border region is significantly affected by earthquakes occurring in both territories: on the one hand, the Vrancea seismic source with intermediate-depth events, and on the other hand, crustal seismicity recorded in the northern part of Bulgaria (Shabla, Dulovo, Gorna Orjahovitza). The general objective of the DACEA project (2010-2013) is to develop an earthquake alert system in order to prevent the natural disasters caused by earthquakes in the cross-border area, taking into account the nuclear power plants and other chemical plants located along the Danube on the territories of Romania and Bulgaria. An integrated warning system is designed and implemented in the cross-border area. A seismic detection network is put in operation in order to warn the bodies in charge of emergency-situation management in case of seismic danger. The main purposes of this network are: • monitoring of the four seismogenic areas relevant for the cross-border area, in order to detect dangerous earthquakes • sending seismic warning signals within several seconds to the local public authorities in the cross-border area. On the territory of Bulgaria the seismic network belonging to SEA consists of: • 8 seismic stations equipped with a Basalt digitizer, an EpiSensor accelerometer and a KS2000 broadband seismometer • 8 seismic stations equipped with a Basalt digitizer, an EpiSensor accelerometer, and warning and visual monitoring equipment. The stations are distributed across northern Bulgaria. The sites were thoroughly examined, and the most important requirement was a low level of noise and vibration. SEA centers were established both in Sofia (at the National Institute of Geophysics, Geodesy and Geography - NIGGG) and in Bucharest (at the National Institute of Research and Development for Earth Physics). Both centers are equipped with servers for data analysis and storage. Specialized software for the elaboration of seismic hazard scenarios has been designed and implemented. The reaction of buildings, roads, bridges, land, etc. to earthquakes is shown graphically on the monitor. High-risk areas are highlighted so that the emergency units can be prepared for intervention. This software is designed on the basis of a comprehensive relational database of historical and contemporary seismicity in the cross-border region. The output shake maps and scenarios are to be used by the emergency intervention units and local public authorities, and for general public awareness.
ConvNetQuake: Convolutional Neural Network for Earthquake Detection and Location
NASA Astrophysics Data System (ADS)
Denolle, M.; Perol, T.; Gharbi, M.
2017-12-01
Over the last decades, the volume of seismic data has increased exponentially, creating a need for efficient algorithms to reliably detect and locate earthquakes. Today's most elaborate methods scan through the plethora of continuous seismic records, searching for repeating seismic signals. In this work, we leverage recent advances in artificial intelligence and present ConvNetQuake, a highly scalable convolutional neural network for probabilistic earthquake detection and location from single stations. We apply our technique to study two years of induced seismicity in Oklahoma (USA). We detect 20 times more earthquakes than previously cataloged by the Oklahoma Geological Survey, and our algorithm is at least one order of magnitude faster than other established methods.
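A minimal 1-D convolutional classifier in the spirit of this approach is sketched below: a windowed three-component waveform is mapped to a 'noise' class or one of several location-cluster classes. The layer sizes, window length, and number of clusters are illustrative assumptions and do not reproduce the published ConvNetQuake architecture.

import torch
import torch.nn as nn

class QuakeNet(nn.Module):
    """Small 1-D CNN mapping a 3-component waveform window to 'noise'
    vs. one of n_clusters location classes (illustrative sizes only)."""
    def __init__(self, n_samples=1000, n_clusters=6):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv1d(3, 32, kernel_size=3, stride=2, padding=1), nn.ReLU(),
            nn.Conv1d(32, 32, kernel_size=3, stride=2, padding=1), nn.ReLU(),
            nn.Conv1d(32, 32, kernel_size=3, stride=2, padding=1), nn.ReLU(),
        )
        with torch.no_grad():
            n_flat = self.features(torch.zeros(1, 3, n_samples)).numel()
        self.classifier = nn.Linear(n_flat, n_clusters + 1)   # +1 for the noise class

    def forward(self, x):                     # x: (batch, 3, n_samples)
        z = self.features(x).flatten(1)
        return self.classifier(z)             # raw logits; softmax gives probabilities

# Usage sketch: score a batch of 10-second, 100 Hz windows
model = QuakeNet(n_samples=1000, n_clusters=6)
logits = model(torch.randn(8, 3, 1000))
probs = torch.softmax(logits, dim=1)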
Seismic refraction analysis: the path forward
Haines, Seth S.; Zelt, Colin; Doll, William
2012-01-01
Seismic Refraction Methods: Unleashing the Potential and Understanding the Limitations; Tucson, Arizona, 29 March 2012. A workshop focused on seismic refraction methods took place on 29 March 2012, in association with the 2012 Symposium on the Application of Geophysics to Engineering and Environmental Problems. This workshop was convened to assess the current state of the science and discuss paths forward, with a primary focus on near-surface problems but with an eye on all applications. The agenda included talks on these topics from a number of experts, interspersed with discussion, and a dedicated discussion period to finish the day. Discussion proved lively at times, and workshop participants delved into many topics central to seismic refraction work.
Determination of rheological parameters of pile foundations for bridges for earthquake analysis
DOT National Transportation Integrated Search
1997-07-01
In the seismic design criteria for highway bridges, there is a significant lack of guidance on ways to incorporate the effect of soil-structure interaction in determining seismic response. For this study, a simple analytical model for pile and pile g...
NASA Astrophysics Data System (ADS)
Berge-Thierry, C.; Hollender, F.; Guyonnet-Benaize, C.; Baumont, D.; Ameri, G.; Bollinger, L.
2017-09-01
Seismic analysis in the context of nuclear safety in France is currently guided by a purely deterministic approach based on Basic Safety Rule (Règle Fondamentale de Sûreté) RFS 2001-01 for seismic hazard assessment, and on the ASN/2/01 Guide that provides design rules for nuclear civil engineering structures. After the 2011 Tohoku earthquake, nuclear operators worldwide were asked to estimate the ability of their facilities to sustain extreme seismic loads. The French licensees then defined the 'hard core' seismic levels, which are higher than those considered for design or re-assessment of the safety of a facility. These were initially established on a deterministic basis, and they have finally been justified through state-of-the-art probabilistic seismic hazard assessments. The appreciation and propagation of uncertainties when assessing seismic hazard in France have changed considerably over the past 15 years. This evolution provided the motivation for the present article, the objectives of which are threefold: (1) to provide a description of the current practices in France to assess seismic hazard in terms of nuclear safety; (2) to discuss and highlight the sources of uncertainties and their treatment; and (3) to use a specific case study to illustrate how extended-source modeling can help to constrain the key assumptions or parameters that impact upon seismic hazard assessment. This article discusses in particular seismic source characterization, strong ground motion prediction, and maximum magnitude constraints, according to the practice of the French Atomic Energy Commission. Owing to the growth of strong-motion databases in terms of the number and quality of records, their metadata, and their uncertainty characterization, several recently published empirical ground-motion prediction models are eligible for seismic hazard assessment in France. We show that propagation of epistemic and aleatory uncertainties is feasible in a deterministic approach as well as in a probabilistic one. Assessment of seismic hazard in France in the framework of the safety of nuclear facilities should consider these recent advances. In this sense, the opening of discussions with all of the stakeholders in France to update the reference documents (i.e., RFS 2001-01; ASN/2/01 Guide) appears appropriate in the short term.
2011-09-01
Using multiple deployments of an 80-element, three-component borehole seismic array stretching from the surface to 2.3 km depth... generated using the direct Green's function (DGF) method of Friederich and Dalkolmo (1995). This method synthesizes the seismic wavefield for a spherically...
Numerical simulation of bubble plumes and an analysis of their seismic attributes
NASA Astrophysics Data System (ADS)
Li, Canping; Gou, Limin; You, Jiachun
2017-04-01
To study the seismic response characteristics of bubble plumes, a model of a plume water body is built in this article using an acoustic velocity model for bubble-bearing media and stochastic medium theory, based on an analysis of both the acoustic characteristics of a bubble-bearing water body and the actual features of a plume. The finite difference method is used for forward modelling, and the single-shot seismic record exhibits the characteristics of the scattered wave field generated by a plume. A meaningful conclusion is obtained by extracting seismic attributes from the pre-stack shot gather record of a plume. The values of the amplitude-related seismic attributes increase greatly as the bubble content goes up, whereas changes in bubble radius do not cause the seismic attributes to change. This is primarily because the bubble content has a strong impact on the plume's acoustic velocity, while the bubble radius has a weak impact on the acoustic velocity. The above conclusion provides a theoretical reference for identifying hydrate plumes using seismic methods and contributes to further study of hydrate decomposition and migration, as well as of the distribution of methane bubbles in seawater.
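The abstract does not spell out the bubble-bearing velocity model used, but a common low-frequency approximation is Wood's equation, which illustrates why gas content, rather than bubble radius, controls the mixture velocity at seismic frequencies. The sketch below assumes near-surface gas properties and is purely illustrative.

import numpy as np

def wood_velocity(gas_fraction, rho_w=1025.0, k_w=2.25e9, rho_g=1.2, k_g=1.4e5):
    """Wood's-equation sound speed for a bubbly liquid (low-frequency limit).
    gas_fraction : volumetric gas content (0..1)
    k_w, k_g     : bulk moduli of water and gas [Pa]; densities in kg/m^3
    (gas values here assume near-surface pressure)."""
    phi = np.asarray(gas_fraction, dtype=float)
    rho = (1 - phi) * rho_w + phi * rho_g                  # mixture density
    compressibility = (1 - phi) / k_w + phi / k_g          # Reuss (Wood) average
    return np.sqrt(1.0 / (compressibility * rho))

# Even a small gas fraction lowers the mixture velocity dramatically:
for phi in (0.0, 0.001, 0.01, 0.05):
    print(f"gas fraction {phi:5.3f}:  c = {wood_velocity(phi):7.1f} m/s")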
DOE Office of Scientific and Technical Information (OSTI.GOV)
Burnison, Shaughn; Livers-Douglas, Amanda; Barajas-Olalde, Cesar
The scalable, automated, semipermanent seismic array (SASSA) project, led and managed by the Energy & Environmental Research Center (EERC), was designed as a 3-year proof-of-concept study to evaluate and demonstrate an innovative application of the seismic method. The concept was to use a sparse surface array of 96 nodal seismic sensors paired with a single, remotely operated active seismic source at a fixed location to monitor for CO2 saturation changes in a subsurface reservoir by processing the data for time-lapse changes at individual, strategically chosen reservoir reflection points. The combination of autonomous equipment and modern processing algorithms was used to apply the seismic method in a manner different from the normal paradigm of collecting a spatially dense data set to produce an image. It was used instead to monitor individual, strategically chosen reservoir reflection points for detectable signal character changes that could be attributed to the passing of a CO2 saturation front or, possibly, changes in reservoir pressure. Data collection occurred over the course of 1 year at an oil field undergoing CO2 injection for enhanced oil recovery (EOR) and focused on four overlapping “five-spot” EOR injector–producer patterns. Selection, procurement, configuration, installation, and testing of project equipment and collection of five baseline data sets were completed in advance of CO2 injection within the study area. Weekly remote data collection produced 41 incremental time-lapse records for each of the 96 nodes. Validation was provided by two methods: 1) a conventional 2-D seismic line acquired through the center of the study area before injection started and again after the project ended, processed in a time-lapse manner, and 2) CO2 saturation maps created from reservoir simulations based on injection and production history matching. Interpreted results were encouraging but mixed, with indications of changes likely due to the presence of CO2 at some node reflection points where and when effects would be expected, and no effects where no CO2 was expected, while results at some locations where simulation outputs suggested CO2 should be present were ambiguous. Acquisition noise impacted interpretation of data at several locations. The study generated many lessons learned to inform and improve results in a follow-up study. The ultimate aim of the project was to evaluate whether deployment of SASSA technology can provide a useful and cost-effective monitoring solution for future CO2 injection projects. The answer appears to be affirmative, with the expectation that lessons learned, applied to future iterations together with technology advances, will likely result in significant improvements.
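A standard way to quantify the kind of per-reflection-point signal change sought here is the normalized RMS (NRMS) difference between a monitor trace and a baseline trace in a window around the target reflector (Kragh and Christie, 2002). The report does not state that this particular metric was used, so the sketch below is a generic illustration, not the project's processing.

import numpy as np

def nrms(baseline, monitor):
    """Normalized RMS difference between two windowed traces, in percent:
    0% for identical traces, roughly 141% for uncorrelated traces."""
    baseline = np.asarray(baseline, float)
    monitor = np.asarray(monitor, float)
    rms = lambda x: np.sqrt(np.mean(x ** 2))
    return 200.0 * rms(monitor - baseline) / (rms(monitor) + rms(baseline))

# Example use: flag weekly monitor traces whose NRMS exceeds the repeatability
# level established from the pre-injection baseline surveys.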
Seismic while drilling: Operational experiences in Viet Nam
DOE Office of Scientific and Technical Information (OSTI.GOV)
Jackson, M.; Einchcomb, C.
1997-03-01
The BP/Statoil alliance in Viet Nam has used seismic while drilling on four wells during the last two years. Three wells employed the Western Atlas Tomex system, and the last well, Schlumberger's SWD system. Perceived value of seismic while drilling (SWD) lies in being able to supply real-time data linking drill bit position to a seismic picture of the well. However, once confidence in equipment and methodology is attained, SWD can influence well design and planning associated with drilling wells. More important, SWD can remove uncertainty when actually drilling wells, allowing risk assessment to be carried out more accurately and confidently.
Seismic Behavior and Retrofit of Concrete Columns of Old R.C. Buildings Reinforced With Plain Bars
DOE Office of Scientific and Technical Information (OSTI.GOV)
Marefat, M. S.; Arani, K. Karbasi; Shirazi, S. M. Hassanzadeh
2008-07-08
Seismic rehabilitation of old buildings has been a major challenge in recent years. The first step in seismic rehabilitation is evaluation of the existing capacity and the seismic behaviour. To investigate the seismic behaviour of the RC members of a real old building in Iran, which was designed and constructed by European engineers in 1940, three half-scale column specimens reinforced with plain bars have been tested. The tests indicate significant differences between the responses of specimens reinforced with plain bars relative to those reinforced with deformed bars. A regular pattern of cracking and a relatively brittle behaviour were observed, while a relatively large residual strength appeared after a sudden drop of the initial strength and stiffness due to slip of the longitudinal bars.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Not Available
This report develops and applies a methodology for estimating strong earthquake ground motion. The motivation was to develop a much needed tool for use in developing the seismic requirements for structural designs. An earthquake's ground motion is a function of the earthquake's magnitude and the physical properties of the earth through which the seismic waves travel from the earthquake fault to the site of interest. The emphasis of this study is on ground motion estimation in Eastern North America (east of the Rocky Mountains), with particular emphasis on the Eastern United States and southeastern Canada. Eastern North America is a stable continental region, having sparse earthquake activity with rare occurrences of large earthquakes. While large earthquakes are of interest for assessing seismic hazard, little data exists from the region to empirically quantify their effects. The focus of the report is on the attributes of ground motion in Eastern North America that are of interest for the design of facilities such as nuclear power plants. This document, Volume II, contains Appendices 2, 3, 5, 6, and 7 covering the following topics: Eastern North American Empirical Ground Motion Data; Examination of Variance of Seismographic Network Data; Soil Amplification and Vertical-to-Horizontal Ratios from Analysis of Strong Motion Data From Active Tectonic Regions; Revision and Calibration of Ou and Herrmann Method; Generalized Ray Procedure for Modeling Ground Motion Attenuation; Crustal Models for Velocity Regionalization; Depth Distribution Models; Development of Generic Site Effects Model; Validation and Comparison of One-Dimensional Site Response Methodologies; Plots of Amplification Factors; Assessment of Coupling Between Vertical & Horizontal Motions in Nonlinear Site Response Analysis; and Modeling of Dynamic Soil Properties.
NASA Astrophysics Data System (ADS)
Yamada, Ryuhei; Nébut, Tanguy; Shiraishi, Hiroaki; Lognonné, Philippe; Kobayashi, Naoki; Tanaka, Satoshi
2015-07-01
Seismic data obtained over a broad frequency range are very useful for investigating the internal structures of the Earth and other planetary bodies. However, planetary seismic data acquired through the NASA Apollo and Viking programs were obtained only over a very limited frequency range. To obtain effective seismic data over a broader frequency range on planetary surfaces, broadband seismometers suitable for planetary seismology must be developed. In this study, we have designed a new broadband seismometer for future geophysical missions, based on a short-period seismometer whose resonant frequency is 1 Hz. The seismometer is of an electromagnetic type, lightweight, small, and shock-resistant, making it suitable for being carried on a penetrator, the small, hard-landing probe developed in the LUNAR-A Project, a previously cancelled mission. We modified the short-period seismometer so as to have a flat frequency response above about 0.1 Hz, and the detection limit could be lowered to cover frequencies below that value. This enlargement of the frequency band will allow us to investigate moonquakes at lower frequencies, for which waveforms are less distorted because strong scattering due to fractured structures near the lunar surface is likely to be suppressed. The modification was achieved simply by connecting a feedback circuit to the seismometer, without making any mechanical changes to the short-period sensor. We have confirmed that the broadband seismometer exhibits the frequency response as designed and allows us to observe long-period components of small ground motions. Methods to further improve the performance of the broadband seismometer beyond the current design are also discussed. These developments promise to increase the opportunities for application of this small and robust seismometer in various planetary seismological missions.
The damping of seismic waves and its determination from reflection seismograms
NASA Technical Reports Server (NTRS)
Engelhard, L.
1979-01-01
The damping in theoretical waveforms is described phenomenologically, and a classification is proposed. A method for studying the Earth's crust was developed which includes this damping as derived from reflection seismograms. Damping of seismic wave propagation by absorption, attenuation of seismic waves by scattering, and dispersion relations are considered. Absorption of seismic waves within the Earth, as well as reflection and transmission of elastic waves in the presence of boundary-layer absorption, is also discussed.
NASA Astrophysics Data System (ADS)
Begnaud, M. L.; Anderson, D. N.; Phillips, W. S.; Myers, S. C.; Ballard, S.
2016-12-01
The Regional Seismic Travel Time (RSTT) tomography model has been developed to improve travel time predictions for regional phases (Pn, Sn, Pg, Lg) in order to increase seismic location accuracy, especially for explosion monitoring. The RSTT model is specifically designed to exploit regional phases for location, especially when combined with teleseismic arrivals. The latest RSTT model (version 201404um) has been released (http://www.sandia.gov/rstt). Travel time uncertainty estimates for RSTT are determined using one-dimensional (1D), distance-dependent error models that have the benefit of being very fast to use in standard location algorithms, but they account neither for path-dependent variations in error nor for structural inadequacy of the RSTT model (i.e., model error). Although global in extent, the RSTT tomography model is only defined in areas where data exist, and a simple 1D error model does not accurately represent areas where RSTT has not been calibrated. We are developing and validating a new error model for RSTT phase arrivals by mathematically deriving a multivariate model directly from a unified model of RSTT embedded in a statistical random-effects model that captures distance, path, and model error effects. An initial method we have developed is a two-dimensional, path-distributed method using residuals. The goals for any RSTT uncertainty method are for it to be readily useful to the standard RSTT user and to improve travel time uncertainty estimates for location. We have successfully tested the new error model for Pn phases and will demonstrate the method and the validation of the error model for Sn, Pg, and Lg phases.
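A much-simplified, distance-only version of such an error model can be built by binning travel-time residuals by distance and interpolating the binned standard deviation, as sketched below; the actual RSTT error model is a random-effects formulation that also captures path and model terms, which this sketch does not attempt to reproduce.

import numpy as np

def distance_error_model(distances_deg, residuals_s, bin_width=2.0, min_count=10):
    """Empirical 1-D error model: standard deviation of travel-time
    residuals per epicentral-distance bin (degrees)."""
    distances_deg = np.asarray(distances_deg, float)
    residuals_s = np.asarray(residuals_s, float)
    edges = np.arange(0.0, distances_deg.max() + bin_width, bin_width)
    centers, sigma = [], []
    for lo, hi in zip(edges[:-1], edges[1:]):
        mask = (distances_deg >= lo) & (distances_deg < hi)
        if mask.sum() >= min_count:                  # require a minimum population
            centers.append(0.5 * (lo + hi))
            sigma.append(residuals_s[mask].std(ddof=1))
    return np.array(centers), np.array(sigma)

def predict_sigma(distance_deg, centers, sigma):
    """Interpolate the binned model to an arbitrary distance."""
    return np.interp(distance_deg, centers, sigma)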
A New Design of Seismic Stations Deployed in South Tyrol
NASA Astrophysics Data System (ADS)
Melichar, P.; Horn, N.
2007-05-01
When designing the seismic network in South Tyrol, the seismic service of Austria and the Civil Defense of South Tyrol combined more than 10 years of experience in running seismic networks and private communication systems. In recent years, a data return rate of > 99% and a network uptime of > 99% have been achieved by the combination of high-quality station design and equipment and the use of the Antelope data acquisition and processing software, which comes with a suite of network monitoring and alerting tools, including Nagios. The new data center is located in the city of Bolzano and is connected to the other data centers in Austria, Switzerland, and Italy for data backup purposes. Each data center also uses a redundant communication system in case the primary system fails. When designing the South Tyrol network, new improvements were made in seismometer installations, grounding, lightning protection and data communications in order to improve the quality of the recorded data as well as network uptime and data return. The 12 new stations are equipped with six-channel Q330 + PB14f systems connected to STS-2 and EpiSensor sensors. One of the key achievements was the grounding concept for the whole seismic station: aluminum boxes were introduced which provide Faraday-cage isolation. Lightning protection devices are used for the equipment inside the aluminum housing where the seismometer and data logger are placed. For the seismometer cables a special shielding was introduced. The broadband seismometer and strong-motion sensor are placed on a thick glass plate and are therefore isolated from the ground. Precise seismometer orientation is achieved by a special groove on the glass plate, and in case of a strong earthquake the seismometer is tied to the base plate. Temperature stability is achieved by styrofoam sheets inside the seismometer's aluminum protection box.
The Evolving Role of Field and Laboratory Seismic Measurements in Geotechnical Engineering
NASA Astrophysics Data System (ADS)
Stokoe, K. H.
2017-12-01
Geotechnical engineering has faced the problem of characterizing geological materials for site-specific design in the built environment since the profession began. When the design requirements came to include determining the dynamic response of important and critical facilities to earthquake shaking or other types of dynamic loads, seismically based measurements in the field and laboratory became important tools for direct characterization of the stiffness and energy dissipation (material damping) of these materials. In the 1960s, field seismic measurements using small-strain body waves were adapted from exploration geophysics. At the same time, laboratory measurements began using dynamic, torsional, resonant-column devices to measure shear stiffness and material damping in shear. The laboratory measurements also allowed parameters such as material type, confinement state, and nonlinear straining to be evaluated. Today, seismic measurements are widely used and still evolving because: (1) the measurements have a strong theoretical basis; (2) they can be performed in the field and in the laboratory, thus forming an important link between these measurements; and (3) recent developments in field testing involving surface waves are noninvasive, which makes them cost-effective in comparison to other methods. Active field seismic measurements are used today over depths ranging from about 5 to 1000 m. Examples of shear-wave velocity (VS) profiles evaluated using boreholes, penetrometers, suspension logging, and Rayleigh-type surface waves are presented. The VS measurements were performed in materials ranging from uncemented soil to unweathered rock. The coefficients of variation (COVs) in the VS profiles are generally less than 0.15 over sites with surface areas of 50 km2 or more, as long as material types are not laterally mixed. Interestingly, the largest COVs often occur around layer boundaries which vary vertically. It is also interesting to observe how the stiffness of rock near the ground surface is generally overestimated. Finally, intact specimens of the geological materials recovered from many sites were tested dynamically in the laboratory. Values of VS measured in the field and in the laboratory are compared, and biases in VS at soil versus rock sites are shown to exhibit opposite trends.
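The COV statistic quoted above can be reproduced for a set of co-located VS profiles with a few lines of code; the example profiles below are synthetic placeholders, not measured data.

import numpy as np

def vs_cov(profiles):
    """Coefficient of variation of Vs across profiles, depth level by depth level.
    profiles : 2-D array (n_profiles, n_depths), all resampled to common depths."""
    profiles = np.asarray(profiles, float)
    mean = profiles.mean(axis=0)
    std = profiles.std(axis=0, ddof=1)
    return std / mean

# Example: three hypothetical Vs profiles sampled every metre down to 30 m
depths = np.arange(30.0)
profiles = np.vstack([
    200 + 15 * depths + np.random.default_rng(i).normal(0, 20, depths.size)
    for i in range(3)
])
print(vs_cov(profiles).round(2))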
An Ensemble Approach for Improved Short-to-Intermediate-Term Seismic Potential Evaluation
NASA Astrophysics Data System (ADS)
Yu, Huaizhong; Zhu, Qingyong; Zhou, Faren; Tian, Lei; Zhang, Yongxian
2017-06-01
Pattern informatics (PI), load/unload response ratio (LURR), state vector (SV), and accelerating moment release (AMR) are four previously unrelated approaches, each of which is sensitive, in a different way, to the earthquake source. Previous studies have indicated that the spatial extent of the stress perturbation caused by an earthquake scales with the moment of the event, allowing us to combine these methods for seismic hazard evaluation. The long-range earthquake forecasting method PI is applied to search for seismic hotspots and identify areas where large earthquakes could be expected. The LURR and SV methods are then adopted to assess the short-to-intermediate-term seismic potential in each of the critical regions derived from the PI hotspots, while the AMR method is used to provide asymptotic estimates of the time and magnitude of the potential earthquakes. This new approach, which combines the LURR, SV and AMR methods with the identified areas of PI hotspots, is devised to augment current techniques for seismic hazard estimation. Using the approach, we tested the strong earthquakes that occurred in the Yunnan-Sichuan region, China between January 1, 2013 and December 31, 2014. We found that most of the large earthquakes, especially those with magnitude greater than 6.0, occurred in the predicted seismic hazard regions. Similar results have been obtained in the prediction of the annual earthquake tendency in mainland China in 2014 and 2015. These studies show that the ensemble approach can be a useful tool to detect short-to-intermediate-term precursory information on future large earthquakes.
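The AMR component is commonly quantified by fitting cumulative Benioff strain to a power law of time-to-failure, S(t) = A + B (tf - t)^m. The grid-search sketch below uses 10^(0.75 M) as a square-root-of-energy proxy (the standard energy-magnitude scaling) and illustrative search ranges; it is not the authors' implementation.

import numpy as np

def fit_amr(times, magnitudes, tf_grid, m_grid=(0.1, 0.3, 0.5, 0.7)):
    """Fit cumulative Benioff strain S(t) = A + B*(tf - t)**m by grid search
    over failure time tf and exponent m, solving A and B by least squares."""
    strain = np.cumsum(10.0 ** (0.75 * np.asarray(magnitudes, float)))
    t = np.asarray(times, float)
    best = None
    for tf in tf_grid:
        if tf <= t.max():
            continue                              # failure time must postdate the data
        for m in m_grid:
            x = (tf - t) ** m
            A_mat = np.column_stack([np.ones_like(x), x])
            coef, *_ = np.linalg.lstsq(A_mat, strain, rcond=None)
            rss = np.sum((A_mat @ coef - strain) ** 2)
            if best is None or rss < best[0]:
                best = (rss, tf, m, coef)
    return best  # (residual sum of squares, tf, m, [A, B])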
Optimized suppression of coherent noise from seismic data using the Karhunen-Loève transform
NASA Astrophysics Data System (ADS)
Montagne, Raúl; Vasconcelos, Giovani L.
2006-07-01
Signals obtained in land seismic surveys are usually contaminated with coherent noise, among which the ground roll (Rayleigh surface waves) is of major concern because it can severely degrade the quality of the information obtained from the seismic record. This paper presents an optimized filter based on the Karhunen-Loève transform for processing seismic images contaminated with ground roll. In this method, the contaminated region of the seismic record, to be processed by the filter, is selected in such a way as to correspond to the maximum of a properly defined coherence index. The main advantages of the method are that the ground roll is suppressed with negligible distortion of the remaining reflection signals and that the filtering procedure can be automated. The image processing technique described in this study should also be relevant for other applications where coherent structures embedded in a complex spatiotemporal pattern need to be identified in a more refined way. In particular, it is argued that the method is appropriate for processing optical coherence tomography images, whose quality is often degraded by coherent noise (speckle).
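The Karhunen-Loève transform of a windowed panel can be computed with a singular value decomposition; reconstructing from the leading eigenimages captures the laterally coherent energy, which is then subtracted. The sketch below assumes the ground roll has already been aligned (for example by a linear moveout correction) within the window selected by the coherence index, and it omits the optimization step of the paper.

import numpy as np

def kl_filter(panel, n_components=2):
    """Remove the most laterally coherent energy (e.g. ground roll) from a
    windowed seismic panel (n_traces, n_samples) by subtracting the
    reconstruction from the leading Karhunen-Loeve (SVD) components."""
    u, s, vt = np.linalg.svd(panel, full_matrices=False)
    coherent = (u[:, :n_components] * s[:n_components]) @ vt[:n_components, :]
    return panel - coherent, coherent

# Usage: apply only inside the contaminated window, then paste the filtered
# window back into the full record.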