Sample records for fault model aliasing

  1. A new multiscale noise tuning stochastic resonance for enhanced fault diagnosis in wind turbine drivetrains

    NASA Astrophysics Data System (ADS)

    Hu, Bingbing; Li, Bing

    2016-02-01

    It is very difficult to detect weak fault signatures due to the large amount of noise in a wind turbine system. Multiscale noise tuning stochastic resonance (MSTSR) has proved to be an effective way to extract weak signals buried in strong noise. However, the MSTSR method originally based on discrete wavelet transform (DWT) has disadvantages such as shift variance and the aliasing effects in engineering application. In this paper, the dual-tree complex wavelet transform (DTCWT) is introduced into the MSTSR method, which makes it possible to further improve the system output signal-to-noise ratio and the accuracy of fault diagnosis by the merits of DTCWT (nearly shift invariant and reduced aliasing effects). Moreover, this method utilizes the relationship between the two dual-tree wavelet basis functions, instead of matching the single wavelet basis function to the signal being analyzed, which may speed up the signal processing and be employed in on-line engineering monitoring. The proposed method is applied to the analysis of bearing outer ring and shaft coupling vibration signals carrying fault information. The results confirm that the method performs better in extracting the fault features than the original DWT-based MSTSR, the wavelet transform with post spectral analysis, and EMD-based spectral analysis methods.

  2. On the sensitivity of transtensional versus transpressional tectonic regimes to remote dynamic triggering by Coulomb failure

    USGS Publications Warehouse

    Hill, David P.

    2015-01-01

    Accumulating evidence, although still strongly spatially aliased, indicates that while remote dynamic triggering of small-to-moderate (Mw<5) earthquakes can occur in all tectonic settings, transtensional stress regimes with normal and subsidiary strike-slip faulting seem to be more susceptible to dynamic triggering than transpressional regimes with reverse and subsidiary strike-slip faulting. Analysis of the triggering potential of Love- and Rayleigh-wave dynamic stresses incident on normal, reverse, and strike-slip faults, assuming Andersonian faulting theory and simple Coulomb failure, supports this apparent difference in rapid-onset triggering susceptibility.

  3. Weak Fault Feature Extraction of Rolling Bearings Based on an Improved Kurtogram.

    PubMed

    Chen, Xianglong; Feng, Fuzhou; Zhang, Bingzhi

    2016-09-13

    Kurtograms have been verified to be an efficient tool in bearing fault detection and diagnosis because of their superiority in extracting transient features. However, the short-time Fourier Transform is insufficient in time-frequency analysis and kurtosis is deficient in detecting cyclic transients. Those factors weaken the performance of the original kurtogram in extracting weak fault features. Correlated Kurtosis (CK) is then designed, as a more effective solution, in detecting cyclic transients. Redundant Second Generation Wavelet Packet Transform (RSGWPT) is deemed to be effective in capturing more detailed local time-frequency description of the signal, and restricting the frequency aliasing components of the analysis results. The authors in this manuscript, combining the CK with the RSGWPT, propose an improved kurtogram to extract weak fault features from bearing vibration signals. The analysis of simulation signals and real application cases demonstrate that the proposed method is relatively more accurate and effective in extracting weak fault features.
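    The contrast between plain kurtosis and correlated kurtosis can be sketched numerically. The snippet below is a toy illustration, not the paper's RSGWPT-based kurtogram: it uses an invented periodic impulse train in Gaussian noise and a commonly used first-order correlated-kurtosis definition, CK1(T) = Σ(x_n · x_{n−T})² / (Σ x_n²)², to show why CK rewards transients that repeat at the assumed fault period.

```python
import numpy as np

def kurtosis(x):
    """Classical (non-excess) kurtosis: fourth moment over squared variance."""
    x = x - x.mean()
    return np.mean(x**4) / np.mean(x**2) ** 2

def correlated_kurtosis(x, T):
    """First-order correlated kurtosis for an assumed fault period T (samples).
    Rewards impulses that repeat every T samples, unlike plain kurtosis."""
    y = x - x.mean()
    prod = y[T:] * y[:-T]
    return np.sum(prod**2) / np.sum(y**2) ** 2

rng = np.random.default_rng(0)
n = 2048
noise = rng.standard_normal(n)

# Invented fault signature: impulses every 128 samples, buried in noise
impulses = np.zeros(n)
impulses[::128] = 8.0
signal = noise + impulses

# Plain kurtosis flags impulsiveness; CK additionally flags periodicity
assert kurtosis(signal) > kurtosis(noise)
ck_true = correlated_kurtosis(signal, 128)   # correct period
ck_wrong = correlated_kurtosis(signal, 100)  # wrong period
assert ck_true > 5 * ck_wrong
```

At the true period the lagged products pair impulse with impulse, so CK peaks there; scanning T is what lets CK-based kurtograms lock onto cyclic transients that plain kurtosis cannot distinguish from isolated spikes.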

  4. Controlling aliased dynamics in motion systems? An identification for sampled-data control approach

    NASA Astrophysics Data System (ADS)

    Oomen, Tom

    2014-07-01

    Sampled-data control systems occasionally exhibit aliased resonance phenomena within the control bandwidth. The aim of this paper is to investigate these aliased dynamics with application to a high-performance industrial nano-positioning machine. This necessitates a full sampled-data control design approach, since aliased dynamics endanger both the at-sample performance and the intersample behaviour. The proposed framework comprises both system identification and sampled-data control. In particular, the sampled-data control objective necessitates models that encompass the intersample behaviour, i.e., ideally continuous-time models. Application of the proposed approach on an industrial wafer stage system provides thorough insight and new control design guidelines for controlling aliased dynamics.
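    The folding mechanism behind such aliased resonances can be verified in a few lines of NumPy. This is a generic illustration of frequency folding, not the paper's identification framework; the rates and resonance frequency are invented.

```python
import numpy as np

def aliased_frequency(f, fs):
    """Apparent frequency (Hz) of a tone f after sampling at rate fs."""
    return abs(f - fs * round(f / fs))

fs = 100.0      # sample rate; Nyquist frequency is 50 Hz
f_res = 90.0    # a resonance above Nyquist
f_alias = aliased_frequency(f_res, fs)   # folds back to 10 Hz

# The sampled sequences are indistinguishable, which is why an aliased
# resonance shows up inside the control bandwidth at the folded frequency.
n = np.arange(200)
x_true = np.cos(2 * np.pi * f_res * n / fs)
x_folded = np.cos(2 * np.pi * f_alias * n / fs)
assert f_alias == 10.0
assert np.allclose(x_true, x_folded)
```

Because the at-sample data cannot tell the two tones apart, only a model of the intersample (continuous-time) behaviour, as the paper argues, can reveal which one is physically present.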

  5. Cartographic symbol library considering symbol relations based on anti-aliasing graphic library

    NASA Astrophysics Data System (ADS)

    Mei, Yang; Li, Lin

    2007-06-01

    Cartographic visualization represents geographic information in map form, which enables us to retrieve useful geospatial information. In a digital environment, the cartographic symbol library is the basis of cartographic visualization and an essential component of a Geographic Information System as well. Existing cartographic symbol libraries have two flaws: display quality and relation adjusting. Statistical data presented in this paper indicate that aliasing is a major factor degrading symbol display quality on graphic display devices. Effective graphic anti-aliasing methods based on a new anti-aliasing algorithm are therefore presented and encapsulated in an anti-aliasing graphic library in the form of a Component Object Model. Furthermore, cartographic visualization should represent feature relations by correctly adjusting symbol relations, besides displaying individual features, but current cartographic symbol libraries lack this capability. This paper creates a cartographic symbol design model to implement symbol relation adjusting. Consequently, a cartographic symbol library based on this design model can provide cartographic visualization with relation-adjusting capability. The anti-aliasing graphic library and the cartographic symbol library are sampled, and the results prove that both libraries have better efficiency and effect.

  6. Separation of parallel encoded complex-valued slices (SPECS) from a single complex-valued aliased coil image.

    PubMed

    Rowe, Daniel B; Bruce, Iain P; Nencka, Andrew S; Hyde, James S; Kociuba, Mary C

    2016-04-01

    Achieving a reduction in scan time with minimal inter-slice signal leakage is one of the significant obstacles in parallel MR imaging. In fMRI, multiband-imaging techniques accelerate data acquisition by simultaneously magnetizing the spatial frequency spectrum of multiple slices. The SPECS model eliminates the consequential inter-slice signal leakage from the slice unaliasing, while maintaining an optimal reduction in scan time and activation statistics in fMRI studies. When the combined k-space array is inverse Fourier reconstructed, the resulting aliased image is separated into the un-aliased slices through a least squares estimator. Without the additional spatial information from a phased array of receiver coils, slice separation in SPECS is accomplished with acquired aliased images in shifted FOV aliasing pattern, and a bootstrapping approach of incorporating reference calibration images in an orthogonal Hadamard pattern. The aliased slices are effectively separated with minimal expense to the spatial and temporal resolution. Functional activation is observed in the motor cortex, as the number of aliased slices is increased, in a bilateral finger tapping fMRI experiment. The SPECS model incorporates calibration reference images together with coefficients of orthogonal polynomials into an un-aliasing estimator to achieve separated images, with virtually no residual artifacts and functional activation detection in separated images.
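    The Hadamard-encoded un-aliasing idea can be reduced to a toy linear-algebra sketch. This is not the SPECS estimator itself (which additionally uses shifted-FOV aliasing patterns, calibration reference images and orthogonal polynomials, all omitted here); it only shows how two simultaneously excited slices are recovered from two aliased acquisitions by least-squares inversion of the encoding matrix.

```python
import numpy as np

rng = np.random.default_rng(1)
shape = (16, 16)
slice1 = rng.random(shape)   # stand-in for one excited slice
slice2 = rng.random(shape)   # stand-in for the other

# Hadamard-encoded acquisitions: each aliased image is a +/- combination
# of the two simultaneously excited slices.
acq_plus = slice1 + slice2
acq_minus = slice1 - slice2

# Least-squares un-aliasing with the known 2x2 encoding matrix
H = np.array([[1.0, 1.0], [1.0, -1.0]])
y = np.stack([acq_plus.ravel(), acq_minus.ravel()])
est = np.linalg.lstsq(H, y, rcond=None)[0]

assert np.allclose(est[0].reshape(shape), slice1)
assert np.allclose(est[1].reshape(shape), slice2)
```

In this noiseless toy the system is square and the separation is exact; the practical difficulty the paper addresses is keeping inter-slice leakage small when the encoding is imperfect and noisy.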

  7. Weak Fault Feature Extraction of Rolling Bearings Based on an Improved Kurtogram

    PubMed Central

    Chen, Xianglong; Feng, Fuzhou; Zhang, Bingzhi

    2016-01-01

    Kurtograms have been verified to be an efficient tool in bearing fault detection and diagnosis because of their superiority in extracting transient features. However, the short-time Fourier Transform is insufficient in time-frequency analysis and kurtosis is deficient in detecting cyclic transients. Those factors weaken the performance of the original kurtogram in extracting weak fault features. Correlated Kurtosis (CK) is then designed, as a more effective solution, in detecting cyclic transients. Redundant Second Generation Wavelet Packet Transform (RSGWPT) is deemed to be effective in capturing more detailed local time-frequency description of the signal, and restricting the frequency aliasing components of the analysis results. The authors in this manuscript, combining the CK with the RSGWPT, propose an improved kurtogram to extract weak fault features from bearing vibration signals. The analysis of simulation signals and real application cases demonstrate that the proposed method is relatively more accurate and effective in extracting weak fault features. PMID:27649171

  8. Probing the Spatio-Temporal Characteristics of Temporal Aliasing Errors and their Impact on Satellite Gravity Retrievals

    NASA Astrophysics Data System (ADS)

    Wiese, D. N.; McCullough, C. M.

    2017-12-01

    Studies have shown that both single pair low-low satellite-to-satellite tracking (LL-SST) and dual-pair LL-SST hypothetical future satellite gravimetry missions utilizing improved onboard measurement systems relative to the Gravity Recovery and Climate Experiment (GRACE) will be limited by temporal aliasing errors; that is, the error introduced through deficiencies in models of high frequency mass variations required for the data processing. Here, we probe the spatio-temporal characteristics of temporal aliasing errors to understand their impact on satellite gravity retrievals using high fidelity numerical simulations. We find that while aliasing errors are dominant at long wavelengths and multi-day timescales, improving knowledge of high frequency mass variations at these resolutions translates into only modest improvements (i.e. spatial resolution/accuracy) in the ability to measure temporal gravity variations at monthly timescales. This result highlights the reliance on accurate models of high frequency mass variations for gravity processing, and the difficult nature of reducing temporal aliasing errors and their impact on satellite gravity retrievals.

  9. Sampling frequency for water quality variables in streams: Systems analysis to quantify minimum monitoring rates.

    PubMed

    Chappell, Nick A; Jones, Timothy D; Tych, Wlodek

    2017-10-15

    Insufficient temporal monitoring of water quality in streams or engineered drains alters the apparent shape of storm chemographs, resulting in shifted model parameterisations and changed interpretations of the solute sources that have produced episodes of poor water quality. This so-called 'aliasing' phenomenon is poorly recognised in water research. Using advances in in-situ sensor technology it is now possible to monitor sufficiently frequently to avoid the onset of aliasing. A systems modelling procedure is presented allowing objective identification of the sampling rates needed to avoid aliasing within strongly rainfall-driven chemical dynamics. In this study aliasing of storm chemograph shapes was quantified by changes in the time constant parameter (TC) of transfer functions. As a proportion of the original TC, the onset of aliasing varied between watersheds, ranging from 3.9-7.7 %TC to 54-79 %TC (or 110-160 min to 300-600 min). However, a minimum monitoring rate could be identified for all datasets if the modelling results were presented in the form of a new statistic, ΔTC. For the eight H+, DOC and NO3-N datasets examined from a range of watershed settings, an empirically-derived threshold of 1.3(ΔTC) could be used to quantify minimum monitoring rates within sampling protocols to avoid artefacts in subsequent data analysis.

  10. Moving microphone arrays to reduce spatial aliasing in the beamforming technique: theoretical background and numerical investigation.

    PubMed

    Cigada, Alfredo; Lurati, Massimiliano; Ripamonti, Francesco; Vanali, Marcello

    2008-12-01

    This paper introduces a measurement technique aimed at reducing or possibly eliminating the spatial aliasing problem in the beamforming technique. Beamforming's main disadvantages are poor spatial resolution at low frequency and spatial aliasing at higher frequency, the latter leading to the identification of false sources. The idea is to move the microphone array during the measurement operation. In this paper, the proposed approach is theoretically and numerically investigated by means of simple sound propagation models, proving its efficiency in reducing spatial aliasing. A number of different array configurations are numerically investigated, together with the most important parameters governing this measurement technique. A set of numerical results concerning the case of a planar rotating array is shown, together with a first experimental validation of the method.
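    The spatial-aliasing mechanism the moving array is designed to combat can be illustrated with a static uniform line array: once microphone spacing exceeds half a wavelength, the delay-and-sum response develops grating lobes, i.e. false sources as strong as the true look direction. A minimal NumPy sketch (not the authors' moving-array method; array size and spacings are invented):

```python
import numpy as np

def array_response(d_over_lambda, m_sensors, theta_deg, steer_deg=0.0):
    """Normalized delay-and-sum response of a uniform line array to a far-field
    plane wave from theta_deg, when steered to steer_deg."""
    m = np.arange(m_sensors)
    u = np.sin(np.radians(theta_deg)) - np.sin(np.radians(steer_deg))
    return abs(np.sum(np.exp(2j * np.pi * d_over_lambda * m * u))) / m_sensors

M = 8
# Half-wavelength spacing: endfire response is strongly suppressed
r_half = array_response(0.5, M, 90.0)
# Full-wavelength spacing: a grating lobe, i.e. a false source at 90 degrees
r_full = array_response(1.0, M, 90.0)

assert r_full > 0.99   # grating lobe as strong as the true look direction
assert r_half < 0.2    # no grating lobe when d <= lambda/2
```

Moving the array during the measurement, as the paper proposes, effectively synthesizes denser spatial sampling so such lobes are smeared out rather than mistaken for sources.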

  11. Fluid Motion and the Toroidal Magnetic Field Near the Top of Earth's Liquid Outer Core.

    NASA Astrophysics Data System (ADS)

    Celaya, Michael Augustine

    This work considers two unresolved problems central to the study of Earth's deep interior: (1) What is the surface flow of the complete three dimensional motion sustaining the geomagnetic field in the fluid outer core? (2) How strong is the toroidal component of that field just beneath the mantle inside the core? A solution of these problems is necessary to achieve even a basic understanding of magnetic field generation and core-mantle interactions. Progress in solving (1) is made by extending previous attempts to resolve the core surface flow, and identifying obstacles which lead to distorted solutions. The extension relaxes the steady motions constraint. This permits more realistic solutions which should resemble more closely the real Earth flow. A difficulty with the assumption of steady flow is that if the real motion is unsteady, as it is likely to be, then steady models will suffer from aliasing. Aliased solutions can be highly corrupted. The effects of aliasing incurred through model underparametrization are explored. It is found that flow spectral energy must fall rapidly with increasing degree to escape aliasing's distortion. Damping does not appear to remedy the problem, but in fact obscures it by forcing the solution to converge upon a single, but possibly still aliased, estimate. Inversions of a magnetic field model for unsteady motions indicate that steady flows are indeed aliased in time. By comparison, unsteady flows appear free of aliasing and show significant temporal variation, changing by about 30% of their magnitude over 20 years. However, it appears that noise in the high degree secular variation (SV) data used to determine the flow acts as a further impediment to solving (1). Damping is shown to be effective in removing noise, but only once aliasing is no longer a factor and noise is restricted to that part of the SV which makes only a small contribution to the solution.
    To solve (2), the radial component of Ohm's law is inverted for the toroidal field (B_T) near the top of the core. The flow, obtained as a solution to (1), is treated as a known quantity, as is the poloidal field. Solutions are sought which minimize the difference between observed and predicted poloidal main field at Earth's surface. As in problem (1), aliasing in space and time stands as a potential impediment to good resolution of the toroidal field. Steady degree-10 models of B_T are obtained which display convergence in space and time without damping. Poloidal field noise, as well as sensitivity to the flow model used in the inversions, limits resolution of the toroidal field geometry. Nevertheless, estimates indicate the magnitude of B_T does not exceed 8 × 10⁻⁵ T, or about half that of the poloidal field near the core surface. Such a low value favors weak-field dynamo models but does not necessarily endorse a geostrophic force balance just beneath the mantle, because ∂_r B_T may be large enough to violate conditions required by geostrophy.

  12. On the aliasing of the solar cycle in the lower stratospheric tropical temperature

    NASA Astrophysics Data System (ADS)

    Kuchar, Ales; Ball, William T.; Rozanov, Eugene V.; Stenke, Andrea; Revell, Laura; Miksovsky, Jiri; Pisoft, Petr; Peter, Thomas

    2017-09-01

    The double-peaked response of the tropical stratospheric temperature profile to the 11 year solar cycle (SC) has been well documented. However, there are concerns about the origin of the lower peak due to potential aliasing with volcanic eruptions or the El Niño-Southern Oscillation (ENSO) detected using multiple linear regression analysis. We confirm the aliasing using the results of the chemistry-climate model (CCM) SOCOLv3 obtained in the framework of the International Global Atmospheric Chemistry/Stratosphere-troposphere Processes And their Role in Climate Chemistry-Climate Model Initiative phase 1. We further show that even without major volcanic eruptions included in transient simulations, the lower stratospheric response exhibits a residual peak when historical sea surface temperatures (SSTs)/sea ice coverage (SIC) are used. Only the use of climatological SSTs/SICs in addition to background stratospheric aerosols removes volcanic and ENSO signals and results in an almost complete disappearance of the modeled solar signal in the lower stratospheric temperature. We demonstrate that the choice of temporal subperiod considered for the regression analysis has a large impact on the estimated profile signal in the lower stratosphere: at least 45 consecutive years are needed to avoid the large aliasing effect of SC maxima with volcanic eruptions in 1982 and 1991 in historical simulations, reanalyses, and observations. The application of volcanic forcing compiled for phase 6 of the Coupled Model Intercomparison Project (CMIP6) in the CCM SOCOLv3 reduces the warming overestimation in the tropical lower stratosphere and the volcanic aliasing of the temperature response to the SC, although it does not eliminate it completely.
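    The aliasing at issue is a regression artifact: when eruptions happen to coincide with solar-cycle maxima, a regression that omits (or cannot separate) the volcanic term attributes volcanic warming to the solar regressor. The sketch below is a toy multiple-linear-regression demonstration with invented amplitudes and a schematic cycle phase, chosen only to mimic the 1982/1991 coincidence; it is not the paper's analysis.

```python
import numpy as np

rng = np.random.default_rng(2)
years = np.arange(1979, 1999)                       # a deliberately short record
solar = np.cos(2 * np.pi * (years - 1980) / 11.0)   # toy 11 yr solar-cycle proxy
volcanic = np.where((years == 1982) | (years == 1991), 1.0, 0.0)  # toy eruptions
temp = 0.5 * volcanic + 0.01 * rng.standard_normal(years.size)    # no true SC signal

# Regressing on the solar proxy alone: both eruptions fall near cycle maxima,
# so volcanic warming aliases into an apparent solar coefficient.
A1 = np.column_stack([np.ones(years.size), solar])
coef_aliased = np.linalg.lstsq(A1, temp, rcond=None)[0][1]

# Including the volcanic regressor removes most of the spurious solar signal.
A2 = np.column_stack([np.ones(years.size), solar, volcanic])
coef_full = np.linalg.lstsq(A2, temp, rcond=None)[0][1]

assert abs(coef_aliased) > abs(coef_full)
assert abs(coef_aliased) > 0.03   # spurious "solar" response despite zero true signal
```

With a longer record the eruptions would no longer project systematically onto the cycle, which is the toy analogue of the paper's finding that at least 45 consecutive years are needed.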

  13. Treatment of ocean tide aliasing in the context of a next generation gravity field mission

    NASA Astrophysics Data System (ADS)

    Hauk, Markus; Pail, Roland

    2018-07-01

    Current temporal gravity field solutions from Gravity Recovery and Climate Experiment (GRACE) suffer from temporal aliasing errors due to undersampling of signal to be recovered (e.g. hydrology), uncertainties in the de-aliasing models (usually atmosphere and ocean) and imperfect ocean tide models. Especially the latter will be one of the most limiting factors in determining high-resolution temporal gravity fields from future gravity missions such as GRACE Follow-On and Next-Generation Gravity Missions (NGGM). In this paper a method to co-parametrize ocean tide parameters of the eight main tidal constituents over time spans of several years is analysed and assessed. Numerical closed-loop simulations of low-low satellite-to-satellite-tracking missions for a single polar pair and a double pair Bender-type formation are performed, using time variable geophysical background models and noise assumptions for new generation instrument technology. Compared to the single pair mission, results show a reduction of tide model errors up to 70 per cent for dedicated tidal constituents due to an enhanced spatial and temporal sampling and error isotropy for the double pair constellation. Extending the observation period from 1 to 3 yr leads to a further reduction of tidal errors up to 60 per cent for certain constituents, and considering non-tidal mass changes during the estimation process leads to reductions of tidal errors between 20 and 80 per cent. As part of a two-step approach, the estimated tide model is used for de-aliasing during gravity field retrieval in a second iteration, resulting in more than 50 per cent reduction of ocean tide aliasing errors for a NGGM Bender-type formation.

  14. Treatment of ocean tide aliasing in the context of a next generation gravity field mission

    NASA Astrophysics Data System (ADS)

    Hauk, Markus; Pail, Roland

    2018-04-01

    Current temporal gravity field solutions from GRACE suffer from temporal aliasing errors due to under-sampling of signal to be recovered (e.g. hydrology), uncertainties in the de-aliasing models (usually atmosphere and ocean), and imperfect ocean tide models. Especially the latter will be one of the most limiting factors in determining high resolution temporal gravity fields from future gravity missions such as GRACE Follow-on and Next-Generation Gravity Missions (NGGM). In this paper a method to co-parameterize ocean tide parameters of the 8 main tidal constituents over time spans of several years is analysed and assessed. Numerical closed-loop simulations of low-low satellite-to-satellite-tracking missions for a single polar pair and a double pair Bender-type formation are performed, using time variable geophysical background models and noise assumptions for new generation instrument technology. Compared to the single pair mission, results show a reduction of tide model errors up to 70 per cent for dedicated tidal constituents due to an enhanced spatial and temporal sampling and error isotropy for the double pair constellation. Extending the observation period from one to three years leads to a further reduction of tidal errors up to 60 per cent for certain constituents, and considering non-tidal mass changes during the estimation process leads to reductions of tidal errors between 20 per cent and 80 per cent. As part of a two-step approach, the estimated tide model is used for de-aliasing during gravity field retrieval in a second iteration, resulting in more than 50 per cent reduction of ocean tide aliasing errors for a NGGM Bender-type formation.

  15. De-aliasing for signal restoration in Propeller MR imaging.

    PubMed

    Chiu, Su-Chin; Chang, Hing-Chiu; Chu, Mei-Lan; Wu, Ming-Long; Chung, Hsiao-Wen; Lin, Yi-Ru

    2017-02-01

    Objects falling outside of the true elliptical field-of-view (FOV) in Propeller imaging show unique aliasing artifacts. This study proposes a de-aliasing approach to restore the signal intensities in Propeller images without extra data acquisition. Computer simulation was performed on the Shepp-Logan head phantom deliberately placed obliquely to examine the signal aliasing. In addition, phantom and human imaging experiments were performed using Propeller imaging with various readouts on a 3.0 Tesla MR scanner. De-aliasing using the proposed method was then performed, with the first low-resolution single-blade image used to find out the aliasing patterns in all the single-blade images, followed by standard Propeller reconstruction. The Propeller images without and with de-aliasing were compared. Computer simulations showed signal loss at the image corners along with aliasing artifacts distributed along directions corresponding to the rotational blades, consistent with clinical observations. The proposed de-aliasing operation successfully restored the correct images in both phantom and human experiments. The de-aliasing operation is an effective adjunct to Propeller MR image reconstruction for retrospective restoration of aliased signals.

  16. Aliasing Detection and Reduction Scheme on Angularly Undersampled Light Fields.

    PubMed

    Xiao, Zhaolin; Wang, Qing; Zhou, Guoqing; Yu, Jingyi

    2017-05-01

    When using a plenoptic camera for digital refocusing, angular undersampling can cause severe (angular) aliasing artifacts. Previous approaches have focused on avoiding aliasing by pre-processing the acquired light field via prefiltering, demosaicing, reparameterization, and so on. In this paper, we present a different solution that first detects and then removes angular aliasing at the light field refocusing stage. Different from previous frequency domain aliasing analysis, we carry out a spatial domain analysis to reveal whether the angular aliasing would occur and uncover where in the image it would occur. The spatial analysis also facilitates easy separation of the aliasing versus non-aliasing regions and angular aliasing removal. Experiments on both synthetic scenes and real light field data sets (camera array and Lytro camera) demonstrate that our approach has a number of advantages over the classical prefiltering and depth-dependent light field rendering techniques.

  17. Treatment of temporal aliasing effects in the context of next generation satellite gravimetry missions

    NASA Astrophysics Data System (ADS)

    Daras, Ilias; Pail, Roland

    2017-09-01

    Temporal aliasing effects have a large impact on the gravity field accuracy of current gravimetry missions and are also expected to dominate the error budget of Next Generation Gravimetry Missions (NGGMs). This paper focuses on aspects concerning their treatment in the context of Low-Low Satellite-to-Satellite Tracking NGGMs. Closed-loop full-scale simulations are performed for a two-pair Bender-type Satellite Formation Flight (SFF), by taking into account error models of new generation instrument technology. The enhanced spatial sampling and error isotropy enable a further reduction of temporal aliasing errors from the processing perspective. A parameterization technique is adopted where the functional model is augmented by low-resolution gravity field solutions coestimated at short time intervals, while the remaining higher-resolution gravity field solution is estimated at a longer time interval. Fine-tuning the parameterization choices leads to significant reduction of the temporal aliasing effects. The investigations reveal that the parameterization technique in case of a Bender-type SFF can successfully mitigate aliasing effects caused by undersampling of high-frequency atmospheric and oceanic signals, since their most significant variations can be captured by daily coestimated solutions. This amounts to a "self-dealiasing" method that differs significantly from the classical dealiasing approach used nowadays for Gravity Recovery and Climate Experiment processing, enabling NGGMs to retrieve the complete spectrum of Earth's nontidal geophysical processes, including, for the first time, high-frequency atmospheric and oceanic variations.

  18. Anti-aliasing filters for deriving high-accuracy DEMs from TLS data: A case study from Freeport, Texas

    NASA Astrophysics Data System (ADS)

    Xiong, L.; Wang, G.; Wessel, P.

    2017-12-01

    Terrestrial laser scanning (TLS), also known as ground-based Light Detection and Ranging (LiDAR), has been frequently applied to build bare-earth digital elevation models (DEMs) for high-accuracy geomorphology studies. The point clouds acquired from TLS often achieve a spatial resolution at fingerprint (e.g., 3 cm × 3 cm) to handprint (e.g., 10 cm × 10 cm) level. A downsampling process has to be applied to decimate the massive point clouds and obtain portable DEMs. It is well known that downsampling can result in aliasing that causes different signal components to become indistinguishable when the signal is reconstructed from the datasets with a lower sampling rate. Conventional DEMs are mainly the results of upsampling of sparse elevation measurements from land surveying, satellite remote sensing, and aerial photography. As a consequence, the effects of aliasing have not been fully investigated in the open literature of DEMs. This study aims to investigate the spatial aliasing problem and implement an anti-aliasing procedure of regridding dense TLS data. The TLS data collected in the beach and dune area near Freeport, Texas in the summer of 2015 are used for this study. The core idea of the anti-aliasing procedure is to apply a low-pass spatial filter prior to conducting downsampling. This article describes the successful use of a fourth-order Butterworth low-pass spatial filter employed in the Generic Mapping Tools (GMT) software package as an anti-aliasing filter. The filter can be applied as an isotropic filter with a single cutoff wavelength or as an anisotropic filter with different cutoff wavelengths in the X and Y directions. The cutoff wavelength for the isotropic filter is recommended to be three times the grid size of the target DEM.

  19. Spectral decontamination of a real-time helicopter simulation

    NASA Technical Reports Server (NTRS)

    Mcfarland, R. E.

    1983-01-01

    Nonlinear mathematical models of a rotor system, referred to as rotating blade-element models, produce steady-state, high-frequency harmonics of significant magnitude. In a discrete simulation model, certain of these harmonics may be incompatible with realistic real-time computational constraints because of their aliasing into the operational low-pass region. However, the energy in an aliased harmonic may be suppressed by increasing the computation rate of an isolated, causal nonlinearity and using an appropriate filter. This decontamination technique is applied to Sikorsky's real-time model of the Black Hawk helicopter, as supplied to NASA for handling-qualities investigations.
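    The decontamination idea, evaluating an isolated nonlinearity at a higher rate and filtering before returning to the base rate, can be sketched with a squaring nonlinearity standing in for the rotor model and an ideal FFT low-pass standing in for the "appropriate filter"; all rates and frequencies are invented.

```python
import numpy as np

fs = 100.0                      # base simulation rate, Hz (Nyquist 50 Hz)
up = 4                          # oversampling factor for the nonlinearity
n = 1000
t_lo = np.arange(n) / fs
t_hi = np.arange(n * up) / (fs * up)

def brickwall_lowpass(x, rate, cutoff):
    """Zero all spectral content above `cutoff` Hz (an ideal low-pass)."""
    spec = np.fft.rfft(x)
    spec[np.fft.rfftfreq(x.size, d=1.0 / rate) > cutoff] = 0.0
    return np.fft.irfft(spec, x.size)

def amplitude_at(x, rate, f):
    """Single-sided spectral amplitude of x at frequency f."""
    spec = 2.0 * np.abs(np.fft.rfft(x)) / x.size
    freqs = np.fft.rfftfreq(x.size, d=1.0 / rate)
    return spec[np.argmin(np.abs(freqs - f))]

# Squaring a 30 Hz tone creates a 60 Hz harmonic, above the base-rate Nyquist.
# Naive path: evaluate the nonlinearity at the base rate; 60 Hz folds to 40 Hz.
y_naive = np.sin(2 * np.pi * 30.0 * t_lo) ** 2

# Decontaminated path: oversample the nonlinearity, low-pass below the base
# Nyquist, then decimate back to the base rate.
y_hi = np.sin(2 * np.pi * 30.0 * t_hi) ** 2
y_clean = brickwall_lowpass(y_hi, fs * up, 45.0)[::up]

assert amplitude_at(y_naive, fs, 40.0) > 0.4    # aliased harmonic present
assert amplitude_at(y_clean, fs, 40.0) < 0.01   # suppressed by the procedure
```

The harmonic never reaches the base-rate samples in the oversampled path, which is exactly why the technique confines the extra computation to the isolated nonlinearity rather than the whole model.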

  20. Simulation of sampling effects in FPAs

    NASA Astrophysics Data System (ADS)

    Cook, Thomas H.; Hall, Charles S.; Smith, Frederick G.; Rogne, Timothy J.

    1991-09-01

    The use of multiplexers and large focal plane arrays in advanced thermal imaging systems has drawn renewed attention to sampling and aliasing issues in imaging applications. As evidenced by discussions in a recent workshop, there is no clear consensus among experts whether aliasing in sensor designs can be readily tolerated, or must be avoided at all cost. Further, there is no straightforward, analytical method that can answer the question, particularly when considering image interpreters as different as humans and autonomous target recognizers (ATRs). However, the means exist for investigating sampling and aliasing issues through computer simulation. The U.S. Army Tank-Automotive Command (TACOM) Thermal Image Model (TTIM) provides realistic sensor imagery that can be evaluated by both human observers and ATRs. This paper briefly describes the history and current status of TTIM, explains the simulation of FPA sampling effects, presents validation results of the FPA sensor model, and demonstrates the utility of TTIM for investigating sampling effects in imagery.
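    One sampling effect such a simulation must capture can be sketched directly: a detector's finite footprint acts as a box prefilter that attenuates, but does not remove, scene content above the pixel Nyquist frequency. The toy 1-D version below uses invented numbers and is not the TTIM sensor model.

```python
import numpy as np

pitch = 1.0     # pixel pitch (arbitrary units)
f = 0.9         # scene frequency, cycles/pixel, above the 0.5 cyc/pixel Nyquist
n_pix = 64
centers = np.arange(n_pix) * pitch

# Point sampling at pixel centers: the 0.9 cyc/pixel pattern aliases into a
# full-contrast false pattern at 0.1 cyc/pixel.
point = np.sin(2 * np.pi * f * centers)

# 100% fill-factor detector: averaging over each pixel footprint (midpoint
# rule over 200 subsamples) attenuates the aliased pattern via the footprint MTF.
sub = (np.arange(200) + 0.5) / 200 * pitch - pitch / 2
footprint = np.array([np.sin(2 * np.pi * f * (c + sub)).mean() for c in centers])

assert np.max(np.abs(point)) > 0.9        # alias at nearly full scene contrast
assert np.max(np.abs(footprint)) < 0.15   # strongly attenuated by the footprint
assert np.max(np.abs(footprint)) > 0.05   # ...yet still present, not eliminated
```

That residual alias, reduced by detector geometry but never driven to zero, is the quantity whose tolerability for human observers versus ATRs the simulation is meant to probe.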

  1. Demonstrating the Value of Fine-resolution Optical Data for Minimising Aliasing Impacts on Biogeochemical Models of Surface Waters

    NASA Astrophysics Data System (ADS)

    Chappell, N. A.; Jones, T.; Young, P.; Krishnaswamy, J.

    2015-12-01

    There is increasing awareness that under-sampling may have resulted in the omission of important physicochemical information present in water quality signatures of surface waters - thereby affecting interpretation of biogeochemical processes. For dissolved organic carbon (DOC) and nitrogen this under-sampling can now be avoided using UV-visible spectroscopy measured in-situ and continuously at a fine-resolution e.g. 15 minutes ("real time"). Few methods are available to extract biogeochemical process information directly from such high-frequency data. Jones, Chappell & Tych (2014 Environ Sci Technol: 13289-97) developed one such method using optically-derived DOC data based upon a sophisticated time-series modelling tool. Within this presentation we extend the methodology to quantify the minimum sampling interval required to avoid distortion of model structures and parameters that describe fundamental biogeochemical processes. This shifting of parameters which results from under-sampling is called "aliasing". We demonstrate that storm dynamics at a variety of sites dominate over diurnal and seasonal changes and that these must be characterised by sampling that may be sub-hourly to avoid aliasing. This is considerably shorter than that used by other water quality studies examining aliasing (e.g. Kirchner 2005 Phys Rev: 069902). The modelling approach presented is being developed into a generic tool to calculate the minimum sampling for water quality monitoring in systems driven primarily by hydrology. This is illustrated with fine-resolution, optical data from watersheds in temperate Europe through to the humid tropics.

  2. Shearlet transform in aliased ground roll attenuation and its comparison with f-k filtering and curvelet transform

    NASA Astrophysics Data System (ADS)

    Abolfazl Hosseini, Seyed; Javaherian, Abdolrahim; Hassani, Hossien; Torabi, Siyavash; Sadri, Maryam

    2015-06-01

    Ground roll, a Rayleigh surface wave present in land seismic data, may mask reflections, and it is sometimes spatially aliased. Attenuation of aliased ground roll is therefore of importance in seismic data processing, and different methods have been developed to attenuate ground roll. The shearlet transform is a directional and multidimensional transform that generates subimages of an input image in different directions and scales; events with different dips are separated in these subimages. In this study, the shearlet transform is used to attenuate the aliased ground roll. To do this, a shot record is divided into several segments, and an appropriate mute zone is defined for each segment. The shearlet transform is applied to each segment. The subimages related to the non-aliased and aliased ground roll are identified by plotting the energy distributions of the subimages, together with visual inspection. Muting filters are then applied to the selected subimages, and the inverse shearlet transform is applied to the filtered segment. This procedure is repeated for all segments, and finally all filtered segments are merged using the Hanning window. This method of aliased ground roll attenuation was tested on a synthetic dataset and a field shot record from the west of Iran. The synthetic shot record included strong aliased ground roll, whereas the field shot record did not; to produce strong aliased ground roll on the field shot record, the data were resampled in the offset direction from 30 to 60 m. To assess the performance of the shearlet transform in attenuating the aliased ground roll, we compared it with f-k filtering and the curvelet transform, and showed that it performs better than both on the synthetic and field shot records. However, when the dip and frequency content of the aliased ground roll are the same as those of the reflections, the ability of the shearlet transform to attenuate the aliased ground roll is limited.

  3. 76 FR 21628 - Implementation of Additional Changes From the Annual Review of the Entity List; Removal of Person...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2011-04-18

    ... Engineering Physics.'' The changes included revising the entry to add additional aliases for that entry. The... listing the aliases as separate aliases for the Chinese Academy of Engineering Physics. China (1) Chinese Academy of Engineering Physics, a.k.a., the following nineteen aliases: --Ninth Academy; --Southwest...

  4. Seismic Reflection Imaging of Detachment Faulting at 13°N on the Mid-Atlantic Ridge

    NASA Astrophysics Data System (ADS)

    Falder, M.; Reston, T. J.; Peirce, C.; Simão, N.; MacLeod, C. J.; Searle, R. C.

    2016-12-01

    The observation of domal corrugated surfaces at slow-spreading ridges less than two decades ago dramatically challenged our understanding of seafloor spreading. These `oceanic core complexes' (OCCs) are believed to be formed by large-scale detachment faults that accommodate plate separation during periods when melt supply is low or absent entirely. Despite increasing recognition of their importance, the mechanics of, and interactions between, detachment faults at OCCs are not well understood. In Jan-Feb 2016, seismic reflection and refraction data were acquired across the 13°N OCCs. The twelve-airgun seismic source was recorded by a 3000 m-long streamer, with shots fired either with the full array at 20 s intervals or with half the array in a "flip-flop" fashion every 10 s. The shorter firing interval results in significantly less spatial aliasing and enhances the performance of the F-K domain filtering. Here we present preliminary seismic reflection images of the 13°N region. The currently active 13° 20'N detachment fault is imaged continuing downwards from the smooth fault plane exposed at the seabed. Away from the fault, and between the two OCCs in the area, fewer subsurface structures are observed, which may represent either an actual lack of sharp acoustic contrasts or a result of the challenging imaging conditions. Acoustic energy scattered by rough bathymetry, both within and out of the plane of section, is the main challenge for seismic reflection imaging in this area, and various strategies for its attenuation are being investigated, including prediction based on the high-resolution bathymetry acquired.

  5. Anti-aliasing Wiener filtering for wave-front reconstruction in the spatial-frequency domain for high-order astronomical adaptive-optics systems.

    PubMed

    Correia, Carlos M; Teixeira, Joel

    2014-12-01

    Computationally efficient wave-front reconstruction techniques for astronomical adaptive-optics (AO) systems have seen great development in the past decade. Algorithms developed in the spatial-frequency (Fourier) domain have gathered much attention, especially for high-contrast imaging systems. In this paper we present the Wiener filter (resulting in the maximization of the Strehl ratio) and further develop formulae for the anti-aliasing (AA) Wiener filter that optimally takes into account high-order wave-front terms folded in-band during the sensing (i.e., discrete sampling) process. We employ a continuous spatial-frequency representation for the forward measurement operators and derive the Wiener filter when aliasing is explicitly taken into account. We further investigate the reconstructed wave-front, measurement-noise, and aliasing propagation coefficients as a function of system order, and compare them to classical least-squares estimates. Regarding high-contrast systems, we provide achievable performance results as a function of an ensemble of forward models for the Shack-Hartmann wave-front sensor (using sparse and nonsparse representations) and compute point-spread-function raw intensities. We find that for a 32×32 single-conjugate AO system the aliasing propagation coefficient is roughly 60% of that of the least-squares filter, whereas the noise propagation is around 80%. Contrast improvements by factors of up to 2 are achievable across the field in the H band. For current and next-generation high-contrast imagers, despite better aliasing mitigation, AA Wiener filtering cannot be used as a standalone method and must therefore be used in combination with optical spatial filters deployed before image formation actually takes place.

  6. Anti-aliasing filters for deriving high-accuracy DEMs from TLS data: A case study from Freeport, Texas

    NASA Astrophysics Data System (ADS)

    Xiong, Lin; Wang, Guoquan; Wessel, Paul

    2017-03-01

    Terrestrial laser scanning (TLS), also known as ground-based Light Detection and Ranging (LiDAR), has been frequently applied to build bare-earth digital elevation models (DEMs) for high-accuracy geomorphology studies. The point clouds acquired from TLS often achieve a spatial resolution at fingerprint (e.g., 3 cm×3 cm) to handprint (e.g., 10 cm×10 cm) level. A downsampling process has to be applied to decimate the massive point clouds and obtain manageable DEMs. It is well known that downsampling can result in aliasing that causes different signal components to become indistinguishable when the signal is reconstructed from the datasets with a lower sampling rate. Conventional DEMs are mainly the results of upsampling of sparse elevation measurements from land surveying, satellite remote sensing, and aerial photography. As a consequence, the effects of aliasing caused by downsampling have not been fully investigated in the open literature of DEMs. This study aims to investigate the spatial aliasing problem of regridding dense TLS data. The TLS data collected from the beach and dune area near Freeport, Texas in the summer of 2015 are used for this study. The core idea of the anti-aliasing procedure is to apply a low-pass spatial filter prior to conducting downsampling. This article describes the successful use of a fourth-order Butterworth low-pass spatial filter employed in the Generic Mapping Tools (GMT) software package as an anti-aliasing filter. The filter can be applied as an isotropic filter with a single cutoff wavelength or as an anisotropic filter with two different cutoff wavelengths in the X and Y directions. The cutoff wavelength for the isotropic filter is recommended to be three times the grid size of the target DEM.
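The core idea — low-pass filter first, then decimate — can be sketched in a few lines. This is a toy stand-in, not the GMT workflow described in the article: a simple boxcar average replaces the fourth-order Butterworth filter, and the profile, dune, and ripple parameters are invented for illustration; only the rule of thumb (cutoff near three times the target grid size) comes from the article.

```python
import math

dx = 0.03                                                   # TLS point spacing, ~3 cm
n = 4000                                                    # ~120 m long profile
dune = lambda x: 1.0 * math.sin(2 * math.pi * x / 40.0)     # 40 m dune: the signal to keep
ripple = lambda x: 0.05 * math.sin(2 * math.pi * x / 0.15)  # 15 cm ripples: above the new Nyquist
z = [dune(i * dx) + ripple(i * dx) for i in range(n)]

step = int(round(1.0 / dx))    # decimate to a ~1 m DEM grid
half = int(round(1.5 / dx))    # boxcar half-width: cutoff ~3x the target grid size

def boxcar(sig, i, h):
    """Average of sig over a window of half-width h centered at i."""
    lo, hi = max(0, i - h), min(len(sig), i + h + 1)
    return sum(sig[lo:hi]) / (hi - lo)

raw, filt, truth = [], [], []
for i in range(half, n - half, step):
    raw.append(z[i])               # naive decimation: ripples alias into the DEM
    filt.append(boxcar(z, i, half))  # low-pass before decimation
    truth.append(dune(i * dx))       # the bare-earth signal we want

rms = lambda a, b: math.sqrt(sum((u - v) ** 2 for u, v in zip(a, b)) / len(a))
print(rms(raw, truth), rms(filt, truth))  # pre-filtering gives a much smaller error
```

The unfiltered decimation carries the full ripple amplitude into the coarse grid, whereas the pre-filtered version averages it out at the cost of a slight (sub-1%) attenuation of the dune signal.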

  7. Dynamic strains for earthquake source characterization

    USGS Publications Warehouse

    Barbour, Andrew J.; Crowell, Brendan W

    2017-01-01

    Strainmeters measure elastodynamic deformation associated with earthquakes over a broad frequency band, with detection characteristics that complement traditional instrumentation, but they are commonly used to study slow transient deformation along active faults and at subduction zones, for example. Here, we analyze dynamic strains at Plate Boundary Observatory (PBO) borehole strainmeters (BSM) associated with 146 local and regional earthquakes from 2004–2014, with magnitudes from M 4.5 to 7.2. We find that peak values in seismic strain can be predicted from a general regression against distance and magnitude, with improvements in accuracy gained by accounting for biases associated with site–station effects and source–path effects, the latter exhibiting the strongest influence on the regression coefficients. To account for the influence of these biases in a general way, we include crustal‐type classifications from the CRUST1.0 global velocity model, which demonstrates that high‐frequency strain data from the PBO BSM network carry information on crustal structure and fault mechanics: earthquakes nucleating offshore on the Blanco fracture zone, for example, generate consistently lower dynamic strains than earthquakes around the Sierra Nevada microplate and in the Salton trough. Finally, we test our dynamic strain prediction equations on the 2011 M 9 Tohoku‐Oki earthquake, specifically continuous strain records derived from triangulation of 137 high‐rate Global Navigation Satellite System Earth Observation Network stations in Japan. Moment magnitudes inferred from these data and the strain model are in agreement when Global Positioning System subnetworks are unaffected by spatial aliasing.

  8. Development of a 3D VHR seismic reflection system for lacustrine settings - a case study in Lake Geneva, Switzerland

    NASA Astrophysics Data System (ADS)

    Scheidhauer, M.; Dupuy, D.; Marillier, F.; Beres, M.

    2003-04-01

    For better understanding of geologic processes in complex lacustrine settings, detailed information on geologic features is required. In many cases, the 3D seismic method may be the only appropriate approach. The aim of this work is to develop an efficient very high-resolution 3D seismic reflection system for lake studies. In Lake Geneva, Switzerland, near the city of Lausanne, past high-resolution investigations revealed a complex fault zone, which was subsequently chosen for testing our new system of three 24-channel streamers and integrated differential GPS (dGPS) positioning. A survey, carried out in 9 days in August 2001, covered an area of 1500 m × 675 m and comprised 180 CMP lines sailed perpendicular to the fault strike, always updip, since otherwise the asymmetric system would result in different stacks for opposite directions. Accurate navigation and a shot spacing of 5 m are achieved with a specially developed navigation and shot-triggering software that uses differential GPS onboard and a reference base close to the lake shore. Hydrophone positions could be accurately (<0.5 m) calculated with the aid of three additional dGPS antennas mounted on rafts attached to the streamer tails. Towed at a distance of only 75 m behind the vessel, they allowed determination of possible feathering due to cross-line currents or small course variations. The multi-streamer system uses two retractable booms deployed on each side of the boat that rest on floats. They separate the two outer streamers from the one in the center by a distance of 7.5 m. Combined with a receiver spacing of 2.5 m, the bin dimension of the 3D data becomes 3.75 m in the cross-line and 1.25 m in the inline direction. Potential aliasing problems from steep reflectors up to 30° within the fault zone motivated the use of a 15/15 cu. in. double-chamber bubble-canceling Mini G.I. air gun (operated at 80 bars and 1 m depth). Although its frequencies do not exceed 650 Hz, it combines a penetration of non-aliased signal to depths of 400 m with a best vertical resolution of 1.15 m. The multi-streamer system allows acquisition of high quality data, which already after conventional 3D processing show particularly clear images of the fault zone and the overlying sediments in all directions. Prestack depth migration can further improve data quality and is more appropriate for subsequent geologic interpretation.
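The spatial-aliasing concern for 30° dips can be checked against the bin sizes with the standard two-traces-per-apparent-wavelength (spatial Nyquist) criterion. The 1500 m/s velocity below is an assumed water/near-seafloor value, not stated in the record.

```python
import math

v = 1500.0                 # assumed propagation velocity, m/s (not in the record)
dip = math.radians(30.0)   # steepest reflector dip cited in the record

def f_max_unaliased(bin_size):
    """Highest frequency with >= 2 traces per apparent horizontal
    wavelength lambda_x = v / (f * sin(dip))  (spatial Nyquist)."""
    return v / (2.0 * bin_size * math.sin(dip))

print(f_max_unaliased(1.25))   # inline bin: ~1200 Hz, safely above the 650 Hz source
print(f_max_unaliased(3.75))   # cross-line bin: ~400 Hz, so 30-degree dips can alias
```

Under these assumptions the 1.25 m inline bin stays unaliased well beyond the source's 650 Hz limit, while the 3.75 m cross-line bin aliases energy above roughly 400 Hz at 30° dip, consistent with the record's concern about steep reflectors.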

  9. Reduced aliasing artifacts using shaking projection k-space sampling trajectory

    NASA Astrophysics Data System (ADS)

    Zhu, Yan-Chun; Du, Jiang; Yang, Wen-Chao; Duan, Chai-Jie; Wang, Hao-Yu; Gao, Song; Bao, Shang-Lian

    2014-03-01

    Radial imaging techniques, such as projection-reconstruction (PR), are used in magnetic resonance imaging (MRI) for dynamic imaging, angiography, and short-T2 imaging. They are less sensitive to flow and motion artifacts, and support fast imaging with short echo times. However, aliasing and streaking artifacts are the two main sources of degradation in radial imaging. For a fixed number of k-space projections, the data distributions along the radial and angular directions influence the level of aliasing and streaking artifacts. The conventional radial k-space sampling trajectory introduces an aliasing artifact at the first principal ring of the point spread function (PSF). In this paper, a shaking projection (SP) k-space sampling trajectory is proposed to reduce aliasing artifacts in MR images. The SP trajectory shifts the projections alternately about the k-space center, which separates k-space data in the azimuthal direction. Simulations based on the conventional and SP sampling trajectories were compared using the same number of projections, and a significant reduction of aliasing artifacts was observed with the SP trajectory. The two trajectories were also compared at different sampling frequencies: the SP trajectory retains the same aliasing behavior when using half the sampling frequency (i.e., half the data) for reconstruction. SNR comparisons at different white-noise levels show that the two trajectories have the same SNR behavior. In conclusion, the SP trajectory can reduce the aliasing artifact without decreasing SNR and also provides a route to undersampled reconstruction. Furthermore, the method can be applied to three-dimensional (3D) hybrid or spherical radial k-space sampling for a more efficient reduction of aliasing artifacts.

  10. Adaptive attenuation of aliased ground roll using the shearlet transform

    NASA Astrophysics Data System (ADS)

    Hosseini, Seyed Abolfazl; Javaherian, Abdolrahim; Hassani, Hossien; Torabi, Siyavash; Sadri, Maryam

    2015-01-01

    Attenuation of ground roll is an essential step in seismic data processing. Spatial aliasing of the ground roll may cause it to overlap with reflections in the f-k domain. The shearlet transform is a directional and multidimensional transform that separates events with different dips and generates subimages at different scales and directions. In this study, the shearlet transform was used adaptively to attenuate aliased and non-aliased ground roll. After a filtering zone is defined, an input shot record is divided into segments, each overlapping its neighbors. The shearlet transform is applied to each segment; the subimages containing aliased and non-aliased ground roll are identified, and the locations of these events on each subimage are selected adaptively. Based on these locations, a mute is applied to the selected subimages. After the inverse shearlet transform is applied, the filtered segments are merged together using the Hanning function. This adaptive ground roll attenuation process was tested on synthetic data and on field shot records from the west of Iran. Analysis of the results using the f-k spectra revealed that the non-aliased and most of the aliased ground roll were attenuated by the proposed adaptive procedure. We also applied the method to shot records of a 2D land survey, and the data sets before and after ground roll attenuation were stacked and compared: the stacked section after ground roll attenuation contained less linear ground roll noise and more continuous reflections. The proposed method has some drawbacks, such as a longer run time than traditional methods like f-k filtering, and reduced performance when the dip and frequency content of the aliased ground roll are the same as those of the reflections.

  11. The N/Rev phenomenon in simulating a blade-element rotor system

    NASA Technical Reports Server (NTRS)

    Mcfarland, R. E.

    1983-01-01

    When a simulation model produces frequencies that are beyond the bandwidth of a discrete implementation, anomalous frequencies appear within the bandwidth. Such is the case with blade-element models of rotor systems, which are used in the real-time, man-in-the-loop simulation environment. Steady-state, high-frequency harmonics generated by these models, whether aliased or not, obscure piloted helicopter simulation responses. Since these harmonics are attenuated in actual rotorcraft (e.g., because of structural damping), a faithful environment representation for handling-qualities purposes may be created from the original model by using certain filtering techniques, as outlined here. These include harmonic consideration, conventional filtering, and decontamination. The process of decontamination is of special interest because frequencies of importance to simulation operation are not attenuated, whereas superimposed aliased harmonics are.

  12. Reconstruction of full high-resolution HSQC using signal split in aliased spectra.

    PubMed

    Foroozandeh, Mohammadali; Jeannerat, Damien

    2015-11-01

    Resolution enhancement is a long-sought goal in NMR spectroscopy. In conventional multidimensional NMR experiments, such as the ¹H-¹³C HSQC, the resolution in the indirect dimensions is typically 100 times lower than in 1D spectra because it is limited by the experimental time. Reducing the spectral window can significantly increase the resolution, but at the cost of ambiguities in frequencies as a result of spectral aliasing. Fortunately, this information is not completely lost and can be retrieved using methods in which chemical shifts are encoded in the aliased spectra and decoded after processing to reconstruct a high-resolution ¹H-¹³C HSQC spectrum with the full spectral width and a resolution similar to that of 1D spectra. We applied a new reconstruction method, RHUMBA (reconstruction of high-resolution using multiplet built on aliased spectra), to spectra obtained from the differential evolution for non-ambiguous aliasing HSQC and the new AMNA (additional modulation for non-ambiguous aliasing) HSQC experiments. The reconstructed spectra significantly facilitate both manual and automated spectral analyses and structure elucidation based on heteronuclear 2D experiments. The resolution is enhanced by two orders of magnitude without the usual complications due to spectral aliasing. Copyright © 2015 John Wiley & Sons, Ltd.
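The decoding step relies on a simple relation: a peak whose true shift lies outside a reduced spectral window reappears folded by an integer number of spectral widths. A minimal sketch of the unfolding — the window edges and the 132.4 ppm shift are hypothetical values, not from the paper:

```python
sw = 20.0      # reduced 13C spectral width, ppm (hypothetical)
lo = 60.0      # left edge of the narrow window, ppm (hypothetical)
true = 132.4   # hypothetical 13C shift lying outside the window

# where the aliased peak lands inside the reduced window
observed = lo + (true - lo) % sw          # 72.4 ppm

# candidate true shifts consistent with the aliased position,
# restricted to a plausible 13C range of 0-200 ppm
candidates = [observed + k * sw for k in range(-10, 11)
              if 0.0 <= observed + k * sw <= 200.0]
```

The extra encoding described in the record (multiplet structure, additional modulation) is what picks the correct member of `candidates`; folding alone cannot.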

  13. Aliased tidal errors in TOPEX/POSEIDON sea surface height data

    NASA Technical Reports Server (NTRS)

    Schlax, Michael G.; Chelton, Dudley B.

    1994-01-01

    Alias periods and wavelengths for the M2, S2, N2, K1, O1, and P1 tidal constituents are calculated for TOPEX/POSEIDON. Alias wavelengths calculated in previous studies are shown to be in error, and a correct method is presented. With the exception of the K1 constituent, all of these tidal aliases for TOPEX/POSEIDON have periods shorter than 90 days and are likely to be confounded with long-period sea surface height signals associated with real ocean processes. In particular, the correspondence between the periods and wavelengths of the M2 alias and annual baroclinic Rossby waves that plagued Geosat sea surface height data is avoided. The potential for aliasing residual tidal errors in smoothed estimates of sea surface height is calculated for the six tidal constituents. The potential for aliasing the lunar tidal constituents M2, N2 and O1 fluctuates with latitude and is different for estimates made at the crossovers of ascending and descending ground tracks than for estimates at points midway between crossovers. The potential for aliasing the solar tidal constituents S2, K1 and P1 varies smoothly with latitude. S2 is strongly aliased for latitudes within 50 degrees of the equator, while K1 and P1 are only weakly aliased in that range. A weighted least-squares method for estimating and removing residual tidal errors from TOPEX/POSEIDON sea surface height data is presented. A clear understanding of the nature of aliased tidal error in TOPEX/POSEIDON data aids the unambiguous identification of real propagating sea surface height signals. Unequivocal evidence of annual-period, westward-propagating waves in the North Atlantic is presented.
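The alias period follows from how far the tidal phase advances between successive samples of the exact-repeat orbit. A quick check for the M2 constituent; the 9.9156-day repeat period comes from the TOPEX/POSEIDON mission design, not from this record:

```python
P_M2 = 12.4206012 / 24.0   # M2 tidal period, days
T_s = 9.9156               # TOPEX/POSEIDON exact-repeat period, days

cycles = T_s / P_M2                    # M2 cycles elapsed between repeat samples
frac = abs(cycles - round(cycles))     # fractional cycle advance per sample
alias_period = T_s / frac
print(alias_period)                    # ~62 days, the well-known M2 alias
```

Because ~62 days is shorter than many oceanographic signals of interest, residual M2 errors are easily confused with real low-frequency variability, which is the motivation for the estimation-and-removal method the record describes.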

  14. Evaluating Health Outcomes of Criminal Justice Populations Using Record Linkage: The Importance of Aliases

    ERIC Educational Resources Information Center

    Larney, Sarah; Burns, Lucy

    2011-01-01

    Individuals in contact with the criminal justice system are a key population of concern to public health. Record linkage studies can be useful for studying health outcomes for this group, but the use of aliases complicates the process of linking records across databases. This study was undertaken to determine the impact of aliases on sensitivity…

  15. Experimental Investigation of the Performance of Image Registration and De-aliasing Algorithms

    DTIC Science & Technology

    2009-09-01

    ...undersampled point spread function. In the literature these types of algorithms are sometimes included under the broad umbrella of superresolution. However, in the current... We use one of these patterns to visually demonstrate successful de-aliasing. SUBJECT TERMS: image de-aliasing; superresolution; microscanning; image...

  16. Viewing-zone enlargement method for sampled hologram that uses high-order diffraction.

    PubMed

    Mishina, Tomoyuki; Okui, Makoto; Okano, Fumio

    2002-03-10

    We demonstrate a method of enlarging the viewing zone in holography for holograms that have a pixel structure. First, the aliasing generated by the pixel-based sampling of a hologram is described. Next, the high-order diffracted beams reproduced from a hologram that contains aliasing are explained. Finally, we show that the viewing zone can be enlarged by combining these high-order reconstructed beams from the hologram with aliasing.

  17. RADIAL VELOCITY PLANETS DE-ALIASED: A NEW, SHORT PERIOD FOR SUPER-EARTH 55 Cnc e

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Dawson, Rebekah I.; Fabrycky, Daniel C., E-mail: rdawson@cfa.harvard.ed, E-mail: daniel.fabrycky@gmail.co

    2010-10-10

    Radial velocity measurements of stellar reflex motion have revealed many extrasolar planets, but gaps in the observations produce aliases, spurious frequencies that are frequently confused with the planets' orbital frequencies. In the case of Gl 581 d, the distinction between an alias and the true frequency was the distinction between a frozen, dead planet and a planet possibly hospitable to life. To improve the characterization of planetary systems, we describe how aliases originate and present a new approach for distinguishing between orbital frequencies and their aliases. Our approach harnesses features in the spectral window function to compare the amplitude and phase of predicted aliases with peaks present in the data. We apply it to confirm prior alias distinctions for the planets GJ 876 d and HD 75898 b. We find that the true periods of Gl 581 d and HD 73526 b/c remain ambiguous. We revise the periods of HD 156668 b and 55 Cnc e, which were afflicted by daily aliases. For HD 156668 b, the correct period is 1.2699 days and the minimum mass is (3.1 ± 0.4) M⊕. For 55 Cnc e, the correct period is 0.7365 days, the shortest of any known planet, and the minimum mass is (8.3 ± 0.3) M⊕. This revision produces a significantly improved five-planet Keplerian fit for 55 Cnc, and a self-consistent dynamical fit describes the data just as well. As radial velocity techniques push to ever-smaller planets, often found in systems of multiple planets, distinguishing true periods from aliases will become increasingly important.
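The daily-alias relation the authors exploit is easy to verify: two candidate periods are aliases of one another when their frequencies differ by the sampling frequency imposed by nightly observing, one cycle per sidereal day. A sketch using the revised 0.7365 d period from the record together with 2.817 d, taken here as the previously published 55 Cnc e period (a literature value, not stated in this record):

```python
f_sid = 1.0027379              # one cycle per sidereal day, cycles/day

P_old, P_new = 2.817, 0.7365   # prior vs revised 55 Cnc e periods, days
delta = abs(1.0 / P_new - 1.0 / P_old)

print(delta)                   # differs from f_sid by < 0.001 cycles/day
```

Because the frequency difference matches the sidereal-day sampling frequency so closely, a periodogram alone cannot separate the two peaks; the amplitude-and-phase comparison of predicted aliases described in the record is what breaks the tie.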

  18. Estimating tropospheric phase delay in SAR interferograms using Global Atmospheric Models

    NASA Astrophysics Data System (ADS)

    Doin, M.; Lasserre, C.; Peltzer, G.; Cavalie, O.; Doubre, C.

    2008-12-01

    The main limiting factor on the accuracy of Interferometric SAR (InSAR) measurements comes from phase propagation delays through the Earth's troposphere. The delay can be divided into a stratified component, which correlates with the topography and often dominates the tropospheric signal in InSAR data, and a turbulent component. The stratified delay can be expressed as a function of the vertical profiles of atmospheric pressure P, temperature T, and water vapor partial pressure e. We compare the stratified delay computed using results from global atmospheric models with the topography-dependent signal observed in interferograms covering three test areas in different geographic and climatic environments: Lake Mead, Nevada, USA; the Haiyuan fault area, Gansu, China; and Afar, Republic of Djibouti. For each site we compute a multi-year series of interferograms. The phase-elevation ratio is estimated for each interferogram, and the series is inverted to form a timeline of delay-elevation ratios characterizing each epoch of data acquisition. InSAR-derived ratios are in good agreement with the ratios computed from global atmospheric models. This agreement shows that both estimations of the delay-elevation ratio can be used to perform a first-order correction of the InSAR phase. Seasonal variations of the atmosphere significantly affect the phase delay throughout the year, aliasing the results of time series inversions using temporal smoothing or data stacking when the acquisitions are not evenly distributed in time. This is particularly critical when the spatial shape of the signal of interest correlates with topography. In the Lake Mead area, the irregular temporal sampling of our SAR data results in an interannual bias of amplitude ~2 cm on range change estimates. In the Haiyuan fault area, the coarse and uneven data sampling results in a bias of up to ~0.5 cm/yr on the line-of-sight velocity across the fault. In the Afar area, the seasonal signal exceeds the deformation signal in the phase time series. In all cases, correcting interferograms for the stratified delay helps remove these biases. Finally, we suggest that the phase delay correction can potentially be improved by introducing a non-linear dependence on elevation, as consistent non-linear relationships are observed in many interferograms as well as in global atmospheric models.

  19. De-Aliasing Through Over-Integration Applied to the Flux Reconstruction and Discontinuous Galerkin Methods

    NASA Technical Reports Server (NTRS)

    Spiegel, Seth C.; Huynh, H. T.; DeBonis, James R.

    2015-01-01

    High-order methods are quickly becoming popular for turbulent flows as the amount of computer processing power increases. The flux reconstruction (FR) method presents a unifying framework for a wide class of high-order methods including discontinuous Galerkin (DG), Spectral Difference (SD), and Spectral Volume (SV). It offers a simple, efficient, and easy way to implement nodal-based methods that are derived via the differential form of the governing equations. Whereas high-order methods have enjoyed recent success, they have been known to introduce numerical instabilities due to polynomial aliasing when applied to under-resolved nonlinear problems. Aliasing errors have been extensively studied in reference to DG methods; however, their study regarding FR methods has mostly been limited to the selection of the nodal points used within each cell. Here, we extend some of the de-aliasing techniques used for DG methods, primarily over-integration, to the FR framework. Our results show that over-integration does remove aliasing errors but may not remove all instabilities caused by insufficient resolution (for FR as well as DG).
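The mechanism is quadrature aliasing: a nonlinear flux built from a degree-p polynomial solution has a degree higher than p, so a quadrature rule matched to the solution space mis-evaluates it, while a richer (over-integrated) rule does not. A minimal 1D illustration with hard-coded Gauss-Legendre rules, not tied to any particular FR or DG code:

```python
# Gauss-Legendre rules on [-1, 1]:
# 2 points -> exact through degree 3; 3 points -> exact through degree 5
g2 = [(-1.0 / 3**0.5, 1.0), (1.0 / 3**0.5, 1.0)]
g3 = [(0.0, 8.0 / 9.0), ((3.0 / 5.0)**0.5, 5.0 / 9.0), (-(3.0 / 5.0)**0.5, 5.0 / 9.0)]

f = lambda x: x**2             # degree-2 "solution"
flux = lambda x: f(x) * f(x)   # nonlinear flux: a degree-4 product

quad = lambda rule: sum(w * flux(x) for x, w in rule)
exact = 2.0 / 5.0              # integral of x^4 over [-1, 1]

print(quad(g2))   # 2/9: the under-resolved rule aliases the degree-4 flux
print(quad(g3))   # 0.4: over-integration evaluates it exactly
```

The 2-point rule mis-integrates the quartic flux (2/9 instead of 2/5), which is precisely the aliasing error that over-integration removes; as the record notes, this does not by itself cure every instability of an under-resolved simulation.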

  20. Aliasing errors in measurements of beam position and ellipticity

    NASA Astrophysics Data System (ADS)

    Ekdahl, Carl

    2005-09-01

    Beam position monitors (BPMs) are used in accelerators and ion experiments to measure currents, position, and azimuthal asymmetry. These usually consist of discrete arrays of electromagnetic field detectors, with detectors located at several equally spaced azimuthal positions at the beam tube wall. The discrete nature of these arrays introduces systematic errors into the data, independent of uncertainties resulting from signal noise, lack of recording dynamic range, etc. Computer simulations were used to understand and quantify these aliasing errors. If required, aliasing errors can be significantly reduced by employing more than the usual four detectors in the BPMs. These simulations show that the error in measurements of the centroid position of a large beam is indistinguishable from the error in the position of a filament. The simulations also show that aliasing errors in the measurement of beam ellipticity are very large unless the beam is accurately centered. The simulations were used to quantify the aliasing errors in beam parameter measurements during early experiments on the DARHT-II accelerator, demonstrating that they affected the measurements only slightly, if at all.
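The root cause of these aliasing errors can be shown with the discrete orthogonality of azimuthal harmonics: on N equally spaced detectors, the overlap of cos(mθ) with cos(nθ) is nonzero whenever m ± n is a multiple of N, so with four detectors the beam's m = 3 field content leaks into the m = 1 (position) estimate. A small sketch using generic harmonics, not the DARHT-II geometry:

```python
import math

def coupling(m, n, N):
    """Discrete overlap of cos(m*theta) with cos(n*theta) on N equally
    spaced wall detectors; nonzero when m +/- n is a multiple of N."""
    return sum(math.cos(m * 2.0 * math.pi * k / N) *
               math.cos(n * 2.0 * math.pi * k / N) for k in range(N)) / N

print(coupling(3, 1, 4))   # 0.5: with 4 detectors, m=3 aliases into the m=1 moment
print(coupling(3, 1, 8))   # ~0:  with 8 detectors, the coupling vanishes
```

This is the sense in which "more than the usual four detectors" reduces aliasing: adding detectors pushes the first aliased multipole to higher order, where the beam fields are weaker.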

  1. Blind identification of full-field vibration modes of output-only structures from uniformly-sampled, possibly temporally-aliased (sub-Nyquist), video measurements

    NASA Astrophysics Data System (ADS)

    Yang, Yongchao; Dorn, Charles; Mancini, Tyler; Talken, Zachary; Nagarajaiah, Satish; Kenyon, Garrett; Farrar, Charles; Mascareñas, David

    2017-03-01

    Enhancing the spatial and temporal resolution of vibration measurements and modal analysis could significantly benefit dynamic modelling, analysis, and health monitoring of structures. For example, spatially high-density mode shapes are critical for accurate vibration-based damage localization. In experimental or operational modal analysis, higher (frequency) modes, which may be outside the frequency range of the measurement, contain local structural features that can improve damage localization as well as the construction and updating of the modal-based dynamic model of the structure. In general, the resolution of vibration measurements can be increased by enhanced hardware. Traditional vibration measurement sensors such as accelerometers have high-frequency sampling capacity; however, they are discrete point-wise sensors that provide only sparse, low-spatial-resolution measurements, while dense deployment to achieve high spatial resolution is expensive and results in a mass-loading effect and modification of the structure's surface. Non-contact measurement methods such as scanning laser vibrometers provide high spatial and temporal resolution sensing capacity; however, they make measurements sequentially, which requires considerable acquisition time. As an alternative non-contact method, digital video cameras are relatively low-cost, agile, and provide high-spatial-resolution, simultaneous measurements. Combined with vision-based algorithms (e.g., image correlation or template matching, optical flow, etc.), video-camera-based measurements have been successfully used for experimental and operational vibration measurement and subsequent modal analysis. However, the sampling frequency of most affordable digital cameras is limited to 30-60 Hz, while high-speed cameras for higher-frequency vibration measurements are extremely costly. 
This work develops a computational algorithm capable of performing vibration measurement at a uniform sampling frequency lower than what is required by the Shannon-Nyquist sampling theorem for output-only modal analysis. In particular, the spatio-temporal uncoupling property of the modal expansion of structural vibration responses enables a direct modal decoupling of the temporally-aliased vibration measurements by existing output-only modal analysis methods, yielding (full-field) mode shape estimates directly. The signal aliasing properties in modal analysis are then exploited to estimate the modal frequencies and damping ratios. The proposed method is validated by laboratory experiments in which output-only modal identification is conducted on temporally-aliased acceleration responses and, in particular, the temporally-aliased video measurements of bench-scale structures, including a three-story building structure and a cantilever beam.
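The folding relationship the method exploits can be sketched as follows (a hypothetical illustration, not the authors' code): a modal frequency above the camera's Nyquist limit appears at a predictable aliased frequency, and conversely an identified aliased frequency constrains the true modal frequency to a small discrete candidate set.

```python
def fold(f_true, fs):
    """Frequency at which a tone of frequency f_true (Hz) appears after
    uniform sampling at fs (Hz): fold into the baseband [0, fs/2]."""
    r = f_true % fs
    return fs - r if r > fs / 2 else r

def candidates(f_alias, fs, f_max):
    """All true frequencies up to f_max that would alias to f_alias."""
    out = set()
    k = 0
    while k * fs - f_alias <= f_max:
        for f in (k * fs + f_alias, k * fs - f_alias):
            if 0 <= f <= f_max:
                out.add(round(f, 9))
        k += 1
    return sorted(out)

# A 27.5 Hz mode captured by a 30 Hz camera shows up at 2.5 Hz; the
# ambiguity is a small candidate set that other information can resolve.
observed = fold(27.5, 30.0)              # 2.5 Hz
possible = candidates(observed, 30.0, 100.0)
```

Every candidate folds back to the same observed frequency, which is the ambiguity the identification step must resolve.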

  2. The Influence of Gantry Geometry on Aliasing and Other Geometry Dependent Errors

    NASA Astrophysics Data System (ADS)

    Joseph, Peter M.

    1980-06-01

    At least three gantry geometries are widely used in medical CT scanners: (1) rotate-translate, (2) rotating detectors, (3) stationary detectors. There are significant geometrical differences between these designs, especially regarding (a) the region of space scanned by any given detector and (b) the sample density of rays which scan the patient. It is imperative to distinguish between "views" and "rays" in analyzing this situation. In particular, views are defined by the x-ray source in type 2 and by the detector in type 3 gantries. It is known that ray dependent errors are generally much more important than view dependent errors. It is shown that spatial resolution is primarily limited by the spacing between rays in any view, while the number of ray samples per beam width determines the extent of aliasing artifacts. Rotating detector gantries are especially susceptible to aliasing effects. It is shown that aliasing effects can distort the point spread function in a way that is highly dependent on the position of the point in the scanned field. Such effects can cause anomalies in the MTF functions as derived from points in machines with significant aliasing problems.

  3. Dynamic change in mitral regurgitant orifice area: comparison of color Doppler echocardiographic and electromagnetic flowmeter-based methods in a chronic animal model.

    PubMed

    Shiota, T; Jones, M; Teien, D E; Yamada, I; Passafini, A; Ge, S; Sahn, D J

    1995-08-01

    The aim of the present study was to investigate dynamic changes in the mitral regurgitant orifice using electromagnetic flow probes and flowmeters and the color Doppler flow convergence method. Methods for determining mitral regurgitant orifice areas have been described using flow convergence imaging with a hemispheric isovelocity surface assumption. However, the shape of flow convergence isovelocity surfaces depends on many factors that change during regurgitation. In seven sheep with surgically created mitral regurgitation, 18 hemodynamic states were studied. The aliasing distances of flow convergence were measured at 10 sequential points using two ranges of aliasing velocities (0.20 to 0.32 and 0.56 to 0.72 m/s), and instantaneous flow rates were calculated using the hemispheric assumption. Instantaneous regurgitant areas were determined from the regurgitant flow rates obtained from both electromagnetic flowmeters and flow convergence divided by the corresponding continuous wave velocities. The regurgitant orifice sizes obtained using the electromagnetic flow method usually increased to maximal size in early to midsystole and then decreased in late systole. Patterns of dynamic changes in orifice area obtained by flow convergence were not the same as those delineated by the electromagnetic flow method. Time-averaged regurgitant orifice areas obtained by flow convergence using lower aliasing velocities overestimated the areas obtained by the electromagnetic flow method ([mean +/- SD] 0.27 +/- 0.14 vs. 0.12 +/- 0.06 cm2, p < 0.001), whereas flow convergence, using higher aliasing velocities, estimated the reference areas more reliably (0.15 +/- 0.06 cm2). The electromagnetic flow method studies uniformly demonstrated dynamic change in mitral regurgitant orifice area and suggested limitations of the flow convergence method.
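The hemispheric flow-convergence arithmetic underlying the comparison can be sketched as follows; the aliasing radius, aliasing velocity, and CW jet velocity below are illustrative values, not data from the study.

```python
import math

def pisa_flow_rate(r_cm, v_alias_cm_s):
    """Instantaneous flow rate (cm^3/s) under the hemispheric
    isovelocity-surface assumption: Q = 2*pi*r^2*v_alias."""
    return 2.0 * math.pi * r_cm ** 2 * v_alias_cm_s

def orifice_area(q_cm3_s, v_cw_cm_s):
    """Effective regurgitant orifice area (cm^2): flow rate divided by
    the continuous-wave peak jet velocity."""
    return q_cm3_s / v_cw_cm_s

# Illustrative numbers: 0.9 cm aliasing radius at a 40 cm/s aliasing
# velocity, with a 500 cm/s CW jet velocity.
q = pisa_flow_rate(0.9, 40.0)        # ~204 cm^3/s
area = orifice_area(q, 500.0)        # ~0.41 cm^2
```

A larger aliasing radius for the same aliasing velocity directly inflates the computed flow rate and orifice area, which is why the choice of aliasing velocity matters in the study's comparison.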

  4. Influence of running stride frequency in heart rate variability analysis during treadmill exercise testing.

    PubMed

    Bailón, Raquel; Garatachea, Nuria; de la Iglesia, Ignacio; Casajús, Jose Antonio; Laguna, Pablo

    2013-07-01

    The analysis and interpretation of heart rate variability (HRV) during exercise is challenging not only because of the nonstationary nature of exercise, the time-varying mean heart rate, and the fact that the respiratory frequency may exceed 0.4 Hz, but also because of other factors, such as the component centered at the pedaling frequency observed in maximal cycling tests, which may confound the interpretation of HRV analysis. The objectives of this study are to test the hypothesis that a component centered at the running stride frequency (SF) appears in the HRV of subjects during maximal treadmill exercise testing, and to study its influence on the interpretation of the low-frequency (LF) and high-frequency (HF) components of HRV during exercise. The HRV of 23 subjects during maximal treadmill exercise testing is analyzed. The instantaneous power of different HRV components is computed from the smoothed pseudo-Wigner-Ville distribution of the modulating signal assumed to carry information from the autonomic nervous system, which is estimated based on the time-varying integral pulse frequency modulation model. Besides the LF and HF components, a component centered at the running SF, as well as its aliases, is revealed. The power associated with the SF component and its aliases represents 22±7% (median±median absolute deviation) of the total HRV power in all the subjects. Normalized LF power decreases as the exercise intensity increases, while normalized HF power increases. The power associated with the SF does not change significantly with exercise intensity. Consideration of the running SF component and its aliases is very important in HRV analysis since stride frequency aliases may overlap with LF and HF components.
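The aliasing mechanism can be illustrated numerically (hypothetical numbers, not the study's data): the beat-to-beat HRV series is effectively sampled at the mean heart rate, so a stride-frequency component above half that rate folds down and can land inside the LF or HF band.

```python
def aliased_frequency(f_hz, fs_hz):
    """Apparent frequency of a component at f_hz when sampled at fs_hz."""
    r = f_hz % fs_hz
    return min(r, fs_hz - r)

mean_hr_hz = 150.0 / 60.0   # mean heart rate of 150 bpm -> 2.5 Hz sampling
stride_hz = 2.8             # hypothetical running stride frequency, Hz
f_observed = aliased_frequency(stride_hz, mean_hr_hz)   # 0.3 Hz
overlaps_hf = 0.15 <= f_observed <= 0.4   # falls inside the HF band
```

In this sketch the 2.8 Hz stride component appears at 0.3 Hz, indistinguishable by frequency alone from respiratory (HF) power.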

  5. Blind identification of full-field vibration modes of output-only structures from uniformly-sampled, possibly temporally-aliased (sub-Nyquist), video measurements

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Yang, Yongchao; Dorn, Charles; Mancini, Tyler

    Enhancing the spatial and temporal resolution of vibration measurements and modal analysis could significantly benefit dynamic modelling, analysis, and health monitoring of structures. For example, spatially high-density mode shapes are critical for accurate vibration-based damage localization. In experimental or operational modal analysis, higher (frequency) modes, which may be outside the frequency range of the measurement, contain local structural features that can improve damage localization as well as the construction and updating of the modal-based dynamic model of the structure. In general, the resolution of vibration measurements can be increased by enhanced hardware. Traditional vibration measurement sensors such as accelerometers have high-frequency sampling capacity; however, they are discrete point-wise sensors that provide only sparse, low-spatial-resolution measurements, while dense deployment to achieve high spatial resolution is expensive and results in a mass-loading effect and modification of the structure's surface. Non-contact measurement methods such as scanning laser vibrometers provide high spatial and temporal resolution sensing capacity; however, they make measurements sequentially, which requires considerable acquisition time. As an alternative non-contact method, digital video cameras are relatively low-cost, agile, and provide high-spatial-resolution, simultaneous measurements. Combined with vision-based algorithms (e.g., image correlation or template matching, optical flow, etc.), video-camera-based measurements have been successfully used for experimental and operational vibration measurement and subsequent modal analysis. However, the sampling frequency of most affordable digital cameras is limited to 30–60 Hz, while high-speed cameras for higher-frequency vibration measurements are extremely costly. 
This work develops a computational algorithm capable of performing vibration measurement at a uniform sampling frequency lower than what is required by the Shannon-Nyquist sampling theorem for output-only modal analysis. In particular, the spatio-temporal uncoupling property of the modal expansion of structural vibration responses enables a direct modal decoupling of the temporally-aliased vibration measurements by existing output-only modal analysis methods, yielding (full-field) mode shape estimates directly. The signal aliasing properties in modal analysis are then exploited to estimate the modal frequencies and damping ratios. Furthermore, the proposed method is validated by laboratory experiments in which output-only modal identification is conducted on temporally-aliased acceleration responses and, in particular, the temporally-aliased video measurements of bench-scale structures, including a three-story building structure and a cantilever beam.

  6. Blind identification of full-field vibration modes of output-only structures from uniformly-sampled, possibly temporally-aliased (sub-Nyquist), video measurements

    DOE PAGES

    Yang, Yongchao; Dorn, Charles; Mancini, Tyler; ...

    2016-12-05

    Enhancing the spatial and temporal resolution of vibration measurements and modal analysis could significantly benefit dynamic modelling, analysis, and health monitoring of structures. For example, spatially high-density mode shapes are critical for accurate vibration-based damage localization. In experimental or operational modal analysis, higher (frequency) modes, which may be outside the frequency range of the measurement, contain local structural features that can improve damage localization as well as the construction and updating of the modal-based dynamic model of the structure. In general, the resolution of vibration measurements can be increased by enhanced hardware. Traditional vibration measurement sensors such as accelerometers have high-frequency sampling capacity; however, they are discrete point-wise sensors that provide only sparse, low-spatial-resolution measurements, while dense deployment to achieve high spatial resolution is expensive and results in a mass-loading effect and modification of the structure's surface. Non-contact measurement methods such as scanning laser vibrometers provide high spatial and temporal resolution sensing capacity; however, they make measurements sequentially, which requires considerable acquisition time. As an alternative non-contact method, digital video cameras are relatively low-cost, agile, and provide high-spatial-resolution, simultaneous measurements. Combined with vision-based algorithms (e.g., image correlation or template matching, optical flow, etc.), video-camera-based measurements have been successfully used for experimental and operational vibration measurement and subsequent modal analysis. However, the sampling frequency of most affordable digital cameras is limited to 30–60 Hz, while high-speed cameras for higher-frequency vibration measurements are extremely costly. 
This work develops a computational algorithm capable of performing vibration measurement at a uniform sampling frequency lower than what is required by the Shannon-Nyquist sampling theorem for output-only modal analysis. In particular, the spatio-temporal uncoupling property of the modal expansion of structural vibration responses enables a direct modal decoupling of the temporally-aliased vibration measurements by existing output-only modal analysis methods, yielding (full-field) mode shape estimates directly. The signal aliasing properties in modal analysis are then exploited to estimate the modal frequencies and damping ratios. Furthermore, the proposed method is validated by laboratory experiments in which output-only modal identification is conducted on temporally-aliased acceleration responses and, in particular, the temporally-aliased video measurements of bench-scale structures, including a three-story building structure and a cantilever beam.

  7. Luma-chroma space filter design for subpixel-based monochrome image downsampling.

    PubMed

    Fang, Lu; Au, Oscar C; Cheung, Ngai-Man; Katsaggelos, Aggelos K; Li, Houqiang; Zou, Feng

    2013-10-01

    In general, subpixel-based downsampling can achieve higher apparent resolution of the down-sampled images on LCD or OLED displays than pixel-based downsampling. With the frequency domain analysis of subpixel-based downsampling, we discover special characteristics of the luma-chroma color transform choice for monochrome images. With these, we model the anti-aliasing filter design for subpixel-based monochrome image downsampling as a human visual system-based optimization problem with a two-term cost function and obtain a closed-form solution. One cost term measures the luminance distortion and the other term measures the chrominance aliasing in our chosen luma-chroma space. Simulation results suggest that the proposed method can achieve sharper down-sampled gray/font images compared with conventional pixel and subpixel-based methods, without noticeable color fringing artifacts.

  8. Low Frequency Predictive Skill Despite Structural Instability and Model Error

    DTIC Science & Technology

    2014-09-30

    Majda, based on earlier theoretical work. 1. Dynamic Stochastic Superresolution of sparsely observed turbulent systems M. Branicki (Post doc...of numerical models. Here, we introduce and study a suite of general Dynamic Stochastic Superresolution (DSS) algorithms and show that, by...resolving subgridscale turbulence through Dynamic Stochastic Superresolution utilizing aliased grids is a potential breakthrough for practical online

  9. Effects of Spatio-Temporal Aliasing on Pilot Performance in Active Control Tasks

    NASA Technical Reports Server (NTRS)

    Zaal, Peter; Sweet, Barbara

    2010-01-01

    Spatio-temporal aliasing affects pilot performance and control behavior. For increasing refresh rates: (1) a significant change in control behavior: (a) an increase in visual gain and neuromuscular frequency, and (b) a decrease in visual time delay; and (2) an increase in tracking performance: (a) a decrease in RMSe, and (b) an increase in crossover frequency.

  10. A Simple Approach to Fourier Aliasing

    ERIC Educational Resources Information Center

    Foadi, James

    2007-01-01

    In the context of discrete Fourier transforms the idea of aliasing as due to approximation errors in the integral defining Fourier coefficients is introduced and explained. This has the positive pedagogical effect of getting to the heart of sampling and the discrete Fourier transform without having to delve into effective, but otherwise long and…

  11. Fourier Theory Explanation for the Sampling Theorem Demonstrated by a Laboratory Experiment.

    ERIC Educational Resources Information Center

    Sharma, A.; And Others

    1996-01-01

    Describes a simple experiment that uses a CCD video camera, a display monitor, and a laser-printed bar pattern to illustrate signal sampling problems that produce aliasing or moiré fringes in images. Uses the Fourier transform to provide an appropriate and elegant means to explain the sampling theorem and the aliasing phenomenon in CCD-based…

  12. Reprocessing the GRACE-derived gravity field time series based on data-driven method for ocean tide alias error mitigation

    NASA Astrophysics Data System (ADS)

    Liu, Wei; Sneeuw, Nico; Jiang, Weiping

    2017-04-01

    The GRACE mission has contributed greatly to temporal gravity field monitoring in the past few years. However, ocean tides cause notable alias errors for single-pair spaceborne gravimetry missions like GRACE in two ways. First, undersampling from the satellite orbit induces the aliasing of high-frequency tidal signals into the gravity signal. Second, the ocean tide models used for de-aliasing in the gravity field retrieval carry errors, which directly alias into the recovered gravity field. The GRACE satellites are in a non-repeat orbit, which precludes alias-error spectral estimation based on a repeat period. Moreover, the gravity field recovery is conducted at non-strictly monthly intervals and has occasional gaps, which result in an unevenly sampled time series. In view of the two aspects above, we investigate a data-driven method to mitigate the ocean tide alias error in a post-processing mode.

  13. 78 FR 69927 - In the Matter of the Review of the Designation of the Kurdistan Worker's Party (and Other Aliases...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2013-11-21

    ... DEPARTMENT OF STATE [Public Notice 8527] In the Matter of the Review of the Designation of the Kurdistan Worker's Party (and Other Aliases) as a Foreign Terrorist Organization Pursuant to Section 219 of the Immigration and Nationality Act, as Amended Based upon a review of the Administrative Record...

  14. 75 FR 28849 - Review of the Designation of Ansar al-Islam (aka Ansar Al-Sunnah and Other Aliases) as a Foreign...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2010-05-24

    ... DEPARTMENT OF STATE [Public Notice 7026] Review of the Designation of Ansar al-Islam (aka Ansar Al-Sunnah and Other Aliases) as a Foreign Terrorist Organization Pursuant to Section 219 of the Immigration and Nationality Act, as Amended Based upon a review of the Administrative Records assembled in these...

  15. Anti-aliasing filter design on spaceborne digital receiver

    NASA Astrophysics Data System (ADS)

    Yu, Danru; Zhao, Chonghui

    2009-12-01

    In recent years, with the development of satellite observation technologies, more and more active remote sensing technologies have been adopted in spaceborne systems. A spaceborne precipitation radar depends heavily on high-performance digital processing to collect meaningful rain-echo data, which increases the complexity of the spaceborne system and requires a high-performance, reliable digital receiver. This paper analyzes frequency aliasing in the intermediate-frequency signal sampling of digital down-conversion (DDC) in spaceborne radar and presents an effective digital filter. By analysis and calculation, we choose reasonable parameters for the half-band filters to suppress frequency aliasing in the DDC. Compared with a traditional filter, the FPGA resource cost in our system is reduced by over 50%. This can effectively reduce the complexity of the spaceborne digital receiver and improve system reliability.
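The defining property of a half-band filter, and the reason it is cheap in FPGA logic, is that every other coefficient except the centre tap is exactly zero, halving the multiplier count. A windowed-sinc sketch of this structure (illustrative, not the paper's design):

```python
import math

def halfband_taps(half_order):
    """Windowed-sinc half-band lowpass (cutoff at fs/4). Taps at even
    offsets from the centre are (numerically) zero."""
    M = half_order                      # taps run from -M to +M
    taps = []
    for n in range(-M, M + 1):
        if n == 0:
            h = 0.5
        else:
            x = math.pi * n / 2.0
            h = 0.5 * math.sin(x) / x   # 0.5 * sinc(n/2)
        w = 0.54 + 0.46 * math.cos(math.pi * n / M)   # Hamming window
        taps.append(h * w)
    return taps

h = halfband_taps(8)   # 17 taps; h[8] is the centre tap
```

Only the centre tap and the odd-offset taps need multipliers, which is the source of the resource savings the abstract reports.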

  16. Anti-aliasing algorithm development

    NASA Astrophysics Data System (ADS)

    Bodrucki, F.; Davis, J.; Becker, J.; Cordell, J.

    2017-10-01

    In this paper, we discuss the testing of image-processing algorithms for mitigation of aliasing artifacts under pulsed illumination. Previously, two sensors were tested, one with a fixed frame rate and one with an adjustable frame rate; the results showed different degrees of operability when subjected to a Quantum Cascade Laser (QCL) pulsed at the frame rate of the fixed-rate sensor. We implemented algorithms to allow the adjustable-frame-rate sensor to detect the presence of aliasing artifacts and, in response, to alter the frame rate of the sensor. The result was that the sensor output showed a varying laser intensity (beat note) as opposed to a fixed signal level. A MIRAGE Infrared Scene Projector (IRSP) was used to explore the efficiency of the new algorithms, introducing secondary elements into the sensor's field of view.

  17. Effects of Spatio-Temporal Aliasing on Out-the-Window Visual Systems

    NASA Technical Reports Server (NTRS)

    Sweet, Barbara T.; Stone, Leland S.; Liston, Dorion B.; Hebert, Tim M.

    2014-01-01

    Designers of out-the-window visual systems face a challenge when attempting to simulate the outside world as viewed from a cockpit. Many methodologies have been developed and adopted to aid in the depiction of particular scene features, or levels of static image detail. However, because aircraft move, it is necessary to also consider the quality of the motion in the simulated visual scene. When motion is introduced in the simulated visual scene, perceptual artifacts can become apparent. A particular artifact related to image motion, spatio-temporal aliasing, will be addressed. The causes of spatio-temporal aliasing will be discussed, and current knowledge regarding the impact of these artifacts on both motion perception and simulator task performance will be reviewed. Methods of reducing the impact of this artifact are also addressed.

  18. Digital timing: sampling frequency, anti-aliasing filter and signal interpolation filter dependence on timing resolution.

    PubMed

    Cho, Sanghee; Grazioso, Ron; Zhang, Nan; Aykac, Mehmet; Schmand, Matthias

    2011-12-07

    The main focus of our study is to investigate how the performance of digital timing methods is affected by the sampling rate, anti-aliasing and signal interpolation filters. We used the Nyquist sampling theorem to address some basic questions, such as: what is the minimum sampling frequency? How accurate will the signal interpolation be? How do we validate the timing measurements? The preferred sampling rate would be as low as possible, considering the high cost and power consumption of high-speed analog-to-digital converters. However, when the sampling rate is too low, the aliasing effect produces artifacts in the timing resolution estimates: the shape of the timing profile is distorted and the FWHM values of the profile fluctuate as the source location changes. Anti-aliasing filters are required in this case to avoid the artifacts, but the timing is degraded as a result. When the sampling rate is marginally above the Nyquist rate, proper signal interpolation is important. A sharp roll-off (higher-order) filter is required to separate the baseband signal from its replicates to avoid aliasing, but in return the computational cost will be higher. We demonstrate the analysis through a digital timing study using fast LSO scintillation crystals as used in time-of-flight PET scanners. From the study, we observed no significant timing resolution degradation down to a 1.3 GHz sampling frequency, and the computational requirement for the signal interpolation is reasonably low. A so-called sliding test is proposed as a validation tool, checking for constant timing resolution of a given timing pick-off method regardless of source location. Lastly, a performance comparison of several digital timing methods is also shown.
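The interpolation step can be sketched with a direct (truncated) Whittaker-Shannon reconstruction; a practical system would use a short interpolation filter instead, and the sampling numbers below are illustrative rather than the paper's.

```python
import math

def sinc_interp(samples, ts, t):
    """Truncated Whittaker-Shannon reconstruction of a bandlimited signal
    from uniform samples at interval ts, evaluated at time t."""
    total = 0.0
    for n, s in enumerate(samples):
        x = (t - n * ts) / ts
        total += s if x == 0 else s * math.sin(math.pi * x) / (math.pi * x)
    return total

# A 1 Hz sine sampled at 8 Hz (comfortably above Nyquist) is approximately
# recovered between the sample points near the middle of the record.
fs = 8.0
samples = [math.sin(2 * math.pi * n / fs) for n in range(64)]
mid = sinc_interp(samples, 1.0 / fs, 4.0625)   # halfway between samples
expected = math.sin(2 * math.pi * 4.0625)
```

The residual error here comes from truncating the sinc kernel, which is the trade-off between interpolation-filter order and computational cost the abstract discusses.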

  19. Joint correction of Nyquist artifact and minuscule motion-induced aliasing artifact in interleaved diffusion weighted EPI data using a composite two-dimensional phase correction procedure

    PubMed Central

    Chang, Hing-Chiu; Chen, Nan-kuei

    2016-01-01

    Diffusion-weighted imaging (DWI) obtained with an interleaved echo-planar imaging (EPI) pulse sequence has great potential for characterizing brain tissue properties at high spatial resolution. However, interleaved EPI-based DWI data may be corrupted by various types of aliasing artifacts. First, inconsistencies in k-space data obtained with opposite readout gradient polarities result in Nyquist artifact, which is usually reduced with 1D phase correction in post-processing. When eddy current cross terms exist (e.g., in oblique-plane EPI), 2D phase correction is needed to effectively reduce Nyquist artifact. Second, minuscule motion-induced phase inconsistencies in interleaved DWI scans result in image-domain aliasing artifact, which can be removed with reconstruction procedures that take shot-to-shot phase variations into consideration. In existing interleaved DWI reconstruction procedures, Nyquist artifact and minuscule motion-induced aliasing artifact are typically removed sequentially in two stages. Although two-stage phase correction generally performs well for non-oblique-plane EPI data obtained from a well-calibrated system, residual artifacts may still be pronounced in oblique-plane EPI data or when eddy current cross terms exist. To address this challenge, here we report a new composite 2D phase correction procedure, which effectively removes Nyquist artifact and minuscule motion-induced aliasing artifact jointly in a single step. Our experimental results demonstrate that the new 2D phase correction method reduces artifacts in interleaved EPI-based DWI data much more effectively than the existing two-stage artifact correction procedures. The new method robustly enables high-resolution DWI and should prove highly valuable for clinical use and research studies of DWI. PMID:27114342

  20. A simulation for gravity fine structure recovery from low-low GRAVSAT SST data

    NASA Technical Reports Server (NTRS)

    Estes, R. H.; Lancaster, E. R.

    1976-01-01

    Covariance error analysis techniques were applied to investigate estimation strategies for the low-low SST mission for accurate local recovery of gravitational fine structure, considering the aliasing effects of unsolved-for parameters. A 5 degree by 5 degree surface-density block representation of the high-order geopotential was utilized with the drag-free low-low GRAVSAT configuration in a circular polar orbit at 250 km altitude. Recovery of local sets of density blocks from long data arcs was found not to be feasible due to strong aliasing effects. The error analysis for the recovery of local sets of density blocks using independent short data arcs demonstrated that the strategy of simultaneously estimating a local set of blocks covered by data and two "buffer layers" of blocks not covered by data greatly reduced aliasing errors.

  1. Spatial aliasing for efficient direction-of-arrival estimation based on steering vector reconstruction

    NASA Astrophysics Data System (ADS)

    Yan, Feng-Gang; Cao, Bin; Rong, Jia-Jia; Shen, Yi; Jin, Ming

    2016-12-01

    A new technique is proposed to reduce the computational complexity of the multiple signal classification (MUSIC) algorithm for direction-of-arrival (DOA) estimation using a uniform linear array (ULA). The steering vector of the ULA is reconstructed as the Kronecker product of two other steering vectors, and a new cost function exhibiting spatial aliasing is derived. Thanks to the estimation ambiguity of this spatial aliasing, mirror angles mathematically related to the true DOAs are generated, based on which the full spectral search involved in the MUSIC algorithm is compressed into a limited angular sector. Further complexity analysis and performance studies are conducted by computer simulations, which demonstrate that the proposed estimator requires a greatly reduced computational burden while showing accuracy similar to that of the standard MUSIC.

  2. Infrared Sensor Readout Design

    DTIC Science & Technology

    1975-11-01

    Line Replaceable Unit LT Level Translator MRT Minimum Resolvable Temperature MTF Modulation Transfer Function PC Printed Circuit SCCCD Surface...reduced, not only will the aliased noise increase, but signal aliasing will also start to occur. At the display level this means that sharp edges could...converted from a quantity of charge to a voltage-level shift by the action of the precharge pulse that presets the potential on the output diode node to

  3. Staggered Multiple-PRF Ultrafast Color Doppler.

    PubMed

    Posada, Daniel; Poree, Jonathan; Pellissier, Arnaud; Chayer, Boris; Tournoux, Francois; Cloutier, Guy; Garcia, Damien

    2016-06-01

    Color Doppler imaging is an established pulsed ultrasound technique for visualizing blood flow non-invasively. High-frame-rate (ultrafast) color Doppler, by emission of plane or circular wavefronts, allows a severalfold increase in frame rate. Conventional and ultrafast color Doppler are both limited by the range-velocity dilemma, which may result in velocity folding (aliasing) for large depths and/or large velocities. We investigated multiple pulse-repetition-frequency (PRF) emissions arranged in a series of staggered intervals to remove aliasing in ultrafast color Doppler. Staggered PRF is an emission process in which time delays between successive pulse transmissions change in an alternating way. We tested staggered dual- and triple-PRF ultrafast color Doppler, 1) in vitro in a spinning disc and a free jet flow, and 2) in vivo in a human left ventricle. The in vitro results showed that the Nyquist velocity could be extended to up to 6 times the conventional limit. We found coefficients of determination r(2) ≥ 0.98 between the de-aliased and ground-truth velocities. Consistent de-aliased Doppler images were also obtained in the human left heart. Our results demonstrate that staggered multiple-PRF ultrafast color Doppler is efficient for high-velocity, high-frame-rate blood flow imaging. This is particularly relevant for new developments in ultrasound imaging relying on accurate velocity measurements.
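The multiple-PRF dealiasing idea can be sketched as a wrap-number search (a simplified illustration, not the authors' estimator): each PRF wraps the true velocity into its own Nyquist interval, and within the extended unambiguous range only the true velocity is consistent with both measurements.

```python
def wrap(v, v_nyq):
    """Alias a velocity into the Nyquist interval (-v_nyq, v_nyq]."""
    return (v + v_nyq) % (2.0 * v_nyq) - v_nyq

def dealias_dual_prf(m1, v_nyq1, m2, v_nyq2, max_wraps=3):
    """Search wrap numbers for the velocity consistent with both aliased
    measurements; ties are broken in favour of the slower candidate."""
    best, best_score = None, None
    for k1 in range(-max_wraps, max_wraps + 1):
        c1 = m1 + 2.0 * k1 * v_nyq1
        for k2 in range(-max_wraps, max_wraps + 1):
            c2 = m2 + 2.0 * k2 * v_nyq2
            v = 0.5 * (c1 + c2)
            score = (round(abs(c1 - c2), 9), abs(v))
            if best_score is None or score < best_score:
                best, best_score = v, score
    return best

# Hypothetical Nyquist velocities of 0.75 and 1.0 m/s; a 1.7 m/s flow
# aliases in both acquisitions but is recovered from the pair.
v_true = 1.7
recovered = dealias_dual_prf(wrap(v_true, 0.75), 0.75,
                             wrap(v_true, 1.0), 1.0)
```

The ratio of the two Nyquist velocities sets the extended unambiguous interval, which is how the staggered scheme reaches several times the conventional limit.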

  4. Perceptually informed synthesis of bandlimited classical waveforms using integrated polynomial interpolation.

    PubMed

    Välimäki, Vesa; Pekonen, Jussi; Nam, Juhan

    2012-01-01

    Digital subtractive synthesis is a popular music synthesis method, which requires oscillators that are aliasing-free in a perceptual sense. It is a research challenge to find computationally efficient waveform generation algorithms that produce similar-sounding signals to analog music synthesizers but which are free from audible aliasing. A technique for approximately bandlimited waveform generation is considered that is based on a polynomial correction function, which is defined as the difference of a non-bandlimited step function and a polynomial approximation of the ideal bandlimited step function. It is shown that the ideal bandlimited step function is equivalent to the sine integral, and that integrated polynomial interpolation methods can successfully approximate it. Integrated Lagrange interpolation and B-spline basis functions are considered for polynomial approximation. The polynomial correction function can be added onto samples around each discontinuity in a non-bandlimited waveform to suppress aliasing. Comparison against previously known methods shows that the proposed technique yields the best tradeoff between computational cost and sound quality. The superior method amongst those considered in this study is the integrated third-order B-spline correction function, which offers perceptually aliasing-free sawtooth emulation up to the fundamental frequency of 7.8 kHz at the sample rate of 44.1 kHz. © 2012 Acoustical Society of America.
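A widely used relative of this idea is the polyBLEP oscillator, which adds a two-sample polynomial residual around each discontinuity; it is shown here as an accessible sketch of the correction-function approach, not the authors' integrated third-order B-spline method.

```python
def poly_blep(t, dt):
    """Two-sample polynomial residual of a bandlimited step, evaluated at
    phase t in [0, 1) with per-sample phase increment dt."""
    if t < dt:                     # just after the wrap
        x = t / dt
        return 2.0 * x - x * x - 1.0
    if t > 1.0 - dt:               # just before the wrap
        x = (t - 1.0) / dt
        return x * x + 2.0 * x + 1.0
    return 0.0

def sawtooth(freq, sample_rate, n_samples):
    """Naive sawtooth in [-1, 1) with polyBLEP smoothing at each wrap."""
    out, phase, dt = [], 0.0, freq / sample_rate
    for _ in range(n_samples):
        out.append(2.0 * phase - 1.0 - poly_blep(phase, dt))
        phase += dt
        if phase >= 1.0:
            phase -= 1.0
    return out

wave = sawtooth(440.0, 44100.0, 1024)
```

Subtracting the residual rounds off each discontinuity over two samples, which is what suppresses the audible aliasing of the naive waveform at very low cost.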

  5. DNS load balancing in the CERN cloud

    NASA Astrophysics Data System (ADS)

    Reguero Naredo, Ignacio; Lobato Pardavila, Lorena

    2017-10-01

    Load Balancing is one of the technologies enabling deployment of large-scale applications on cloud resources. A DNS Load Balancer Daemon (LBD) has been developed at CERN as a cost-effective way to balance applications that accept DNS timing dynamics and do not require persistence. It currently serves over 450 load-balanced aliases with two small VMs acting as master and slave. The aliases are mapped to DNS subdomains, which are managed with DDNS according to a load metric collected from the alias member nodes with SNMP. During the last years, several improvements were made to the software, for instance: support for IPv6, parallelization of the status requests, implementation of the client in Python to allow multiple aliases with differentiated states on the same machine, and support for application state. The configuration of the Load Balancer is currently managed by a Puppet type, which discovers the alias member nodes and gets the alias definitions from the Ermis REST service. The Aiermis self-service GUI for the management of the LB aliases has been produced; it is based on the Ermis service, which implements a form of Load Balancing as a Service (LBaaS). The Ermis REST API has authorisation based on Foreman hostgroups. The CERN DNS LBD is Open Software with an Apache 2 license.

  6. Angular oversampling with temporally offset layers on multilayer detectors in computed tomography

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sjölin, Martin, E-mail: martin.sjolin@mi.physics.kth.se; Danielsson, Mats

    2016-06-15

Purpose: Today’s computed tomography (CT) scanners operate at an increasingly high rotation speed in order to reduce motion artifacts and to fulfill the requirements of dynamic acquisition, e.g., perfusion and cardiac imaging, with a lower angular sampling rate as a consequence. In this paper, a simple method for obtaining angular oversampling when using multilayer detectors in continuous-rotation CT is presented. Methods: By introducing temporal offsets between the measurement periods of the different layers on a multilayer detector, the angular sampling rate can be increased by a factor equal to the number of layers on the detector. The increased angular sampling rate reduces the risk of producing aliasing artifacts in the image. A simulation of a detector with two layers is performed to prove the concept. Results: The simulation study shows that aliasing artifacts from insufficient angular sampling are reduced by the proposed method. Specifically, when imaging a single point blurred by a 2D Gaussian kernel, the method is shown to reduce the strength of the aliasing artifacts by approximately an order of magnitude. Conclusions: The presented oversampling method is easy to implement in today’s multilayer detectors and has the potential to reduce aliasing artifacts in the reconstructed images.

  7. Visualization of 3D CT-based anatomical models

    NASA Astrophysics Data System (ADS)

    Alaytsev, Innokentiy K.; Danilova, Tatyana V.; Manturov, Alexey O.; Mareev, Gleb O.; Mareev, Oleg V.

    2018-04-01

Biomedical volumetric data visualization techniques for exploration purposes are well developed. Most of the known methods are inappropriate for surgery simulation systems due to a lack of realism. Segmented data visualization is a well-known approach to visualizing structured volumetric data. This research focuses on improving the segmented data visualization technique by resolving aliasing problems and by modeling material transparency for better rendering of semitransparent structures.

  8. Post-Fisherian Experimentation: From Physical to Virtual

    DOE PAGES

    Jeff Wu, C. F.

    2014-04-24

Fisher's pioneering work in design of experiments has inspired further work with broader applications, especially in industrial experimentation. Three topics in physical experiments are discussed: the principles of effect hierarchy, sparsity, and heredity for factorial designs; a new method called CME for de-aliasing aliased effects; and robust parameter design. The recent emergence of virtual experiments on a computer is reviewed. Here, some major challenges in computer experiments, which must go beyond Fisherian principles, are outlined.

  9. Determining Aliasing in Isolated Signal Conditioning Modules

    NASA Technical Reports Server (NTRS)

    2009-01-01

The basic concept of aliasing is this: Converting analog data into digital data requires sampling the signal at a specific rate, known as the sampling frequency. The result of this conversion process is a new function, which is a sequence of digital samples. This new function has a frequency spectrum, which contains all the frequency components of the original signal. The Fourier transform mathematics of this process show that the frequency spectrum of the sequence of digital samples consists of the original signal's frequency spectrum plus copies of that spectrum shifted by all the harmonics of the sampling frequency. If the original analog signal is sampled at a minimum of twice the highest frequency component contained in the analog signal, and if the reconstruction process is limited to the highest frequency of the original signal, then the reconstructed signal accurately duplicates the original analog signal. If the sampling rate is lower, the shifted spectral copies overlap the original spectrum, and it is this overlap that gives birth to aliasing.
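The folding described above can be seen numerically: a tone above half the sampling frequency produces exactly the same samples as its low-frequency alias. The frequencies below are arbitrary illustration values.

```python
import numpy as np

fs = 10_000.0          # sampling frequency (Hz)
f_true = 7_000.0       # above the Nyquist limit fs/2 = 5 kHz
n = np.arange(64)      # sample indices

# Samples of the 7 kHz tone...
x = np.cos(2 * np.pi * f_true * n / fs)

# ...are indistinguishable from samples of its 3 kHz alias,
# obtained by folding about harmonics of fs.
f_alias = abs(f_true - fs)          # |7000 - 10000| = 3000 Hz
y = np.cos(2 * np.pi * f_alias * n / fs)

assert np.allclose(x, y)            # identical sample sequences
```

No reconstruction filter can tell the two apart once the samples are taken, which is why the anti-aliasing filter must act before conversion.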

  10. Site Distribution and Aliasing Effects in the Inversion for Load Coefficients and Geocenter Motion from GPS Data

    NASA Technical Reports Server (NTRS)

    Wu, Xiaoping; Argus, Donald F.; Heflin, Michael B.; Ivins, Erik R.; Webb, Frank H.

    2002-01-01

Precise GPS measurements of elastic relative site displacements due to surface mass loading offer important constraints on global surface mass transport. We investigate effects of site distribution and aliasing by higher-degree (n ≥ 2) loading terms on the inversion of GPS data for n = 1 load coefficients and geocenter motion. Covariance and simulation analyses are conducted to assess the sensitivity of the inversion to aliasing and mismodeling errors and possible uncertainties in the n = 1 load coefficient determination. We find that the use of the center-of-figure approximation in the inverse formulation could cause 10-15% errors in the inverted load coefficients. The n = 1 load estimates may be contaminated significantly by unknown higher-degree terms, depending on the load scenario and the GPS site distribution. The uncertainty in the n = 1 zonal load estimate is at the level of 80-95% for two load scenarios.

  11. Analytical Formulation of Equatorial Standing Wave Phenomena: Application to QBO and ENSO

    NASA Astrophysics Data System (ADS)

    Pukite, P. R.

    2016-12-01

Key equatorial climate phenomena such as QBO and ENSO have never been adequately explained as deterministic processes, in spite of recent research showing growing evidence of predictable behavior. This study applies the fundamental Laplace tidal equations with simplifying assumptions along the equator, i.e., no Coriolis force and a small-angle approximation. To connect the analytical Sturm-Liouville results to observations, a first-order forcing consistent with a seasonally aliased Draconic (nodal) lunar period (27.21 d aliased into 2.36 y) is applied. This has a plausible rationale, as it ties a latitudinal forcing cycle via a cross-product to the longitudinal terms in the Laplace formulation. The fitted results match the features of QBO both qualitatively and quantitatively; adding second-order terms due to other seasonally aliased lunar periods provides finer detail while remaining consistent with the physical model. Further, running symbolic regression machine learning experiments on the data provided a validation of the approach, as they discovered the same analytical form and fitted values as the first-principles Laplace model. These results conflict with Lindzen's QBO model, in that his original formulation fell short of making the lunar connection, even though Lindzen himself asserted that "it is unlikely that lunar periods could be produced by anything other than the lunar tidal potential". By applying a similar analytical approach to ENSO, we find that the tidal equations need to be replaced with a Mathieu-equation formulation consistent with describing a sloshing process in the thermocline depth. Adapting the hydrodynamic math of sloshing, we find that a biennial modulation coupled with angular momentum forcing variations matching the Chandler wobble gives an impressive match over the measured ENSO range of 1880 until the present. Lunar tidal periods and an additional triaxial nutation of 14-year period provide additional fidelity.
The caveat is a phase inversion of the biennial mode lasting from 1980 to 1996. The parsimony of these analytical models arises from applying only known cyclic forcing terms to fundamental wave equation formulations. This raises the possibility that both QBO and ENSO can be predicted years in advance, apart from a metastable biennial phase inversion in ENSO.
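The seasonal aliasing arithmetic invoked above is easy to reproduce: sampling the 27.2122-day draconic month effectively once per year leaves a sub-annual residual frequency whose reciprocal is the quoted ~2.36-year period. This is a sketch of the aliasing arithmetic only, not of the Laplace tidal model; the function name is illustrative.

```python
def seasonal_alias_years(period_days, year_days=365.25):
    """Period (in years) to which a fast lunar cycle is aliased when
    sampled once per year by seasonal forcing."""
    f = year_days / period_days        # cycles per year
    frac = abs(f - round(f))           # residual (aliased) frequency
    return 1.0 / frac

# Draconic month: 365.25 / 27.2122 = 13.42 cycles/year,
# residual 0.42 cycles/year, alias period ~2.37 years.
qbo_like_period = seasonal_alias_years(27.2122)
```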

  12. Gravity field recovery in the framework of a Geodesy and Time Reference in Space (GETRIS)

    NASA Astrophysics Data System (ADS)

    Hauk, Markus; Schlicht, Anja; Pail, Roland; Murböck, Michael

    2017-04-01

The study "Geodesy and Time Reference in Space" (GETRIS), funded by the European Space Agency (ESA), evaluates the potential and opportunities coming along with a global space-borne infrastructure for data transfer, clock synchronization and ranging. Gravity field recovery could be one of the first beneficiary applications of such an infrastructure. This paper analyzes and evaluates two-way high-low satellite-to-satellite tracking as a novel method and a long-term perspective for the determination of the Earth's gravitational field, using it as a synergy of one-way high-low combined with low-low satellite-to-satellite tracking in order to generate adequate de-aliasing products. First planned as a constellation of geostationary satellites, it turned out that an integration of European Union Global Navigation Satellite System (Galileo) satellites (equipped with inter-Galileo links) into a Geostationary Earth Orbit (GEO) constellation would extend the capability of such a mission constellation remarkably. We report on simulations of different Galileo and Low Earth Orbiter (LEO) satellite constellations, computed using time-variable geophysical background models, to determine temporal changes in the Earth's gravitational field. Our work aims at an error analysis of this new satellite/instrument scenario by investigating the impact of different error sources. Compared to a low-low satellite-to-satellite-tracking mission, results show reduced temporal aliasing errors due to a more isotropic error behavior caused by an improved observation geometry, predominantly in the near-radial direction within the inter-satellite links, as well as the potential of an improved gravity recovery with higher spatial and temporal resolution. The major error contributors of temporal gravity retrieval are aliasing errors due to undersampling of high-frequency signals (mainly atmosphere, ocean and ocean tides). In this context, we investigate adequate methods to reduce these errors.
We vary the number of Galileo and LEO satellites and show reduced errors in the temporal gravity field solutions for these enhanced inter-satellite links. Based on the GETRIS infrastructure, the multiplicity of satellites enables co-estimating short-period long-wavelength gravity field signals, indicating that this is a powerful method for non-tidal aliasing reduction.

  13. Correction of Dual-PRF Doppler Velocity Outliers in the Presence of Aliasing

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Altube, Patricia; Bech, Joan; Argemí, Oriol

In Doppler weather radars, the presence of unfolding errors or outliers is a well-known quality issue for radial velocity fields estimated using the dual-pulse repetition frequency (PRF) technique. Postprocessing methods have been developed to correct dual-PRF outliers, but these need prior application of a dealiasing algorithm for an adequate correction. Our paper presents an alternative procedure based on circular statistics that corrects dual-PRF errors in the presence of extended Nyquist aliasing. The correction potential of the proposed method is quantitatively tested by means of velocity field simulations and is exemplified in the application to real cases, including severe storm events. The comparison with two other existing correction methods indicates an improved performance in the correction of clustered outliers. The technique we propose is well suited for real-time applications requiring high-quality Doppler radar velocity fields, such as wind shear and mesocyclone detection algorithms, or assimilation in numerical weather prediction models.
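The appeal of circular statistics here is that velocities near the edges of the Nyquist interval wrap around the circle instead of biasing a linear average. The sketch below illustrates that principle only; the function name and values are hypothetical and this is not the authors' full correction algorithm.

```python
import numpy as np

def circular_reference(velocities, v_ny):
    """Map radial velocities onto phase angles spanning the Nyquist
    interval [-v_ny, v_ny], average them circularly, and map back.
    Aliased values near the interval edges wrap instead of biasing
    the reference estimate."""
    phi = np.pi * velocities / v_ny
    ref = np.arctan2(np.mean(np.sin(phi)), np.mean(np.cos(phi)))
    return v_ny * ref / np.pi

# Neighborhood near +16 m/s with one value aliased to -15.9 m/s:
v_ny = 16.0
vals = np.array([15.5, 15.8, -15.9])
ref = circular_reference(vals, v_ny)   # close to the Nyquist edge
```

A plain arithmetic mean of the same neighborhood is pulled toward zero by the aliased sample, which is exactly the failure mode the circular estimate avoids.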

  14. Adaptive Wiener filter super-resolution of color filter array images.

    PubMed

    Karch, Barry K; Hardie, Russell C

    2013-08-12

    Digital color cameras using a single detector array with a Bayer color filter array (CFA) require interpolation or demosaicing to estimate missing color information and provide full-color images. However, demosaicing does not specifically address fundamental undersampling and aliasing inherent in typical camera designs. Fast non-uniform interpolation based super-resolution (SR) is an attractive approach to reduce or eliminate aliasing and its relatively low computational load is amenable to real-time applications. The adaptive Wiener filter (AWF) SR algorithm was initially developed for grayscale imaging and has not previously been applied to color SR demosaicing. Here, we develop a novel fast SR method for CFA cameras that is based on the AWF SR algorithm and uses global channel-to-channel statistical models. We apply this new method as a stand-alone algorithm and also as an initialization image for a variational SR algorithm. This paper presents the theoretical development of the color AWF SR approach and applies it in performance comparisons to other SR techniques for both simulated and real data.

  15. Correction of Dual-PRF Doppler Velocity Outliers in the Presence of Aliasing

    DOE PAGES

    Altube, Patricia; Bech, Joan; Argemí, Oriol; ...

    2017-07-18

In Doppler weather radars, the presence of unfolding errors or outliers is a well-known quality issue for radial velocity fields estimated using the dual-pulse repetition frequency (PRF) technique. Postprocessing methods have been developed to correct dual-PRF outliers, but these need prior application of a dealiasing algorithm for an adequate correction. Our paper presents an alternative procedure based on circular statistics that corrects dual-PRF errors in the presence of extended Nyquist aliasing. The correction potential of the proposed method is quantitatively tested by means of velocity field simulations and is exemplified in the application to real cases, including severe storm events. The comparison with two other existing correction methods indicates an improved performance in the correction of clustered outliers. The technique we propose is well suited for real-time applications requiring high-quality Doppler radar velocity fields, such as wind shear and mesocyclone detection algorithms, or assimilation in numerical weather prediction models.

  16. Context dependent anti-aliasing image reconstruction

    NASA Technical Reports Server (NTRS)

    Beaudet, Paul R.; Hunt, A.; Arlia, N.

    1989-01-01

Image reconstruction has been mostly confined to context-free linear processes; the traditional continuum interpretation of digital array data uses a linear interpolator with or without an enhancement filter. Here, anti-aliasing context-dependent interpretation techniques are investigated for image reconstruction. Pattern classification is applied to each neighborhood to assign it a context class; a different interpolation/filter is applied to neighborhoods of differing context. It is shown how the context-dependent interpolation is computed through ensemble-average statistics using high-resolution training imagery from which the lower-resolution image array data are obtained (simulation). A quadratic least squares (LS) context-free image quality model is described from which the context-dependent interpolation coefficients are derived. It is shown how ensembles of high-resolution images can be used to capture the a priori spatial character of different context classes. As a consequence, a priori information such as the translational invariance of edges along the edge direction, edge discontinuity, and the character of corners is captured and can be used to interpret image array data with greater spatial resolution than would be expected from the Nyquist limit. A Gibbs-like artifact associated with this super-resolution is discussed. More realistic context-dependent image quality models are needed, and a suggestion is made for using a quality model which is now finding application in data compression.

  17. On the use of kinetic energy preserving DG-schemes for large eddy simulation

    NASA Astrophysics Data System (ADS)

    Flad, David; Gassner, Gregor

    2017-12-01

Recently, element based high order methods such as Discontinuous Galerkin (DG) methods and the closely related flux reconstruction (FR) schemes have become popular for compressible large eddy simulation (LES). Element based high order methods with Riemann solver based interface numerical flux functions offer an interesting dispersion dissipation behavior for multi-scale problems: dispersion errors are very low for a broad range of scales, while dissipation errors are very low for well resolved scales and are very high for scales close to the Nyquist cutoff. In some sense, the inherent numerical dissipation caused by the interface Riemann solver acts as a filter of high frequency solution components. This observation motivates the trend that element based high order methods with Riemann solvers are used without an explicit LES model added. Only the high frequency type inherent dissipation caused by the Riemann solver at the element interfaces is used to account for the missing sub-grid scale dissipation. Due to under-resolution of vortical dominated structures typical for LES type setups, element based high order methods suffer from stability issues caused by aliasing errors of the non-linear flux terms. A very common strategy to fight these aliasing issues (and instabilities) is so-called polynomial de-aliasing, where interpolation is exchanged with projection based on an increased number of quadrature points. In this paper, we start with this common no-model or implicit LES (iLES) DG approach with polynomial de-aliasing and Riemann solver dissipation and review its capabilities and limitations. We find that the strategy gives excellent results, but only when the resolution is such that about 40% of the dissipation is resolved. For more realistic, coarser resolutions used in classical LES, e.g. of industrial applications, the iLES DG strategy becomes quite inaccurate.
We show that there is no obvious fix to this strategy, as adding, for instance, a sub-grid-scale model on top does not change much or in the worst case decreases the fidelity even more. Finally, the core of this work is a novel LES strategy based on split form DG methods that are kinetic energy preserving. The scheme offers excellent stability with full control over the amount and shape of the added artificial dissipation. This is the main idea of the work, and we assess the LES capabilities of the novel split form DG approach when applied to shock-free, moderate Mach number turbulence. We demonstrate that the novel DG LES strategy offers similar accuracy as the iLES methodology for well resolved cases, but strongly increases fidelity in the case of more realistic coarse resolutions.

  18. Optoelectronic image scanning with high spatial resolution and reconstruction fidelity

    NASA Astrophysics Data System (ADS)

    Craubner, Siegfried I.

    2002-02-01

In imaging systems the detector arrays deliver at the output time-discrete signals, where the spatial frequencies of the object scene are mapped into the electrical signal frequencies. Since the spatial frequency spectrum cannot be bandlimited by the front optics, the usual detector arrays perform a spatial undersampling and as a consequence aliasing occurs. A means to partially suppress the backfolded alias band is bandwidth limitation in the reconstruction low-pass, at the price of resolution loss. By utilizing a bilinear detector array in a pushbroom-type scanner, undersampling and aliasing can be overcome. For modeling the perception, the theory of discrete systems and multirate digital filter banks is applied, where aliasing cancellation and perfect reconstruction play an important role. The discrete transfer function of a bilinear array can be embedded into the scheme of a second-order filter bank. The detector arrays already form the analysis bank, and the overall filter bank is completed with the synthesis bank, for which stabilized inverse filters are proposed, to compensate for the low-pass characteristics and to approximate perfect reconstruction. The synthesis filter branch can be realized in a so-called `direct form,' or the `polyphase form,' where the latter is an expenditure-optimal solution, which gives advantages when implemented in a signal processor. This paper attempts to introduce well-established concepts of the theory of multirate filter banks into the analysis of scanning imagers, which is applicable in a much broader sense than for the problems addressed here. To the author's knowledge this is also a novelty.

  19. The effect of sampling rate and anti-aliasing filters on high-frequency response spectra

    USGS Publications Warehouse

    Boore, David M.; Goulet, Christine

    2013-01-01

    The most commonly used intensity measure in ground-motion prediction equations is the pseudo-absolute response spectral acceleration (PSA), for response periods from 0.01 to 10 s (or frequencies from 0.1 to 100 Hz). PSAs are often derived from recorded ground motions, and these motions are usually filtered to remove high and low frequencies before the PSAs are computed. In this article we are only concerned with the removal of high frequencies. In modern digital recordings, this filtering corresponds at least to an anti-aliasing filter applied before conversion to digital values. Additional high-cut filtering is sometimes applied both to digital and to analog records to reduce high-frequency noise. Potential errors on the short-period (high-frequency) response spectral values are expected if the true ground motion has significant energy at frequencies above that of the anti-aliasing filter. This is especially important for areas where the instrumental sample rate and the associated anti-aliasing filter corner frequency (above which significant energy in the time series is removed) are low relative to the frequencies contained in the true ground motions. A ground-motion simulation study was conducted to investigate these effects and to develop guidance for defining the usable bandwidth for high-frequency PSA. The primary conclusion is that if the ratio of the maximum Fourier acceleration spectrum (FAS) to the FAS at a frequency fsaa corresponding to the start of the anti-aliasing filter is more than about 10, then PSA for frequencies above fsaa should be little affected by the recording process, because the ground-motion frequencies that control the response spectra will be less than fsaa . A second topic of this article concerns the resampling of the digital acceleration time series to a higher sample rate often used in the computation of short-period PSA. 
We confirm previous findings that sinc-function interpolation is preferred to the standard practice of using linear time interpolation for the resampling.

  20. Graphics processing unit (GPU) real-time infrared scene generation

    NASA Astrophysics Data System (ADS)

    Christie, Chad L.; Gouthas, Efthimios (Themie); Williams, Owen M.

    2007-04-01

    VIRSuite, the GPU-based suite of software tools developed at DSTO for real-time infrared scene generation, is described. The tools include the painting of scene objects with radiometrically-associated colours, translucent object generation, polar plot validation and versatile scene generation. Special features include radiometric scaling within the GPU and the presence of zoom anti-aliasing at the core of VIRSuite. Extension of the zoom anti-aliasing construct to cover target embedding and the treatment of translucent objects is described.

  1. Event Compression Using Recursive Least Squares Signal Processing.

    DTIC Science & Technology

    1980-07-01

Decimation of the Burstl signal with and without all-pole prefiltering to reduce aliasing is examined; Figures 3.32a-c and 3.33a-c show the same examples but with 4/1 decimation. Prefiltering to reduce aliasing was found not to improve the quality of the event-compressed signals; if filtering must be performed, all-pole filtering... (Massachusetts Institute of Technology, Cambridge, Research Laboratory of Electronics.)

  2. Sampling Frequency Optimisation and Nonlinear Distortion Mitigation in Subsampling Receiver

    NASA Astrophysics Data System (ADS)

    Castanheira, Pedro Xavier Melo Fernandes

Subsampling receivers utilise the subsampling method to down convert signals from radio frequency (RF) to a lower frequency band. Multiple signals can also be down converted using the subsampling receiver, but using an incorrect subsampling frequency could result in the signals aliasing one another after down conversion. Existing methods for subsampling multiband signals focus on down converting all the signals without any aliasing between them. The case considered initially was a dual band signal, and it was then further extended to a more general multiband case. In this thesis, a new method is proposed with the assumption that only one signal needs to not overlap the other multiband signals that are down converted at the same time. The proposed method introduces formulas based on this assumption to calculate the valid subsampling frequencies, ensuring that the target signal is not aliased by the other signals. Simulation results show that the proposed method provides lower valid subsampling frequencies for down conversion compared to the existing methods.
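The folding that governs valid subsampling frequencies follows the standard bandpass-sampling rule: reduce the carrier modulo the sampling rate, then fold about half the rate. The sketch below computes where a carrier lands after subsampling; the function name and example frequencies are illustrative, not the thesis's frequency-selection formulas.

```python
def subsampled_if(f_rf, fs):
    """Centre frequency (same units as inputs) to which an RF carrier
    folds after bandpass sampling at rate fs."""
    f = f_rf % fs                      # reduce modulo the sampling rate
    return fs - f if f > fs / 2 else f # fold about fs/2

# A 2412 MHz carrier subsampled at 100 MHz lands at 12 MHz,
# while a 2462 MHz carrier lands at 38 MHz.
if1 = subsampled_if(2412.0, 100.0)
if2 = subsampled_if(2462.0, 100.0)
```

Choosing fs so that the folded positions of all bands of interest do not overlap is exactly the constraint the subsampling-frequency formulas must satisfy.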

  3. Some aspects of simultaneously flying Topex Follow-On in a Topex orbit with Geosat Follow-On in a Geosat orbit

    NASA Technical Reports Server (NTRS)

    Parke, Michael E.; Born, George; Mclaughlin, Craig

    1994-01-01

The advantages of having Geosat Follow-On in a Geosat orbit flying simultaneously with Topex Follow-On in a Topex/Poseidon orbit are examined. The orbits are evaluated using two criteria. The first is the acute crossover angle. This angle should be at least 40 degrees in order to accurately resolve the slope of sea level at crossover locations. The second is tidal aliasing. In order to solve for tides, the largest constituents should not be aliased to a frequency lower than two cycles/year and should be separated by at least one cycle from one another and from exactly two cycles/year over the mission life. The results show that TFO and GFO in these orbits complement each other. Both satellites have large crossover angles over a wide latitude range. In addition, the Topex orbit has good aliasing characteristics for the M2 and P1 tides, for which the Geosat orbit has difficulty.
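The tidal-aliasing criterion above rests on standard alias arithmetic: a tidal constituent observed once per satellite repeat cycle appears at the residual frequency left after removing whole cycles per revisit. The sketch below uses published values (the 9.9156-day Topex/Poseidon repeat and the 12.4206 h M2 period); the function name is illustrative.

```python
import math

def alias_period(p_tide_hours, repeat_days):
    """Alias period (days) of a tidal constituent of period p_tide_hours
    when sampled once per satellite repeat cycle of repeat_days."""
    f = 24.0 * repeat_days / p_tide_hours    # tidal cycles per repeat
    frac = abs(f - round(f))                 # aliased cycles per repeat
    return math.inf if frac == 0 else repeat_days / frac

# M2 (12.4206 h) sampled by the 9.9156-day Topex repeat
# aliases to roughly 62 days, well above two cycles/year.
m2_alias = alias_period(12.4206012, 9.9156)
```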

  4. Harmonic analysis of electrified railway based on improved HHT

    NASA Astrophysics Data System (ADS)

    Wang, Feng

    2018-04-01

In this paper, the causes and harms of harmonics in the electric locomotive electrical system are first studied and analyzed. Based on the characteristics of the harmonics in the electrical system, the Hilbert-Huang transform (HHT) method is introduced. Based on an in-depth analysis of the empirical mode decomposition method and the Hilbert transform method, the reasons for, and solutions to, the endpoint effect and modal aliasing problems in the HHT method are explored. For the endpoint effect of the HHT, this paper uses a point-symmetric extension method to extend the collected data; to address the modal aliasing problem, this paper uses a high-frequency harmonic assistant method to preprocess the signal and gives an empirical formula for the high-frequency auxiliary harmonic. Finally, combining the suppression of the HHT endpoint effect and the modal aliasing problem, an improved HHT method is proposed and simulated in MATLAB. The simulation results show that the improved HHT is effective for harmonic analysis of the electric locomotive power supply system.
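The point-symmetric extension used to tame the endpoint effect can be sketched as an odd reflection of the signal about each endpoint, so the extension continues local trends instead of introducing a spurious step at the boundary. The extension length and the pairing with the high-frequency auxiliary harmonic in the paper are not reproduced here; the function name is illustrative.

```python
import numpy as np

def point_symmetric_extend(x, m):
    """Extend signal x by m samples at each end using point symmetry:
    each appended sample is the odd reflection of an interior sample
    about the endpoint value, 2*x[end] - x[mirror]."""
    left = 2 * x[0] - x[m:0:-1]            # reflect about (0, x[0])
    right = 2 * x[-1] - x[-2:-m - 2:-1]    # reflect about (N-1, x[-1])
    return np.concatenate([left, x, right])
```

For a locally linear signal the extension lies on the same line, which is why the envelope splines in EMD no longer swing wildly at the record ends.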

  5. Azimuthal filter to attenuate ground roll noise in the F-kx-ky domain for land 3D-3C seismic data with uneven acquisition geometry

    NASA Astrophysics Data System (ADS)

    Arevalo-Lopez, H. S.; Levin, S. A.

    2016-12-01

The vertical component of seismic wave reflections is contaminated by surface noise such as ground roll and secondary scattering from near surface inhomogeneities. A common method for attenuating these, unfortunately often aliased, arrivals is via velocity filtering and/or multichannel stacking. 3D-3C acquisition technology provides two additional sources of information about the surface wave noise that we exploit here: (1) areal receiver coverage, and (2) a pair of horizontal components recorded at the same location as the vertical component. Areal coverage allows us to segregate arrivals at each individual receiver or group of receivers by direction. The horizontal components, having much less compressional reflection body wave energy than the vertical component, provide a template of where to focus our energies on attenuating the surface wave arrivals. (In the simplest setting, the vertical component is a scaled 90 degree phase rotated version of the radial horizontal arrival, a potential third lever we have not yet tried to integrate.) The key to our approach is to use the magnitude of the horizontal components to outline a data-adaptive "velocity" filter region in the w-Kx-Ky domain. The big advantage for us is that even in the presence of uneven receiver geometries, the filter automatically tracks through aliasing without manual sculpting and a priori velocity and dispersion estimation. The method was applied to an aliased synthetic dataset based on a five layer earth model which also included shallow scatterers to simulate near-surface inhomogeneities and successfully removed both the ground roll and scatterers from the vertical component (Figure 1).

  6. On the wave number 2 eastward propagating quasi 2 day wave at middle and high latitudes

    NASA Astrophysics Data System (ADS)

    Gu, Sheng-Yang; Liu, Han-Li; Pedatella, N. M.; Dou, Xiankang; Liu, Yu

    2017-04-01

The temperature and wind data sets from the ensemble data assimilation version of the Whole Atmosphere Community Climate Model + Data Assimilation Research Testbed (WACCM + DART) developed at the National Center for Atmospheric Research (NCAR) are utilized to study the seasonal variability of the eastward quasi 2 day wave (QTDW) with zonal wave number 2 (E2) during 2007. The aliasing ratio of E2 from wave number 3 (W3) in the synoptic WACCM data set is a constant value of 4 × 10⁻⁶% due to its uniform sampling pattern, whereas the aliasing is latitudinally dependent if the WACCM fields are sampled asynoptically based on the Sounding of the Atmosphere using Broadband Emission Radiometry (SABER) sampling. The aliasing ratio based on SABER sampling is 75% at 40°S during late January, where and when W3 peaks. The analysis of the synoptic WACCM data set shows that the E2 is in fact a winter phenomenon, which peaks in the stratosphere and lower mesosphere at high latitudes. In the austral winter period, the amplitudes of E2 can reach 10 K, 20 m/s, and 30 m/s for temperature, zonal, and meridional winds, respectively. In the boreal winter period, the wave perturbations are only one third as strong as those in austral winter. Diagnostic analysis also shows that the mean flow instabilities in the winter upper mesosphere polar region provide sources for the amplification of E2. This is different from the westward QTDWs, whose amplifications are related to the summer easterly jet. In addition, the E2 also peaks at lower altitude than the westward modes.

  7. A 3D modeling approach to complex faults with multi-source data

    NASA Astrophysics Data System (ADS)

    Wu, Qiang; Xu, Hua; Zou, Xukai; Lei, Hongzhuan

    2015-04-01

    Fault modeling is a very important step in making an accurate and reliable 3D geological model. Typical existing methods demand enough fault data to be able to construct complex fault models, however, it is well known that the available fault data are generally sparse and undersampled. In this paper, we propose a workflow of fault modeling, which can integrate multi-source data to construct fault models. For the faults that are not modeled with these data, especially small-scale or approximately parallel with the sections, we propose the fault deduction method to infer the hanging wall and footwall lines after displacement calculation. Moreover, using the fault cutting algorithm can supplement the available fault points on the location where faults cut each other. Increasing fault points in poor sample areas can not only efficiently construct fault models, but also reduce manual intervention. By using a fault-based interpolation and remeshing the horizons, an accurate 3D geological model can be constructed. The method can naturally simulate geological structures no matter whether the available geological data are sufficient or not. A concrete example of using the method in Tangshan, China, shows that the method can be applied to broad and complex geological areas.

  8. Development of Fault Models for Hybrid Fault Detection and Diagnostics Algorithm: October 1, 2014 -- May 5, 2015

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Cheung, Howard; Braun, James E.

    2015-12-31

    This report describes models of building faults created for OpenStudio to support the ongoing development of fault detection and diagnostic (FDD) algorithms at the National Renewable Energy Laboratory. Building faults are operating abnormalities that degrade building performance, such as using more energy than normal operation, failing to maintain building temperatures according to the thermostat set points, etc. Models of building faults in OpenStudio can be used to estimate fault impacts on building performance and to develop and evaluate FDD algorithms. The aim of the project is to develop fault models of typical heating, ventilating and air conditioning (HVAC) equipment in the United States, and the fault models in this report are grouped as control faults, sensor faults, packaged and split air conditioner faults, water-cooled chiller faults, and other uncategorized faults. The control fault models simulate impacts of inappropriate thermostat control schemes such as an incorrect thermostat set point in unoccupied hours and manual changes of thermostat set point due to extreme outside temperature. Sensor fault models focus on the modeling of sensor biases including economizer relative humidity sensor bias, supply air temperature sensor bias, and water circuit temperature sensor bias. Packaged and split air conditioner fault models simulate refrigerant undercharging, condenser fouling, condenser fan motor efficiency degradation, non-condensable entrainment in refrigerant, and liquid line restriction. Other fault models that are uncategorized include duct fouling, excessive infiltration into the building, and blower and pump motor degradation.

  10. A new discrete dipole kernel for quantitative susceptibility mapping.

    PubMed

    Milovic, Carlos; Acosta-Cabronero, Julio; Pinto, José Miguel; Mattern, Hendrik; Andia, Marcelo; Uribe, Sergio; Tejos, Cristian

    2018-09-01

    Most approaches for quantitative susceptibility mapping (QSM) are based on a forward-model approximation that employs a continuous Fourier transform operator to solve a differential equation system. Such a formulation, however, is prone to high-frequency aliasing. The aim of this study was to reduce such errors using an alternative dipole kernel formulation based on the discrete Fourier transform and discrete operators. The impact of this approach on forward-model calculation and susceptibility inversion was evaluated against the continuous formulation, using both synthetic phantoms and in vivo MRI data. The discrete kernel demonstrated systematically better fits to analytic field solutions and showed fewer over-oscillations and aliasing artifacts while preserving low- and medium-frequency responses relative to those obtained with the continuous kernel. In the context of QSM estimation, the use of the proposed discrete kernel reduced error and increased sharpness. This proof-of-concept study demonstrated that discretizing the dipole kernel is advantageous for QSM. The impact on small or narrow structures such as the venous vasculature might be particularly relevant to high-resolution QSM applications with ultra-high-field MRI, a topic for future investigation. The proposed dipole kernel can be incorporated straightforwardly into existing QSM routines.
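    One plausible discretization, shown for illustration only (the paper's exact discrete operators may differ), replaces each squared spatial frequency in the continuous kernel with 2 - 2cos(2πf), the eigenvalue of a central second-difference operator:

```python
import numpy as np

def dipole_kernels(shape):
    """Continuous-FT dipole kernel and a discrete-difference variant on the
    same grid. The discrete form substitutes 2 - 2*cos(2*pi*f), the
    eigenvalue of the central second-difference operator, for each squared
    frequency (unit voxel size assumed; the origin is set to the 1/3 limit)."""
    kz, ky, kx = np.meshgrid(np.fft.fftfreq(shape[0]),
                             np.fft.fftfreq(shape[1]),
                             np.fft.fftfreq(shape[2]), indexing="ij")
    # Continuous formulation: D = 1/3 - kz^2 / |k|^2
    k2 = kx**2 + ky**2 + kz**2
    d_cont = 1.0 / 3.0 - np.divide(kz**2, k2,
                                   out=np.zeros_like(k2), where=k2 > 0)
    # Discrete formulation: same ratio built from finite-difference eigenvalues
    ez = 2 - 2 * np.cos(2 * np.pi * kz)
    ey = 2 - 2 * np.cos(2 * np.pi * ky)
    ex = 2 - 2 * np.cos(2 * np.pi * kx)
    e2 = ex + ey + ez
    d_disc = 1.0 / 3.0 - np.divide(ez, e2,
                                   out=np.zeros_like(e2), where=e2 > 0)
    return d_cont, d_disc

d_cont, d_disc = dipole_kernels((32, 32, 32))
# Both kernels stay within [-2/3, 1/3] and agree closely at low spatial
# frequencies; they differ near Nyquist, where the continuous form produces
# the high-frequency aliasing discussed above.
```

    The two kernels coincide in the low-frequency limit (Taylor expansion of 2 - 2cos(2πf) recovers (2πf)²), so the discrete variant changes only the aliasing-prone high-frequency behavior.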

  11. T-phase and tsunami signals recorded by IMS hydrophone triplets during the 2011 Tohoku earthquake

    NASA Astrophysics Data System (ADS)

    Matsumoto, H.; Haralabus, G.; Zampolli, M.; Ozel, N. M.; Yamada, T.; Mark, P. K.

    2016-12-01

    A hydrophone station of the International Monitoring System (IMS) of the Comprehensive Nuclear-Test-Ban Treaty (CTBT) is used to estimate the back-azimuth of T-phase signals generated by the 2011 Tohoku earthquake. Of the six IMS hydrophone stations required by the Treaty, five consist of two triplets each; the exception, HA1 (Australia), has only one. The hydrophones of each triplet are suspended in the SOFAR channel and arranged to form an equilateral triangle with sides approximately two kilometers long. The waveforms from the Tohoku earthquake were received at HA11, located on Wake Island, approximately 3100 km south-east of the earthquake epicenter. The frequency range used in the array analysis was chosen to be less than 0.375 Hz, assuming a target phase velocity of 1.5 km/s for T-phases. The T-phase signals that originated from the seismic source, however, show peaks in the frequency band above 1 Hz. Because of the 2 km inter-element distances, spatial aliasing is observed in the frequency-wavenumber (F-K) analysis if the entire 100 Hz bandwidth of the hydrophones is used. This spatial aliasing is significant because the distance between hydrophones in the triplet is large compared with the ratio between the phase velocity of T-phase signals and the frequency. To circumvent this spatial aliasing problem, a three-step processing technique used in seismic array analysis is applied: (1) high-pass filtering above 1 Hz to retrieve the T-phase, followed by (2) extraction of the envelope of this signal to highlight the T-phase contribution, and finally (3) low-pass filtering of the envelope below 0.375 Hz. The F-K analysis then provides accurate back-azimuth and slowness estimates without spatial aliasing. 
Deconvolved waveforms are also processed to retrieve tsunami components by using a three-pole model of the frequency-amplitude-phase (FAP) response below 0.1 Hz and the measured sensor response for higher frequencies. It is also shown that short-period pressure fluctuations recorded by the IMS hydrophones correspond to theoretical dispersion curves of tsunamis. Thus, short-period dispersive tsunami signals can be identified by the IMS hydrophone triplets.
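    The three-step technique can be sketched as follows. This is a minimal illustration assuming Butterworth filters and a Hilbert-transform envelope (the actual filter designs are not specified in the abstract); the two cutoffs come from the text, but the synthetic signal and all other parameters are invented for the demonstration:

```python
import numpy as np
from scipy.signal import butter, sosfiltfilt, hilbert

def tphase_envelope(x, fs, hp_cut=1.0, lp_cut=0.375):
    """High-pass -> envelope -> low-pass, returning a slowly varying trace
    suitable for F-K analysis below the array's spatial-aliasing limit."""
    sos_hp = butter(4, hp_cut, btype="high", fs=fs, output="sos")
    hp = sosfiltfilt(sos_hp, x)            # step 1: retrieve the T-phase
    env = np.abs(hilbert(hp))              # step 2: envelope of the signal
    sos_lp = butter(4, lp_cut, btype="low", fs=fs, output="sos")
    return sosfiltfilt(sos_lp, env)        # step 3: smooth envelope < 0.375 Hz

# Synthetic check: a 5 Hz "T-phase" carrier amplitude-modulated at 0.05 Hz.
fs = 100.0
t = np.arange(0, 200, 1 / fs)
modulation = 1.0 + 0.5 * np.sin(2 * np.pi * 0.05 * t)
x = modulation * np.sin(2 * np.pi * 5.0 * t)
env = tphase_envelope(x, fs)
```

    The recovered envelope tracks the slow modulation even though all of the carrier energy lies above 1 Hz, which is exactly what lets the subsequent F-K analysis operate below the spatial-aliasing limit.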

  12. Assessment of terrestrial water contributions to polar motion from GRACE and hydrological models

    NASA Astrophysics Data System (ADS)

    Jin, S. G.; Hassan, A. A.; Feng, G. P.

    2012-12-01

    The hydrological contribution to polar motion is a major challenge in explaining the observed geodetic residual of non-atmospheric and non-oceanic excitations, since hydrological models have limited input from comprehensive global direct observations. Although global terrestrial water storage (TWS) estimated from the Gravity Recovery and Climate Experiment (GRACE) provides a new opportunity to study the hydrological excitation of polar motion, the GRACE gridded data are subject to the post-processing de-striping algorithm, spatial gridded mapping and filter smoothing effects, as well as aliasing errors. In this paper, the hydrological contributions to polar motion are investigated and evaluated at seasonal and intra-seasonal time scales using the degree-2 harmonic coefficients recovered from all GRACE spherical harmonic coefficients and from hydrological model data processed with the same filter smoothing and recovery methods, including the Global Land Data Assimilation Systems (GLDAS) model, the Climate Prediction Center (CPC) model, the National Centers for Environmental Prediction/National Center for Atmospheric Research (NCEP/NCAR) reanalysis products and the European Centre for Medium-Range Weather Forecasts (ECMWF) operational model (opECMWF). It is shown that GRACE is better at explaining the geodetic residual of non-atmospheric and non-oceanic polar motion excitations at the annual period, whereas the models give poorer estimates with a larger phase shift or amplitude bias. At the semi-annual period, the GRACE estimates are also generally closer to the geodetic residual, but with some biases in phase or amplitude due mainly to aliasing errors near the semi-annual period from geophysical models. For periods of less than one year, both the hydrological models and GRACE are generally poor at explaining the intraseasonal polar motion excitations.

  13. Spectral analysis of highly aliased sea-level signals

    NASA Astrophysics Data System (ADS)

    Ray, Richard D.

    1998-10-01

    Observing high-wavenumber ocean phenomena with a satellite altimeter generally calls for "along-track" analyses of the data: measurements along a repeating satellite ground track are analyzed in a point-by-point fashion, as opposed to spatially averaging data over multiple tracks. The sea-level aliasing problems encountered in such analyses can be especially challenging. For TOPEX/POSEIDON, all signals with frequency greater than 18 cycles per year (cpy), including both tidal and subdiurnal signals, are folded into the 0-18 cpy band. Because the tidal bands are wider than 18 cpy, residual tidal cusp energy, plus any subdiurnal energy, is capable of corrupting any low-frequency signal of interest. The practical consequences of this are explored here by using real sea-level measurements from conventional tide gauges, for which the true oceanographic spectrum is known and to which a simulated "satellite-measured" spectrum, based on coarsely subsampled data, may be compared. At many locations the spectrum is sufficiently red that interannual frequencies remain unaffected. Intra-annual frequencies, however, must be interpreted with greater caution, and even interannual frequencies can be corrupted if the spectrum is flat. The results also suggest that whenever tides must be estimated directly from the altimetry, response methods of analysis are preferable to harmonic methods, even in nonlinear regimes; this will remain so for the foreseeable future. We concentrate on three example tide gauges: two coastal stations on the Malay Peninsula, where the closely aliased K1 and Ssa tides are strong, and one at Canton Island, where trapped equatorial waves are aliased.
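    The folding of super-Nyquist tidal frequencies into the 0-18 cpy band can be illustrated with a short alias-period calculation. The 9.9156-day value is the standard TOPEX/POSEIDON repeat period; the function itself is the generic textbook frequency-folding formula, not the paper's analysis method:

```python
def alias_period_days(signal_period_hours, repeat_days=9.9156):
    """Fold a short-period signal sampled every `repeat_days` into the band
    [0, Nyquist] and return its apparent (aliased) period in days."""
    f = 24.0 / signal_period_hours              # true frequency, cycles/day
    f_samp = 1.0 / repeat_days                  # ground-track sampling rate
    f_fold = f % f_samp                         # fold into one sampling band
    f_alias = min(f_fold, f_samp - f_fold)      # reflect about the Nyquist
    return 1.0 / f_alias

# The semidiurnal M2 tide (12.4206-h period, ~706 cpy) aliases to roughly
# a 62-day signal, well inside the 0-18 cpy band discussed above.
m2_alias = alias_period_days(12.4206012)
```

    This is why tidal residuals can masquerade as intra-annual oceanographic variability in along-track altimeter records.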

  14. Blending of phased array data

    NASA Astrophysics Data System (ADS)

    Duijster, Arno; van Groenestijn, Gert-Jan; van Neer, Paul; Blacquière, Gerrit; Volker, Arno

    2018-04-01

    The use of phased arrays is growing in the non-destructive testing industry, and the trend is towards large 2D arrays, but due to limitations it is currently not possible to record the signals from all elements, resulting in aliased data. In the past, we have presented a data interpolation scheme, `beyond spatial aliasing', to overcome this aliasing. In this paper, we present a different approach: blending and deblending of data. On the hardware side, groups of receivers are blended (grouped) into only a few transmit/recording channels. This allows transmission and recording with all elements, in a shorter acquisition time and with fewer channels. On the data-processing side, the blended data are deblended (separated) by transforming them to a different domain and applying iterative filtering and thresholding. Two filtering methods are compared: f-k filtering and wavefield extrapolation filtering. The deblending and filtering methods are demonstrated on simulated experimental data. Wavefield extrapolation filtering outperforms f-k filtering and can deal with groups of up to 24 receivers in a phased array of 48 × 48 elements.

  15. Identifying technical aliases in SELDI mass spectra of complex mixtures of proteins

    PubMed Central

    2013-01-01

    Background Biomarker discovery datasets created using mass spectrum protein profiling of complex mixtures of proteins contain many peaks that represent the same protein with different charge states. Correlated variables such as these can confound the statistical analyses of proteomic data. Previously we developed an algorithm that clustered mass spectrum peaks that were biologically or technically correlated. Here we demonstrate an algorithm that clusters correlated technical aliases only. Results In this paper, we propose a preprocessing algorithm that can be used for grouping technical aliases in mass spectrometry protein profiling data. The stringency of the variance allowed for clustering is customizable, thereby affecting the number of peaks that are clustered. Subsequent analysis of the clusters, instead of individual peaks, helps reduce difficulties associated with technically-correlated data, and can aid more efficient biomarker identification. Conclusions This software can be used to pre-process and thereby decrease the complexity of protein profiling proteomics data, thus simplifying the subsequent analysis of biomarkers by decreasing the number of tests. The software is also a practical tool for identifying which features to investigate further by purification, identification and confirmation. PMID:24010718

  16. GRAVSAT/GEOPAUSE covariance analysis including geopotential aliasing

    NASA Technical Reports Server (NTRS)

    Koch, D. W.

    1975-01-01

    A conventional covariance analysis for the GRAVSAT/GEOPAUSE mission is described in which the uncertainties of approximately 200 parameters, including the geopotential coefficients to degree and order 12, are estimated over three different tracking intervals. The estimated orbital uncertainties for both GRAVSAT and GEOPAUSE reach levels more accurate than those presently available. The adjusted measurement bias errors approach the mission goal. Survey errors in the low centimeter range are achieved after ten days of tracking. The mission is clearly shown to be able to determine geopotential terms to (12, 12) with accuracies one to two orders of magnitude better than present levels. A unique feature of this report is that the aliasing structure of this (12, 12) field is examined. It is shown that uncertainties in unadjusted terms to (12, 12) still exert a degrading effect upon the adjusted error of an arbitrarily selected term of lower degree and order. Finally, the distribution of the aliasing from the unestimated uncertainty of a particular high-degree-and-order geopotential term over the errors of all remaining adjusted terms is listed in detail.

  17. Super-resolution for imagery from integrated microgrid polarimeters.

    PubMed

    Hardie, Russell C; LeMaster, Daniel A; Ratliff, Bradley M

    2011-07-04

    Imagery from microgrid polarimeters is obtained by using a mosaic of pixel-wise micropolarizers on a focal plane array (FPA). Each distinct polarization image is obtained by subsampling the full FPA image. Thus, the effective pixel pitch for each polarization channel is increased and the sampling frequency is decreased. As a result, aliasing artifacts from such undersampling can corrupt the true polarization content of the scene. Here we present the first multi-channel multi-frame super-resolution (SR) algorithms designed specifically for the problem of image restoration in microgrid polarization imagers. These SR algorithms can be used to address aliasing and other degradations, without sacrificing field of view or compromising optical resolution with an anti-aliasing filter. The new SR methods are designed to exploit correlation between the polarimetric channels. One of the new SR algorithms uses a form of regularized least squares and has an iterative solution. The other is based on the faster adaptive Wiener filter SR method. We demonstrate that the new multi-channel SR algorithms are capable of providing significant enhancement of polarimetric imagery and that they outperform their independent channel counterparts.
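    As a minimal sketch of the subsampling described above, assuming for illustration a 2×2 repeating micropolarizer pattern (the pattern layout and channel labels are assumptions, not from the abstract): each polarization channel is a stride-2 slice of the focal plane array, so its pixel pitch doubles and its Nyquist frequency halves.

```python
import numpy as np

def split_microgrid(fpa):
    """Extract the four interleaved polarization channels from a focal plane
    array, assuming 0/45/90/135-degree micropolarizers tiled in a repeating
    2x2 mosaic (a common but here hypothetical layout)."""
    return {
        "p0":   fpa[0::2, 0::2],
        "p45":  fpa[0::2, 1::2],
        "p90":  fpa[1::2, 0::2],
        "p135": fpa[1::2, 1::2],
    }

fpa = np.arange(16).reshape(4, 4)   # toy 4x4 focal plane array
chans = split_microgrid(fpa)
# Each channel is 2x2: half the samples per axis, so half the Nyquist
# frequency -- scene content above that limit aliases, which is what the
# multi-channel SR algorithms described above are designed to undo.
```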

  18. Method for Pre-Conditioning a Measured Surface Height Map for Model Validation

    NASA Technical Reports Server (NTRS)

    Sidick, Erkin

    2012-01-01

    This software allows one to up-sample or down-sample a measured surface map for model validation, not only without introducing any re-sampling errors but also while eliminating the existing measurement noise and measurement errors. Because the re-sampling of a surface map is accomplished using analytical expressions of Zernike polynomials and a power spectral density (PSD) model, such re-sampling does not introduce the aliasing and interpolation errors caused by conventional interpolation and FFT-based (fast-Fourier-transform-based) spatial-filtering methods. Also, this new method automatically eliminates the measurement noise and other measurement errors such as artificial discontinuity. The developmental cycle of an optical system, such as a space telescope, includes, but is not limited to, the following two steps: (1) deriving requirements or specs on the optical quality of individual optics before they are fabricated, through optical modeling and simulations, and (2) validating the optical model using the measured surface height maps after all optics are fabricated. There are a number of computational issues related to model validation, one of which is the "pre-conditioning" or pre-processing of the measured surface maps before using them in a model validation software tool. This software addresses the following issues: (1) up- or down-sampling a measured surface map to match the gridded data format of a model validation tool, and (2) eliminating the surface measurement noise or measurement errors so that the resulting surface height map is continuous or smoothly varying. So far, the preferred method for re-sampling a surface map has been two-dimensional interpolation. The main problem with this method is that the same pixel can take different values when the method of interpolation is changed among "nearest," "linear," "cubic," and "spline" fitting in Matlab. 
The conventional, FFT-based spatial filtering method used to eliminate the surface measurement noise or measurement errors can also suffer from aliasing effects. During re-sampling of a surface map, this software preserves the low spatial-frequency characteristic of a given surface map through the use of Zernike-polynomial fit coefficients, and maintains mid- and high-spatial-frequency characteristics of the given surface map by the use of a PSD model derived from the two-dimensional PSD data of the mid- and high-spatial-frequency components of the original surface map. Because this new method creates the new surface map in the desired sampling format from analytical expressions only, it does not encounter any aliasing effects and does not cause any discontinuity in the resultant surface map.

  19. Quantifying structural uncertainty on fault networks using a marked point process within a Bayesian framework

    NASA Astrophysics Data System (ADS)

    Aydin, Orhun; Caers, Jef Karel

    2017-08-01

    Faults are one of the building blocks for subsurface modeling studies. Incomplete observations of subsurface fault networks lead to uncertainty pertaining to the location, geometry and existence of faults. In practice, gaps in incomplete fault network observations are filled based on tectonic knowledge and the interpreter's intuition about fault relationships. Modeling fault network uncertainty with realistic models that represent tectonic knowledge is still a challenge. Although methods exist that address specific sources of fault network uncertainty and the complexities of fault modeling, a unifying framework is still lacking. In this paper, we propose a rigorous approach to quantify fault network uncertainty. Fault pattern and intensity information are expressed by means of a marked point process, the marked Strauss point process. Fault network information is constrained to fault surface observations (complete or partial) within a Bayesian framework. A structural prior model is defined to quantitatively express fault patterns, geometries and relationships within the Bayesian framework. Structural relationships between faults, in particular fault abutting relations, are represented with a level-set-based approach. A Markov chain Monte Carlo sampler is used to sample posterior fault network realizations that reflect tectonic knowledge and honor fault observations. We apply the methodology to a field study from the Nankai Trough and Kumano Basin. The target for uncertainty quantification is a deep site with attenuated seismic data, only partially visible faults, and many faults missing from the survey or interpretation. A structural prior model is built from shallow analog sites that are believed to have undergone tectonics similar to the site of study. Fault network uncertainty for the field is quantified with fault network realizations that are conditioned to structural rules, tectonic information and partially observed fault surfaces. 
We show that the proposed methodology generates realistic fault network models conditioned to data and a conceptual model of the underlying tectonics.

  20. Sampling and position effects in the Electronically Steered Thinned Array Radiometer (ESTAR)

    NASA Technical Reports Server (NTRS)

    Katzberg, Stephen J.

    1993-01-01

    A simple engineering level model of the Electronically Steered Thinned Array Radiometer (ESTAR) is developed that allows an identification of the major effects of the sampling process involved with this technique. It is shown that the ESTAR approach is sensitive to aliasing and has a highly non-uniform sensitivity profile. It is further shown that the ESTAR approach is strongly sensitive to position displacements of the low-density sampling antenna elements.

  1. A study of real-time computer graphic display technology for aeronautical applications

    NASA Technical Reports Server (NTRS)

    Rajala, S. A.

    1981-01-01

    The development, simulation, and testing of an algorithm for anti-aliasing vector drawings is discussed. The pseudo-anti-aliasing line drawing algorithm is an extension of Bresenham's algorithm for computer control of a digital plotter. The algorithm produces a series of overlapping line segments in which the display intensity shifts from one segment to the other within the overlap (transition region). In this algorithm, the length of the overlap and the intensity shift are essentially constant; the transition region aids the eye in integrating the segments into a single smooth line.
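    The abstract does not give enough detail to reproduce the overlapping-segment extension, so the sketch below shows only the classic Bresenham core on which it builds, with a comment marking where the intensity-ramped overlap would be emitted:

```python
def bresenham(x0, y0, x1, y1):
    """Classic integer Bresenham line rasterizer; returns the plotted pixels.
    The pseudo-anti-aliasing extension described above would build on this
    loop, but its exact form is not specified in the abstract."""
    dx, dy = abs(x1 - x0), abs(y1 - y0)
    sx = 1 if x0 < x1 else -1
    sy = 1 if y0 < y1 else -1
    err = dx - dy
    points = []
    while True:
        points.append((x0, y0))   # the anti-aliasing variant would emit
                                  # overlapping segments with a ramped
                                  # intensity shift around each step here
        if (x0, y0) == (x1, y1):
            break
        e2 = 2 * err
        if e2 > -dy:
            err -= dy
            x0 += sx
        if e2 < dx:
            err += dx
            y0 += sy
    return points
```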

  2. Modeling astronomical adaptive optics performance with temporally filtered Wiener reconstruction of slope data

    NASA Astrophysics Data System (ADS)

    Correia, Carlos M.; Bond, Charlotte Z.; Sauvage, Jean-François; Fusco, Thierry; Conan, Rodolphe; Wizinowich, Peter L.

    2017-10-01

    We build on a long-standing tradition in astronomical adaptive optics (AO) of specifying performance metrics and error budgets using linear systems modeling in the spatial-frequency domain. Our goal is to provide a comprehensive tool for the calculation of error budgets in terms of residual temporally filtered phase power spectral densities and variances. In addition, the fast simulation of AO-corrected point spread functions (PSFs) provided by this method can be used as inputs for simulations of science observations with next-generation instruments and telescopes, in particular to predict post-coronagraphic contrast improvements for planet finder systems. We extend the previous results and propose the synthesis of a distributed Kalman filter to mitigate both aniso-servo-lag and aliasing errors whilst minimizing the overall residual variance. We discuss applications to (i) analytic AO-corrected PSF modeling in the spatial-frequency domain, (ii) post-coronagraphic contrast enhancement, (iii) filter optimization for real-time wavefront reconstruction, and (iv) PSF reconstruction from system telemetry. Under perfect knowledge of wind velocities, we show that ~60 nm rms error reduction can be achieved with the distributed Kalman filter embodying anti-aliasing reconstructors on 10 m class high-order AO systems, leading to contrast improvement factors of up to three orders of magnitude at few λ/D separations (~1-5 λ/D) for a 0 magnitude star and reaching close to one order of magnitude for a 12 magnitude star.

  3. The Fault Block Model: A novel approach for faulted gas reservoirs

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ursin, J.R.; Moerkeseth, P.O.

    1994-12-31

    The Fault Block Model was designed for the development of gas production from Sleipner Vest. The reservoir consists of marginal marine sandstone of the Hugin Formation. Modeling of highly faulted and compartmentalized reservoirs is severely impeded by the nature and extent of known and undetected faults and, in particular, their effectiveness as flow barriers. The model presented is efficient and, for highly faulted reservoirs, superior to other models such as grid-based simulators, because it minimizes the effect of major undetected faults and geological uncertainties. In this article the authors present the Fault Block Model as a new tool to better understand the implications of geological uncertainty in faulted gas reservoirs with good productivity, with respect to uncertainty in well coverage and optimum gas recovery.

  4. Comparison of high resolution x-ray detectors with conventional FPDs using experimental MTFs and apodized aperture pixel design for reduced aliasing

    NASA Astrophysics Data System (ADS)

    Shankar, A.; Russ, M.; Vijayan, S.; Bednarek, D. R.; Rudin, S.

    2017-03-01

    Apodized Aperture Pixel (AAP) design, proposed by Ismailova et al., is an alternative to the conventional pixel design. The advantages of AAP processing with a sinc filter, in comparison with other filters, include non-degradation of MTF values and elimination of signal and noise aliasing, resulting in increased performance at higher frequencies approaching the Nyquist frequency. If high-resolution, small field-of-view (FOV) detectors with small pixels, used during critical stages of Endovascular Image Guided Interventions (EIGIs), could also be extended to cover the full field-of-view typical of flat panel detectors (FPDs) and made to have larger effective pixels, then methods must be used to preserve the MTF over the frequency range up to the Nyquist frequency of the FPD while minimizing aliasing. In this work, we convolve the experimentally measured MTFs of a Microangiographic Fluoroscope (MAF) detector (the MAF-CCD, with 35 μm pixels) and a High Resolution Fluoroscope (HRF) detector (the HRF-CMOS50, with 49.5 μm pixels) with the AAP filter and show the superiority of the results compared with MTFs resulting from moving-average pixel binning and with the MTF of a standard FPD. The effect of using AAP is also shown in the spatial domain, when used to image an infinitely small point object. For detectors in neurovascular interventions, where high resolution is the priority during critical parts of the intervention but a full FOV with larger pixels is needed during less critical parts, AAP design provides an alternative to simple pixel binning, effectively eliminating signal and noise aliasing while allowing small-FOV high-resolution imaging to be maintained during critical parts of the EIGI.

  5. Fault compaction and overpressured faults: results from a 3-D model of a ductile fault zone

    NASA Astrophysics Data System (ADS)

    Fitzenz, D. D.; Miller, S. A.

    2003-10-01

    A model of a ductile fault zone is incorporated into a forward 3-D earthquake model to better constrain fault-zone hydraulics. The conceptual framework of the model fault zone was chosen such that two distinct parts are recognized. The fault core, characterized by relatively low permeability, is composed of a coseismic fault surface embedded in a visco-elastic volume that can creep and compact. The fault core is surrounded by, and mostly sealed from, a high-permeability damage zone. The model fault properties correspond explicitly to those of the coseismic fault core. Porosity and pore pressure evolve to account for the viscous compaction of the fault core, while stresses evolve in response to the applied tectonic loading and to shear creep of the fault itself. A small diffusive leakage is allowed in and out of the fault zone. Coseismically, porosity is created to account for frictional dilatancy. We show that, in the case of a 3-D fault model with no in-plane flow and constant fluid compressibility, pore pressures do not drop to hydrostatic levels after a seismic rupture, leading to an overpressured, weak fault. Since pore pressure plays a key role in fault behaviour, we investigate coseismic changes in hydraulic properties. In the full 3-D model, pore pressures vary instantaneously through the poroelastic effect during propagation of the rupture. Once the stress state stabilizes, pore pressures are incrementally redistributed in the failed patch. We show that the significant effect of pressure-dependent fluid compressibility in the no-in-plane-flow case becomes secondary when the other spatial dimensions are considered, because in-plane flow with a near-lithostatically pressured neighbourhood equilibrates at a pressure much higher than hydrostatic, forming persistent high-pressure fluid compartments. 
If the observed faults are not all overpressured and weak, other mechanisms not included in this model must be at work in nature and need to be investigated. Significant leakage perpendicular to the fault strike (in the case of a young fault), or cracks hydraulically linking the fault core to the damage zone (for a mature fault), are probable mechanisms for keeping faults strong and might play a significant role in modulating fault pore pressures. Therefore, the fault-normal hydraulic properties of fault zones should be a future focus of field and numerical experiments.

  6. Optical and Radio Frequency Refractivity Fluctuations from High Resolution Point Sensors: Sea Breezes and Other Observations

    DTIC Science & Technology

    2007-03-01

    velocity and direction along with vertical velocities are derived from the measured time of flight for the ultrasonic signals (manufacturer's...data set. To prevent aliasing, a wave must be sampled at least twice per period, so the Nyquist frequency is f_N = f_s/2. 3. Sampling Requirements...an order of magnitude or more. To refine models or conduct climatological studies for Cn2 requires direct measurements to identify the underlying

  7. Analysis of typical fault-tolerant architectures using HARP

    NASA Technical Reports Server (NTRS)

    Bavuso, Salvatore J.; Bechta Dugan, Joanne; Trivedi, Kishor S.; Rothmann, Elizabeth M.; Smith, W. Earl

    1987-01-01

    Difficulties encountered in the modeling of fault-tolerant systems are discussed. The Hybrid Automated Reliability Predictor (HARP) approach to modeling fault-tolerant systems is described. The HARP is written in FORTRAN, consists of nearly 30,000 lines of code and comments, and is based on behavioral decomposition. Using behavioral decomposition, the dependability model is divided into fault-occurrence/repair and fault/error-handling models; the characteristics and combining of these two models are examined. Examples in which the HARP is applied to the modeling of some typical fault-tolerant systems, including a local-area network, two fault-tolerant computer systems, and a flight control system, are presented.

  8. Power cepstrum technique with application to model helicopter acoustic data

    NASA Technical Reports Server (NTRS)

    Martin, R. M.; Burley, C. L.

    1986-01-01

    The application of the power cepstrum to measured helicopter-rotor acoustic data is investigated. A previously applied correction to the reconstructed spectrum is shown to be incorrect. For an exact echoed signal, the amplitude of the cepstrum echo spike at the delay time is linearly related to the echo's relative amplitude in the time domain. If the measured spectrum is not entirely from the source signal, the cepstrum will not yield the desired echo characteristics, and cepstral aliasing may occur because of the effective sample rate in the frequency domain. The spectral analysis bandwidth must be less than one-half the echo ripple frequency or cepstral aliasing can occur. The power cepstrum editing technique is a useful tool for removing some of the contamination due to acoustic reflections from measured rotor acoustic spectra. Cepstrum editing yields an improved estimate of the free-field spectrum, but the correction process is limited by the lack of accurate knowledge of the echo transfer function. An alternate procedure that does not require cepstral editing is proposed, which allows the complete correction of a contaminated spectrum through use of both the transfer function and delay time of the echo process.
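The echo-spike property described above can be sketched with a hedged numpy example (an assumed broadband source and echo model, not the paper's processing chain): the power cepstrum of a signal plus a delayed echo shows a spike at the echo delay.

```python
import numpy as np

# Hedged sketch of power-cepstrum echo detection (illustrative signal
# model, not the measured rotor data): a broadband source plus a scaled,
# delayed copy of itself produces a cepstral spike at the delay time.
rng = np.random.default_rng(0)
n = 4096
x = rng.standard_normal(n)      # broadband "source" signal

delay = 200                     # echo delay in samples (assumed)
alpha = 0.5                     # echo relative amplitude (assumed)
y = x.copy()
y[delay:] += alpha * x[:-delay]

# Power cepstrum: inverse FFT of the log power spectrum.
spectrum = np.abs(np.fft.rfft(y)) ** 2
cepstrum = np.fft.irfft(np.log(spectrum))

# The dominant spike away from the origin sits at the echo delay, and
# its height grows with the echo amplitude alpha.
peak = np.argmax(cepstrum[50 : n // 2]) + 50
print(peak)  # 200, the echo delay
```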

  9. Fast algorithm for the rendering of three-dimensional surfaces

    NASA Astrophysics Data System (ADS)

    Pritt, Mark D.

    1994-02-01

    It is often desirable to draw a detailed and realistic representation of surface data on a computer graphics display. One such representation is a 3D shaded surface. Conventional techniques for rendering shaded surfaces are slow, however, and require substantial computational power. Furthermore, many techniques suffer from aliasing effects, which appear as jagged lines and edges. This paper describes an algorithm for the fast rendering of shaded surfaces without aliasing effects. It is much faster than conventional ray tracing and polygon-based rendering techniques and is suitable for interactive use. On an IBM RISC System/6000™ workstation it renders a 1000 × 1000 surface in about 7 seconds.

  10. A distributed fault-detection and diagnosis system using on-line parameter estimation

    NASA Technical Reports Server (NTRS)

    Guo, T.-H.; Merrill, W.; Duyar, A.

    1991-01-01

    The development of a model-based fault-detection and diagnosis (FDD) system is reviewed. The system can be used as an integral part of an intelligent control system. It determines the faults of a system by comparing measurements of the system with a priori information represented by the model of the system. The method of modeling a complex system is described, and a description of diagnosis models that include process faults is presented. Three distinct classes of fault modes are covered by the system performance model equation: actuator faults, sensor faults, and performance degradation. A system equation for a complete model that describes all three classes of faults is given. The strategy for detecting the fault and estimating the fault parameters using a distributed on-line parameter identification scheme is presented. A two-step approach is proposed. The first step is composed of a group of hypothesis-testing modules (HTMs) operating in parallel, one to test each class of faults. The second step is the fault diagnosis module, which checks all the information obtained from the HTM level, isolates the fault, and determines its magnitude. The proposed FDD system was demonstrated by applying it to detect actuator and sensor faults added to a simulation of the Space Shuttle Main Engine. The simulation results show that the proposed FDD system can adequately detect the faults and estimate their magnitudes.
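The two-step structure described above can be sketched in a few lines (a hedged toy, not the paper's SSME implementation; the residual form, thresholds, and signal values are all assumptions): a bank of hypothesis-testing modules each compares a measurement with a model prediction for one fault class, and a diagnosis step flags the class whose residual exceeds its threshold.

```python
import numpy as np

# Hedged sketch of the two-step FDD idea: step 1 runs one hypothesis-
# testing module (HTM) per fault class; step 2 inspects all HTM outputs,
# isolates the fault, and reports its magnitude.

def htm_residual(measured, predicted):
    """Normalized residual between a measurement and its model prediction."""
    return np.abs(measured - predicted) / (np.abs(predicted) + 1e-9)

def diagnose(measurements, predictions, thresholds):
    """Step 2: check every HTM residual and return the flagged faults."""
    faults = {}
    for name in measurements:
        r = htm_residual(measurements[name], predictions[name])
        if r > thresholds[name]:
            faults[name] = r
    return faults

# Illustrative data (assumed): a 20% actuator fault, healthy sensor and plant.
measurements = {"actuator": 0.8, "sensor": 1.01, "performance": 5.02}
predictions = {"actuator": 1.0, "sensor": 1.0, "performance": 5.0}
thresholds = {"actuator": 0.05, "sensor": 0.05, "performance": 0.05}

print(diagnose(measurements, predictions, thresholds))
# only the actuator class is flagged, with magnitude ~0.2
```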

  11. Image restoration techniques as applied to Landsat MSS and TM data

    USGS Publications Warehouse

    Meyer, David

    1987-01-01

    Two factors are primarily responsible for the loss of image sharpness in processing digital Landsat images. The first factor is inherent in the data because the sensor's optics and electronics, along with other sensor elements, blur and smear the data. Digital image restoration can be used to reduce this degradation. The second factor, which further degrades the image by blurring or aliasing, is the resampling performed during geometric correction. An image restoration procedure, when used in place of typical resampling techniques, reduces sensor degradation without introducing the artifacts associated with resampling. The EROS Data Center (EDC) has implemented the restoration procedure for Landsat multispectral scanner (MSS) and thematic mapper (TM) data. This capability, developed at the University of Arizona by Dr. Robert Schowengerdt and Lynette Wood, combines restoration and resampling in a single step to produce geometrically corrected MSS and TM imagery. As with resampling, restoration demands that a tradeoff be made between aliasing, which occurs when attempting to extract maximum sharpness from an image, and blurring, which reduces the aliasing problem but sacrifices image sharpness. The restoration procedure used at EDC minimizes these artifacts by being adaptive, tailoring the tradeoff to be optimal for individual images.

  12. Acquisition of a full-resolution image and aliasing reduction for a spatially modulated imaging polarimeter with two snapshots

    PubMed Central

    Zhang, Jing; Yuan, Changan; Huang, Guohua; Zhao, Yinjun; Ren, Wenyi; Cao, Qizhi; Li, Jianying; Jin, Mingwu

    2018-01-01

    A snapshot imaging polarimeter using spatial modulation can encode four Stokes parameters, allowing instantaneous polarization measurement from a single interferogram. However, the reconstructed polarization images can suffer from a severe aliasing signal if the high-frequency component of the intensity image is prominent and falls within the polarization channels, and the reconstructed intensity image also suffers a reduction of spatial resolution due to low-pass filtering. In this work, a method using two anti-phase snapshots is proposed to address the two problems simultaneously. The full-resolution target image and the pure interference fringes can be obtained from the sum and the difference of the two anti-phase interferograms, respectively. The polarization information reconstructed from the pure interference fringes does not contain the aliasing signal from the high-frequency component of the object intensity image. The principles of the method are derived and its feasibility is tested by both computer simulation and a verification experiment. This work provides a novel method for spatially modulated imaging polarimetry with two snapshots to simultaneously reconstruct a full-resolution object intensity image and high-quality polarization components. PMID:29714224
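The sum/difference step at the heart of the method can be sketched with a minimal 1-d signal model (the intensity and fringe profiles below are assumptions for illustration, not the paper's optics): the two snapshots carry fringes of opposite phase, so their sum recovers the intensity at full resolution and their difference isolates the pure fringes.

```python
import numpy as np

# Minimal sketch of the anti-phase two-snapshot idea: each interferogram
# is the object intensity plus modulated fringes, with the fringes flipped
# in sign between the two snapshots.
x = np.linspace(0, 1, 256)
intensity = 1.0 + 0.5 * np.sin(40 * np.pi * x)   # object intensity (high-frequency content)
fringes = 0.3 * np.cos(120 * np.pi * x)          # polarization-modulated fringes

snap1 = intensity + fringes       # first snapshot
snap2 = intensity - fringes       # anti-phase snapshot

recovered_intensity = (snap1 + snap2) / 2   # full-resolution intensity image
recovered_fringes = (snap1 - snap2) / 2     # pure fringes, no intensity aliasing

assert np.allclose(recovered_intensity, intensity)
assert np.allclose(recovered_fringes, fringes)
```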

  13. Sampling and Reconstruction of the Pupil and Electric Field for Phase Retrieval

    NASA Technical Reports Server (NTRS)

    Dean, Bruce; Smith, Jeffrey; Aronstein, David

    2012-01-01

    This technology is based on sampling considerations for a band-limited function, which has application to optical estimation generally, and to phase retrieval specifically. The analysis begins with the observation that the Fourier transform of an optical aperture function (pupil) can be implemented with minimal aliasing for Q values down to Q = 1. The sampling ratio, Q, is defined as the ratio of the sampling frequency to the band-limited cut-off frequency. The analytical results are given using a 1-d aperture function, and with the electric field defined by the band-limited sinc(x) function. Perfect reconstruction of the Fourier transform (electric field) is derived using the Whittaker-Shannon sampling theorem for 1
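The Whittaker-Shannon reconstruction invoked above can be illustrated with a hedged 1-d numpy sketch (an illustrative band-limited field, not the phase-retrieval code itself): samples of a band-limited function taken above its Nyquist rate reconstruct it exactly, up to truncation of the sample window.

```python
import numpy as np

# Hedged 1-d illustration of Whittaker-Shannon reconstruction: with unit
# sample spacing, a function band-limited below the Nyquist frequency
# (0.5 cycles/sample) satisfies f(t) = sum_n f(n) * sinc(t - n), where
# np.sinc(x) = sin(pi*x)/(pi*x).
def reconstruct(samples, n, t):
    return np.sum(samples * np.sinc(t - n))

n = np.arange(-200, 201)
field = np.sinc(0.5 * n)   # cutoff 0.25 cycles/sample, so Q = 1/0.25 = 4

approx = reconstruct(field, n, 0.3)   # reconstruct at an off-grid point
exact = np.sinc(0.5 * 0.3)
print(abs(approx - exact) < 1e-2)     # True: only a small truncation error remains
```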

  14. Recovery of an evolving magnetic flux rope in the solar wind: Decomposing spatial and temporal variations from single-spacecraft data

    NASA Astrophysics Data System (ADS)

    Hasegawa, H.; Sonnerup, B.; Hu, Q.; Nakamura, T.

    2013-12-01

    We present a novel single-spacecraft data analysis method for decomposing spatial and temporal variations of physical quantities at points along the path of a spacecraft in spacetime. The method is designed for use in the reconstruction of slowly evolving two-dimensional, magneto-hydrostatic structures (Grad-Shafranov equilibria) in a space plasma. It is an extension of the one developed by Sonnerup and Hasegawa [2010] and Hasegawa et al. [2010], in which it was assumed that variations in the time series of data, recorded as the structures move past the spacecraft, are all due to spatial effects. In reality, some of the observed variations are usually caused by temporal evolution of the structure during the time it moves past the observing spacecraft; the information in the data about the spatial structure is aliased by temporal effects. The purpose here is to remove this time aliasing from the reconstructed maps of field and plasma properties. Benchmark tests are performed by use of synthetic data taken by a virtual spacecraft as it traverses, at a constant velocity, a slowly growing magnetic flux rope in a two-dimensional magnetohydrodynamic simulation of magnetic reconnection. These tests show that the new method can better recover the spacetime behavior of the flux rope than does the original version, in which time aliasing effects had not been removed. An application of the new method to a solar wind flux rope, observed by the ACE spacecraft, suggests that it was evolving in a significant way during the ~17 hour interval of the traversal. References Hasegawa, H., B. U. Ö. Sonnerup, and T. K. M. Nakamura (2010), Recovery of time evolution of Grad-Shafranov equilibria from single-spacecraft data: Benchmarking and application to a flux transfer event, J. Geophys. Res., 115, A11219, doi:10.1029/2010JA015679. Sonnerup, B. U. Ö., and H. Hasegawa (2010), On slowly evolving Grad-Shafranov equilibria, J. Geophys. Res., 115, A11218, doi:10.1029/2010JA015678. 
Figure caption: Magnetic field maps recovered from (a) the aliased (original) and (b) the de-aliased (new) versions of the time evolution method. Colors show the out-of-plane (z) magnetic field component, and white arrows at points along y = 0 show the transverse velocities obtained from the reconstruction. The blue diamonds in panel (b) mark the location of the ACE spacecraft.

  15. A testing-coverage software reliability model considering fault removal efficiency and error generation.

    PubMed

    Li, Qiuying; Pham, Hoang

    2017-01-01

    In this paper, we propose a software reliability model that considers not only error generation but also fault removal efficiency, combined with testing coverage information, based on a nonhomogeneous Poisson process (NHPP). During the past four decades, many software reliability growth models (SRGMs) based on NHPP have been proposed to estimate software reliability measures, most of which share the following assumptions: 1) during the testing phase, the fault detection rate commonly changes; 2) as a result of imperfect debugging, fault removal is associated with a fault re-introduction rate. However, few SRGMs in the literature differentiate between fault detection and fault removal, i.e. they seldom consider imperfect fault removal efficiency. In practical software development, fault removal efficiency cannot always be perfect: detected failures might not be removed completely, the original faults might still exist, and new faults might be introduced meanwhile, which is referred to as the imperfect debugging phenomenon. In this study, a model incorporating the fault introduction rate, fault removal efficiency, and testing coverage into software reliability evaluation is developed, using testing coverage to express the fault detection rate and fault removal efficiency to model fault repair. We compare the performance of the proposed model with several existing NHPP SRGMs using three sets of real failure data and five criteria. The results show that the model gives better fitting and predictive performance.
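As background for the NHPP framework mentioned above, here is a minimal sketch of the classic Goel-Okumoto mean-value function with a simple removal-efficiency scaling (an assumed textbook form and parameter values, not the model proposed in the paper): the expected cumulative number of detected faults is m(t) = a(1 - e^(-bt)).

```python
import math

# Minimal NHPP illustration (classic Goel-Okumoto SRGM, not the paper's
# proposed model): a is the total fault content, b the per-fault detection
# rate. Imperfect debugging is crudely approximated here by a removal
# efficiency p < 1 applied to the detected faults.
def mean_faults_detected(t, a=100.0, b=0.1):
    return a * (1.0 - math.exp(-b * t))

def mean_faults_removed(t, p=0.9, a=100.0, b=0.1):
    # hedged simplification: a fraction p of detected faults is removed
    return p * mean_faults_detected(t, a, b)

print(round(mean_faults_detected(10), 1))   # 63.2 faults expected detected
print(round(mean_faults_removed(10), 1))    # 56.9 faults expected removed
```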

  16. Fault Diagnostics for Turbo-Shaft Engine Sensors Based on a Simplified On-Board Model

    PubMed Central

    Lu, Feng; Huang, Jinquan; Xing, Yaodong

    2012-01-01

    Combining a simplified on-board turbo-shaft model with sensor fault diagnostic logic, a model-based sensor fault diagnosis method is proposed. The existing fault diagnosis method for key turbo-shaft engine sensors is mainly based on a dual-redundancy technique, which cannot resolve some cases because two channels alone do not provide enough information for a judgment, while additional hardware redundancy would increase structural complexity and weight. The simplified on-board model instead provides an analytical third channel against which the dual-channel measurements are compared. The simplified turbo-shaft model contains the gas generator model and the power turbine model with loads, and is built up via the dynamic parameters method. Sensor fault detection and diagnosis (FDD) logic is designed, and two types of sensor failures, step faults and drift faults, are simulated. When the discrepancy among the triplex channels exceeds a tolerance level, the fault diagnosis logic determines the cause of the difference. Through this approach, the sensor fault diagnosis system achieves the objectives of anomaly detection, sensor fault diagnosis, and redundancy recovery. Finally, experiments on this method are carried out on a turbo-shaft engine, and two types of faults under different channel combinations are presented. The experimental results show that the proposed method for sensor fault diagnostics is efficient. PMID:23112645
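The triplex arbitration idea can be sketched in a few lines (hedged toy logic with assumed tolerance and signal values, not the paper's algorithm): the on-board model supplies an analytical third channel, and when the two hardware channels disagree, the model value arbitrates which sensor channel has failed.

```python
# Hedged sketch of model-based triplex sensor arbitration: two hardware
# channels plus one analytical (model) channel. If the hardware channels
# agree within tolerance, the sensor is declared healthy; otherwise the
# channel farther from the model estimate is declared faulty.
def triplex_diagnose(ch_a, ch_b, model, tol):
    if abs(ch_a - ch_b) <= tol:
        return "healthy"
    return "channel A faulty" if abs(ch_a - model) > abs(ch_b - model) else "channel B faulty"

print(triplex_diagnose(100.0, 100.5, 100.2, tol=1.0))   # healthy
print(triplex_diagnose(120.0, 100.5, 100.2, tol=1.0))   # channel A faulty (step fault)
```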

  17. Fault diagnostics for turbo-shaft engine sensors based on a simplified on-board model.

    PubMed

    Lu, Feng; Huang, Jinquan; Xing, Yaodong

    2012-01-01

    Combining a simplified on-board turbo-shaft model with sensor fault diagnostic logic, a model-based sensor fault diagnosis method is proposed. The existing fault diagnosis method for key turbo-shaft engine sensors is mainly based on a dual-redundancy technique, which cannot resolve some cases because two channels alone do not provide enough information for a judgment, while additional hardware redundancy would increase structural complexity and weight. The simplified on-board model instead provides an analytical third channel against which the dual-channel measurements are compared. The simplified turbo-shaft model contains the gas generator model and the power turbine model with loads, and is built up via the dynamic parameters method. Sensor fault detection and diagnosis (FDD) logic is designed, and two types of sensor failures, step faults and drift faults, are simulated. When the discrepancy among the triplex channels exceeds a tolerance level, the fault diagnosis logic determines the cause of the difference. Through this approach, the sensor fault diagnosis system achieves the objectives of anomaly detection, sensor fault diagnosis, and redundancy recovery. Finally, experiments on this method are carried out on a turbo-shaft engine, and two types of faults under different channel combinations are presented. The experimental results show that the proposed method for sensor fault diagnostics is efficient.

  18. Tutorial: Advanced fault tree applications using HARP

    NASA Technical Reports Server (NTRS)

    Dugan, Joanne Bechta; Bavuso, Salvatore J.; Boyd, Mark A.

    1993-01-01

    Reliability analysis of fault tolerant computer systems for critical applications is complicated by several factors. These modeling difficulties are discussed and dynamic fault tree modeling techniques for handling them are described and demonstrated. Several advanced fault tolerant computer systems are described, and fault tree models for their analysis are presented. HARP (Hybrid Automated Reliability Predictor) is a software package developed at Duke University and NASA Langley Research Center that is capable of solving the fault tree models presented.

  19. A Simplified Model for Multiphase Leakage through Faults with Applications for CO2 Storage

    NASA Astrophysics Data System (ADS)

    Watson, F. E.; Doster, F.

    2017-12-01

    In the context of geological CO2 storage, faults in the subsurface could affect storage security by acting as high-permeability pathways that allow CO2 to flow upwards and away from the storage formation. Numerical models are required to assess the likelihood of leakage through faults and the impacts faults might have on storage security. However, faults are complex geological features, usually consisting of a fault core surrounded by a highly fractured damage zone. A direct representation of these in a numerical model would require very fine grid resolution and would be computationally expensive. Here, we present the development of a reduced-complexity model for fault flow using the vertically integrated formulation. This model captures the main features of the flow but does not require us to resolve the vertical dimension, nor the fault in the horizontal dimension, explicitly. It is thus less computationally expensive than fully resolved models. Consequently, we can quickly run many realisations for parameter uncertainty studies of CO2 injection into faulted reservoirs. We develop the model by explicitly simulating local 3D representations of faults for characteristic scenarios using the Matlab Reservoir Simulation Toolbox (MRST). We have assessed the impact of variables such as fault geometry, porosity and permeability on multiphase leakage rates.

  20. A testing-coverage software reliability model considering fault removal efficiency and error generation

    PubMed Central

    Li, Qiuying; Pham, Hoang

    2017-01-01

    In this paper, we propose a software reliability model that considers not only error generation but also fault removal efficiency, combined with testing coverage information, based on a nonhomogeneous Poisson process (NHPP). During the past four decades, many software reliability growth models (SRGMs) based on NHPP have been proposed to estimate software reliability measures, most of which share the following assumptions: 1) during the testing phase, the fault detection rate commonly changes; 2) as a result of imperfect debugging, fault removal is associated with a fault re-introduction rate. However, few SRGMs in the literature differentiate between fault detection and fault removal, i.e. they seldom consider imperfect fault removal efficiency. In practical software development, fault removal efficiency cannot always be perfect: detected failures might not be removed completely, the original faults might still exist, and new faults might be introduced meanwhile, which is referred to as the imperfect debugging phenomenon. In this study, a model incorporating the fault introduction rate, fault removal efficiency, and testing coverage into software reliability evaluation is developed, using testing coverage to express the fault detection rate and fault removal efficiency to model fault repair. We compare the performance of the proposed model with several existing NHPP SRGMs using three sets of real failure data and five criteria. The results show that the model gives better fitting and predictive performance. PMID:28750091

  1. Automatic translation of digraph to fault-tree models

    NASA Technical Reports Server (NTRS)

    Iverson, David L.

    1992-01-01

    The author presents a technique for converting digraph models, including those models containing cycles, to a fault-tree format. A computer program which automatically performs this translation using an object-oriented representation of the models has been developed. The fault-trees resulting from translations can be used for fault-tree analysis and diagnosis. Programs to calculate fault-tree and digraph cut sets and perform diagnosis with fault-tree models have also been developed. The digraph to fault-tree translation system has been successfully tested on several digraphs of varying size and complexity. Details of some representative translation problems are presented. Most of the computation performed by the program is dedicated to finding minimal cut sets for digraph nodes in order to break cycles in the digraph. Fault-trees produced by the translator have been successfully used with NASA's Fault-Tree Diagnosis System (FTDS) to produce automated diagnostic systems.

  2. New insights on stress rotations from a forward regional model of the San Andreas fault system near its Big Bend in southern California

    USGS Publications Warehouse

    Fitzenz, D.D.; Miller, S.A.

    2004-01-01

    Understanding the stress field surrounding and driving active fault systems is an important component of mechanistic seismic hazard assessment. We develop and present results from a time-forward three-dimensional (3-D) model of the San Andreas fault system near its Big Bend in southern California. The model boundary conditions are assessed by comparing model and observed tectonic regimes. The model of earthquake generation along two fault segments is used to target measurable properties (e.g., stress orientations, heat flow) that may allow inferences on the stress state on the faults. It is a quasi-static model, where GPS-constrained tectonic loading drives faults modeled as mostly sealed viscoelastic bodies embedded in an elastic half-space subjected to compaction and shear creep. A transpressive tectonic regime develops southwest of the model bend as a result of the tectonic loading and migrates toward the bend because of fault slip. The strength of the model faults is assessed on the basis of stress orientations, stress drop, and overpressures, showing a departure in the behavior of 3-D finite faults compared to models of 1-D or homogeneous infinite faults. At a smaller scale, stress transfers from fault slip transiently induce significant perturbations in the local stress tensors (where the slip profile is very heterogeneous). These stress rotations disappear when subsequent model earthquakes smooth the slip profile. Maps of maximum absolute shear stress emphasize both that (1) future models should include a more continuous representation of the faults and (2) that hydrostatically pressured intact rock is very difficult to break when no material weakness is considered. Copyright 2004 by the American Geophysical Union.

  3. Using Remote Sensing Data to Constrain Models of Fault Interactions and Plate Boundary Deformation

    NASA Astrophysics Data System (ADS)

    Glasscoe, M. T.; Donnellan, A.; Lyzenga, G. A.; Parker, J. W.; Milliner, C. W. D.

    2016-12-01

    Determining the distribution of slip and behavior of fault interactions at plate boundaries is a complex problem. Field and remotely sensed data often lack the necessary coverage to fully resolve fault behavior. However, realistic physical models may be used to more accurately characterize the complex behavior of faults constrained with observed data, such as GPS, InSAR, and SfM. These results will improve the utility of using combined models and data to estimate earthquake potential and characterize plate boundary behavior. Plate boundary faults exhibit complex behavior, with partitioned slip and distributed deformation. To investigate what fraction of slip becomes distributed deformation off major faults, we examine a model fault embedded within a damage zone of reduced elastic rigidity that narrows with depth and forward model the slip and resulting surface deformation. The fault segments and slip distributions are modeled using the JPL GeoFEST software. GeoFEST (Geophysical Finite Element Simulation Tool) is a two- and three-dimensional finite element software package for modeling solid stress and strain in geophysical and other continuum domain applications [Lyzenga, et al., 2000; Glasscoe, et al., 2004; Parker, et al., 2008, 2010]. New methods to advance geohazards research using computer simulations and remotely sensed observations for model validation are required to understand fault slip, the complex nature of fault interaction and plate boundary deformation. These models help enhance our understanding of the underlying processes, such as transient deformation and fault creep, and can aid in developing observation strategies for sUAV, airborne, and upcoming satellite missions seeking to determine how faults behave and interact and assess their associated hazard. Models will also help to characterize this behavior, which will enable improvements in hazard estimation. 
Validating the model results against remotely sensed observations will allow us to better constrain fault zone rheology and physical properties, having implications for the overall understanding of earthquake physics, fault interactions, plate boundary deformation and earthquake hazard, preparedness and risk reduction.

  4. Three-dimensional models of deformation near strike-slip faults

    USGS Publications Warehouse

    ten Brink, Uri S.; Katzman, Rafael; Lin, J.

    1996-01-01

    We use three-dimensional elastic models to help guide the kinematic interpretation of crustal deformation associated with strike-slip faults. Deformation of the brittle upper crust in the vicinity of strike-slip fault systems is modeled with the assumption that upper crustal deformation is driven by the relative plate motion in the upper mantle. The driving motion is represented by displacement that is specified on the bottom of a 15-km-thick elastic upper crust everywhere except in a zone of finite width in the vicinity of the faults, which we term the "shear zone." Stress-free basal boundary conditions are specified within the shear zone. The basal driving displacement is either pure strike slip or strike slip with a small oblique component, and the geometry of the fault system includes a single fault, several parallel faults, and overlapping en echelon faults. We examine the variations in deformation due to changes in the width of the shear zone and due to changes in the shear strength of the faults. In models with weak faults the width of the shear zone has a considerable effect on the surficial extent and amplitude of the vertical and horizontal deformation and on the amount of rotation around horizontal and vertical axes. Strong fault models have more localized deformation at the tip of the faults, and the deformation is partly distributed outside the fault zone. The dimensions of large basins along strike-slip faults, such as the Rukwa and Dead Sea basins, and the absence of uplift around pull-apart basins fit models with weak faults better than models with strong faults. Our models also suggest that the length-to-width ratio of pull-apart basins depends on the width of the shear zone and the shear strength of the faults and is not constant as previously suggested. 
We show that pure strike-slip motion can produce tectonic features, such as elongate half grabens along a single fault, rotated blocks at the ends of parallel faults, or extension perpendicular to overlapping en echelon faults, which can be misinterpreted to indicate a regional component of extension. Zones of subsidence or uplift can become wider than expected for transform plate boundaries when a minor component of oblique motion is added to a system of parallel strike-slip faults.

  5. Three-dimensional models of deformation near strike-slip faults

    USGS Publications Warehouse

    ten Brink, Uri S.; Katzman, Rafael; Lin, Jian

    1996-01-01

    We use three-dimensional elastic models to help guide the kinematic interpretation of crustal deformation associated with strike-slip faults. Deformation of the brittle upper crust in the vicinity of strike-slip fault systems is modeled with the assumption that upper crustal deformation is driven by the relative plate motion in the upper mantle. The driving motion is represented by displacement that is specified on the bottom of a 15-km-thick elastic upper crust everywhere except in a zone of finite width in the vicinity of the faults, which we term the “shear zone.” Stress-free basal boundary conditions are specified within the shear zone. The basal driving displacement is either pure strike slip or strike slip with a small oblique component, and the geometry of the fault system includes a single fault, several parallel faults, and overlapping en echelon faults. We examine the variations in deformation due to changes in the width of the shear zone and due to changes in the shear strength of the faults. In models with weak faults the width of the shear zone has a considerable effect on the surficial extent and amplitude of the vertical and horizontal deformation and on the amount of rotation around horizontal and vertical axes. Strong fault models have more localized deformation at the tip of the faults, and the deformation is partly distributed outside the fault zone. The dimensions of large basins along strike-slip faults, such as the Rukwa and Dead Sea basins, and the absence of uplift around pull-apart basins fit models with weak faults better than models with strong faults. Our models also suggest that the length-to-width ratio of pull-apart basins depends on the width of the shear zone and the shear strength of the faults and is not constant as previously suggested. 
We show that pure strike-slip motion can produce tectonic features, such as elongate half grabens along a single fault, rotated blocks at the ends of parallel faults, or extension perpendicular to overlapping en echelon faults, which can be misinterpreted to indicate a regional component of extension. Zones of subsidence or uplift can become wider than expected for transform plate boundaries when a minor component of oblique motion is added to a system of parallel strike-slip faults.

  6. Fault tree models for fault tolerant hypercube multiprocessors

    NASA Technical Reports Server (NTRS)

    Boyd, Mark A.; Tuazon, Jezus O.

    1991-01-01

    Three candidate fault tolerant hypercube architectures are modeled, their reliability analyses are compared, and the resulting implications of these methods of incorporating fault tolerance into hypercube multiprocessors are discussed. In the course of performing the reliability analyses, the use of HARP and fault trees in modeling sequence dependent system behaviors is demonstrated.

  7. Modeling of coupled deformation and permeability evolution during fault reactivation induced by deep underground injection of CO2

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Cappa, F.; Rutqvist, J.

    2010-06-01

    The interaction between mechanical deformation and fluid flow in fault zones gives rise to a host of coupled hydromechanical processes fundamental to fault instability, induced seismicity, and associated fluid migration. In this paper, we discuss these coupled processes in general and describe three modeling approaches that have been considered to analyze fluid flow and stress coupling in fault-instability processes. First, fault hydromechanical models were tested to investigate fault behavior using different mechanical modeling approaches, including slip interface and finite-thickness elements with isotropic or anisotropic elasto-plastic constitutive models. The results of this investigation showed that fault hydromechanical behavior can be appropriately represented with the least complex alternative, using a finite-thickness element and isotropic plasticity. We utilized this pragmatic approach coupled with a strain-permeability model to study hydromechanical effects on fault instability during deep underground injection of CO2. We demonstrated how such a modeling approach can be applied to determine the likelihood of fault reactivation and to estimate the associated loss of CO2 from the injection zone. It is shown that shear-enhanced permeability initiated where the fault intersects the injection zone plays an important role in propagating fault instability and permeability enhancement through the overlying caprock.

  8. Formal Validation of Fault Management Design Solutions

    NASA Technical Reports Server (NTRS)

    Gibson, Corrina; Karban, Robert; Andolfato, Luigi; Day, John

    2013-01-01

    The work presented in this paper describes an approach used to develop SysML modeling patterns to express the behavior of fault protection, test the model's logic by performing fault injection simulations, and verify the fault protection system's logical design via model checking. A representative example, using a subset of the fault protection design for the Soil Moisture Active-Passive (SMAP) system, was modeled with SysML State Machines and JavaScript as Action Language. The SysML model captures interactions between relevant system components and system behavior abstractions (mode managers, error monitors, fault protection engine, and devices/switches). Development of a method to implement verifiable and lightweight executable fault protection models enables future missions to have access to larger fault test domains and verifiable design patterns. A tool-chain to transform the SysML model to jpf-Statechart compliant Java code and then verify the generated code via model checking was established. Conclusions and lessons learned from this work are also described, as well as potential avenues for further research and development.
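The verify-by-exhaustive-exploration idea can be illustrated with a toy explicit-state reachability check (a hypothetical mode-manager state machine, not the SMAP design or the jpf-Statechart tool-chain):

```python
from collections import deque

# Hypothetical fault-protection logic: states are (mode, fault_latched),
# events drive transitions, and we exhaustively explore reachable states.
def step(state, event):
    mode, fault = state
    if event == "fault_detected":
        fault = True
    if fault and mode == "OPERATE":
        mode = "SAFE"             # fault protection engine demotes to safe mode
    if event == "clear" and mode == "SAFE":
        fault, mode = False, "OPERATE"
    return (mode, fault)

def reachable(initial, events):
    """Breadth-first exploration of the full state space."""
    seen, frontier = {initial}, deque([initial])
    while frontier:
        s = frontier.popleft()
        for e in events:
            t = step(s, e)
            if t not in seen:
                seen.add(t)
                frontier.append(t)
    return seen

states = reachable(("OPERATE", False), ["fault_detected", "clear", "tick"])
# Safety property: a latched fault never coexists with OPERATE mode
violations = [s for s in states if s[1] and s[0] == "OPERATE"]
```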

  9. Block rotations, fault domains and crustal deformation in the western US

    NASA Technical Reports Server (NTRS)

    Nur, Amos

    1990-01-01

    The aim of the project was to develop a 3D model of crustal deformation by distributed fault sets and to test the model results in the field. In the first part of the project, Nur's 2D model (1986) was generalized to 3D. In Nur's model the frictional strength of rocks and faults of a domain provides a tight constraint on the amount of rotation that a fault set can undergo during block rotation. Domains of fault sets are commonly found in regions where the deformation is distributed across a region. The interaction of each fault set causes the fault bounded blocks to rotate. The work that has been done towards quantifying the rotation of fault sets in a 3D stress field is briefly summarized. In the second part of the project, field studies were carried out in Israel, Nevada and China. These studies combined both paleomagnetic and structural information necessary to test the block rotation model results. In accordance with the model, field studies demonstrate that faults and attending fault bounded blocks slip and rotate away from the direction of maximum compression when deformation is distributed across fault sets. Slip and rotation of fault sets may continue as long as the earth's crustal strength is not exceeded. More optimally oriented faults must form, for subsequent deformation to occur. Eventually the block rotation mechanism may create a complex pattern of intersecting generations of faults.

  10. Chip level modeling of LSI devices

    NASA Technical Reports Server (NTRS)

    Armstrong, J. R.

    1984-01-01

    The advent of Very Large Scale Integration (VLSI) technology has rendered the gate level model impractical for many simulation activities critical to the design automation process. As an alternative, an approach to the modeling of VLSI devices at the chip level is described, including the specification of modeling language constructs important to the modeling process. A model structure is presented in which models of the LSI devices are constructed as single entities. The modeling structure is two layered. The functional layer in this structure is used to model the input/output response of the LSI chip. A second layer, the fault mapping layer, is added, if fault simulations are required, in order to map the effects of hardware faults onto the functional layer. Modeling examples for each layer are presented. Fault modeling at the chip level is described. Approaches to realistic functional fault selection and defining fault coverage for functional faults are given. Application of the modeling techniques to single chip and bit slice microprocessors is discussed.
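The two-layer structure can be sketched in a few lines (a hypothetical 4-bit ALU standing in for an LSI device; the stuck-at overlay on the output bus is illustrative):

```python
# Functional layer: models the input/output response of the chip.
def alu_functional(op, a, b):
    if op == "ADD":
        return (a + b) & 0xF      # 4-bit result
    if op == "AND":
        return a & b
    raise ValueError(op)

# Fault mapping layer: maps a hardware fault onto the functional response.
def fault_mapping(value, stuck_at=None):
    if stuck_at is None:
        return value
    bit, level = stuck_at
    return value | (1 << bit) if level else value & ~(1 << bit)

good = fault_mapping(alu_functional("ADD", 9, 8))          # 9+8 = 17 -> 1 (mod 16)
bad  = fault_mapping(alu_functional("ADD", 9, 8), (3, 1))  # output bit 3 stuck at 1
```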

  11. Post-seismic and interseismic fault creep I: model description

    NASA Astrophysics Data System (ADS)

    Hetland, E. A.; Simons, M.; Dunham, E. M.

    2010-04-01

    We present a model of localized, aseismic fault creep during the full interseismic period, including both transient and steady fault creep, in response to a sequence of imposed coseismic slip events and tectonic loading. We consider the behaviour of models with linear viscous, non-linear viscous, rate-dependent friction, and rate- and state-dependent friction fault rheologies. Both the transient post-seismic creep and the pattern of steady interseismic creep rates surrounding asperities depend on recent coseismic slip and fault rheologies. In these models, post-seismic fault creep is manifest as pulses of elevated creep rates that propagate from the coseismic slip; these pulses feature sharper fronts and are longer lived in models with rate-state friction compared to other models. With small characteristic slip distances in rate-state friction models, interseismic creep is similar to that in models with rate-dependent friction faults, except for the earliest periods of post-seismic creep. Our model can be used to constrain fault rheologies from geodetic observations in cases where the coseismic slip history is relatively well known. When only considering surface deformation over a short period of time, there are strong trade-offs between fault rheology and the details of the imposed coseismic slip. Geodetic observations over longer times following an earthquake will reduce these trade-offs, while simultaneous modelling of interseismic and post-seismic observations provides the strongest constraints on fault rheologies.
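A minimal spring-slider sketch of the rate-dependent end-member (illustrative parameters; the paper's rate-and-state and viscous rheologies are not modeled): after a coseismic stress step, creep rates decay back toward the plate rate.

```python
from math import exp

# Illustrative parameters: normal stress, friction coefficients, plate rate, stiffness
sigma, f0, a, v0 = 50e6, 0.6, 0.01, 1e-9
k = 1e13                          # loading stiffness (Pa/m)

def creep_rate(tau):
    """Rate-dependent friction tau = sigma*(f0 + a*ln(v/v0)) inverted for slip speed."""
    return v0 * exp((tau / sigma - f0) / a)

tau = sigma * f0 + 0.1e6          # coseismic stress step of 0.1 MPa above steady state
rates, dt = [], 10.0
for _ in range(2000):             # explicit Euler integration of the stress balance
    v = creep_rate(tau)
    tau += k * (v0 - v) * dt      # stress relaxed by creep, reloaded at plate rate
    rates.append(v)
```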

  12. New learning based super-resolution: use of DWT and IGMRF prior.

    PubMed

    Gajjar, Prakash P; Joshi, Manjunath V

    2010-05-01

    In this paper, we propose a new learning-based approach for super-resolving an image captured at low spatial resolution. Given the low spatial resolution test image and a database consisting of low and high spatial resolution images, we obtain super-resolution for the test image. We first obtain an initial high-resolution (HR) estimate by learning the high-frequency details from the available database. A new discrete wavelet transform (DWT) based approach is proposed for learning that uses a set of low-resolution (LR) images and their corresponding HR versions. Since the super-resolution is an ill-posed problem, we obtain the final solution using a regularization framework. The LR image is modeled as the aliased and noisy version of the corresponding HR image, and the aliasing matrix entries are estimated using the test image and the initial HR estimate. The prior model for the super-resolved image is chosen as an Inhomogeneous Gaussian Markov random field (IGMRF) and the model parameters are estimated using the same initial HR estimate. A maximum a posteriori (MAP) estimation is used to arrive at the cost function which is minimized using a simple gradient descent approach. We demonstrate the effectiveness of the proposed approach by conducting the experiments on gray scale as well as on color images. The method is compared with the standard interpolation technique and also with existing learning-based approaches. The proposed approach can be used in applications such as wildlife sensor networks, remote surveillance where the memory, the transmission bandwidth, and the camera cost are the main constraints.
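The MAP estimation step can be illustrated with a 1-D toy problem (assumed blur and decimation operators, and a homogeneous smoothness prior standing in for the IGMRF; the learning-based initialization is replaced by simple sample repetition):

```python
import numpy as np

# The LR signal is modeled as y = D B x + n (blur B, decimation D); we minimize
# ||y - DBx||^2 + lam*||grad x||^2 by gradient descent.
rng = np.random.default_rng(0)
n, factor = 32, 2
x_true = np.sin(2 * np.pi * np.arange(n) / n)

def forward(x):
    blurred = np.convolve(x, [0.25, 0.5, 0.25], mode="same")
    return blurred[::factor]                      # decimation causes aliasing

y = forward(x_true) + 0.01 * rng.standard_normal(n // factor)

x = np.repeat(y, factor).astype(float)            # crude initial HR estimate
lam, step = 0.1, 0.5
for _ in range(300):
    r = forward(x) - y                            # data residual
    g = np.zeros(n)
    g[::factor] = r                               # adjoint of decimation
    g = np.convolve(g, [0.25, 0.5, 0.25], mode="same")        # adjoint of blur
    prior = np.convolve(x, [-1, 2, -1], mode="same")          # discrete Laplacian
    x -= step * (g + lam * prior)

err0 = np.linalg.norm(np.repeat(y, factor) - x_true)  # error of initial estimate
err = np.linalg.norm(x - x_true)                      # error after MAP descent
```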

  13. An information theory of image gathering

    NASA Technical Reports Server (NTRS)

    Fales, Carl L.; Huck, Friedrich O.

    1991-01-01

    Shannon's mathematical theory of communication is extended to image gathering. Expressions are obtained for the total information that is received with a single image-gathering channel and with parallel channels. It is concluded that the aliased signal components carry information even though these components interfere with the within-passband components in conventional image gathering and restoration, thereby degrading the fidelity and visual quality of the restored image. An examination of the expression for minimum mean-square-error, or Wiener-matrix, restoration from parallel image-gathering channels reveals a method for unscrambling the within-passband and aliased signal components to restore spatial frequencies beyond the sampling passband out to the spatial frequency response cutoff of the optical aperture.
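A single-channel, 1-D analogue of the Wiener restoration step (circular convolution model with an idealized known signal power spectrum; the parallel-channel Wiener-matrix unscrambling of aliased components is not reproduced):

```python
import numpy as np

rng = np.random.default_rng(1)
n = 64
freqs = np.fft.fftfreq(n)
H = np.exp(-(freqs / 0.2) ** 2)           # optical-aperture-like transfer function
signal = np.cos(2 * np.pi * 3 * np.arange(n) / n)
S = np.abs(np.fft.fft(signal)) ** 2       # idealized known signal power spectrum
sigma2 = 1e-2
blurred = np.fft.ifft(np.fft.fft(signal) * H).real
observed = blurred + np.sqrt(sigma2) * rng.standard_normal(n)

# Minimum mean-square-error (Wiener) restoration filter
W = np.conj(H) * S / (np.abs(H) ** 2 * S + n * sigma2)
restored = np.fft.ifft(np.fft.fft(observed) * W).real

mse_obs = np.mean((observed - signal) ** 2)
mse_res = np.mean((restored - signal) ** 2)
```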

  14. Response of deformation patterns to reorganizations of the southern San Andreas fault system since ca. 1.5 Ma

    NASA Astrophysics Data System (ADS)

    Cooke, M. L.; Fattaruso, L.; Dorsey, R. J.; Housen, B. A.

    2015-12-01

    Between ~1.5 and 1.1 Ma, the southern San Andreas fault system underwent a major reorganization that included initiation of the San Jacinto fault and termination of slip on the extensional West Salton detachment fault. The southern San Andreas fault itself has also evolved since this time, with several shifts in activity among fault strands within San Gorgonio Pass. We use three-dimensional mechanical Boundary Element Method models to investigate the impact of these changes to the fault network on deformation patterns. A series of snapshot models of the succession of active fault geometries explore the role of fault interaction and tectonic loading in abandonment of the West Salton detachment fault, initiation of the San Jacinto fault, and shifts in activity of the San Andreas fault. Interpreted changes to uplift patterns are well matched by model results. These results support the idea that growth of the San Jacinto fault led to increased uplift rates in the San Gabriel Mountains and decreased uplift rates in the San Bernardino Mountains. Comparison of model results for vertical axis rotation to data from paleomagnetic studies reveals a good match to local rotation patterns in the Mecca Hills and Borrego Badlands. We explore the mechanical efficiency at each step in the evolution, and find an overall trend toward increased efficiency through time. Strain energy density patterns are used to identify regions of off-fault deformation and potential incipient faulting. These patterns support the notion of north-to-south propagation of the San Jacinto fault during its initiation. The results of the present-day model are compared with microseismicity focal mechanisms to provide additional insight into the patterns of off-fault deformation within the southern San Andreas fault system.

  15. The Active Fault Parameters for Time-Dependent Earthquake Hazard Assessment in Taiwan

    NASA Astrophysics Data System (ADS)

    Lee, Y.; Cheng, C.; Lin, P.; Shao, K.; Wu, Y.; Shih, C.

    2011-12-01

    Taiwan is located at the boundary between the Philippine Sea Plate and the Eurasian Plate, with a convergence rate of ~80 mm/yr in a ~N118E direction. The plate motion is so active that earthquakes are very frequent. In the Taiwan area, disaster-inducing earthquakes often result from active faults, so it is important to understand their activity and hazard. The active faults in Taiwan are mainly located in the Western Foothills and the eastern Longitudinal Valley. The active fault distribution map published by the Central Geological Survey (CGS) in 2010 shows 31 active faults on the island of Taiwan, some of which are associated with past earthquakes. Many researchers have investigated these active faults and continue to update data and results, but few have integrated them for time-dependent earthquake hazard assessment. In this study, we gather previous research and field work results and integrate them into an active fault parameter table for time-dependent earthquake hazard assessment. We combine seismic profiles or relocated seismicity with the fault trace on land to establish 3D fault geometry models in a GIS system. We collect fault source scaling studies for Taiwan and estimate the maximum magnitude from fault length or fault area. We use the characteristic earthquake model to evaluate active fault earthquake recurrence intervals. For the remaining parameters, we collect previous studies and historical references to complete our parameter table of active faults in Taiwan. WG08 performed a time-dependent earthquake hazard assessment of active faults in California, establishing fault models, deformation models, earthquake rate models, and probability models to compute the probability of faults in California. 
Following these steps, we have preliminarily evaluated the probability of earthquake-related hazards on certain faults in Taiwan. By completing the active fault parameter table for Taiwan, we will apply it in time-dependent earthquake hazard assessment. The results can also give engineers a reference for design. Furthermore, they can be applied in seismic hazard maps to mitigate disasters.
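The probability-model step can be sketched with a generic renewal calculation (lognormal recurrence and illustrative parameter values, not those adopted for any Taiwanese fault):

```python
from math import exp, log, sqrt, erf

def poisson_prob(t_bar, dt):
    """Time-independent reference model: rupture probability in the next dt years."""
    return 1.0 - exp(-dt / t_bar)

def lognormal_cdf(t, mu, sigma):
    return 0.5 * (1.0 + erf((log(t) - mu) / (sigma * sqrt(2.0))))

def conditional_prob(t_bar, cov, t_e, dt):
    """Time-dependent renewal model: probability of rupture in (t_e, t_e + dt],
    given t_e years elapsed since the last event (lognormal recurrence)."""
    sigma = sqrt(log(1.0 + cov ** 2))
    mu = log(t_bar) - 0.5 * sigma ** 2
    f = lognormal_cdf
    return (f(t_e + dt, mu, sigma) - f(t_e, mu, sigma)) / (1.0 - f(t_e, mu, sigma))

p_poisson = poisson_prob(300.0, 50.0)                 # 300-yr mean recurrence
p_early = conditional_prob(300.0, 0.5, 50.0, 50.0)    # soon after the last event
p_late = conditional_prob(300.0, 0.5, 350.0, 50.0)    # long after the last event
```

The ordering p_early < p_poisson < p_late captures why the time-dependent model matters: a fault late in its cycle carries a higher conditional probability than the Poisson model suggests.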

  16. The stress shadow effect: a mechanical analysis of the evenly-spaced parallel strike-slip faults in the San Andreas fault system

    NASA Astrophysics Data System (ADS)

    Zuza, A. V.; Yin, A.; Lin, J. C.

    2015-12-01

    Parallel evenly-spaced strike-slip faults are prominent in the southern San Andreas fault system, as well as other settings along plate boundaries (e.g., the Alpine fault) and within continental interiors (e.g., the North Anatolian, central Asian, and northern Tibetan faults). In southern California, the parallel San Jacinto, Elsinore, Rose Canyon, and San Clemente faults to the west of the San Andreas are regularly spaced at ~40 km. In the Eastern California Shear Zone, east of the San Andreas, faults are spaced at ~15 km. These characteristic spacings provide unique mechanical constraints on how the faults interact. Despite the common occurrence of parallel strike-slip faults, the fundamental questions of how and why these fault systems form remain unanswered. We address this issue by using the stress shadow concept of Lachenbruch (1961)—developed to explain extensional joints by using the stress-free condition on the crack surface—to present a mechanical analysis of the formation of parallel strike-slip faults that relates fault spacing and brittle-crust thickness to fault strength, crustal strength, and the crustal stress state. We discuss three independent models: (1) a fracture mechanics model, (2) an empirical stress-rise function model embedded in a plastic medium, and (3) an elastic-plate model. The assumptions and predictions of these models are quantitatively tested using scaled analogue sandbox experiments that show that strike-slip fault spacing is linearly related to the brittle-crust thickness. We derive constraints on the mechanical properties of the southern San Andreas strike-slip faults and fault-bounded crust (e.g., local fault strength and crustal/regional stress) given the observed fault spacing and brittle-crust thickness, which is obtained by defining the base of the seismogenic zone with high-resolution earthquake data. 
Our models allow direct comparison of the parallel faults in the southern San Andreas system with other similar strike-slip fault systems, both on Earth and throughout the solar system (e.g., the Tiger Stripe Fractures on Enceladus).

  17. Relationship between displacement and gravity change of Uemachi faults and surrounding faults of Osaka basin, Southwest Japan

    NASA Astrophysics Data System (ADS)

    Inoue, N.; Kitada, N.; Kusumoto, S.; Itoh, Y.; Takemura, K.

    2011-12-01

    The Osaka basin, surrounded by the Rokko and Ikoma Ranges, is one of the typical Quaternary sedimentary basins in Japan. It has been filled by the Pleistocene Osaka Group and later sediments. Several large cities and metropolitan areas, such as Osaka and Kobe, are located in the Osaka basin. The basin is surrounded by E-W trending strike-slip faults and N-S trending reverse faults. The N-S trending, 42-km-long Uemachi faults traverse the central part of Osaka city. The Uemachi faults have been investigated for countermeasures against earthquake disaster. It is important to reveal detailed fault parameters, such as length, dip, and recurrence interval, for strong ground motion simulation and disaster prevention. For strong ground motion simulation, the fault model of the Uemachi faults consists of two parts, northern and southern, because of the absence of basement displacement in the central part of the faults. The Ministry of Education, Culture, Sports, Science and Technology started a project to survey the Uemachi faults. The Disaster Prevention Research Institute of Kyoto University carried out various surveys over three years, from 2009 to 2012. Last year's results revealed higher activity on a branch fault than on the main faults in the central part (see poster "Subsurface Flexure of Uemachi Fault, Japan" by Kitada et al., in this meeting). Kusumoto et al. (2001) reported, based on a dislocation model, that the surrounding faults can produce similar basement relief without the Uemachi faults. We performed parameter studies of dislocation and gravity change using a simplified fault model designed from the distribution of the real faults. The model consisted of 7 faults, including the Uemachi faults. The dislocation and gravity change were calculated following Okada (1985) and Okubo et al. (1993), respectively. 
The results show a basement displacement pattern similar to that of Kusumoto et al. (2001) and no characteristic gravity change pattern. Quantitative estimation remains a subject for future work.
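For intuition about dislocation modeling of surface deformation, a 2-D antiplane (screw-dislocation) profile can stand in for the full 3-D Okada (1985) solution used in such studies (illustrative slip and depths, not the Uemachi fault parameters):

```python
from math import atan, pi

def surface_displacement(x, s, d1, d2):
    """Fault-parallel surface displacement at distance x (> 0) from the trace of a
    vertical strike-slip fault with uniform slip s between depths d1 and d2."""
    return (s / pi) * (atan(d2 / x) - atan(d1 / x))

# Slip of 2 m from the surface (d1 = 0) down to 10 km depth:
u_near = surface_displacement(0.1e3, 2.0, 0.0, 10e3)   # 100 m from the fault
u_far = surface_displacement(100e3, 2.0, 0.0, 10e3)    # 100 km from the fault
```

Near the trace the displacement approaches half the slip (~1 m per side), and it decays toward zero far from the fault, as expected.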

  18. How Do Normal Faults Grow?

    NASA Astrophysics Data System (ADS)

    Jackson, C. A. L.; Bell, R. E.; Rotevatn, A.; Tvedt, A. B. M.

    2015-12-01

    Normal faulting accommodates stretching of the Earth's crust and is one of the fundamental controls on landscape evolution and sediment dispersal in rift basins. Displacement-length scaling relationships compiled from global datasets suggest normal faults grow via a sympathetic increase in these two parameters (the 'isolated fault model'). This model has dominated the structural geology literature for >20 years and underpins the structural and tectono-stratigraphic models developed for active rifts. However, relatively recent analysis of high-quality 3D seismic reflection data suggests faults may grow by rapid establishment of their near-final length prior to significant displacement accumulation (the 'coherent fault model'). The isolated and coherent fault models make very different predictions regarding the tectono-stratigraphic evolution of rift basins, thus assessing their applicability is important. To date, however, very few studies have explicitly set out to critically test the coherent fault model; thus, it may be argued, it has yet to be widely accepted in the structural geology community. Displacement backstripping is a simple graphical technique typically used to determine how faults lengthen and accumulate displacement; this technique should therefore allow us to test the competing fault models. However, in this talk we use several subsurface case studies to show that the most commonly used backstripping methods (the 'original' and 'modified' methods) are of limited value, because application of one over the other requires an a priori assumption of the model most applicable to any given fault; we argue this is illogical given that the style of growth is exactly what the analysis is attempting to determine. 
We then revisit our case studies and demonstrate that, in the case of seismic-scale growth faults, growth strata thickness patterns and relay zone kinematics, rather than displacement backstripping, should be assessed to directly constrain fault length and thus tip behaviour through time. We conclude that rapid length establishment prior to displacement accumulation may be more common than is typically assumed, thus challenging the well-established, widely cited and perhaps overused, isolated fault model.
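The core backstripping arithmetic is simple: displacement measured on a younger horizon is subtracted from that on an older horizon to recover the profile that had accumulated before the younger horizon was deposited (hypothetical horizon data):

```python
# Along-fault displacement (m) sampled at 5 points for two horizons:
older = [0, 40, 80, 40, 0]      # total displacement on the older horizon
younger = [0, 25, 50, 25, 0]    # displacement accrued after the younger horizon

pre_younger = [o - y for o, y in zip(older, younger)]
# Non-zero displacement along the full profile would imply the fault had already
# reached its near-final length before the younger increment ('coherent' style).
```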

  19. 3D Model of the Tuscarora Geothermal Area

    DOE Data Explorer

    Faulds, James E.

    2013-12-31

    The Tuscarora geothermal system sits within a ~15 km wide left-step in a major west-dipping range-bounding normal fault system. The step-over is defined by the Independence Mountains fault zone and the Bull Run Mountains fault zone, which overlap along strike. Strain is transferred between these major fault segments via an array of northerly striking normal faults with offsets of 10s to 100s of meters and strike lengths of less than 5 km. These faults within the step-over are one to two orders of magnitude smaller than the range-bounding fault zones between which they reside. Faults within the broad step define an anticlinal accommodation zone wherein east-dipping faults mainly occupy the western half and west-dipping faults lie in the eastern half. The 3D model of Tuscarora encompasses 70 small-offset normal faults that define the accommodation zone and a portion of the Independence Mountains fault zone, which dips beneath the geothermal field. The geothermal system resides in the axial part of the accommodation zone, straddling the two fault dip domains. The Tuscarora 3D geologic model consists of 10 stratigraphic units. Unconsolidated Quaternary alluvium has eroded down into bedrock units; the youngest and stratigraphically highest bedrock units are middle Miocene rhyolite and dacite flows regionally correlated with the Jarbidge Rhyolite and modeled with a uniform cumulative thickness of ~350 m. Underlying these lava flows are Eocene volcanic rocks of the Big Cottonwood Canyon caldera. These units are modeled as intracaldera deposits, including domes, flows, and thick ash deposits that change in thickness and locally pinch out. The Paleozoic basement consists of metasedimentary and metavolcanic rocks, dominated by argillite, siltstone, limestone, quartzite, and metabasalt of the Schoonover and Snow Canyon Formations. Paleozoic formations are lumped into a single basement unit in the model. 
Fault blocks in the eastern portion of the model are tilted 5-30 degrees toward the Independence Mountains fault zone. Fault blocks in the western portion of the model are tilted toward steeply east-dipping normal faults. These opposing fault block dips define a shallow extensional anticline. Geothermal production is from 4 closely spaced wells that exploit a west-dipping, NNE-striking fault zone near the axial part of the accommodation zone.

  20. Response of deformation patterns to reorganization of the southern San Andreas fault system since ca. 1.5 Ma

    NASA Astrophysics Data System (ADS)

    Fattaruso, Laura A.; Cooke, Michele L.; Dorsey, Rebecca J.; Housen, Bernard A.

    2016-12-01

    Between 1.5 and 1.1 Ma, the southern San Andreas fault system underwent a major reorganization that included initiation of the San Jacinto fault zone and termination of slip on the extensional West Salton detachment fault. The southern San Andreas fault itself has also evolved since this time, with several shifts in activity among fault strands within San Gorgonio Pass. We use three-dimensional mechanical Boundary Element Method models to investigate the impact of these changes to the fault network on deformation patterns. A series of snapshot models of the succession of active fault geometries explore the role of fault interaction and tectonic loading in abandonment of the West Salton detachment fault, initiation of the San Jacinto fault zone, and shifts in activity of the San Andreas fault. Interpreted changes to uplift patterns are well matched by model results. These results support the idea that initiation and growth of the San Jacinto fault zone led to increased uplift rates in the San Gabriel Mountains and decreased uplift rates in the San Bernardino Mountains. Comparison of model results for vertical-axis rotation to data from paleomagnetic studies reveals a good match to local rotation patterns in the Mecca Hills and Borrego Badlands. We explore the mechanical efficiency at each step in the modeled fault evolution, and find an overall trend toward increased efficiency through time. Strain energy density patterns are used to identify regions of incipient faulting, and support the notion of north-to-south propagation of the San Jacinto fault during its initiation.

  1. Faults Discovery By Using Mined Data

    NASA Technical Reports Server (NTRS)

    Lee, Charles

    2005-01-01

    Fault discovery in complex systems consists of model-based reasoning, fault tree analysis, rule-based inference, and other approaches. Model-based reasoning builds models of the systems either from mathematical formulations or from experimental models. Fault tree analysis shows the possible causes of a system malfunction by enumerating the suspect components and their respective failure modes that may have induced the problem. Rule-based inference builds its model from expert knowledge. These models and methods have one thing in common: they presume certain prior conditions. Complex systems often use fault trees to analyze faults. When an error occurs, fault diagnosis is performed by engineers and analysts through extensive examination of all data gathered during the mission. The International Space Station (ISS) control center operates on data fed back from the system, and decisions are made based on threshold values using fault trees. Since those decision-making tasks are safety critical and must be done promptly, the engineers who manually analyze the data face a time challenge. To automate this process, this paper presents an approach that uses decision trees to discover faults from data in real time and to capture the contents of fault trees as the initial state of the trees.
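A one-node decision-tree sketch of the idea (hypothetical telemetry values and thresholds, not actual ISS data): each node tests one sensor against a threshold, mirroring a fault-tree branch.

```python
def build_stump(samples, labels, feature, threshold):
    """One-level decision tree: majority label on each side of the threshold."""
    left = [l for s, l in zip(samples, labels) if s[feature] <= threshold]
    right = [l for s, l in zip(samples, labels) if s[feature] > threshold]
    majority = lambda ls: max(set(ls), key=ls.count)
    return lambda s: majority(left) if s[feature] <= threshold else majority(right)

# Telemetry: (pressure_kPa, temperature_C) with labels from past diagnoses
samples = [(101.0, 21.0), (100.5, 22.0), (68.0, 21.5), (70.0, 23.0)]
labels = ["nominal", "nominal", "leak", "leak"]
classify = build_stump(samples, labels, feature=0, threshold=85.0)
alarm = classify((72.0, 22.0))    # low pressure -> classified as "leak"
```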

  2. Nearly frictionless faulting by unclamping in long-term interaction models

    USGS Publications Warehouse

    Parsons, T.

    2002-01-01

    In defiance of direct rock-friction observations, some transform faults appear to slide with little resistance. In this paper finite element models are used to show how strain energy is minimized by interacting faults that can cause long-term reduction in fault-normal stresses (unclamping). A model fault contained within a sheared elastic medium concentrates stress at its end points with increasing slip. If accommodating structures free up the ends, then the fault responds by rotating, lengthening, and unclamping. This concept is illustrated by a comparison between simple strike-slip faulting and a mid-ocean-ridge model with the same total transform length; calculations show that the more complex system unclamps the transforms and operates at lower energy. In another example, the overlapping San Andreas fault system in the San Francisco Bay region is modeled; this system is complicated by junctions and stepovers. A finite element model indicates that the normal stress along parts of the faults could be reduced to hydrostatic levels after ~60-100 k.y. of system-wide slip. If this process occurs in the earth, then parts of major transform fault zones could appear nearly frictionless.

  3. A technique for evaluating the application of the pin-level stuck-at fault model to VLSI circuits

    NASA Technical Reports Server (NTRS)

    Palumbo, Daniel L.; Finelli, George B.

    1987-01-01

    Accurate fault models are required to conduct the experiments defined in validation methodologies for highly reliable fault-tolerant computers (e.g., computers with a probability of failure of 10 to the -9 for a 10-hour mission). Described is a technique by which a researcher can evaluate the capability of the pin-level stuck-at fault model to simulate true error behavior symptoms in very large scale integrated (VLSI) digital circuits. The technique is based on a statistical comparison of the error behavior resulting from faults applied at the pin-level of and internal to a VLSI circuit. As an example of an application of the technique, the error behavior of a microprocessor simulation subjected to internal stuck-at faults is compared with the error behavior which results from pin-level stuck-at faults. The error behavior is characterized by the time between errors and the duration of errors. Based on this example data, the pin-level stuck-at fault model is found to deliver less than ideal performance. However, with respect to the class of faults which cause a system crash, the pin-level, stuck-at fault model is found to provide a good modeling capability.
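The statistical comparison step can be sketched with a two-sample Kolmogorov-Smirnov statistic on time-between-errors samples (illustrative data, not the paper's measurements):

```python
def ks_statistic(a, b):
    """Maximum distance between the two empirical CDFs."""
    pts = sorted(set(a) | set(b))
    cdf = lambda xs, v: sum(x <= v for x in xs) / len(xs)
    return max(abs(cdf(a, v) - cdf(b, v)) for v in pts)

# Time between errors (cycles) under the two fault-injection conditions
pin_level = [12, 15, 14, 20, 18, 16]
internal = [11, 14, 13, 19, 17, 15]
d = ks_statistic(pin_level, internal)   # small d -> similar error behavior
```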

  4. Fault Modeling of Extreme Scale Applications Using Machine Learning

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Vishnu, Abhinav; Dam, Hubertus van; Tallent, Nathan R.

    Faults are commonplace in large scale systems. These systems experience a variety of faults such as transient, permanent and intermittent. Multi-bit faults are typically not corrected by the hardware, resulting in an error. Here, this paper attempts to answer an important question: Given a multi-bit fault in main memory, will it result in an application error — and hence a recovery algorithm should be invoked — or can it be safely ignored? We propose an application fault modeling methodology to answer this question. Given a fault signature (a set of attributes comprising system and application state), we use machine learning to create a model which predicts whether a multi-bit permanent/transient main memory fault will likely result in an error. We present the design elements such as the fault injection methodology for covering important data structures, the application and system attributes which should be used for learning the model, the supervised learning algorithms (and potentially ensembles), and important metrics. Lastly, we use three applications — NWChem, LULESH and SVM — as examples for demonstrating the effectiveness of the proposed fault modeling methodology.
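The learning step can be sketched with plain logistic regression (synthetic fault signatures; the paper's attribute set, applications, and ensemble methods are not reproduced):

```python
from math import exp

# Each signature: (bits_flipped, in_live_data_structure); label 1 = caused an error.
data = [((1, 0), 0), ((1, 1), 1), ((2, 0), 0), ((3, 1), 1),
        ((2, 1), 1), ((4, 0), 1), ((1, 0), 0), ((3, 0), 0)]

w, b, lr = [0.0, 0.0], 0.0, 0.5
sigmoid = lambda z: 1.0 / (1.0 + exp(-z))

for _ in range(500):                     # stochastic gradient descent on log-loss
    for x, y in data:
        p = sigmoid(w[0] * x[0] + w[1] * x[1] + b)
        g = p - y                        # gradient of log-loss w.r.t. the logit
        w[0] -= lr * g * x[0]
        w[1] -= lr * g * x[1]
        b -= lr * g

# Predict whether a fault signature will result in an application error
predict = lambda x: sigmoid(w[0] * x[0] + w[1] * x[1] + b) > 0.5
```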

  5. Fault Modeling of Extreme Scale Applications Using Machine Learning

    DOE PAGES

    Vishnu, Abhinav; Dam, Hubertus van; Tallent, Nathan R.; ...

    2016-05-01

    Faults are commonplace in large scale systems. These systems experience a variety of faults such as transient, permanent and intermittent. Multi-bit faults are typically not corrected by the hardware, resulting in an error. Here, this paper attempts to answer an important question: Given a multi-bit fault in main memory, will it result in an application error — and hence a recovery algorithm should be invoked — or can it be safely ignored? We propose an application fault modeling methodology to answer this question. Given a fault signature (a set of attributes comprising system and application state), we use machine learning to create a model which predicts whether a multi-bit permanent/transient main memory fault will likely result in an error. We present the design elements such as the fault injection methodology for covering important data structures, the application and system attributes which should be used for learning the model, the supervised learning algorithms (and potentially ensembles), and important metrics. Lastly, we use three applications — NWChem, LULESH and SVM — as examples for demonstrating the effectiveness of the proposed fault modeling methodology.

  6. Dynamic modeling of gearbox faults: A review

    NASA Astrophysics Data System (ADS)

    Liang, Xihui; Zuo, Ming J.; Feng, Zhipeng

    2018-01-01

    Gearboxes are widely used in industrial and military applications. Due to high service loads, harsh operating conditions, or inevitable fatigue, faults may develop in gears. If gear faults cannot be detected early, the health of the gearbox will continue to degrade, perhaps causing heavy economic loss or even catastrophe. Early fault detection and diagnosis allows properly scheduled shutdowns to prevent catastrophic failure, resulting in safer operation and lower costs. Recently, many studies have developed gearbox dynamic models with faults, aiming to understand gear fault generation mechanisms and then develop effective fault detection and diagnosis methods. This paper focuses on dynamics-based gearbox fault modeling, detection and diagnosis. The state of the art and open challenges are reviewed and discussed. This detailed literature review limits research results to the following fundamental yet key aspects: gear mesh stiffness evaluation, gearbox damage modeling and fault diagnosis techniques, gearbox transmission path modeling and method validation. In the end, a summary and some research prospects are presented.
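A 1-DOF sketch of fault modeling via time-varying mesh stiffness (all values illustrative; rigorous mesh stiffness evaluation is one of the aspects the review covers): a cracked tooth is modeled as a periodic stiffness reduction.

```python
from math import floor

def mesh_stiffness(t, k1=1.0e8, k2=1.6e8, mesh_T=1e-3,
                   teeth=20, fault_tooth=7, loss=0.3):
    """Square-wave mesh stiffness alternating between double- and single-tooth
    contact, reduced by `loss` whenever the (hypothetical) cracked tooth engages."""
    cycle = floor(t / mesh_T)
    k = k2 if (t % mesh_T) < 0.6 * mesh_T else k1
    if cycle % teeth == fault_tooth:
        k *= (1.0 - loss)
    return k

# Semi-implicit Euler integration of m*x'' + c*x' + k(t)*x = F
m, c, F, dt = 0.5, 200.0, 1000.0, 1e-6
x, v, xs = 0.0, 0.0, []
for i in range(50000):
    t = i * dt
    a = (F - c * v - mesh_stiffness(t) * x) / m
    v += a * dt
    x += v * dt
    xs.append(x)      # vibration response; fault cycles show larger deflections
```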

  7. On Identifiability of Bias-Type Actuator-Sensor Faults in Multiple-Model-Based Fault Detection and Identification

    NASA Technical Reports Server (NTRS)

    Joshi, Suresh M.

    2012-01-01

    This paper explores a class of multiple-model-based fault detection and identification (FDI) methods for bias-type faults in actuators and sensors. These methods employ banks of Kalman-Bucy filters to detect the faults, determine the fault pattern, and estimate the fault values, wherein each Kalman-Bucy filter is tuned to a different failure pattern. Necessary and sufficient conditions are presented for identifiability of actuator faults, sensor faults, and simultaneous actuator and sensor faults. It is shown that FDI of simultaneous actuator and sensor faults is not possible using these methods when all sensors have biases.
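
The filter-bank idea can be sketched for a scalar plant. The dynamics, noise levels, and candidate bias values below are hypothetical; the point is only that each Kalman filter is tuned to one fault pattern, and the smallest cumulative normalized innovation identifies the fault.

```python
import numpy as np

rng = np.random.default_rng(1)

# Stable scalar plant x_{k+1} = a*x_k + u + w, sensed as z = x + bias + v.
# A bank of Kalman filters, one per hypothesized sensor-bias fault.
a, u = 0.5, 1.0          # known dynamics and input (illustrative)
q, r = 0.01, 0.1         # process / measurement noise variances
hypotheses = [0.0, 1.0, 2.0]   # candidate constant sensor biases
true_bias = 1.0

x_true = 0.0
filters = [{"x": 0.0, "P": 1.0, "score": 0.0} for _ in hypotheses]

for _ in range(200):
    x_true = a * x_true + u + rng.normal(scale=np.sqrt(q))
    z = x_true + true_bias + rng.normal(scale=np.sqrt(r))
    for f, bias in zip(filters, hypotheses):
        x_pred = a * f["x"] + u
        P_pred = a * a * f["P"] + q
        nu = z - (x_pred + bias)       # innovation under this hypothesis
        S = P_pred + r
        f["score"] += nu * nu / S      # cumulative normalized innovation
        K = P_pred / S
        f["x"] = x_pred + K * nu
        f["P"] = (1.0 - K) * P_pred

best = int(np.argmin([f["score"] for f in filters]))
print("identified sensor-bias fault:", hypotheses[best])
```

Note the stable dynamics (a < 1): for a pure random walk, a constant sensor bias would be absorbed into the state estimate and the hypotheses would not be distinguishable, echoing the identifiability issue the paper formalizes.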

  8. Spurious One-Month and One-Year Periods in Visual Observations of Variable Stars

    NASA Astrophysics Data System (ADS)

    Percy, J. R.

    2015-12-01

    Visual observations of variable stars, when analyzed with time-series algorithms such as DC-DFT in vstar, show spurious periods at or close to one synodic month (29.5306 days), and also at about one year, with amplitudes of typically a few hundredths of a magnitude. The one-year periods have been attributed to the Ceraski effect, which was believed to be a physiological effect of the visual observing process. This paper reports on time-series analysis, using DC-DFT in vstar, of visual observations (and in some cases, V observations) of a large number of stars in the AAVSO International Database, initially to investigate the one-month periods. The results suggest that both the one-month and one-year periods are actually due to aliasing of the stars' very low-frequency variations, though they do not rule out very low-amplitude signals (typically 0.01 to 0.02 magnitude) which may be due to a different process, such as a physiological one. Most or all of these aliasing effects can be avoided by using an algorithm that takes explicit account of the window function of the data, and/or by being fully aware of the possible presence of very low-frequency variations and the aliases they produce.
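
The aliasing mechanism is easy to reproduce numerically: a slow variation observed through a seasonal window acquires spurious power near one cycle per year. A minimal sketch, with an invented 1000-day variation and a 180-day observing season:

```python
import numpy as np

# Daily sampling over 10 years, kept only during the "observing season"
# (first ~180 days of each year), mimicking seasonal visibility.
t = np.arange(0.0, 3650.0, 1.0)
t = t[(t % 365.25) < 180.0]
s = np.sin(2.0 * np.pi * t / 1000.0)   # slow variation, period 1000 d

def amplitude(freq):
    """DFT amplitude of s at one trial frequency (cycles/day)."""
    ph = 2.0 * np.pi * freq * t
    return 2.0 * abs(np.sum(s * np.exp(-1j * ph))) / len(t)

f_low = 1.0 / 1000.0
f_alias = f_low + 1.0 / 365.25   # yearly alias of the slow variation
f_control = 1.0 / 90.0           # frequency away from the alias comb

print(f"alias amplitude:   {amplitude(f_alias):.3f}")
print(f"control amplitude: {amplitude(f_control):.3f}")
```

The spurious peak sits at the true frequency plus one cycle per year, exactly the convolution of the signal with the window function that the abstract invokes.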

  9. Improving Multiple Fault Diagnosability using Possible Conflicts

    NASA Technical Reports Server (NTRS)

    Daigle, Matthew J.; Bregon, Anibal; Biswas, Gautam; Koutsoukos, Xenofon; Pulido, Belarmino

    2012-01-01

    Multiple fault diagnosis is a difficult problem for dynamic systems. Due to fault masking, compensation, and relative time of fault occurrence, multiple faults can manifest in many different ways as observable fault signature sequences. This decreases diagnosability of multiple faults, and therefore leads to a loss in effectiveness of the fault isolation step. We develop a qualitative, event-based, multiple fault isolation framework, and derive several notions of multiple fault diagnosability. We show that using Possible Conflicts, a model decomposition technique that decouples faults from residuals, we can significantly improve the diagnosability of multiple faults compared to an approach using a single global model. We demonstrate these concepts and provide results using a multi-tank system as a case study.

  10. Stability of faults with heterogeneous friction properties and effective normal stress

    NASA Astrophysics Data System (ADS)

    Luo, Yingdi; Ampuero, Jean-Paul

    2018-05-01

    Abundant geological, seismological and experimental evidence of the heterogeneous structure of natural faults motivates the theoretical and computational study of the mechanical behavior of heterogeneous frictional fault interfaces. Fault zones are composed of a mixture of materials with contrasting strength, which may affect the spatial variability of seismic coupling, the location of high-frequency radiation and the diversity of slip behavior observed in natural faults. To develop a quantitative understanding of the effect of strength heterogeneity on the mechanical behavior of faults, here we investigate a fault model with spatially variable frictional properties and pore pressure. Conceptually, this model may correspond to two rough surfaces in contact along discrete asperities, the space in between being filled by compressed gouge. The asperities have different permeability than the gouge matrix and may be hydraulically sealed, resulting in different pore pressure. We consider faults governed by rate-and-state friction, with mixtures of velocity-weakening and velocity-strengthening materials and contrasts of effective normal stress. We systematically study the diversity of slip behaviors generated by this model through multi-cycle simulations and linear stability analysis. The fault can be either stable without spontaneous slip transients, or unstable with spontaneous rupture. When the fault is unstable, slip can rupture either part or the entire fault. In some cases the fault alternates between these behaviors throughout multiple cycles. We determine how the fault behavior is controlled by the proportion of velocity-weakening and velocity-strengthening materials, their relative strength and other frictional properties. We also develop, through heuristic approximations, closed-form equations to predict the stability of slip on heterogeneous faults. 
Our study shows that a fault model with heterogeneous materials and pore pressure contrasts is a viable framework to reproduce the full spectrum of fault behaviors observed in natural faults: from fast earthquakes, to slow transients, to stable sliding. In particular, this model constitutes a building block for models of episodic tremor and slow slip events.
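
The stability arguments can be illustrated with the classical spring-slider criterion, in which a velocity-weakening patch nucleates instability when the loading stiffness falls below k_c = (b - a)(sigma - p)/D_c. A minimal sketch with illustrative numbers (not the paper's values), showing how near-lithostatic pore pressure pushes a weakening patch toward stability:

```python
# Linear stability of a rate-and-state spring-slider patch. All parameter
# values below are illustrative assumptions, not from the paper.
def critical_stiffness(a, b, sigma, p, d_c):
    """k_c in Pa/m; non-positive for velocity-strengthening (a >= b)."""
    return (b - a) * (sigma - p) / d_c

def slip_style(k, a, b, sigma, p, d_c):
    if a >= b:
        return "stable (velocity-strengthening)"
    return "unstable" if k < critical_stiffness(a, b, sigma, p, d_c) else "conditionally stable"

k = 20e6   # Pa/m, effective loading stiffness of the fault patch
# Same velocity-weakening asperity, high vs moderate pore pressure:
print(slip_style(k, a=0.010, b=0.015, sigma=100e6, p=90e6, d_c=1e-2))   # conditionally stable
print(slip_style(k, a=0.010, b=0.015, sigma=100e6, p=30e6, d_c=1e-2))   # unstable
```

Reducing the effective normal stress shrinks k_c, which is one way heterogeneous pore pressure can select between stable, transient and seismic slip on the same frictional material.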

  11. Crosstalk in automultiscopic 3-D displays: blessing in disguise?

    NASA Astrophysics Data System (ADS)

    Jain, Ashish; Konrad, Janusz

    2007-02-01

    Most 3-D displays suffer from interocular crosstalk, i.e., the perception of an unintended view in addition to the intended one. The resulting "ghosting" at high-contrast object boundaries is objectionable and interferes with depth perception. In automultiscopic (no glasses, multiview) displays using microlenses or a parallax barrier, the effect is compounded since several unintended views may be perceived at once. However, we recently discovered that crosstalk in automultiscopic displays can also be beneficial. Since spatial multiplexing of views to prepare a composite image for automultiscopic viewing involves sub-sampling, prior anti-alias filtering is required. To date, anti-alias filter design has ignored the presence of crosstalk in automultiscopic displays. In this paper, we propose a simple multiplexing model that takes crosstalk into account. Using this model we derive a mathematical expression for the spectrum of a single view with crosstalk, and we show that it leads to reduced spectral aliasing compared to the crosstalk-free case. We then propose a new criterion for the characterization of the ideal anti-alias pre-filter. In the experimental part, we describe a simple method to measure optical crosstalk between views using a digital camera. We use the measured crosstalk parameters to find the ideal frequency response of the anti-alias filter and we design practical digital filters approximating this response. Having applied the designed filters to a number of multiview images prior to multiplexing, we conclude that, due to their increased bandwidth, the filters lead to visibly sharper 3-D images without increasing aliasing artifacts.
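
The claimed benefit of crosstalk can be reproduced in one dimension: sub-sampling a view by two folds a high frequency down to a low one, and leaking a fraction of the complementary sample set (the crosstalk) arrives nearly in antiphase with the alias and partially cancels it. A toy sketch, not the paper's display model:

```python
import numpy as np

f0 = 0.45                          # cycles/sample in the full-resolution view
n = np.arange(400)
x = np.cos(2.0 * np.pi * f0 * n)   # tone above the post-subsampling Nyquist

def perceived(c):
    """Sub-sample by 2, mixed with a fraction c of the other sample set."""
    return (1.0 - c) * x[0::2] + c * x[1::2]

def alias_amp(y):
    """DFT amplitude at the folded frequency (0.9 cycles/sub-sample -> 0.1)."""
    ph = 2.0 * np.pi * 0.1 * np.arange(len(y))
    return 2.0 * abs(np.sum(y * np.exp(-1j * ph))) / len(y)

print(f"alias, no crosstalk:  {alias_amp(perceived(0.0)):.3f}")
print(f"alias, 25% crosstalk: {alias_amp(perceived(0.25)):.3f}")
```

Because low frequencies are nearly in phase across the two sample sets, the intended content is barely attenuated while the alias is suppressed, which is why the anti-alias pre-filter can afford a wider passband when crosstalk is modeled.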

  12. Studying the Effects of Transparent vs. Opaque Shallow Thrust Faults Using Synthetic P and SH Seismograms

    NASA Astrophysics Data System (ADS)

    Smith, D. E.; Aagaard, B. T.; Heaton, T. H.

    2001-12-01

    It has been hypothesized (Brune, 1996) that teleseismic inversions may underestimate the moment of shallow thrust fault earthquakes if energy becomes trapped in the hanging wall of the fault, i.e. if the fault boundary becomes opaque. We address this by creating and analyzing synthetic P and SH seismograms for a variety of friction models. There are a total of five models: (1) crack model (slip weakening) with instantaneous healing (2) crack model without healing (3) crack model with zero sliding friction (4) pulse model (slip and rate weakening) (5) prescribed model (Haskell-like rupture with the same final slip and peak slip-rate as model 4). Models 1-4 are all dynamic models where fault friction laws determine the rupture history. This allows feedback between the ongoing rupture and waves from the beginning of the rupture that hit the surface and reflect downwards. Hence, models 1-4 can exhibit opaque fault characteristics. Model 5, a prescribed rupture, allows for no interaction between the rupture and reflected waves, therefore, it is a transparent fault. We first produce source time functions for the different friction models by rupturing shallow thrust faults in 3-D dynamic finite-element simulations. The source time functions are used as point dislocations in a teleseismic body-wave code. We examine the P and SH waves for different azimuths and epicentral distances. The peak P and S first arrival displacement amplitudes for the crack, crack with healing and pulse models are all very similar. These dynamic models with opaque faults produce smaller peak P and S first arrivals than the prescribed, transparent fault. For example, a fault with strike = 90 degrees, azimuth = 45 degrees has P arrivals smaller by about 30% and S arrivals smaller by about 15%. The only dynamic model that doesn't fit this pattern is the crack model with zero sliding friction. 
It oscillates around its equilibrium position; therefore, it overshoots and yields an excessively large peak first arrival. In general, it appears that the dynamic, opaque faults have smaller peak teleseismic displacements, which would lead to modestly lower moment estimates.

  13. Interleaved EPI based fMRI improved by multiplexed sensitivity encoding (MUSE) and simultaneous multi-band imaging.

    PubMed

    Chang, Hing-Chiu; Gaur, Pooja; Chou, Ying-hui; Chu, Mei-Lan; Chen, Nan-kuei

    2014-01-01

    Functional magnetic resonance imaging (fMRI) is a non-invasive and powerful imaging tool for detecting brain activities. The majority of fMRI studies are performed with single-shot echo-planar imaging (EPI) due to its high temporal resolution. Recent studies have demonstrated that, by increasing the spatial-resolution of fMRI, previously unidentified neuronal networks can be measured. However, it is challenging to improve the spatial resolution of conventional single-shot EPI based fMRI. Although multi-shot interleaved EPI is superior to single-shot EPI in terms of the improved spatial-resolution, reduced geometric distortions, and sharper point spread function (PSF), interleaved EPI based fMRI has two main limitations: 1) the imaging throughput is lower in interleaved EPI; 2) the magnitude and phase signal variations among EPI segments (due to physiological noise, subject motion, and B0 drift) are translated to significant in-plane aliasing artifact across the field of view (FOV). Here we report a method that integrates multiple approaches to address the technical limitations of interleaved EPI-based fMRI. Firstly, the multiplexed sensitivity-encoding (MUSE) post-processing algorithm is used to suppress in-plane aliasing artifacts resulting from time-domain signal instabilities during dynamic scans. Secondly, a simultaneous multi-band interleaved EPI pulse sequence, with a controlled aliasing scheme incorporated, is implemented to increase the imaging throughput. Thirdly, the MUSE algorithm is then generalized to accommodate fMRI data obtained with our multi-band interleaved EPI pulse sequence, suppressing both in-plane and through-plane aliasing artifacts. The blood-oxygenation-level-dependent (BOLD) signal detectability and the scan throughput can be significantly improved for interleaved EPI-based fMRI. Our human fMRI data obtained from 3 Tesla systems demonstrate the effectiveness of the developed methods. 
It is expected that future fMRI studies requiring high spatial resolution and fidelity will benefit substantially from the reported techniques.

  14. Fault trees and sequence dependencies

    NASA Technical Reports Server (NTRS)

    Dugan, Joanne Bechta; Boyd, Mark A.; Bavuso, Salvatore J.

    1990-01-01

    One of the frequently cited shortcomings of fault-tree models, their inability to model so-called sequence dependencies, is discussed. Several sources of such sequence dependencies are discussed, and new fault-tree gates to capture this behavior are defined. These complex behaviors can be included in present fault-tree models because they utilize a Markov solution. The utility of the new gates is demonstrated by presenting several models of the fault-tolerant parallel processor, which include both hot and cold spares.
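
The hot/cold-spare behavior that motivates the Markov solution can be sketched with two three-state chains: with a hot spare the dormant unit fails at the full rate, while a cold spare cannot fail until switched in. Rates and mission time are illustrative:

```python
# Markov comparison of hot vs cold spares behind a spare gate, integrated
# with a simple Euler scheme. Failure rates are illustrative assumptions.
lam = 1e-3           # per-hour failure rate of an active unit
dt, t_end = 0.1, 2000.0
steps = int(t_end / dt)

def spare_reliability(dormant_rate):
    # States: both good -> one failed (spare active) -> system failed.
    p0, p1 = 1.0, 0.0
    for _ in range(steps):
        d0 = -(lam + dormant_rate) * p0
        d1 = (lam + dormant_rate) * p0 - lam * p1
        p0 += dt * d0
        p1 += dt * d1
    return p0 + p1    # probability the system still works at t_end

r_cold = spare_reliability(0.0)    # cold spare: no dormant failures
r_hot = spare_reliability(lam)     # hot spare: dormant unit fails at lam
print(f"cold-spare reliability: {r_cold:.3f}")
print(f"hot-spare  reliability: {r_hot:.3f}")
```

Because the transition rate out of the initial state differs between the two configurations, a combinatorial fault tree alone cannot capture the distinction; the Markov chain can, which is the point of the new gates.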

  15. How do normal faults grow?

    NASA Astrophysics Data System (ADS)

    Jackson, Christopher; Bell, Rebecca; Rotevatn, Atle; Tvedt, Anette

    2016-04-01

    Normal faulting accommodates stretching of the Earth's crust, and it is arguably the most fundamental tectonic process leading to continent rupture and oceanic crust emplacement. Furthermore, the incremental and finite geometries associated with normal faulting dictate landscape evolution, sediment dispersal and hydrocarbon systems development in rifts. Displacement-length scaling relationships compiled from global datasets suggest normal faults grow via a sympathetic increase in these two parameters (the 'isolated fault model'). This model has dominated the structural geology literature for >20 years and underpins the structural and tectono-stratigraphic models developed for active rifts. However, relatively recent analysis of high-quality 3D seismic reflection data suggests faults may grow by rapid establishment of their near-final length prior to significant displacement accumulation (the 'coherent fault model'). The isolated and coherent fault models make very different predictions regarding the tectono-stratigraphic evolution of rift basins, so assessing their applicability is important. To date, however, very few studies have explicitly set out to critically test the coherent fault model; thus, it may be argued, it has yet to be widely accepted in the structural geology community. Displacement backstripping is a simple graphical technique typically used to determine how faults lengthen and accumulate displacement; this technique should therefore allow us to test the competing fault models. However, in this talk we use several subsurface case studies to show that the most commonly used backstripping methods (the 'original' and 'modified' methods) are of limited value, because application of one over the other requires an a priori assumption of which model is most applicable to any given fault; we argue this is illogical given that the style of growth is exactly what the analysis is attempting to determine.
We then revisit our case studies and demonstrate that, in the case of seismic-scale growth faults, growth strata thickness patterns and relay zone kinematics, rather than displacement backstripping, should be assessed to directly constrain fault length and thus tip behaviour through time. We conclude that rapid length establishment prior to displacement accumulation may be more common than is typically assumed, thus challenging the well-established, widely cited and perhaps overused, isolated fault model.

  16. Numerical modelling of fault reactivation in carbonate rocks under fluid depletion conditions - 2D generic models with a small isolated fault

    NASA Astrophysics Data System (ADS)

    Zhang, Yanhua; Clennell, Michael B.; Delle Piane, Claudio; Ahmed, Shakil; Sarout, Joel

    2016-12-01

    This generic 2D elastic-plastic modelling investigated the reactivation of a small isolated and critically-stressed fault in carbonate rocks at a reservoir depth level for fluid depletion and normal-faulting stress conditions. The model properties and boundary conditions are based on field and laboratory experimental data from a carbonate reservoir. The results show that a pore pressure perturbation of -25 MPa by depletion can lead to the reactivation of the fault and parts of the surrounding damage zones, producing normal-faulting downthrows and strain localization. The mechanism triggering fault reactivation in a carbonate field is the increase of shear stresses with pore-pressure reduction, due to the decrease of the absolute horizontal stress, which leads to an expanded Mohr's circle and mechanical failure, consistent with the predictions of previous poroelastic models. Two scenarios for fault and damage-zone permeability development are explored: (1) large permeability enhancement of a sealing fault upon reactivation, and (2) fault and damage zone permeability development governed by effective mean stress. In the first scenario, the fault becomes highly permeable to across- and along-fault fluid transport, removing local pore pressure highs/lows arising from the presence of the initially sealing fault. In the second scenario, reactivation induces small permeability enhancement in the fault and parts of damage zones, followed by small post-reactivation permeability reduction. Such permeability changes do not appear to change the original flow capacity of the fault or modify the fluid flow velocity fields dramatically.
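
The depletion mechanism described (the horizontal total stress declining faster than the vertical, expanding the effective Mohr circle onto the Coulomb line) can be sketched with a uniaxial poroelastic stress path. All numbers are illustrative, not the model's:

```python
import math

# Depletion stress path for a normal-faulting regime: total vertical
# stress is constant; total horizontal stress declines by gamma * dP,
# with gamma = alpha*(1 - 2*nu)/(1 - nu). Values are illustrative.
alpha, nu = 1.0, 0.2
gamma = alpha * (1.0 - 2.0 * nu) / (1.0 - nu)    # 0.75 here

sv, sh, p0 = 70.0, 44.0, 30.0   # MPa: total stresses, initial pore pressure
mu_f = 0.6                       # friction on an optimally oriented fault
sin_phi = mu_f / math.sqrt(1.0 + mu_f ** 2)

def coulomb_margin(dp):
    """> 0 means stable; <= 0 means the fault is reactivated."""
    s1 = sv - (p0 + dp)                    # effective vertical stress
    s3 = sh + gamma * dp - (p0 + dp)       # effective horizontal stress
    center, radius = 0.5 * (s1 + s3), 0.5 * (s1 - s3)
    return sin_phi * center - radius       # distance to the Coulomb line

print(f"margin before depletion: {coulomb_margin(0.0):+.2f} MPa")
print(f"margin after -25 MPa:    {coulomb_margin(-25.0):+.2f} MPa")
```

With these (invented) numbers a critically stressed fault that is stable initially fails after 25 MPa of depletion, qualitatively reproducing the shear-stress-increase mechanism the abstract attributes to earlier poroelastic models.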

  17. Reverse fault growth and fault interaction with frictional interfaces: insights from analogue models

    NASA Astrophysics Data System (ADS)

    Bonanno, Emanuele; Bonini, Lorenzo; Basili, Roberto; Toscani, Giovanni; Seno, Silvio

    2017-04-01

    The association of faulting and folding is a common feature in mountain chains, fold-and-thrust belts, and accretionary wedges. Kinematic models are developed and widely used to explain a range of relationships between faulting and folding. However, these models may not be entirely appropriate for explaining shortening in mechanically heterogeneous rock bodies. Weak layers, bedding surfaces, or pre-existing faults placed ahead of a propagating fault tip may influence the fault propagation rate itself and the associated fold shape. In this work, we employed clay analogue models to investigate how mechanical discontinuities affect the propagation rate and the associated fold shape during the growth of reverse master faults. The simulated master faults dip at 30° and 45°, recalling the range of the most frequent dip angles for active reverse faults that occur in nature. The mechanical discontinuities are simulated by pre-cutting the clay pack. For both experimental setups (30° and 45° dipping faults) we analyzed three different configurations: 1) isotropic, i.e. without precuts; 2) with one precut in the middle of the clay pack; and 3) with two evenly-spaced precuts. To test the repeatability of the processes and to obtain a statistically valid dataset, we replicated each configuration three times. The experiments were monitored by collecting successive snapshots with a high-resolution camera pointing at the side of the model. The pictures were then processed using the Digital Image Correlation (DIC) method in order to extract the displacement and shear-rate fields. These two quantities effectively show both the on-fault and off-fault deformation, indicating the activity along the newly-formed faults and whether, and at what stage, the discontinuities (precuts) are reactivated. To study the fault propagation and fold shape variability we marked the position of the fault tips and the fold profiles for every successive step of deformation.
Then we compared precut models with isotropic models to evaluate the trends of variability. Our results indicate that the discontinuities are reactivated especially when the tip of the newly-formed fault is either below or connected to them. During the stage of maximum activity along the precut, the faults slow down or even stop their propagation. The fault propagation systematically resumes when the angle between the fault and the precut is about 90° (critical angle); only during this stage the fault crosses the precut. The reactivation of the discontinuities induces an increase of the apical angle of the fault-related fold and produces wider limbs compared to the isotropic reference experiments.

  18. Strike-slip fault propagation and linkage via work optimization with application to the San Jacinto fault, California

    NASA Astrophysics Data System (ADS)

    Madden, E. H.; McBeck, J.; Cooke, M. L.

    2013-12-01

    Over multiple earthquake cycles, strike-slip faults link to form through-going structures, as demonstrated by the continuous nature of the mature San Andreas fault system in California relative to the younger and more segmented San Jacinto fault system nearby. Despite its immaturity, the San Jacinto system accommodates between one third and one half of the slip along the boundary between the North American and Pacific plates. It therefore poses a significant seismic threat to southern California. Better understanding of how the San Jacinto system has evolved over geologic time and of current interactions between faults within the system is critical to assessing this seismic hazard accurately. Numerical models are well suited to simulating kilometer-scale processes, but models of fault system development are challenged by the multiple physical mechanisms involved. For example, laboratory experiments on brittle materials show that faults propagate and eventually join (hard-linkage) by both opening-mode and shear failure. In addition, faults interact prior to linkage through stress transfer (soft-linkage). The new algorithm GROW (GRowth by Optimization of Work) accounts for this complex array of behaviors by taking a global approach to fault propagation while adhering to the principles of linear elastic fracture mechanics. This makes GROW a powerful tool for studying fault interactions and fault system development over geologic time. In GROW, faults evolve to minimize the work (or energy) expended during deformation, thereby maximizing the mechanical efficiency of the entire system. Furthermore, the incorporation of both static and dynamic friction allows GROW models to capture fault slip and fault propagation in single earthquakes as well as over consecutive earthquake cycles. GROW models with idealized faults reveal that the initial fault spacing and the applied stress orientation control fault linkage propensity and linkage patterns.
These models allow the gains in efficiency provided by both hard-linkage and soft-linkage to be quantified and compared. Specialized models of interactions over the past 1 Ma between the Clark and Coyote Creek faults within the San Jacinto system reveal increasing mechanical efficiency as these fault structures change over time. Alongside this increasing efficiency is an increasing likelihood for single, larger earthquakes that rupture multiple fault segments. These models reinforce the sensitivity of mechanical efficiency to both fault structure and the regional tectonic stress orientation controlled by plate motions and provide insight into how slip may have been partitioned between the San Andreas and San Jacinto systems over the past 1 Ma.

  19. The Martian atmospheric planetary boundary layer stability, fluxes, spectra, and similarity

    NASA Technical Reports Server (NTRS)

    Tillman, James E.

    1994-01-01

    This is the first analysis of the high-frequency data from the Viking lander; spectra of wind in the Martian atmospheric surface layer, along with the diurnal variation of the height of the mixed surface layer, are calculated for the first time for Mars. Heat and momentum fluxes, stability, and z(sub O) are estimated for early spring, from a surface temperature model and from Viking Lander 2 temperatures and winds at 44 deg N, using Monin-Obukhov similarity theory. The afternoon maximum height of the mixed layer for these seasons and conditions is estimated to lie between 3.6 and 9.2 km. Estimation of this height is of primary importance to all models of the boundary layer and Martian General Circulation Models (GCM's). Model spectra for two measuring heights and three surface roughnesses are calculated using the depth of the mixed layer and the surface layer parameters, and flow distortion by the lander is also taken into account. These experiments indicate that z(sub O) probably lies between 1.0 and 3.0 cm, and most likely is closer to 1.0 cm. The spectra are adjusted to simulate aliasing and high-frequency rolloff, the latter caused both by the sensor response and the large Kolmogorov length on Mars. Since the spectral models depend on the surface parameters, including the estimated surface temperature, their agreement with the calculated spectra indicates that the surface layer estimates are self-consistent. This agreement is especially noteworthy in that the inertial subrange is virtually absent in the Martian atmosphere at this height, due to the large Kolmogorov length scale. These analyses extend the range of applicability of terrestrial results and demonstrate that it is possible to estimate the effects of severe aliasing of wind measurements and to produce models which agree well with the measured spectra. The results show that similarity theory developed for Earth applies to Mars, and that the spectral models are universal.
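
The Monin-Obukhov machinery, in its neutral limit, reduces to the logarithmic wind profile, and two wind levels suffice to back out the friction velocity u* and roughness length z0. The measurement values here are hypothetical, though the recovered z0 is of the order the abstract reports:

```python
import math

kappa = 0.4   # von Karman constant

def log_wind(u_star, z, z0):
    """Neutral surface-layer wind speed at height z."""
    return (u_star / kappa) * math.log(z / z0)

# Invert two levels for u* and z0 via u2 - u1 = (u*/kappa) * ln(z2/z1).
z1, z2 = 0.5, 1.6     # m (1.6 m is roughly a lander boom height)
u1, u2 = 4.0, 5.2     # m/s, hypothetical wind speeds
u_star = kappa * (u2 - u1) / math.log(z2 / z1)
z0 = z2 / math.exp(kappa * u2 / u_star)
print(f"u* = {u_star:.2f} m/s, z0 = {z0 * 100:.1f} cm")
```

Stability corrections (the psi functions of z/L) would modify both relations away from neutral conditions, which is why the abstract's estimates also require the surface temperature model.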

  20. Contributory fault and level of personal injury to drivers involved in head-on collisions: Application of copula-based bivariate ordinal models.

    PubMed

    Wali, Behram; Khattak, Asad J; Xu, Jingjing

    2018-01-01

    The main objective of this study is to simultaneously investigate the degree of injury severity sustained by drivers involved in head-on collisions with respect to fault status designation. This is complicated to answer due to many issues, one of which is the potential presence of correlation between injury outcomes of drivers involved in the same head-on collision. To address this concern, we present seemingly unrelated bivariate ordered response models by analyzing the joint injury severity probability distribution of at-fault and not-at-fault drivers. Moreover, the assumption of bivariate normality of residuals and the linear form of stochastic dependence implied by such models may be unduly restrictive. To test this, Archimedean copula structures and normal mixture marginals are integrated into the joint estimation framework, which can characterize complex forms of stochastic dependencies and non-normality in residual terms. The models are estimated using 2013 Virginia police-reported two-vehicle head-on collision data, where exactly one driver is at fault. The results suggest that both at-fault and not-at-fault drivers sustained serious/fatal injuries in 8% of crashes, whereas, in 4% of the cases, the not-at-fault driver sustained a serious/fatal injury with no injury to the at-fault driver at all. Furthermore, if the at-fault driver is fatigued, apparently asleep, or has been drinking, the not-at-fault driver is more likely to sustain a severe/fatal injury, controlling for other factors and potential correlations between the injury outcomes. While not-at-fault vehicle speed affects the injury severity of the at-fault driver, the effect is smaller than the effect of at-fault vehicle speed on the at-fault injury outcome. Conversely, and importantly, the effect of at-fault vehicle speed on the injury severity of the not-at-fault driver is almost equal to the effect of not-at-fault vehicle speed on the injury outcome of the not-at-fault driver.
Compared to traditional ordered probability models, the study provides evidence that copula based bivariate models can provide more reliable estimates and richer insights. Practical implications of the results are discussed. Published by Elsevier Ltd.
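
The copula construction can be sketched by drawing dependent uniform margins from a Clayton copula (one Archimedean family) and cutting them into ordinal injury classes. The dependence strength and thresholds are invented for the example, not estimated from the Virginia data:

```python
import numpy as np

rng = np.random.default_rng(7)
theta = 2.0      # Clayton dependence parameter (Kendall tau = 0.5)
n = 20000

# Conditional-inversion sampling of the Clayton copula: draw u1, then
# invert the conditional distribution C(u2 | u1) at a uniform v.
u1 = rng.uniform(size=n)
v = rng.uniform(size=n)
u2 = (u1 ** -theta * (v ** (-theta / (1.0 + theta)) - 1.0) + 1.0) ** (-1.0 / theta)

def to_ordinal(u):
    """Cut a uniform margin into 3 severity classes: none < minor < severe."""
    return np.digitize(u, [0.6, 0.9])

y_fault, y_nofault = to_ordinal(u1), to_ordinal(u2)
agree = np.mean(y_fault == y_nofault)
print(f"correlation of latent margins: {np.corrcoef(u1, u2)[0, 1]:.2f}")
print(f"same severity class for both drivers: {agree:.2f}")
```

Under independence the two drivers would land in the same class about 46% of the time with these thresholds; the copula's positive dependence pushes that rate well above the baseline, which is exactly the cross-driver correlation the bivariate ordinal model is built to capture.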

  1. Subsurface structural interpretation by applying trishear algorithm: An example from the Lenghu5 fold-and-thrust belt, Qaidam Basin, Northern Tibetan Plateau

    NASA Astrophysics Data System (ADS)

    Pei, Yangwen; Paton, Douglas A.; Wu, Kongyou; Xie, Liujuan

    2017-08-01

    The trishear algorithm, in which deformation occurs in a triangular zone in front of a propagating fault tip, is often used to understand fault-related folding. In comparison to kink-band methods, a key characteristic of the trishear algorithm is that non-uniform deformation within the triangular zone allows layer thickness and horizon length to change during deformation, which is commonly observed in natural structures. An example from the Lenghu5 fold-and-thrust belt (Qaidam Basin, Northern Tibetan Plateau) is interpreted to help understand how to employ trishear forward modelling to improve the accuracy of seismic interpretation. High-resolution fieldwork data, including high-angle dips, 'dragging structures', and a thinning hanging wall with a thickening footwall, are used to determine the best-fit trishear model explaining the deformation of the Lenghu5 fold-and-thrust belt. We also consider the factors that increase the complexity of trishear models, including: (a) fault-dip changes and (b) pre-existing faults. We integrate fault-dip changes and pre-existing faults to predict subsurface structures that are below seismic resolution. The analogue analysis with trishear models indicates that the Lenghu5 fold-and-thrust belt is controlled by an upward-steepening reverse fault above a pre-existing, oppositely thrusting fault in the deeper subsurface. The validity of the trishear model is confirmed by the high accordance between the model and the high-resolution fieldwork. The validated trishear forward model provides geometric constraints on the faults and horizons in the seismic section, e.g., fault cutoffs and fault tip positions, fault intersection relationships and horizon/fault cross-cutting relationships. Subsurface prediction using the trishear algorithm can significantly increase the accuracy of seismic interpretation, particularly in seismic sections with a low signal/noise ratio.
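
The trishear velocity field itself is compact enough to write down. This is the standard symmetric, linear-in-y (s = 1) field from the trishear literature in fault-tip coordinates, not the authors' implementation:

```python
import numpy as np

# Symmetric linear (s = 1) trishear velocity field in fault-tip
# coordinates (x along the fault, y normal to it): the hanging wall slips
# at v0, the footwall is fixed, and velocities interpolate inside the
# triangular zone of half-apical angle phi while conserving area.
def trishear_velocity(x, y, v0=1.0, phi=np.deg2rad(30.0)):
    m = np.tan(phi)
    if y >= m * x:                 # hanging wall: rigid translation
        return v0, 0.0
    if y <= -m * x:                # footwall: fixed
        return 0.0, 0.0
    vx = 0.5 * v0 * (y / (m * x) + 1.0)
    vy = 0.25 * v0 * m * ((y / (m * x)) ** 2 - 1.0)   # area-conserving
    return vx, vy

# Velocities grade smoothly from hanging wall to footwall across the zone:
for frac in (0.99, 0.0, -0.99):
    vx, vy = trishear_velocity(1.0, frac * np.tan(np.deg2rad(30.0)))
    print(f"y/(x tan phi) = {frac:+.2f}: vx = {vx:.3f}, vy = {vy:+.3f}")
```

Integrating marker lines through this field while the tip propagates produces the thickness and length changes in the triangular zone that the abstract contrasts with kink-band methods.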

  2. Towards "realistic" fault zones in a 3D structure model of the Thuringian Basin, Germany

    NASA Astrophysics Data System (ADS)

    Kley, J.; Malz, A.; Donndorf, S.; Fischer, T.; Zehner, B.

    2012-04-01

    3D computer models of geological architecture are evolving into a standard tool for visualization and analysis. Such models typically comprise the bounding surfaces of stratigraphic layers and faults. Faults affect the continuity of aquifers and can themselves act as fluid conduits or barriers. This is one reason why a "realistic" representation of faults in 3D models is desirable. Even so, many existing models treat faults in a simplistic fashion, e.g. as vertical downward projections of fault traces observed at the surface. Besides being geologically and mechanically unreasonable, this also causes technical difficulties in the modelling workflow. Most natural faults are inclined and may change dip according to rock type or flatten into mechanically weak layers. Boreholes located close to a fault can therefore cross it at depth, resulting in stratigraphic control points being allocated to the wrong block. Also, faults tend to split up into several branches, forming fault zones. Obtaining a more accurate representation of faults and fault zones is therefore challenging. We present work in progress from the Thuringian Basin in central Germany. The fault zone geometries are never fully constrained by data and must be extrapolated to depth. We use balancing of serial, parallel cross-sections to constrain subsurface extrapolations. The structure sections are checked for consistency by restoring them to an undeformed state. If this is possible without producing gaps or overlaps, the interpretation is considered valid (but not unique) for a single cross-section. Additional constraints are provided by comparison of adjacent cross-sections: structures should change continuously from one section to another. Also, from the deformed and restored cross-sections we can measure the strain incurred during deformation. Strain should be compatible among the cross-sections: if it varies at all, it should vary smoothly and systematically along a given fault zone.
The stratigraphic contacts and faults in the resulting grid of parallel balanced sections are then interpolated into a gOcad model containing stratigraphic boundaries and faults as triangulated surfaces. The interpolation is also controlled by borehole data located off the sections and the surface traces of stratigraphic boundaries. We have written customized scripts to largely automate this step, with particular attention to a seamless fit between stratigraphic surfaces and fault planes, which share the same nodes and segments along their contacts. Additional attention was paid to the creation of a uniform triangulated grid with maximized angles. This ensures that uniform triangulated volumes can be created for further use in numerical flow modelling. An as yet unsolved problem is the implementation of the fault zones and their hydraulic properties in a large-scale model of the entire basin. Short-wavelength folds and subsidiary faults control which aquifers and seals are juxtaposed across the fault zones. It is impossible to include these structures in the regional model, but neglecting them would result in incorrect assessments of hydraulic links or barriers. We presently plan to test and calibrate the hydraulic properties of the fault zones in smaller, high-resolution models and then to implement geometrically simple "equivalent" fault zones with appropriate, variable transmissivities between specific aquifers.

  3. Modelling of hydrothermal fluid flow and structural architecture in an extensional basin, Ngakuru Graben, Taupo Rift, New Zealand

    NASA Astrophysics Data System (ADS)

    Kissling, W. M.; Villamor, P.; Ellis, S. M.; Rae, A.

    2018-05-01

    Present-day geothermal activity on the margins of the Ngakuru graben and evidence of fossil hydrothermal activity in the central graben suggest that a graben-wide system of permeable intersecting faults acts as the principal conduit for fluid flow to the surface. We have developed numerical models of fluid and heat flow in a regional-scale 2-D cross-section of the Ngakuru Graben. The models incorporate simplified representations of two 'end-member' fault architectures (one symmetric at depth, the other highly asymmetric) which are consistent with the surface locations and dips of the Ngakuru graben faults. The models are used to explore controls on buoyancy-driven convective fluid flow which could explain the differences between the past and present hydrothermal systems associated with these faults. The models show that the surface flows from the faults are strongly controlled by the fault permeability, the fault system architecture and the location of the heat source with respect to the faults in the graben. In particular, fault intersections at depth allow exchange of fluid between faults, and the location of the heat source on the footwall of normal faults can facilitate upflow along those faults. These controls give rise to two distinct fluid flow regimes in the fault network. The first, a regular flow regime, is characterised by a nearly unchanging pattern of fluid flow vectors within the fault network as the fault permeability evolves. In the second, complex flow regime, the surface flows depend strongly on fault permeability, and can fluctuate in an erratic manner. The direction of flow within faults can reverse in both regimes as fault permeability changes. Both flow regimes provide insights into the differences between the present-day and fossil geothermal systems in the Ngakuru graben. 
Hydrothermal upflow along the Paeroa fault seems to have occurred, possibly continuously, for tens of thousands of years, while upflow in other faults in the graben has switched on and off during the same period. An asymmetric graben architecture with the Paeroa being the major boundary fault will facilitate the predominant upflow along this fault. Upflow on the axial faults is more difficult to explain with this modelling. It occurs most easily with an asymmetric graben architecture and heat sources close to the graben axis (which could be associated with remnant heat from recent eruptions from Okataina Volcanic Centre). Temporal changes in upflow can also be associated with acceleration and deceleration of fault activity if this is considered a proxy for fault permeability. Other explanations for temporal variations in hydrothermal activity not explored here are different permeability on different faults, and different permeability along fault strike.

  4. Seismic Hazard Analysis on a Complex, Interconnected Fault Network

    NASA Astrophysics Data System (ADS)

    Page, M. T.; Field, E. H.; Milner, K. R.

    2017-12-01

In California, seismic hazard models have evolved from simple, segmented prescriptive models to much more complex representations of multi-fault and multi-segment earthquakes on an interconnected fault network. During the development of the 3rd Uniform California Earthquake Rupture Forecast (UCERF3), the prevalence of multi-fault ruptures in the modeling was controversial. Yet recent earthquakes, such as the Kaikōura earthquake, as well as new research on the potential of multi-fault ruptures (e.g., Nissen et al., 2016; Sahakian et al., 2017), have validated this approach. For large crustal earthquakes, multi-fault ruptures may be the norm rather than the exception. As datasets improve and we can view the rupture process at a finer scale, the interconnected, fractal nature of faults is revealed even by individual earthquakes. What is the proper way to model earthquakes on a fractal fault network? We show multiple lines of evidence that connectivity even in modern models such as UCERF3 may be underestimated, although clustering in UCERF3 mitigates some modeling simplifications. We need a methodology that can be applied equally well where the fault network is well-mapped and where it is not - an extendable methodology that allows us to "fill in" gaps in the fault network and in our knowledge.

  5. Onboard Nonlinear Engine Sensor and Component Fault Diagnosis and Isolation Scheme

    NASA Technical Reports Server (NTRS)

    Tang, Liang; DeCastro, Jonathan A.; Zhang, Xiaodong

    2011-01-01

A method detects and isolates in-flight sensor, actuator, and component faults for advanced propulsion systems. In sharp contrast to many conventional methods, which deal with either sensor faults or component faults but not both, this method considers sensor, actuator, and component faults under one systematic and unified framework. The proposed solution consists of two main components: a bank of real-time, nonlinear adaptive fault diagnostic estimators for residual generation, and a residual evaluation module that includes adaptive thresholds and a Transferable Belief Model (TBM)-based residual evaluation scheme. By employing a nonlinear adaptive learning architecture, the developed approach is capable of directly dealing with nonlinear engine models and nonlinear faults without the need for linearization. Software modules have been developed and evaluated with the NASA C-MAPSS engine model. Several typical engine-fault modes, including a subset of sensor/actuator/component faults, were tested with a mild transient operation scenario. The simulation results demonstrated that the algorithm was able to successfully detect and isolate all simulated faults as long as the fault magnitudes were larger than the minimum detectable/isolable sizes, and no misdiagnosis occurred.

  6. A footwall system of faults associated with a foreland thrust in Montana

    NASA Astrophysics Data System (ADS)

    Watkinson, A. J.

    1993-05-01

    Some recent structural geology models of faulting have promoted the idea of a rigid footwall behaviour or response under the main thrust fault, especially for fault ramps or fault-bend folds. However, a very well-exposed thrust fault in the Montana fold and thrust belt shows an intricate but well-ordered system of subsidiary minor faults in the footwall position with respect to the main thrust fault plane. Considerable shortening has occurred off the main fault in this footwall collapse zone and the distribution and style of the minor faults accord well with published patterns of aftershock foci associated with thrust faults. In detail, there appear to be geometrically self-similar fault systems from metre length down to a few centimetres. The smallest sets show both slip and dilation. The slickensides show essentially two-dimensional displacements, and three slip systems were operative—one parallel to the bedding, and two conjugate and symmetric about the bedding (acute angle of 45-50°). A reconstruction using physical analogue models suggests one possible model for the evolution and sequencing of slip of the thrust fault system.

  7. Deformation associated with continental normal faults

    NASA Astrophysics Data System (ADS)

    Resor, Phillip G.

Deformation associated with normal fault earthquakes and geologic structures provides insights into the seismic cycle as it unfolds over time scales from seconds to millions of years. Improved understanding of normal faulting will lead to more accurate seismic hazard assessments and prediction of associated structures. High-precision aftershock locations for the 1995 Kozani-Grevena earthquake (Mw 6.5), Greece, image a segmented master fault and antithetic faults. This three-dimensional fault geometry is typical of normal fault systems mapped from outcrop or interpreted from reflection seismic data and illustrates the importance of incorporating three-dimensional fault geometry in mechanical models. Subsurface fault slip associated with the Kozani-Grevena and 1999 Hector Mine (Mw 7.1) earthquakes is modeled using a new method for slip inversion on three-dimensional fault surfaces. Incorporation of three-dimensional fault geometry improves the fit to the geodetic data while honoring aftershock distributions and surface ruptures. GPS surveying of deformed bedding surfaces associated with normal faulting in the western Grand Canyon reveals patterns of deformation that are similar to those observed with interferometric synthetic aperture radar (InSAR) for the Kozani-Grevena earthquake, with a prominent down-warp in the hanging wall and a lesser up-warp in the footwall. However, deformation associated with the Kozani-Grevena earthquake extends ~20 km from the fault surface trace, while the folds in the western Grand Canyon only extend 500 m into the footwall and 1500 m into the hanging wall. A comparison of mechanical and kinematic models illustrates advantages of mechanical models in exploring normal faulting processes, including incorporation of both deformation and causative forces, and the opportunity to incorporate more complex fault geometry and constitutive properties.
Elastic models with antithetic or synthetic faults or joints in association with a master normal fault illustrate how these secondary structures influence the deformation in ways that are similar to fault/fold geometry mapped in the western Grand Canyon. Specifically, synthetic faults amplify hanging wall bedding dips, antithetic faults reduce dips, and joints act to localize deformation. The distribution of aftershocks in the hanging wall of the Kozani-Grevena earthquake suggests that secondary structures may accommodate strains associated with slip on a master fault during postseismic deformation.

  8. Building a 3D faulted a priori model for stratigraphic inversion: Illustration of a new methodology applied on a North Sea field case study

    NASA Astrophysics Data System (ADS)

    Rainaud, Jean-François; Clochard, Vincent; Delépine, Nicolas; Crabié, Thomas; Poudret, Mathieu; Perrin, Michel; Klein, Emmanuel

    2018-07-01

Accurate reservoir characterization is needed throughout the development of an oil and gas field. It helps build 3D numerical reservoir simulation models for estimating the original oil and gas volumes in place and for simulating fluid flow behaviors. At a later stage of field development, reservoir characterization can also help decide which recovery techniques to use for fluid extraction. In complex media, such as faulted reservoirs, flow behavior predictions within volumes close to faults can be a very challenging issue. During the development plan, it is necessary to determine which types of communication exist between faults or which potential barriers exist for fluid flows. Solving these issues rests on accurate fault characterization. In most cases, faults are not preserved along reservoir characterization workflows: the memory of the faults interpreted from seismic is not kept during seismic inversion and further interpretation of the result. The goal of our study is first to integrate a 3D fault network as a priori information into a model-based stratigraphic inversion procedure. Secondly, we apply our methodology to a well-known oil and gas case study over a typical North Sea field (UK Northern North Sea) in order to demonstrate its added value for determining reservoir properties. More precisely, the a priori model is composed of several geological units populated by physical attributes extrapolated from well-log data following the deposition mode; however, usual a priori model-building methods respect neither the 3D fault geometry nor the stratification dips on the fault sides. We address this difficulty by applying an efficient flattening method for each stratigraphic unit in our workflow. Even before seismic inversion, the obtained stratigraphic model has been directly used to model synthetic seismic on our case study.
Comparisons of synthetic seismic obtained from our 3D fault network model give much lower residuals than with a "basic" stratigraphic model. Finally, we apply our model-based inversion considering both faulted and non-faulted a priori models. By comparing the rock impedance results obtained in the two cases, we see a better delineation of the Brent reservoir compartments when using the 3D faulted a priori model built with our method.

  9. Subsurface fault geometries in Southern California illuminated through Full-3D Seismic Waveform Tomography (F3DT)

    NASA Astrophysics Data System (ADS)

    Lee, En-Jui; Chen, Po

    2017-04-01

More precise spatial descriptions of fault systems play an essential role in tectonic interpretations, deformation modeling, and seismic hazard assessments. Recently developed full-3D waveform tomography techniques provide high-resolution images and are able to image the material property differences across faults, assisting the understanding of fault systems. In the updated seismic velocity model for Southern California, CVM-S4.26, many velocity gradients show consistency with surface geology and major faults defined in the Community Fault Model (CFM) (Plesch et al. 2007), which was constructed using various geological and geophysical observations. In addition to faults in CFM, CVM-S4.26 reveals a velocity reversal mainly beneath the San Gabriel Mountain and Western Mojave Desert regions, which is correlated with the detachment structure that has also been found in other independent studies. The high-resolution tomographic images of CVM-S4.26 could assist the understanding of fault systems in Southern California and therefore benefit the development of fault models as well as other applications, such as seismic hazard analysis, tectonic reconstructions, and crustal deformation modeling.

  10. A dynamic fault tree model of a propulsion system

    NASA Technical Reports Server (NTRS)

    Xu, Hong; Dugan, Joanne Bechta; Meshkat, Leila

    2006-01-01

    We present a dynamic fault tree model of the benchmark propulsion system, and solve it using Galileo. Dynamic fault trees (DFT) extend traditional static fault trees with special gates to model spares and other sequence dependencies. Galileo solves DFT models using a judicious combination of automatically generated Markov and Binary Decision Diagram models. Galileo easily handles the complexities exhibited by the benchmark problem. In particular, Galileo is designed to model phased mission systems.
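The gain from dynamic gates can be seen in a toy calculation (a sketch with assumed values; the failure rate, mission time, and two-component configuration are illustrative, not taken from the benchmark): a static AND gate treats two exponential components as independent, whereas a cold-spare gate starts the spare's clock only after the primary fails, yielding an Erlang-2 failure distribution.

```python
import math

def and_gate_failure(lam, t):
    """Static AND gate: two independent components, each with an
    exponential lifetime of rate lam, must both fail by time t."""
    p = 1.0 - math.exp(-lam * t)
    return p * p

def cold_spare_failure(lam, t):
    """Cold-spare (CSP) gate: the spare cannot fail while dormant, so the
    system lifetime is the sum of two exponential lifetimes, i.e. an
    Erlang-2 distribution: P(T <= t) = 1 - e^(-lam*t) * (1 + lam*t)."""
    return 1.0 - math.exp(-lam * t) * (1.0 + lam * t)

lam, t = 0.1, 10.0  # illustrative failure rate and mission time
print(and_gate_failure(lam, t))   # ~0.3996
print(cold_spare_failure(lam, t)) # ~0.2642
```

The cold-spare probability is lower because the spare cannot fail while dormant; this sequence dependence is exactly what static fault trees cannot express and what motivates the Markov-model solution path in tools such as Galileo.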

  11. Exploiting the Modified Colombo-Nyquist Rule for Co-estimating Sub-monthly Gravity Field Solutions from a GRACE-like Mission

    NASA Astrophysics Data System (ADS)

    Devaraju, B.; Weigelt, M.; Mueller, J.

    2017-12-01

In order to suppress the impact of aliasing errors on the standard monthly GRACE gravity-field solutions, co-estimating sub-monthly (daily/two-day) low-degree solutions has been suggested. The maximum degree of the low-degree solutions is chosen via the Colombo-Nyquist rule of thumb. However, it is now established that the sampling of the satellites restricts the maximum estimable order, not the degree (the modified Colombo-Nyquist rule). Therefore, in this contribution, we co-estimate low-order sub-monthly solutions, and compare and contrast them with the low-degree sub-monthly solutions. We also investigate their efficacies in dealing with aliasing errors.
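The practical consequence of the two rules can be illustrated by counting estimable Stokes coefficients (a sketch; the 15 revolutions per day and the degree cap of 60 are assumed round numbers, not values from this work):

```python
def n_coeffs_low_degree(L):
    """Spherical-harmonic coefficients when truncating at degree L
    (the classic Colombo-Nyquist choice for a daily solution)."""
    return (L + 1) ** 2

def n_coeffs_low_order(M, L_max):
    """Coefficients when all degrees up to L_max are kept but only
    orders m <= M are estimated (modified Colombo-Nyquist reasoning)."""
    count = 0
    for l in range(L_max + 1):
        for m in range(min(l, M) + 1):
            count += 1 if m == 0 else 2  # C_lm, plus S_lm when m > 0
    return count

revs_per_day = 15          # roughly GRACE-like; illustrative
bound = revs_per_day // 2  # = 7, the Nyquist-type limit
print(n_coeffs_low_degree(bound))     # 64 parameters in a degree-7 solution
print(n_coeffs_low_order(bound, 60))  # 859 parameters when the limit binds the order
```

A low-order daily solution can thus carry many more parameters, and in particular retains high-degree, low-order terms that a degree-7 truncation discards.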

  12. Lattice functions, wavelet aliasing, and SO(3) mappings of orthonormal filters

    NASA Astrophysics Data System (ADS)

    John, Sarah

    1998-01-01

A formulation of multiresolution in terms of a family of dyadic lattices {Sj; j∈Z} and filter matrices Mj ∈ U(2) ⊂ GL(2,C) illuminates the role of aliasing in wavelets and provides exact relations between scaling and wavelet filters. By showing the {DN; N∈Z+} collection of compactly supported, orthonormal wavelet filters to be strictly in SU(2) ⊂ U(2), its representation in the Euler angles of the rotation group SO(3) establishes several new results: a 1:1 mapping of the {DN} filters onto a set of orbits on the SO(3) manifold; an equivalence of D∞ to the Shannon filter; and a simple new proof for a criterion ruling out pathologically scaled nonorthonormal filters.

  13. Shell Tectonics: A Mechanical Model for Strike-slip Displacement on Europa

    NASA Technical Reports Server (NTRS)

    Rhoden, Alyssa Rose; Wurman, Gilead; Huff, Eric M.; Manga, Michael; Hurford, Terry A.

    2012-01-01

    We introduce a new mechanical model for producing tidally-driven strike-slip displacement along preexisting faults on Europa, which we call shell tectonics. This model differs from previous models of strike-slip on icy satellites by incorporating a Coulomb failure criterion, approximating a viscoelastic rheology, determining the slip direction based on the gradient of the tidal shear stress rather than its sign, and quantitatively determining the net offset over many orbits. This model allows us to predict the direction of net displacement along faults and determine relative accumulation rate of displacement. To test the shell tectonics model, we generate global predictions of slip direction and compare them with the observed global pattern of strike-slip displacement on Europa in which left-lateral faults dominate far north of the equator, right-lateral faults dominate in the far south, and near-equatorial regions display a mixture of both types of faults. The shell tectonics model reproduces this global pattern. Incorporating a small obliquity into calculations of tidal stresses, which are used as inputs to the shell tectonics model, can also explain regional differences in strike-slip fault populations. We also discuss implications for fault azimuths, fault depth, and Europa's tectonic history.
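Two of the model's ingredients, Coulomb failure and gradient-determined slip direction, can be sketched as a toy calculation (the friction coefficient and stress series below are hypothetical; in the actual model the tidal stresses come from a viscoelastic tidal solution):

```python
def slip_direction(tau_rate):
    """Shell-tectonics rule: slip follows the gradient (time rate) of the
    tidal shear stress, not its sign."""
    return (tau_rate > 0) - (tau_rate < 0)

def net_offset(times, tau, sigma_n, mu=0.6):
    """Accumulate slip whenever the Coulomb criterion |tau| > mu*sigma_n
    is met; each increment is proportional to the excess stress and is
    signed by the local stress gradient."""
    offset = 0.0
    for i in range(1, len(times)):
        dt = times[i] - times[i - 1]
        rate = (tau[i] - tau[i - 1]) / dt
        excess = abs(tau[i]) - mu * sigma_n[i]
        if excess > 0:
            offset += slip_direction(rate) * excess * dt
    return offset

# A shear stress that is positive but falling still produces negative slip:
t = [0.0, 1.0, 2.0, 3.0]
print(net_offset(t, [3.0, 2.0, 1.0, 0.5], [1.0] * 4))  # negative
```

A positive but decreasing shear stress yields slip in the negative direction, which is the key difference from sign-based slip rules and what allows net offset to accumulate over many orbits.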

  14. How fault evolution changes strain partitioning and fault slip rates in Southern California: Results from geodynamic modeling

    NASA Astrophysics Data System (ADS)

    Ye, Jiyang; Liu, Mian

    2017-08-01

    In Southern California, the Pacific-North America relative plate motion is accommodated by the complex southern San Andreas Fault system that includes many young faults (<2 Ma). The initiation of these young faults and their impact on strain partitioning and fault slip rates are important for understanding the evolution of this plate boundary zone and assessing earthquake hazard in Southern California. Using a three-dimensional viscoelastoplastic finite element model, we have investigated how this plate boundary fault system has evolved to accommodate the relative plate motion in Southern California. Our results show that when the plate boundary faults are not optimally configured to accommodate the relative plate motion, strain is localized in places where new faults would initiate to improve the mechanical efficiency of the fault system. In particular, the Eastern California Shear Zone, the San Jacinto Fault, the Elsinore Fault, and the offshore dextral faults all developed in places of highly localized strain. These younger faults compensate for the reduced fault slip on the San Andreas Fault proper because of the Big Bend, a major restraining bend. The evolution of the fault system changes the apportionment of fault slip rates over time, which may explain some of the slip rate discrepancy between geological and geodetic measurements in Southern California. For the present fault configuration, our model predicts localized strain in western Transverse Ranges and along the dextral faults across the Mojave Desert, where numerous damaging earthquakes occurred in recent years.

  15. Testing fault growth models with low-temperature thermochronology in the northwest Basin and Range, USA

    USGS Publications Warehouse

    Curry, Magdalena A. E.; Barnes, Jason B.; Colgan, Joseph P.

    2016-01-01

    Common fault growth models diverge in predicting how faults accumulate displacement and lengthen through time. A paucity of field-based data documenting the lateral component of fault growth hinders our ability to test these models and fully understand how natural fault systems evolve. Here we outline a framework for using apatite (U-Th)/He thermochronology (AHe) to quantify the along-strike growth of faults. To test our framework, we first use a transect in the normal fault-bounded Jackson Mountains in the Nevada Basin and Range Province, then apply the new framework to the adjacent Pine Forest Range. We combine new and existing cross sections with 18 new and 16 existing AHe cooling ages to determine the spatiotemporal variability in footwall exhumation and evaluate models for fault growth. Three age-elevation transects in the Pine Forest Range show that rapid exhumation began along the range-front fault between approximately 15 and 11 Ma at rates of 0.2–0.4 km/Myr, ultimately exhuming approximately 1.5–5 km. The ages of rapid exhumation identified at each transect lie within data uncertainty, indicating concomitant onset of faulting along strike. We show that even in the case of growth by fault-segment linkage, the fault would achieve its modern length within 3–4 Myr of onset. Comparison with the Jackson Mountains highlights the inadequacies of spatially limited sampling. A constant fault-length growth model is the best explanation for our thermochronology results. We advocate that low-temperature thermochronology can be further utilized to better understand and quantify fault growth with broader implications for seismic hazard assessments and the coevolution of faulting and topography.
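The rate estimate behind such transects is essentially the slope of an age-elevation regression; a minimal sketch with hypothetical samples (numbers chosen to fall in the reported 0.2-0.4 km/Myr range, not actual data from the Pine Forest Range):

```python
def exhumation_rate(age_ma, elev_km):
    """Least-squares slope of elevation vs. AHe cooling age (km/Myr).
    In an age-elevation transect, higher samples cooled earlier, and the
    slope approximates the exhumation rate during rapid cooling."""
    n = len(age_ma)
    mt = sum(age_ma) / n
    mz = sum(elev_km) / n
    num = sum((t - mt) * (z - mz) for t, z in zip(age_ma, elev_km))
    den = sum((t - mt) ** 2 for t in age_ma)
    return num / den

# Hypothetical transect: ages young downward at 0.3 km/Myr
ages = [11.0, 12.5, 14.0, 15.0]
elevs = [1.0 + 0.3 * (a - 11.0) for a in ages]
print(exhumation_rate(ages, elevs))  # ~0.3 km/Myr
```

In practice each age carries an analytical uncertainty, so the fitted slope comes with an error envelope, which is what allows the along-strike transects to be judged coeval "within data uncertainty."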

  16. Generalized analytic solutions and response characteristics of magnetotelluric fields on anisotropic infinite faults

    NASA Astrophysics Data System (ADS)

    Bing, Xue; Yicai, Ji

    2018-06-01

In order to directly understand and accurately analyze detected magnetotelluric (MT) data on anisotropic infinite faults, two-dimensional partial differential equations of MT fields are used to establish a model of anisotropic infinite faults using the Fourier transform method. A multi-fault model is developed to expand the one-fault model. The transverse electric mode and transverse magnetic mode analytic solutions are derived using two-infinite-fault models. The infinite integral terms of the quasi-analytic solutions are discussed. The dual-fault model is computed using the finite element method to verify the correctness of the solutions. The MT responses of isotropic and anisotropic media are calculated to analyze the response functions of different anisotropic conductivity structures. The influence of the thickness and conductivity of the media on the MT responses is discussed, and the applicable analytic principles are given. These results are significant for understanding MT responses and for the data interpretation of complex anisotropic infinite faults.

  17. Model-based fault detection and isolation for intermittently active faults with application to motion-based thruster fault detection and isolation for spacecraft

    NASA Technical Reports Server (NTRS)

    Wilson, Edward (Inventor)

    2008-01-01

The present invention is a method for detecting and isolating fault modes in a system having a model describing its behavior and regularly sampled measurements. The models are used to calculate past and present deviations from measurements that would result with no faults present, as well as with one or more potential fault modes present. Algorithms that calculate and store these deviations, along with memory of when said faults, if present, would have an effect on said actual measurements, are used to detect when a fault is present. Related algorithms are used to exonerate false fault modes and finally to isolate the true fault mode. This invention is presented with application to detection and isolation of thruster faults for a thruster-controlled spacecraft. As a supporting aspect of the invention, a novel, effective, and efficient filtering method for estimating the derivative of a noisy signal is presented.
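The patent text here does not spell out its derivative filter; a minimal stand-in with the same purpose (an assumption, not the patented method) is to smooth the signal with a moving average and then take central differences:

```python
def moving_average(x, w):
    """Centered moving average with window w (shorter at the ends)."""
    half = w // 2
    out = []
    for i in range(len(x)):
        window = x[max(0, i - half):i + half + 1]
        out.append(sum(window) / len(window))
    return out

def smoothed_derivative(x, dt, w=5):
    """Estimate dx/dt of a noisy signal: smooth first, then take
    central differences (one-sided at the two ends)."""
    s = moving_average(x, w)
    d = [(s[1] - s[0]) / dt]
    for i in range(1, len(s) - 1):
        d.append((s[i + 1] - s[i - 1]) / (2.0 * dt))
    d.append((s[-1] - s[-2]) / dt)
    return d

# On a clean ramp of slope 2, the interior estimates recover the slope:
x = [0.2 * i for i in range(20)]
print(smoothed_derivative(x, 0.1)[10])  # ~2.0
```

Smoothing before differencing trades a little lag for a large reduction in noise amplification, which matters when the derivative feeds a fault-detection threshold.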

  18. Postseismic viscoelastic deformation and stress. Part 2: Stress theory and computation; dependence of displacement, strain, and stress on fault parameters

    NASA Technical Reports Server (NTRS)

    Cohen, S. C.

    1979-01-01

A viscoelastic model for deformation and stress associated with earthquakes is reported. The model consists of a rectangular dislocation (strike-slip fault) in a viscoelastic layer (lithosphere) lying over a viscoelastic half space (asthenosphere). The time dependent surface stresses are analyzed. The model predicts that near the fault a significant fraction of the stress that was reduced during the earthquake is recovered by viscoelastic softening of the lithosphere. By contrast, the strain shows very little change near the fault. The model also predicts that the stress changes associated with asthenospheric flow extend over a broader region than those associated with lithospheric relaxation, even though the peak value is less. The dependence of the displacements, strains, and stresses on fault parameters is also studied. Peak values of strain and stress drop increase with increasing fault height and decrease with fault depth. Under many circumstances postseismic strains and stresses show an increase with decreasing depth to the lithosphere-asthenosphere boundary. Values of the strain and stress at points distant from the fault increase with fault area but are relatively insensitive to fault depth.
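In such layered Maxwell-type models, the time scale of the postseismic relaxation is set by the viscosity and shear modulus of the relaxing layer; schematically (standard Maxwell rheology, not this paper's specific parameterization):

```latex
% Maxwell relaxation time of a layer with viscosity \eta and shear modulus \mu
\tau_M = \frac{\eta}{\mu},
\qquad
% schematic postseismic recovery of near-fault stress
\sigma(t) \approx \sigma_{\mathrm{co}} + \Delta\sigma_{\mathrm{rec}}\,\bigl(1 - e^{-t/\tau_M}\bigr)
```

where \(\sigma_{\mathrm{co}}\) is the immediate post-earthquake stress and \(\Delta\sigma_{\mathrm{rec}}\) the fraction of the coseismic stress drop eventually recovered near the fault.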

  19. Development of the Elastic Rebound Strike-slip (ERS) Fault Model for Teaching Earthquake Science to Non-science Students

    NASA Astrophysics Data System (ADS)

    Glesener, G. B.; Peltzer, G.; Stubailo, I.; Cochran, E. S.; Lawrence, J. F.

    2009-12-01

The Modeling and Educational Demonstrations Laboratory (MEDL) at the University of California, Los Angeles has developed a fourth version of the Elastic Rebound Strike-slip (ERS) Fault Model to be used to educate students and the general public about the process and mechanics of earthquakes from strike-slip faults. The ERS Fault Model is an interactive hands-on teaching tool which produces failure on a predefined fault embedded in an elastic medium, with adjustable normal stress. With the addition of an accelerometer sensor, called the Joy Warrior, the user can experience what it is like for a field geophysicist to collect and observe ground shaking data from an earthquake without having to experience a real earthquake. Two knobs on the ERS Fault Model control the normal and shear stress on the fault. Adjusting the normal stress knob will increase or decrease the friction on the fault. The shear stress knob displaces one side of the elastic medium parallel to the strike of the fault, resulting in changing shear stress on the fault surface. When the shear stress exceeds the threshold defined by the static friction of the fault, an earthquake on the model occurs. The accelerometer sensor then sends the data to a computer where the shaking of the model due to the sudden slip on the fault can be displayed and analyzed by the student. The experiment clearly illustrates the relationship between earthquakes and seismic waves. One of the major benefits of using the ERS Fault Model in undergraduate courses is that it helps to connect non-science students with the work of scientists. When students who are not accustomed to scientific thought are able to experience the scientific process first hand, a connection is made between the scientists and students. Connections like this might inspire a student to become a scientist, or promote the advancement of scientific research through public policy.

  20. Model-Based Diagnostics for Propellant Loading Systems

    NASA Technical Reports Server (NTRS)

    Daigle, Matthew John; Foygel, Michael; Smelyanskiy, Vadim N.

    2011-01-01

    The loading of spacecraft propellants is a complex, risky operation. Therefore, diagnostic solutions are necessary to quickly identify when a fault occurs, so that recovery actions can be taken or an abort procedure can be initiated. Model-based diagnosis solutions, established using an in-depth analysis and understanding of the underlying physical processes, offer the advanced capability to quickly detect and isolate faults, identify their severity, and predict their effects on system performance. We develop a physics-based model of a cryogenic propellant loading system, which describes the complex dynamics of liquid hydrogen filling from a storage tank to an external vehicle tank, as well as the influence of different faults on this process. The model takes into account the main physical processes such as highly nonequilibrium condensation and evaporation of the hydrogen vapor, pressurization, and also the dynamics of liquid hydrogen and vapor flows inside the system in the presence of helium gas. Since the model incorporates multiple faults in the system, it provides a suitable framework for model-based diagnostics and prognostics algorithms. Using this model, we analyze the effects of faults on the system, derive symbolic fault signatures for the purposes of fault isolation, and perform fault identification using a particle filter approach. We demonstrate the detection, isolation, and identification of a number of faults using simulation-based experiments.
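Symbolic fault signatures reduce isolation to a lookup over the signs of residual deviations; a minimal sketch (the residual names and signature entries below are hypothetical, not taken from the cryogenic loading model):

```python
# Hypothetical symbolic fault signatures: for each fault mode, the expected
# qualitative deviation of each residual (+1 high, -1 low, 0 nominal).
SIGNATURES = {
    "valve_stuck_closed":   {"flow": -1, "tank_pressure": +1, "line_temp": 0},
    "transfer_line_leak":   {"flow": +1, "tank_pressure": -1, "line_temp": 0},
    "pressure_sensor_bias": {"flow":  0, "tank_pressure": +1, "line_temp": 0},
}

def isolate(observed):
    """Return the fault modes whose signature matches the observed
    pattern of residual deviation signs."""
    return [fault for fault, sig in SIGNATURES.items()
            if all(sig[r] == observed.get(r, 0) for r in sig)]

print(isolate({"flow": -1, "tank_pressure": +1, "line_temp": 0}))
# ['valve_stuck_closed']
```

Once the candidate set is narrowed this way, a quantitative identification step (here, the particle filter) estimates the fault's severity.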

  1. Intelligent classifier for dynamic fault patterns based on hidden Markov model

    NASA Astrophysics Data System (ADS)

    Xu, Bo; Feng, Yuguang; Yu, Jinsong

    2006-11-01

It's difficult to build precise mathematical models for complex engineering systems because of the complexity of their structure and dynamic characteristics. Intelligent fault diagnosis introduces artificial intelligence and works without building an analytical mathematical model of the diagnostic object, so it's a practical approach to the diagnostic problems of complex systems. This paper presents an intelligent fault diagnosis method: an integrated fault-pattern classifier based on the Hidden Markov Model (HMM). The classifier consists of a dynamic time warping (DTW) algorithm, a self-organizing feature mapping (SOFM) network, and a Hidden Markov Model. First, the dynamic observation vector in measuring space is processed by DTW to obtain an error vector containing the fault features of the system under test. Then a SOFM network is used as a feature extractor and vector quantization processor. Finally, fault diagnosis is realized by classifying fault patterns with the HMM classifier. The introduction of dynamic time warping solves the problem of extracting features from dynamic process vectors of complex systems such as aeroengines, and makes it possible to diagnose complex systems using dynamic process information. Simulation experiments show that the diagnosis model is easy to extend, and that the fault-pattern classifier is efficient and convenient for detecting and diagnosing new faults.
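The first stage of such a pipeline can be sketched with the classic dynamic-programming form of DTW (a generic implementation, not the authors' exact variant):

```python
import math

def dtw_distance(a, b):
    """Classic dynamic-time-warping distance: the minimum cumulative
    point-wise cost over all monotone alignments of the two sequences,
    so time-stretched copies of a pattern still match closely."""
    n, m = len(a), len(b)
    D = [[math.inf] * (m + 1) for _ in range(n + 1)]
    D[0][0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = abs(a[i - 1] - b[j - 1])
            D[i][j] = cost + min(D[i - 1][j],      # deletion
                                 D[i][j - 1],      # insertion
                                 D[i - 1][j - 1])  # match
    return D[n][m]

# A time-warped copy of a pattern matches despite differing lengths:
print(dtw_distance([1, 2, 3], [1, 2, 2, 3]))  # 0.0
```

This alignment step is what lets transient process signatures of varying duration, such as engine start-up traces, be compared against fixed fault templates before the SOFM/HMM stages.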

  2. The SCEC 3D Community Fault Model (CFM-v5): An updated and expanded fault set of oblique crustal deformation and complex fault interaction for southern California

    NASA Astrophysics Data System (ADS)

    Nicholson, C.; Plesch, A.; Sorlien, C. C.; Shaw, J. H.; Hauksson, E.

    2014-12-01

    Southern California represents an ideal natural laboratory to investigate oblique deformation in 3D owing to its comprehensive datasets, complex tectonic history, evolving components of oblique slip, and continued crustal rotations about horizontal and vertical axes. As the SCEC Community Fault Model (CFM) aims to accurately reflect this 3D deformation, we present the results of an extensive update to the model by using primarily detailed fault trace, seismic reflection, relocated hypocenter and focal mechanism nodal plane data to generate improved, more realistic digital 3D fault surfaces. The results document a wide variety of oblique strain accommodation, including various aspects of strain partitioning and fault-related folding, sets of both high-angle and low-angle faults that mutually interact, significant non-planar, multi-stranded faults with variable dip along strike and with depth, and active mid-crustal detachments. In places, closely-spaced fault strands or fault systems can remain surprisingly subparallel to seismogenic depths, while in other areas, major strike-slip to oblique-slip faults can merge, such as the S-dipping Arroyo Parida-Mission Ridge and Santa Ynez faults with the N-dipping North Channel-Pitas Point-Red Mountain fault system, or diverge with depth. Examples of the latter include the steep-to-west-dipping Laguna Salada-Indiviso faults with the steep-to-east-dipping Sierra Cucapah faults, and the steep southern San Andreas fault with the adjacent NE-dipping Mecca Hills-Hidden Springs fault system. In addition, overprinting by steep predominantly strike-slip faulting can segment which parts of intersecting inherited low-angle faults are reactivated, or result in mutual cross-cutting relationships. 
The updated CFM 3D fault surfaces thus help characterize a more complex pattern of fault interactions at depth between various fault sets and linked fault systems, and a more complex fault geometry than typically inferred or expected from projecting near-surface data down-dip, or modeled from surface strain and potential field data alone.

  3. Three-dimensional analysis of a faulted CO2 reservoir using an Eshelby-Mori-Tanaka approach to rock elastic properties and fault permeability

    DOE PAGES

    Nguyen, Ba Nghiep; Hou, Zhangshuan; Last, George V.; ...

    2016-09-29

    This work develops a three-dimensional multiscale model to analyze a complex faulted CO2 reservoir that includes some key geological features of the San Andreas and nearby faults southwest of the Kimberlina site. The model uses the STOMP-CO2 code for flow modeling, coupled to the ABAQUS® finite element package for geomechanical analysis. A 3D ABAQUS® finite element model is developed that contains a large number of 3D solid elements with two nearly parallel faults whose damage zones and cores are discretized using the same continuum elements. Five zones with different mineral compositions are considered: shale, sandstone, fault-damaged sandstone, fault-damaged shale, and fault core. The rocks' elastic properties that govern their poroelastic behavior are modeled by an Eshelby-Mori-Tanaka approach (EMTA), which can account for up to 15 mineral phases. The permeability of fault damage zones, affected by crack density and orientation, is also predicted by an EMTA formulation. A STOMP-CO2 grid that exactly maps the ABAQUS® finite element model is built for coupled hydro-mechanical analyses. Simulations of the reservoir assuming three different crack patterns (crack volume fraction and orientation) for the fault damage zones are performed to predict the potential leakage of CO2 due to cracks that enhance the permeability of the fault damage zones. The results illustrate the important effect of crack orientation on fault permeability, which can lead to substantial leakage along the fault as the CO2 plume expands. Potential hydraulic fracturing and the tendency for the faults to slip are also examined and discussed in terms of stress distributions and geomechanical properties.
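The two-phase, spherical-inclusion special case of Mori-Tanaka homogenization, the family to which EMTA-type estimates belong, can be sketched directly; the moduli values below are illustrative stand-ins, not the paper's mineral data:

```python
def mori_tanaka_spherical(Km, Gm, Ki, Gi, f):
    """Mori-Tanaka effective bulk/shear moduli for a matrix (Km, Gm) with a
    volume fraction f of spherical inclusions (Ki, Gi)."""
    alpha = 3 * Km / (3 * Km + 4 * Gm)                  # Eshelby bulk factor
    beta = 6 * (Km + 2 * Gm) / (5 * (3 * Km + 4 * Gm))  # Eshelby shear factor
    K = Km + f * (Ki - Km) / (1 + (1 - f) * alpha * (Ki - Km) / Km)
    G = Gm + f * (Gi - Gm) / (1 + (1 - f) * beta * (Gi - Gm) / Gm)
    return K, G

# Illustrative values (GPa): a sandstone-like matrix with 20% stiffer mineral.
K, G = mori_tanaka_spherical(Km=20.0, Gm=15.0, Ki=37.0, Gi=44.0, f=0.2)
```

The effective moduli always fall between the matrix and inclusion values; multi-phase EMTA generalizes the same scheme to many mineral phases and non-spherical (e.g. crack-like) inclusion geometries.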

  4. Three-dimensional analysis of a faulted CO2 reservoir using an Eshelby-Mori-Tanaka approach to rock elastic properties and fault permeability

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Nguyen, Ba Nghiep; Hou, Zhangshuan; Last, George V.

    This work develops a three-dimensional multiscale model to analyze a complex faulted CO2 reservoir that includes some key geological features of the San Andreas and nearby faults southwest of the Kimberlina site. The model uses the STOMP-CO2 code for flow modeling, coupled to the ABAQUS® finite element package for geomechanical analysis. A 3D ABAQUS® finite element model is developed that contains a large number of 3D solid elements with two nearly parallel faults whose damage zones and cores are discretized using the same continuum elements. Five zones with different mineral compositions are considered: shale, sandstone, fault-damaged sandstone, fault-damaged shale, and fault core. The rocks' elastic properties that govern their poroelastic behavior are modeled by an Eshelby-Mori-Tanaka approach (EMTA), which can account for up to 15 mineral phases. The permeability of fault damage zones, affected by crack density and orientation, is also predicted by an EMTA formulation. A STOMP-CO2 grid that exactly maps the ABAQUS® finite element model is built for coupled hydro-mechanical analyses. Simulations of the reservoir assuming three different crack patterns (crack volume fraction and orientation) for the fault damage zones are performed to predict the potential leakage of CO2 due to cracks that enhance the permeability of the fault damage zones. The results illustrate the important effect of crack orientation on fault permeability, which can lead to substantial leakage along the fault as the CO2 plume expands. Potential hydraulic fracturing and the tendency for the faults to slip are also examined and discussed in terms of stress distributions and geomechanical properties.

  5. Modeling, Detection, and Disambiguation of Sensor Faults for Aerospace Applications

    NASA Technical Reports Server (NTRS)

    Balaban, Edward; Saxena, Abhinav; Bansal, Prasun; Goebel, Kai F.; Curran, Simon

    2009-01-01

    Sensor faults continue to be a major hurdle for systems health management to reach its full potential. At the same time, few recorded instances of sensor faults exist. It is equally difficult to seed particular sensor faults. Therefore, research is underway to better understand the different fault modes seen in sensors and to model the faults. The fault models can then be used in simulated sensor fault scenarios to ensure that algorithms can distinguish between sensor faults and system faults. The paper illustrates the work with data collected from an electro-mechanical actuator in an aerospace setting, equipped with temperature, vibration, current, and position sensors. The most common sensor faults, such as bias, drift, scaling, and dropout were simulated and injected into the experimental data, with the goal of making these simulations as realistic as feasible. A neural network based classifier was then created and tested on both experimental data and the more challenging randomized data sequences. Additional studies were also conducted to determine sensitivity of detection and disambiguation efficacy to severity of fault conditions.
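The four fault modes the abstract names (bias, drift, scaling, dropout) can be injected into a clean record with a few lines of code. The signal, onset index, and magnitudes below are illustrative assumptions, not the actuator data from the paper:

```python
import numpy as np

rng = np.random.default_rng(1)
t = np.arange(0.0, 10.0, 0.01)
clean = 25.0 + 0.5 * np.sin(0.8 * t) + rng.normal(0.0, 0.02, t.size)

# Hypothetical injectors for the four sensor fault modes; all magnitudes
# are illustrative defaults.
def inject_bias(x, onset, offset=2.0):
    y = x.copy(); y[onset:] += offset; return y

def inject_drift(x, onset, rate=0.01):
    y = x.copy(); y[onset:] += rate * np.arange(x.size - onset); return y

def inject_scaling(x, onset, gain=1.5):
    y = x.copy(); y[onset:] *= gain; return y

def inject_dropout(x, onset, stuck=0.0):
    y = x.copy(); y[onset:] = stuck; return y

faulty = inject_drift(clean, onset=500)
print(round(float(faulty[-1] - clean[-1]), 2))  # accumulated drift: 4.99
```

Injecting simulated faults into real nominal data, as above, is what lets a classifier be trained and tested on fault signatures that are too rare (or too risky) to collect experimentally.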

  6. Analytic Study of Three-Dimensional Rupture Propagation in Strike-Slip Faulting with Analogue Models

    NASA Astrophysics Data System (ADS)

    Chan, Pei-Chen; Chu, Sheng-Shin; Lin, Ming-Lang

    2014-05-01

    Strike-slip faults are high-angle (nearly vertical) fractures along which the blocks have moved nearly horizontally, parallel to strike. Overburden soil profiles across the main traces of strike-slip faults reveal the characteristic palm-tree and tulip structures. McCalpin (2005) traced rupture propagation on the overburden soil surface. In this study, we used sandbox model profiles at different slip offsets to study the evolution of three-dimensional rupture propagation during strike-slip faulting. In a strike-slip fault model, the type of rupture propagation and the width of the shear zone (W) are controlled primarily by the depth of the overburden layer (H) and the fault slip distance (Sy). Little research has traced three-dimensional rupture behavior and propagation. This simplified sandbox model therefore investigates rupture propagation and the shear zone in profiles across the main fault as the deformation responds to overburden depth and fault slip distance. The quantities measured in the model included the width of the shear zone, the rupture length (L), the rupture angle (θ), and the rupture spacing. The surface results follow the literature: the failure envelope evolves in the sequence R-faults, P-faults, and Y-faults, which are parallel to the basement fault. Comparing the surface and profile structures, which are curved surfaces that cross each other, defines the 3-D rupture and the width of the shear zone. We found that increasing fault slip widens the shear zone, and we propose a W/H versus Sy/H relationship. The deformation of the shear zone showed a trend similar to the literature: W increases with fault slip, but the trend reverses after W peaks (at a value smaller than 1.5) when Sy/H reaches 1. The results show that W is limited to a constant value in 3-D strike-slip faulting models.
In conclusion, this study helps evaluate the extent of the regions influenced by the shear zones of strike-slip faults.

  7. Modeling Crustal Deformation Due to the Landers, Hector Mine Earthquakes Using the SCEC Community Fault Model

    NASA Astrophysics Data System (ADS)

    Gable, C. W.; Fialko, Y.; Hager, B. H.; Plesch, A.; Williams, C. A.

    2006-12-01

    More realistic models of crustal deformation are possible due to advances in measurement and modeling capabilities. This study integrates various data to constrain a finite element model of stress and strain in the vicinity of the 1992 Landers earthquake and the 1999 Hector Mine earthquake. The geometry of the model is designed to incorporate the Southern California Earthquake Center (SCEC) Community Fault Model (CFM) to define fault geometry. The Hector Mine fault is represented by a single surface that follows the trace of the fault, is vertical, and has variable depth. The fault associated with the Landers earthquake is a set of seven surfaces that capture the geometry of the splays and echelon offsets of the fault. A three-dimensional finite element mesh of tetrahedral elements is built that closely maintains the geometry of these fault surfaces. The spatially variable coseismic slip on faults is prescribed based on an inversion of geodetic (Synthetic Aperture Radar and Global Positioning System) data. Time integration of stress and strain is modeled with the finite element code Pylith. As a first step, the methodology of incorporating all these data is described. Results of the time history of the stress and strain transfer between 1992 and 1999 are analyzed, as well as the time history of deformation from 1999 to the present.

  8. Automated forward mechanical modeling of wrinkle ridges on Mars

    NASA Astrophysics Data System (ADS)

    Nahm, Amanda; Peterson, Samuel

    2016-04-01

    One of the main goals of the InSight mission to Mars is to understand the internal structure of Mars [1], in part through passive seismology. Understanding the shallow surface structure of the landing site is critical to the robust interpretation of recorded seismic signals. Faults, such as the wrinkle ridges abundant in the proposed landing site in Elysium Planitia, can be used to determine the subsurface structure of the regions they deform. Here, we test a new automated method for modeling the topography of a wrinkle ridge (WR) in Elysium Planitia, allowing faster and more robust determination of subsurface fault geometry for interpretation of the local subsurface structure. We perform forward mechanical modeling of fault-related topography [e.g., 2, 3], utilizing the modeling program Coulomb [4, 5] to model surface displacements induced by blind thrust faulting. Fault lengths are difficult to determine for WR; we initially assume a fault length of 30 km, but also test the effects of different fault lengths on model results. At present, we model the wrinkle ridge as a single blind thrust fault with a constant fault dip, though WR are likely to have more complicated fault geometry [e.g., 6-8]. Typically, the modeling is performed using the Coulomb GUI. This approach can be time consuming, requiring user inputs to change model parameters and to calculate the associated displacements for each model, which limits the number of models and the parameter space that can be tested. To reduce active user computation time, we have developed a method in which the Coulomb GUI is bypassed. The general modeling procedure remains unchanged, and a set of input files is generated before modeling with ranges of pre-defined parameter values. The displacement calculations are divided into two suites.
For Suite 1, a total of 3770 input files were generated in which the fault displacement (D), dip angle (δ), depth to upper fault tip (t), and depth to lower fault tip (B) were varied. A second set of input files was created (Suite 2) after the best-fit model from Suite 1 was determined, in which fault parameters were varied with a smaller range and incremental changes, resulting in a total of 28,080 input files. RMS values were calculated for each Coulomb model. RMS values for Suite 1 models were calculated over the entire profile and for a restricted x range; the latter shows a reduced RMS misfit by 1.2 m. The minimum RMS value for Suite 2 models decreases again by 0.2 m, resulting in an overall reduction of the RMS value of ~1.4 m (18%). Models with different fault lengths (15, 30, and 60 km) are visually indistinguishable. Values for δ, t, B, and RMS misfit are either the same or very similar for each best fit model. These results indicate that the subsurface structure can be reliably determined from forward mechanical modeling even with uncertainty in fault length. Future work will test this method with the more realistic WR fault geometry. References: [1] Banerdt et al. (2013), 44th LPSC, #1915. [2] Cohen (1999), Adv. Geophys., 41, 133-231. [3] Schultz and Lin (2001), JGR, 106, 16549-16566. [4] Lin and Stein (2004), JGR, 109, B02303, doi:10.1029/2003JB002607. [5] Toda et al. (2005), JGR, 103, 24543-24565. [6] Okubo and Schultz (2004), GSAB, 116, 597-605. [7] Watters (2004), Icarus, 171, 284-294. [8] Schultz (2000), JGR, 105, 12035-12052.
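The batch grid-search-plus-RMS workflow described above can be sketched in a few lines. The forward model here is a toy stand-in for the Coulomb runs (a smooth bump whose amplitude follows displacement D and whose width follows the upper-tip depth t); the functional form and grids are illustrative assumptions:

```python
import numpy as np
from itertools import product

rng = np.random.default_rng(2)
x = np.linspace(-20e3, 20e3, 200)   # profile coordinate (m)

# Toy forward model standing in for the Coulomb displacement calculation.
def forward(D, t_up):
    return D * np.exp(-0.5 * (x / (3.0 * t_up)) ** 2)

# Synthetic "observed" topography: known parameters plus noise.
observed = forward(2.0, 1500.0) + rng.normal(0.0, 0.05, x.size)

# Sweep the (D, t) grid and keep the minimum-RMS model, as in the suites of
# pre-generated input files described above.
rms, D_best, t_best = min(
    (np.sqrt(np.mean((observed - forward(D, t_up)) ** 2)), D, t_up)
    for D, t_up in product(np.linspace(0.5, 4.0, 15),
                           np.linspace(500.0, 3000.0, 11))
)
print(D_best, t_best)  # should recover values near (2.0, 1500.0)
```

A coarse sweep followed by a refined sweep around the best-fit model (Suite 1 then Suite 2 in the abstract) is the standard way to keep the total number of forward runs tractable.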

  9. Fault zone hydrogeology

    NASA Astrophysics Data System (ADS)

    Bense, V. F.; Gleeson, T.; Loveless, S. E.; Bour, O.; Scibek, J.

    2013-12-01

    Deformation along faults in the shallow crust (< 1 km) introduces permeability heterogeneity and anisotropy, which has an important impact on processes such as regional groundwater flow, hydrocarbon migration, and hydrothermal fluid circulation. Fault zones have the capacity to be hydraulic conduits connecting shallow and deep geological environments, but simultaneously the cores of many faults form effective barriers to flow. The direct evaluation of the impact of faults on fluid flow patterns remains a challenge and requires a multidisciplinary research effort from structural geologists and hydrogeologists. However, we find that these disciplines often use different methods with little interaction between them. In this review, we document the current multi-disciplinary understanding of fault zone hydrogeology. We discuss surface and subsurface observations from diverse rock types, from unlithified and lithified clastic sediments through to carbonate, crystalline, and volcanic rocks. For each rock type, we evaluate geological deformation mechanisms, hydrogeologic observations, and conceptual models of fault zone hydrogeology. Outcrop observations indicate that fault zones commonly have a permeability structure suggesting they should act as complex conduit-barrier systems in which along-fault flow is encouraged and across-fault flow is impeded. Hydrogeological observations of fault zones reported in the literature show a broad qualitative agreement with outcrop-based conceptual models of fault zone hydrogeology. Nevertheless, the specific impact of a particular fault permeability structure on fault zone hydrogeology can only be assessed when the hydrogeological context of the fault zone is considered, not from outcrop observations alone.
To gain a more integrated, comprehensive understanding of fault zone hydrogeology, we foresee numerous synergistic opportunities and challenges for the discipline of structural geology and hydrogeology to co-evolve and address remaining challenges by co-locating study areas, sharing approaches and fusing data, developing conceptual models from hydrogeologic data, numerical modeling, and training interdisciplinary scientists.

  10. Finite grid instability and spectral fidelity of the electrostatic Particle-In-Cell algorithm

    DOE PAGES

    Huang, C. -K.; Zeng, Y.; Wang, Y.; ...

    2016-10-01

    The origin of the Finite Grid Instability (FGI) is studied by resolving the dynamics in the 1D electrostatic Particle-In-Cell (PIC) model in the spectral domain at the single particle level and at the collective motion level. The spectral fidelity of the PIC model is contrasted with the underlying physical system or the gridless model. The systematic spectral phase and amplitude errors from the charge deposition and field interpolation are quantified for common particle shapes used in the PIC models. Lastly, it is shown through such analysis and in simulations that the lack of spectral fidelity relative to the physical system due to the existence of aliased spatial modes is the major cause of the FGI in the PIC model.
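The aliased spatial modes at the heart of the FGI can be demonstrated directly: a mode above the grid Nyquist wavenumber is indistinguishable, on the grid, from a lower-wavenumber alias. The grid size and mode numbers below are illustrative:

```python
import numpy as np

Ng = 32                          # grid points in a periodic box of length 1
xg = np.arange(Ng) / Ng          # grid coordinates

# A spatial mode with wavenumber index m > Ng/2 cannot be represented on the
# grid: its samples coincide with those of the alias at m - Ng.
m_true = 20                      # above the grid Nyquist (Ng/2 = 16)
m_alias = m_true - Ng            # -12: the mode the grid actually "sees"
high = np.cos(2 * np.pi * m_true * xg)
alias = np.cos(2 * np.pi * m_alias * xg)
print(np.allclose(high, alias))  # True: the grid cannot tell them apart
```

In a PIC code the particles carry arbitrarily fine spatial structure, so charge deposition onto the grid folds these unresolved modes back into the resolved band, which is the coupling mechanism the abstract identifies as driving the instability.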

  11. Combining Static Analysis and Model Checking for Software Analysis

    NASA Technical Reports Server (NTRS)

    Brat, Guillaume; Visser, Willem; Clancy, Daniel (Technical Monitor)

    2003-01-01

    We present an iterative technique in which model checking and static analysis are combined to verify large software systems. The role of the static analysis is to compute partial-order information, which the model checker uses to reduce the state space. During exploration, the model checker also computes aliasing information that it gives to the static analyzer, which can then refine its analysis. The result of this refined analysis is then fed back to the model checker, which updates its partial-order reduction. At each step of this iterative process, the static analysis computes optimistic information, which results in an unsafe reduction of the state space. However, we show that the process converges to a fixed point, at which time the partial-order information is safe and the whole state space is explored.
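The iterate-to-fixed-point pattern the abstract describes can be sketched as two analyses exchanging facts until neither adds anything new. The fact sets and transfer functions below are stand-ins for illustration, not the paper's actual analyses:

```python
# Stand-in "static analysis": an extra alias fact lets it prove one more
# independence pair usable for partial-order reduction.
def static_analysis(alias_facts):
    pairs = {("a", "b")}
    if ("p", "q") in alias_facts:
        pairs.add(("b", "c"))
    return pairs

# Stand-in "model checker": exploring with a larger independence relation
# exposes one more alias fact.
def model_check(independence):
    return {("p", "q")} if ("a", "b") in independence else set()

indep, alias = set(), set()
while True:
    new_indep = static_analysis(alias)
    new_alias = model_check(new_indep)
    if new_indep == indep and new_alias == alias:
        break                    # fixed point: the exchanged facts are stable
    indep, alias = new_indep, new_alias

print(sorted(indep), sorted(alias))
```

Because both transfer functions are monotone over finite fact sets, the loop must terminate, mirroring the convergence argument in the abstract.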

  12. Finite grid instability and spectral fidelity of the electrostatic Particle-In-Cell algorithm

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Huang, C. -K.; Zeng, Y.; Wang, Y.

    The origin of the Finite Grid Instability (FGI) is studied by resolving the dynamics in the 1D electrostatic Particle-In-Cell (PIC) model in the spectral domain at the single particle level and at the collective motion level. The spectral fidelity of the PIC model is contrasted with the underlying physical system or the gridless model. The systematic spectral phase and amplitude errors from the charge deposition and field interpolation are quantified for common particle shapes used in the PIC models. Lastly, it is shown through such analysis and in simulations that the lack of spectral fidelity relative to the physical system due to the existence of aliased spatial modes is the major cause of the FGI in the PIC model.

  13. Surface morphology of active normal faults in hard rock: Implications for the mechanics of the Asal Rift, Djibouti

    NASA Astrophysics Data System (ADS)

    Pinzuti, Paul; Mignan, Arnaud; King, Geoffrey C. P.

    2010-10-01

    Tectonic-stretching models have been previously proposed to explain the process of continental break-up through the example of the Asal Rift, Djibouti, one of the few places where the early stages of seafloor spreading can be observed. In these models, deformation is distributed starting at the base of a shallow seismogenic zone, in which sub-vertical normal faults are responsible for subsidence whereas cracks accommodate extension. Alternative models suggest that extension results from localised magma intrusion, with normal faults accommodating extension and subsidence only above the maximum reach of the magma column. In these magmatic rifting models, or so-called magmatic intrusion models, normal faults have dips of 45-55° and root into dikes. Vertical profiles of normal fault scarps from a levelling campaign in the Asal Rift, where normal faults appear sub-vertical at the surface, have been analysed to discuss the creation and evolution of normal faults in massive fractured rocks (basalt lava flows), using mechanical and kinematic concepts. We show that the studied normal fault planes actually have an average dip ranging between 45° and 65° and are characterised by an irregular stepped form. We suggest that these normal fault scarps correspond to sub-vertical en echelon structures and that, at greater depth, these scarps combine and give birth to dipping normal faults. The results of our analysis are compatible with the magmatic intrusion models rather than the tectonic-stretching models. The geometry of faulting between the Fieale volcano and Lake Asal in the Asal Rift can be simply related to the depth of diking, which in turn can be related to magma supply. This new view supports the magmatic intrusion model of the early stages of continental break-up.

  14. Growth trishear model and its application to the Gilbertown graben system, southwest Alabama

    USGS Publications Warehouse

    Jin, G.; Groshong, R.H.; Pashin, J.C.

    2009-01-01

    Fault-propagation folding associated with an upward-propagating fault in the Gilbertown graben system is revealed by well-based 3-D subsurface mapping and dipmeter analysis. The fold is developed in the Selma chalk, which is an oil reservoir along the southern margin of the graben. Area-depth-strain analysis suggests that the Cretaceous strata were growth units, the Jurassic strata were pregrowth units, and the graben system is detached in the Louann Salt. The growth trishear model has been applied in this paper to study the evolution and kinematics of extensional fault-propagation folding. Models indicate that the propagation-to-slip (p/s) ratio of the underlying fault plays an important role in governing the geometry of the resulting extensional fault-propagation fold. With a greater p/s ratio, the fold is more localized in the vicinity of the propagating fault. The extensional fault-propagation fold in the Gilbertown graben is modeled by both a compactional and a non-compactional growth trishear model. Both models predict a similar geometry of the extensional fault-propagation fold. The trishear model with compaction best predicts the fold geometry. © 2008 Elsevier Ltd. All rights reserved.

  15. On the Retrieval of Geocenter Motion from Gravity Data

    NASA Astrophysics Data System (ADS)

    Rosat, S.; Mémin, A.; Boy, J. P.; Rogister, Y. J. G.

    2017-12-01

    The center of mass of the whole Earth, the so-called geocenter, is moving with respect to the center of mass of the solid Earth because of the loading exerted by the Earth's fluid layers on the solid crust. Space geodetic techniques tying satellites and ground stations (e.g. GNSS, SLR and DORIS) have been widely employed to estimate the geocenter motion. Harmonic degree-1 variations of the gravity field are associated with the geocenter displacement. We show that ground records of time-varying gravity from Superconducting Gravimeters (SGs) can be used to constrain the geocenter motion. Two major difficulties have to be tackled: (1) the sensitivity of surface gravimetric measurements to local mass changes, in particular hydrological and atmospheric variability; (2) the spatial aliasing (spectral leakage) of spherical harmonic degrees higher than 1 induced by the under-sampling of the station distribution. The largest gravity variations can be removed from the SG data by subtracting solid and oceanic tides as well as atmospheric and hydrologic effects using global models. However, some hydrological signal may still remain. Since surface water content is well modelled using GRACE observations, we investigate how the spatial aliasing in SG data can be reduced by employing GRACE solutions when retrieving geocenter motion. We show synthetic simulations using complete surface loading models together with GRACE solutions computed at SG stations. In order to retrieve the degree-one gravity variations that are associated with the geocenter motion, we use a multi-station stacking method that performs better than a classical spherical harmonic stacking when the station distribution is inhomogeneous. We also test the influence of the network configuration on the estimate of the geocenter motion. An inversion using SG and GRACE observations is finally presented and the results are compared with previous geocenter estimates.

  16. Temporal evolution of fault systems in the Upper Jurassic of the Central German Molasse Basin: case study Unterhaching

    NASA Astrophysics Data System (ADS)

    Budach, Ingmar; Moeck, Inga; Lüschen, Ewald; Wolfgramm, Markus

    2018-03-01

    The structural evolution of faults in foreland basins is linked to a complex basin history ranging from extension to contraction and inversion tectonics. Faults in the Upper Jurassic of the German Molasse Basin, a Cenozoic Alpine foreland basin, play a significant role in geothermal exploration and are therefore imaged, interpreted, and studied using 3D seismic reflection data. Beyond this applied aspect, the analysis of these seismic data helps to better understand the temporal evolution of faults and the respective stress fields. In 2009, a 27 km² 3D seismic reflection survey was conducted around the Unterhaching Gt 2 well, south of Munich. The main focus of this study is an in-depth analysis of a prominent v-shaped fault block structure located at the center of the 3D seismic survey. Two methods were used to study the periodic fault activity and the relative ages of the detected faults: (1) horizon flattening and (2) analysis of incremental fault throws. Slip and dilation tendency analyses were conducted afterwards to determine the stresses resolved on the faults in the current stress field. Two possible kinematic models explain the structural evolution: one assumes a left-lateral strike-slip fault in a transpressional regime resulting in a positive flower structure; the other incorporates crossing conjugate normal faults within a transtensional regime. The interpreted successive fault formation favors the latter model. The episodic fault activity may enhance fault zone permeability and hence reservoir productivity, implying that the analysis of periodically active faults represents an important part of successfully targeting geothermal wells.

  17. An Analytical Model for Assessing Stability of Pre-Existing Faults in Caprock Caused by Fluid Injection and Extraction in a Reservoir

    NASA Astrophysics Data System (ADS)

    Wang, Lei; Bai, Bing; Li, Xiaochun; Liu, Mingze; Wu, Haiqing; Hu, Shaobin

    2016-07-01

    Induced seismicity and fault reactivation associated with fluid injection and depletion have been reported in hydrocarbon, geothermal, and waste fluid injection fields worldwide. Here, we establish an analytical model to assess fault reactivation surrounding a reservoir during fluid injection and extraction that considers the stress concentrations at the fault tips and the effects of fault length. In this model, induced stress analysis in a full-space under the plane strain condition is implemented based on Eshelby's theory of inclusions in terms of a homogeneous, isotropic, and poroelastic medium. The stress intensity factor concept in linear elastic fracture mechanics is adopted as an instability criterion for pre-existing faults in surrounding rocks. To characterize the fault reactivation caused by fluid injection and extraction, we define a new index, the "fault reactivation factor" η, which can be interpreted as an index of fault stability in response to fluid pressure changes per unit within a reservoir resulting from injection or extraction. The critical fluid pressure change within a reservoir is also determined by the superposition principle using the in situ stress surrounding a fault. Our parameter sensitivity analyses show that the fault reactivation tendency is strongly sensitive to fault location, fault length, fault dip angle, and the Poisson's ratio of the surrounding rock. Our case study demonstrates that the proposed model addresses the mechanical behavior of the whole fault, unlike conventional methodologies. The proposed method can be applied to engineering cases involving injection and depletion within a reservoir owing to its efficient computational implementation.

  18. A fault‐based model for crustal deformation in the western United States based on a combined inversion of GPS and geologic inputs

    USGS Publications Warehouse

    Zeng, Yuehua; Shen, Zheng-Kang

    2017-01-01

    We develop a crustal deformation model to determine fault‐slip rates for the western United States (WUS) using the Zeng and Shen (2014) method that is based on a combined inversion of Global Positioning System (GPS) velocities and geological slip‐rate constraints. The model consists of six blocks with boundaries aligned along major faults in California and the Cascadia subduction zone, which are represented as buried dislocations in the Earth. Faults distributed within blocks have their geometrical structure and locking depths specified by the Uniform California Earthquake Rupture Forecast, version 3 (UCERF3) and the 2008 U.S. Geological Survey National Seismic Hazard Map Project model. Faults slip beneath a predefined locking depth, except for a few segments where shallow creep is allowed. The slip rates are estimated using a least‐squares inversion. The model resolution analysis shows that the resulting model is influenced heavily by geologic input, which fits the UCERF3 geologic bounds on California B faults and ±one‐half of the geologic slip rates for most other WUS faults. The modeled slip rates for the WUS faults are consistent with the observed GPS velocity field. Our fit to these velocities is measured in terms of a normalized chi‐square, which is 6.5. This updated model fits the data better than most other geodetic‐based inversion models. Major discrepancies between well‐resolved GPS inversion rates and geologic‐consensus rates occur along some of the northern California A faults, the Mojave to San Bernardino segments of the San Andreas fault, the western Garlock fault, the southern segment of the Wasatch fault, and other faults. Off‐fault strain‐rate distributions are consistent with regional tectonics, with a total off‐fault moment rate of 7.2×10^18 N·m/year for California and 8.5×10^18 N·m/year for the WUS outside California, respectively.
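The core of a combined geodetic-plus-geologic least-squares inversion can be sketched by appending the geologic rate estimates as weighted extra rows of the system. Everything below (design matrix, noise, rates, weight) is a synthetic stand-in, not the Zeng and Shen (2014) inputs:

```python
import numpy as np

rng = np.random.default_rng(3)

# Toy combined inversion: solve G m ~ d for fault slip rates m, with geologic
# rate estimates appended as weighted rows so the solution honors both the
# GPS-like data and the geologic constraints.
n_obs, n_faults = 40, 3
true_rates = np.array([25.0, 5.0, 2.0])            # mm/yr
G = rng.normal(0.0, 1.0, (n_obs, n_faults))        # velocity design matrix
d = G @ true_rates + rng.normal(0.0, 0.5, n_obs)   # noisy GPS-like data

geologic = np.array([24.0, 6.0, 2.5])              # geologic slip-rate priors
w = 2.0                                            # weight on geologic rows
A = np.vstack([G, w * np.eye(n_faults)])
b = np.concatenate([d, w * geologic])

m, *_ = np.linalg.lstsq(A, b, rcond=None)
print(np.round(m, 1))  # should land near true_rates, pulled toward geologic
```

Raising `w` shifts the solution toward the geologic rates; lowering it lets the geodetic data dominate, which is the trade-off the abstract's resolution analysis quantifies.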

  19. Spectral element modelling of fault-plane reflections arising from fluid pressure distributions

    USGS Publications Warehouse

    Haney, M.; Snieder, R.; Ampuero, J.-P.; Hofmann, R.

    2007-01-01

The presence of fault-plane reflections in seismic images, besides indicating the locations of faults, offers a possible source of information on the properties of these poorly understood zones. To better understand the physical mechanism giving rise to fault-plane reflections in compacting sedimentary basins, we numerically model the full elastic wavefield via the spectral element method (SEM) for several different fault models. Using well log data from the South Eugene Island field, offshore Louisiana, we derive empirical relationships between the elastic parameters (e.g. P-wave velocity and density) and the effective stress along both normal compaction and unloading paths. These empirical relationships guide the numerical modelling and allow the investigation of how differences in fluid pressure modify the elastic wavefield. We choose to simulate the elastic wave equation via SEM since irregular model geometries can be accommodated and slip boundary conditions at an interface, such as a fault or fracture, are implemented naturally. The method we employ for including a slip interface retains the desirable qualities of SEM in that it is explicit in time and, therefore, does not require the inversion of a large matrix. We perform a complete numerical study by forward modelling seismic shot gathers over a faulted earth model using SEM followed by seismic processing of the simulated data. With this procedure, we construct post-stack time-migrated images of the kind that are routinely interpreted in the seismic exploration industry. We dip filter the seismic images to highlight the fault-plane reflections prior to making amplitude maps along the fault plane. With these amplitude maps, we compare the reflectivity from the different fault models to diagnose which physical mechanism contributes most to observed fault reflectivity. 
To lend physical meaning to the properties of a locally weak fault zone characterized as a slip interface, we propose an equivalent-layer model under the assumption of weak scattering. This allows us to use the empirical relationships between density, velocity and effective stress from the South Eugene Island field to relate a slip interface to an amount of excess pore pressure in a fault zone. © 2007 The Authors. Journal compilation © 2007 RAS.

  20. Modeling and Fault Simulation of Propellant Filling System

    NASA Astrophysics Data System (ADS)

    Jiang, Yunchun; Liu, Weidong; Hou, Xiaobo

    2012-05-01

The propellant filling system is one of the key ground plants at a launch site for rockets that use liquid propellant. There is an urgent demand for ensuring and improving its reliability and safety, and Failure Mode and Effect Analysis (FMEA) is a good approach to meet it. Driven by the need for more fault information for FMEA, and because of the high expense of propellant filling, this paper studies the working process of the propellant filling system under fault conditions by simulation based on AMESim. Firstly, based on an analysis of its structure and function, the filling system was decomposed into modules and the mathematical models of every module were given, from which the whole filling system was modeled in AMESim. Secondly, a general method of injecting faults into a dynamic system was proposed and, as an example, two typical faults - leakage and blockage - were injected into the model of the filling system, yielding two fault models in AMESim. After that, fault simulation was performed and the dynamic characteristics of several key parameters were analyzed under fault conditions. The results show that the model can simulate the two faults effectively and can be used to provide guidance for maintenance and improvement of the filling system.
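The fault-injection idea above can be illustrated outside AMESim with a minimal sketch: integrate a one-tank filling model and switch on a leakage or blockage term at a chosen time. All parameters and the model itself are illustrative stand-ins, not the paper's filling-system model.

```python
import numpy as np

def simulate_filling(t_end=100.0, dt=0.1, q_in=2.0, fault=None, t_fault=50.0):
    """Euler-integrate tank level under a constant inflow; optionally
    inject a fault at t_fault.

    fault: None, "leakage" (level-proportional outflow appears), or
           "blockage" (inflow throttled). Values are hypothetical.
    """
    n = int(t_end / dt)
    level = np.zeros(n)
    for k in range(1, n):
        t = k * dt
        inflow = q_in
        outflow = 0.0
        if fault == "blockage" and t >= t_fault:
            inflow *= 0.3                    # partial blockage throttles inflow
        if fault == "leakage" and t >= t_fault:
            outflow = 0.05 * level[k - 1]    # leak proportional to level
        level[k] = level[k - 1] + dt * (inflow - outflow)
    return level

nominal = simulate_filling()
leak = simulate_filling(fault="leakage")
blocked = simulate_filling(fault="blockage")
```

Comparing the three trajectories mimics how simulated fault signatures feed an FMEA: each injected fault produces a distinct deviation of the level curve after the injection time.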

  1. Qualitative Event-Based Diagnosis: Case Study on the Second International Diagnostic Competition

    NASA Technical Reports Server (NTRS)

    Daigle, Matthew; Roychoudhury, Indranil

    2010-01-01

We describe a diagnosis algorithm entered into the Second International Diagnostic Competition. We focus on the first diagnostic problem of the industrial track of the competition, in which a diagnosis algorithm must detect, isolate, and identify faults in an electrical power distribution testbed and provide corresponding recovery recommendations. The diagnosis algorithm embodies a model-based approach, centered around qualitative event-based fault isolation. Faults produce deviations in measured values from model-predicted values. The sequence of these deviations is matched to those predicted by the model in order to isolate faults. We augment this approach with model-based fault identification, which determines fault parameters and helps to further isolate faults. We describe the diagnosis approach, provide diagnosis results from running the algorithm on provided example scenarios, and discuss the issues faced and lessons learned from implementing the approach.
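The core isolation step described above, matching an observed sequence of measurement deviations against model-predicted sequences, can be sketched as a prefix match over fault signatures. The sensor names and signatures below are hypothetical, not from the competition testbed.

```python
# Hypothetical fault signatures: for each fault, the predicted qualitative
# deviation per sensor ("+" above prediction, "-" below), in the order the
# deviations are expected to be observed.
SIGNATURES = {
    "relay_stuck_open":  [("V1", "-"), ("I1", "-")],
    "battery_degraded":  [("V1", "-"), ("I1", "+")],
    "sensor_bias_V1":    [("V1", "+")],
    "load_overcurrent":  [("I1", "+"), ("V1", "-")],
}

def isolate(observed_events):
    """Return the faults whose predicted deviation sequence begins with the
    observed sequence of (sensor, sign) events."""
    return sorted(
        fault for fault, sig in SIGNATURES.items()
        if sig[:len(observed_events)] == list(observed_events)
    )

candidates = isolate([("V1", "-")])                 # after the first deviation
refined = isolate([("V1", "-"), ("I1", "-")])       # after the second deviation
```

Each new deviation event prunes the candidate set, which is the event-based isolation behavior the abstract describes; fault identification would then estimate parameters for the surviving candidates.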

  2. The mechanics of fault-bend folding and tear-fault systems in the Niger Delta

    NASA Astrophysics Data System (ADS)

    Benesh, Nathan Philip

This dissertation investigates the mechanics of fault-bend folding using the discrete element method (DEM) and explores the nature of tear-fault systems in the deep-water Niger Delta fold-and-thrust belt. In Chapter 1, we employ the DEM to investigate the development of growth structures in anticlinal fault-bend folds. This work was inspired by observations that growth strata in active folds show a pronounced upward decrease in bed dip, in contrast to traditional kinematic fault-bend fold models. Our analysis shows that the modeled folds grow largely by parallel folding as specified by the kinematic theory; however, the process of folding over a broad axial surface zone yields a component of fold growth by limb rotation that is consistent with the patterns observed in natural folds. This result has important implications for how growth structures can be used to constrain slip and paleo-earthquake ages on active blind-thrust faults. In Chapter 2, we expand our DEM study to investigate the development of a wider range of fault-bend folds. We examine the influence of mechanical stratigraphy and quantitatively compare our models with the relationships between fold and fault shape prescribed by the kinematic theory. While the synclinal fault-bend models closely match the kinematic theory, the modeled anticlinal fault-bend folds show robust behavior that is distinct from the kinematic theory. Specifically, we observe that modeled structures maintain a linear relationship between fold shape (gamma) and fault-horizon cutoff angle (theta), rather than expressing the non-linear relationship with two distinct modes of anticlinal folding that is prescribed by the kinematic theory. These observations lead to a revised quantitative relationship for fault-bend folds that can serve as a useful interpretation tool. Finally, in Chapter 3, we examine the 3D relationships of tear- and thrust-fault systems in the western, deep-water Niger Delta. 
Using 3D seismic reflection data and new map-based structural restoration techniques, we find that the tear faults have distinct displacement patterns that distinguish them from conventional strike-slip faults and reflect their roles in accommodating displacement gradients within the fold-and-thrust belt.

  3. Color, contrast sensitivity, and the cone mosaic.

    PubMed Central

    Williams, D; Sekiguchi, N; Brainard, D

    1993-01-01

This paper evaluates the role of various stages in the human visual system in the detection of spatial patterns. Contrast sensitivity measurements were made for interference fringe stimuli in three directions in color space with a psychophysical technique that avoided blurring by the eye's optics including chromatic aberration. These measurements were compared with the performance of an ideal observer that incorporated optical factors, such as photon catch in the cone mosaic, that influence the detection of interference fringes. The comparison of human and ideal observer performance showed that neural factors influence the shape as well as the height of the foveal contrast sensitivity function for all color directions, including those that involve luminance modulation. Furthermore, when optical factors are taken into account, the neural visual system has the same contrast sensitivity for isoluminant stimuli seen by the middle-wavelength-sensitive (M) and long-wavelength-sensitive (L) cones and isoluminant stimuli seen by the short-wavelength-sensitive (S) cones. Though the cone submosaics that feed these chromatic mechanisms have very different spatial properties, the later neural stages apparently have similar spatial properties. Finally, we review the evidence that cone sampling can produce aliasing distortion for gratings with spatial frequencies exceeding the resolution limit. Aliasing can be observed with gratings modulated in any of the three directions in color space we used. We discuss mechanisms that prevent aliasing in most ordinary viewing conditions. PMID:8234313
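The sampling aliasing reviewed above follows the usual rule: a grating at frequency f sampled at rate fs, with f above the Nyquist limit fs/2, reappears at the alias frequency fs − f. A minimal numerical illustration (the 60 samples/deg rate is illustrative, not an actual cone density):

```python
import numpy as np

fs = 60.0            # sampling rate of the mosaic, samples/deg (illustrative)
nyquist = fs / 2     # 30 cycles/deg
f_true = 45.0        # grating frequency above the Nyquist limit

n = 600
x = np.arange(n) / fs                      # sample positions in degrees
samples = np.sin(2 * np.pi * f_true * x)   # the sampled grating

# The spectrum of the sampled signal peaks at the alias, fs - f_true = 15.
spec = np.abs(np.fft.rfft(samples))
freqs = np.fft.rfftfreq(n, d=1 / fs)
f_alias = float(freqs[np.argmax(spec)])
```

The 45 cycles/deg grating is indistinguishable from a 15 cycles/deg grating after sampling, which is the perceptual aliasing the paper observes when optical blurring is bypassed with interference fringes.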

  4. Off-resonance suppression for multispectral MR imaging near metallic implants.

    PubMed

    den Harder, J Chiel; van Yperen, Gert H; Blume, Ulrike A; Bos, Clemens

    2015-01-01

The goal is metal artifact reduction in MRI within clinically feasible scan times and without through-plane aliasing. Existing metal artifact reduction techniques include view angle tilting (VAT), which resolves in-plane distortions, and multispectral imaging (MSI) techniques, such as slice encoding for metal artifact correction (SEMAC) and multi-acquisition with variable resonances image combination (MAVRIC), that further reduce image distortions but significantly increase scan time. Scan time depends on anatomy size and the anticipated total spectral content of the signal. Signals outside the anticipated spatial region may cause through-plane back-folding. Off-resonance suppression (ORS), using different gradient amplitudes for excitation and refocusing, is proposed to provide well-defined spatial-spectral selectivity in MSI, allowing scan-time reduction and flexibility of scan orientation. Comparisons of MSI techniques with and without ORS were made in phantom and volunteer experiments. Off-resonance suppressed SEMAC (ORS-SEMAC) and outer-region suppressed MAVRIC (ORS-MAVRIC) required fewer through-plane phase encoding steps than the original MSI techniques. Whereas SEMAC (scan time: 5'46") and MAVRIC (4'12") suffered from through-plane aliasing, ORS-SEMAC and ORS-MAVRIC allowed alias-free imaging in the same scan times. ORS can be used in MSI to limit the selected spatial-spectral region and contribute to metal artifact reduction in clinically feasible scan times while avoiding slice aliasing. © 2014 Wiley Periodicals, Inc.

  5. AGSM Functional Fault Models for Fault Isolation Project

    NASA Technical Reports Server (NTRS)

    Harp, Janicce Leshay

    2014-01-01

    This project implements functional fault models to automate the isolation of failures during ground systems operations. FFMs will also be used to recommend sensor placement to improve fault isolation capabilities. The project enables the delivery of system health advisories to ground system operators.

  6. Effect of time dependence on probabilistic seismic-hazard maps and deaggregation for the central Apennines, Italy

    USGS Publications Warehouse

    Akinci, A.; Galadini, F.; Pantosti, D.; Petersen, M.; Malagnini, L.; Perkins, D.

    2009-01-01

We produce probabilistic seismic-hazard assessments for the central Apennines, Italy, using time-dependent models that are characterized using a Brownian passage time recurrence model. Using aperiodicity parameters, α, of 0.3, 0.5, and 0.7, we examine the sensitivity of the probabilistic ground motion and its deaggregation to these parameters. For the seismic source model we incorporate both smoothed historical seismicity over the area and geological information on faults. We use the maximum magnitude model for the fault sources together with a uniform probability of rupture along the fault (floating fault model) to model fictitious faults to account for earthquakes that cannot be correlated with known geologic structural segmentation.
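The Brownian passage time recurrence model is the inverse-Gaussian distribution with mean recurrence interval μ and aperiodicity α; time-dependent hazard uses the conditional probability of an event in a forecast window given the elapsed quiet time. A sketch with purely illustrative numbers (700-yr mean recurrence, 600 yr elapsed, 50-yr window), not values from the paper:

```python
import numpy as np

def bpt_pdf(t, mu, alpha):
    """Brownian passage time (inverse Gaussian) density with mean
    recurrence mu and aperiodicity alpha."""
    return np.sqrt(mu / (2 * np.pi * alpha**2 * t**3)) * \
        np.exp(-(t - mu)**2 / (2 * alpha**2 * mu * t))

def conditional_prob(mu, alpha, t_elapsed, dt_window, n=200000):
    """P(event in next dt_window | no event for t_elapsed), by numerical
    integration of the density."""
    t = np.linspace(1e-6, t_elapsed + dt_window, n)
    cdf = np.cumsum(bpt_pdf(t, mu, alpha)) * (t[1] - t[0])
    F = lambda x: np.interp(x, t, cdf)
    return (F(t_elapsed + dt_window) - F(t_elapsed)) / (1 - F(t_elapsed))

p_peaked = conditional_prob(700.0, 0.3, 600.0, 50.0)     # near-periodic fault
p_irregular = conditional_prob(700.0, 0.7, 50.0, 50.0)   # irregular, early in cycle
```

Varying α between 0.3 and 0.7 as in the paper changes how sharply the conditional probability depends on the elapsed time, which is exactly the sensitivity the hazard maps explore.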

  7. First Results from a Forward, 3-Dimensional Regional Model of a Transpressional San Andreas Fault System

    NASA Astrophysics Data System (ADS)

    Fitzenz, D. D.; Miller, S. A.

    2001-12-01

We present preliminary results from a 3-dimensional fault interaction model, with the fault system specified by the geometry and tectonics of the San Andreas Fault (SAF) system. We use the forward model for earthquake generation on interacting faults of Fitzenz and Miller [2001] that incorporates the analytical solutions of Okada [1985, 1992], GPS-constrained tectonic loading, creep compaction and frictional dilatancy [Sleep and Blanpied, 1994; Sleep, 1995], and undrained poro-elasticity. The model fault system is centered at the Big Bend and includes three large strike-slip faults (each discretized into multiple subfaults): 1) a 300-km right-lateral segment of the SAF to the north, 2) a 200-km-long left-lateral segment of the Garlock fault to the east, and 3) a 100-km-long right-lateral segment of the SAF to the south. In the initial configuration, three shallow-dipping faults are also included that correspond to the thrust belt sub-parallel to the SAF. Tectonic loading is decomposed into basal shear drag parallel to the plate boundary with a 35 mm/yr plate velocity, and east-west compression approximated by a vertical dislocation surface applied at the far-field boundary, resulting in fault-normal compression rates in the model space of about 4 mm/yr. Our aim is to study the long-term seismicity characteristics, tectonic evolution, and fault interaction of this system. We find that overpressured faults through creep compaction are a necessary consequence of the tectonic loading, specifically where high normal stress acts on long straight fault segments. The optimal orientation of thrust faults is a function of the strike-slip behavior, and therefore results in a complex stress state in the elastic body. This stress state is then used to generate new fault surfaces, and preliminary results of dynamically generated faults will also be presented. Our long-term aim is to target measurable properties in or around fault zones, (e.g. 
pore pressures, hydrofractures, seismicity catalogs, stress orientation, surface strain, triggering, etc.), which may allow inferences on the stress state of fault systems.

  8. Experimental verification of the model for formation of double Shockley stacking faults in highly doped regions of PVT-grown 4H–SiC wafers

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Yang, Yu; Guo, Jianqiu; Goue, Ouloide

Recently, we reported on the formation of overlapping rhombus-shaped stacking faults from scratches left by chemical mechanical polishing during high-temperature annealing of PVT-grown 4H–SiC wafers. These stacking faults are restricted to highly N-doped regions of the wafer. The type of these stacking faults was determined to be Shockley by analyzing the behavior of their area contrast using synchrotron white-beam X-ray topography. A model was proposed to explain the formation mechanism of the rhombus-shaped stacking faults based on double Shockley fault nucleation and propagation. In this paper, we have experimentally verified this model by characterizing the configuration of the bounding partials of the stacking faults on both surfaces using synchrotron topography in back-reflection geometry. As predicted by the model, on both the Si and C faces, the leading partials bounding the rhombus-shaped stacking faults are 30° Si-core and the trailing partials are 30° C-core. Finally, using high-resolution transmission electron microscopy, we have verified that the enclosed stacking fault is a double Shockley type.

  9. Modelling the role of basement block rotation and strike-slip faulting on structural pattern in the cover units of fold-and-thrust belts

    NASA Astrophysics Data System (ADS)

    Koyi, Hemin; Nilfouroushan, Faramarz; Hessami, Khaled

    2015-04-01

A series of scaled analogue models were run to study the degree of coupling between basement block kinematics and cover deformation. In these models, rigid basal blocks were rotated about vertical axes in a "bookshelf" fashion, which caused strike-slip faulting along the blocks and, to some degree, in the overlying cover units of loose sand. Three different combinations of cover and basement deformation are modeled: cover shortening prior to basement fault movement; basement fault movement prior to shortening of cover units; and cover shortening simultaneous with basement fault movement. Model results show that the effect of basement strike-slip faults depends on the timing of their reactivation during the orogenic process. Pre- and syn-orogenic basement strike-slip faults have a significant impact on the structural pattern of the cover units, whereas post-orogenic basement strike-slip faults have less influence on the thickened hinterland of the overlying fold-and-thrust belt. The interaction of basement faulting and cover shortening results in the formation of rhomb features. In models with pre- and syn-orogenic basement strike-slip faults, rhomb-shaped cover blocks develop as a result of shortening of the overlying cover during basement strike-slip faulting. These rhombic blocks, which resemble flower structures, differ in kinematics, genesis and structural extent. They are bounded by strike-slip faults on two opposite sides and thrusts on the other two sides. Such rhomb features are recognized in the Alborz and Zagros fold-and-thrust belts, where cover units are shortened simultaneously with strike-slip faulting in the basement. Model results are also compared with geodetic results obtained from a combination of all available GPS velocities in the Zagros and Alborz FTBs. 
Geodetic results indicate domains of clockwise and anticlockwise rotation in these two FTBs. The typical pattern of structures and their spatial distributions are used to suggest clockwise block rotation of basement blocks about vertical axes and their associated strike-slip faulting in both west-central Alborz and the southeastern part of the Zagros fold-and-thrust belt.

  10. Determination of the relationship between major fault and zinc mineralization using fractal modeling in the Behabad fault zone, central Iran

    NASA Astrophysics Data System (ADS)

    Adib, Ahmad; Afzal, Peyman; Mirzaei Ilani, Shapour; Aliyari, Farhang

    2017-10-01

The aim of this study is to determine the relationship between zinc mineralization and a major fault in the Behabad area, central Iran, using the Concentration-Distance to Major Fault (C-DMF), Area of Mineralized Zone-Distance to Major Fault (AMZ-DMF), and Concentration-Area (C-A) fractal models to classify Zn deposits/mines according to their distance from the Behabad fault. Application of the C-DMF and AMZ-DMF models to Zn mineralization in the Behabad fault zone reveals that the main Zn deposits correlate well with the major fault in the area. Known zinc deposits/mines with Zn grades above 29% and mineralized-zone areas of more than 900 m2 lie less than 1 km from the major fault, showing a positive correlation between Zn mineralization and the structural zone. As a result, the AMZ-DMF and C-DMF fractal models can be utilized for the delineation and recognition of different mineralized zones in different types of magmatic and hydrothermal deposits.
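The fractal models above all rest on the same operation: fitting straight-line segments to a log-log plot of a cumulative quantity against a threshold. A minimal sketch of the slope-fitting step for a C-A-style model, on synthetic power-law data rather than the paper's deposit data:

```python
import numpy as np

def ca_slope(thresholds, grid_values):
    """Slope of log(Area) vs log(threshold) for a C-A-type fractal model,
    where Area(c) is the number of cells with value >= c (unit cell area)."""
    areas = np.array([(grid_values >= c).sum() for c in thresholds])
    coeffs = np.polyfit(np.log(thresholds), np.log(areas), 1)
    return float(coeffs[0])

# Synthetic field with a Pareto tail: P(X >= c) = c**-1.5 for c >= 1,
# so the recovered C-A slope should be close to -1.5 by construction.
rng = np.random.default_rng(0)
values = rng.pareto(1.5, size=200000) + 1.0
thresholds = np.logspace(0.1, 1.0, 10)
slope = ca_slope(thresholds, values)
```

In practice the log-log curve is fit piecewise, and the breakpoints between segments are what separate background from mineralized populations; the single-slope fit here shows only the core calculation.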

  11. Crustal Density Variation Along the San Andreas Fault Controls Its Secondary Faults Distribution and Dip Direction

    NASA Astrophysics Data System (ADS)

    Yang, H.; Moresi, L. N.

    2017-12-01

The San Andreas fault forms a dominant component of the transform boundary between the Pacific and North American plates. The density and strength of the complex accretionary margin are very heterogeneous. Based on the density structure of the lithosphere in the SW United States, we utilize a 3D finite-element thermomechanical, viscoplastic model (Underworld2) to simulate deformation in the San Andreas Fault system. The purpose of the model is to examine the role of the big bend in the existing geometry; in particular, the big bend of the fault is an initial condition in our model. We first test the strength of the fault by comparing the surface principal stresses from our numerical model with the in situ tectonic stress. The best-fit model indicates that an extremely weak fault (friction coefficient < 0.1) is required. To first order, there is a significant density difference between the Great Valley and the adjacent Mojave block. The Great Valley block is much colder and denser (by >200 kg/m3) than surrounding blocks. In contrast, other geophysical surveys indicate that the Mojave block has lost its mafic lower crust. Our model shows strong strain localization at the boundary between the two blocks, which is an analogue for the Garlock fault. High-density lower-crustal material of the Great Valley tends to underthrust beneath the Transverse Ranges near the big bend. This motion is likely to rotate the fault plane from its initial vertical orientation to dip to the southwest. Along the straight section north of the big bend, the fault remains nearly vertical. The geometry of the fault plane is consistent with field observations.

  12. Fault friction, regional stress, and crust-mantle coupling in southern California from finite element models

    NASA Technical Reports Server (NTRS)

    Bird, P.; Baumgardner, J.

    1984-01-01

To determine the correct fault rheology of the Transverse Ranges area of California, a new finite element to represent faults and a mantle drag element are introduced into a set of 63 simulation models of anelastic crustal strain. It is shown that a slip-rate-weakening rheology for faults is not valid in California. Assuming that mantle drag effects on the base of the crust are minimal, the optimal coefficient of friction in the seismogenic portion of the fault zones is 0.4-0.6 (less than the Byerlee's-law value assumed to apply elsewhere). Depending on how the southern California upper-mantle seismic velocity anomaly is interpreted, model results are improved or degraded. It is found that the location of the mantle plate boundary is the most important secondary parameter, and that the best model is either a low-stress model (fault friction = 0.3) or a high-stress model (fault friction = 0.85), each of which has strong mantle drag. It is concluded that at least the fastest-moving faults in southern California have a low friction coefficient (approximately 0.3) because they contain low-strength hydrated clay gouges throughout the low-temperature seismogenic zone.

  13. Analysis of a hardware and software fault tolerant processor for critical applications

    NASA Technical Reports Server (NTRS)

    Dugan, Joanne B.

    1993-01-01

    Computer systems for critical applications must be designed to tolerate software faults as well as hardware faults. A unified approach to tolerating hardware and software faults is characterized by classifying faults in terms of duration (transient or permanent) rather than source (hardware or software). Errors arising from transient faults can be handled through masking or voting, but errors arising from permanent faults require system reconfiguration to bypass the failed component. Most errors which are caused by software faults can be considered transient, in that they are input-dependent. Software faults are triggered by a particular set of inputs. Quantitative dependability analysis of systems which exhibit a unified approach to fault tolerance can be performed by a hierarchical combination of fault tree and Markov models. A methodology for analyzing hardware and software fault tolerant systems is applied to the analysis of a hypothetical system, loosely based on the Fault Tolerant Parallel Processor. The models consider both transient and permanent faults, hardware and software faults, independent and related software faults, automatic recovery, and reconfiguration.
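The Markov layer of the hierarchical analysis described above can be sketched with a small discrete-time chain in which transient faults are masked (so they never leave the operational state) while permanent faults force reconfiguration. All states and rates below are hypothetical, not the FTPP model's values.

```python
import numpy as np

# Hypothetical 3-state discrete-time Markov chain (per-hour transitions):
# 0 = operational, 1 = degraded (permanent fault, reconfiguring), 2 = failed.
# Transient faults are assumed masked by voting, so they stay in state 0.
P = np.array([
    [0.9990, 0.0009, 0.0001],   # permanent fault 9e-4/h; coverage miss 1e-4/h
    [0.95,   0.04,   0.01],     # reconfiguration usually restores service
    [0.0,    0.0,    1.0],      # system failure is absorbing
])

def unreliability(hours):
    """Probability of having reached the absorbing failed state."""
    state = np.array([1.0, 0.0, 0.0])
    for _ in range(hours):
        state = state @ P
    return float(state[2])

u10 = unreliability(10)
u1000 = unreliability(1000)
```

In the full methodology, the per-state failure and coverage probabilities would themselves come from fault-tree solutions, which is what makes the combination hierarchical.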

  14. Model-Based Fault Tolerant Control

    NASA Technical Reports Server (NTRS)

    Kumar, Aditya; Viassolo, Daniel

    2008-01-01

The Model Based Fault Tolerant Control (MBFTC) task was conducted under the NASA Aviation Safety and Security Program. The goal of MBFTC is to develop and demonstrate real-time strategies to diagnose and accommodate anomalous aircraft engine events such as sensor faults, actuator faults, or turbine gas-path component damage that can lead to in-flight shutdowns, aborted takeoffs, asymmetric thrust/loss of thrust control, or engine surge/stall events. A suite of model-based fault detection algorithms was developed and evaluated. Based on the performance and maturity of the developed algorithms two approaches were selected for further analysis: (i) multiple-hypothesis testing, and (ii) neural networks; both used residuals from an Extended Kalman Filter to detect the occurrence of the selected faults. A simple fusion algorithm was implemented to combine the results from each algorithm to obtain an overall estimate of the identified fault type and magnitude. The identification of the fault type and magnitude enabled the use of an online fault accommodation strategy to correct for the adverse impact of these faults on engine operability thereby enabling continued engine operation in the presence of these faults. The performance of the fault detection and accommodation algorithm was extensively tested in a simulation environment.
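The residual-based detection idea above (model-predicted minus measured values feeding a detector) can be sketched with a moving-average test on a residual stream with an injected sensor bias. This is a simplified stand-in for the EKF-residual hypothesis tests; the noise level, bias size, and threshold are all hypothetical.

```python
import numpy as np

def detect(residuals, threshold, window=10):
    """Flag a fault when the moving average of the residual exceeds a
    threshold; returns (onset sample or None, smoothed residual)."""
    smoothed = np.convolve(residuals, np.ones(window) / window, mode="valid")
    hits = np.abs(smoothed) > threshold
    onset = int(np.argmax(hits)) + window - 1 if hits.any() else None
    return onset, smoothed

rng = np.random.default_rng(1)
n = 500
residual = rng.normal(0.0, 0.05, n)   # nominal: zero-mean model mismatch
residual[300:] += 0.5                 # injected sensor bias at sample 300
onset, smoothed = detect(residual, threshold=0.25)
```

Smoothing trades detection delay against false-alarm rate; a fusion step like the one in the abstract would combine onsets and magnitudes from several such detectors into one fault estimate.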

  15. Modelling earthquake ruptures with dynamic off-fault damage

    NASA Astrophysics Data System (ADS)

    Okubo, Kurama; Bhat, Harsha S.; Klinger, Yann; Rougier, Esteban

    2017-04-01

Earthquake rupture modelling has been developed for producing scenario earthquakes. This includes understanding the source mechanisms and estimating far-field ground motion given a priori constraints such as the fault geometry, the constitutive law of the medium, and the friction law operating on the fault. It is necessary to consider all of these complexities of fault systems to conduct realistic earthquake rupture modelling. In addition to the complexity of fault geometry in nature, coseismic off-fault damage, which is observed by a variety of geological and seismological methods, plays a considerable role in the resultant ground motion and its spectrum compared to a model with a simple planar fault surrounded by purely elastic media. Ideally all of these complexities should be considered in earthquake modelling. State-of-the-art techniques developed so far, however, cannot treat all of them simultaneously due to a variety of computational restrictions. Therefore, we adopt the combined finite-discrete element method (FDEM), which can effectively deal with pre-existing complex fault geometry such as fault branches and kinks and can describe coseismic off-fault damage generated during the dynamic rupture. The advantage of FDEM is that it can handle a wide range of length scales, from metric to kilometric, corresponding to the off-fault damage and the complex fault geometry respectively. We used the FDEM-based software tool HOSSedu (Hybrid Optimization Software Suite - Educational Version), developed by Los Alamos National Laboratory, for the earthquake rupture modelling. We first conducted a cross-validation of this new methodology against other conventional numerical schemes such as the finite difference method (FDM), the spectral element method (SEM) and the boundary integral equation method (BIEM), to evaluate its accuracy with various element sizes and artificial viscous damping values. 
We demonstrate the capability of the FDEM tool for modelling earthquake ruptures. We then modelled earthquake ruptures allowing for coseismic off-fault damage with appropriate fracture nucleation and growth criteria. We studied the effect of different conditions such as rupture speed (sub-Rayleigh or supershear), the orientation of the initial maximum principal stress with respect to the fault, and the magnitude of the initial stress (to mimic depth). The comparison between the sub-Rayleigh and supershear cases shows that coseismic off-fault damage is enhanced in the supershear case. The orientation of the maximum principal stress also has a significant effect: dynamic off-fault cracking is more likely to occur on the extensional side of the fault for high principal-stress orientations. It is found that coseismic off-fault damage reduces the rupture speed through the dissipation of energy by dynamic off-fault cracking generated in the vicinity of the rupture front. In terms of the ground-motion amplitude spectra, it is shown that high-frequency radiation is enhanced by the coseismic off-fault damage, though it is quickly attenuated. This is caused by the intricate superposition of the radiation generated by the off-fault damage and the perturbation of the rupture speed on the main fault.

  16. Continental deformation accommodated by non-rigid passive bookshelf faulting: An example from the Cenozoic tectonic development of northern Tibet

    NASA Astrophysics Data System (ADS)

    Zuza, Andrew V.; Yin, An

    2016-05-01

Collision-induced continental deformation commonly involves complex interactions between strike-slip faulting and off-fault deformation, yet this relationship has rarely been quantified. In northern Tibet, Cenozoic deformation is expressed by the development of the > 1000-km-long east-striking left-slip Kunlun, Qinling, and Haiyuan faults. Each has a maximum slip in its central segment of tens to ~100 km but a much smaller slip magnitude (less than ~10% of the maximum) at its terminations. The along-strike variation of fault offsets and pervasive off-fault deformation create a strain pattern that departs from the expectations of the classic plate-like rigid-body motion and flow-like distributed deformation end-member models for continental tectonics. Here we propose a non-rigid bookshelf-fault model for the Cenozoic tectonic development of northern Tibet. Our model, quantitatively relating discrete left-slip faulting to distributed off-fault deformation during regional clockwise rotation, explains several puzzling features, including the: (1) clockwise rotation of east-striking left-slip faults against the northeast-striking left-slip Altyn Tagh fault along the northwestern margin of the Tibetan Plateau, (2) alternating fault-parallel extension and shortening in the off-fault regions, and (3) eastward-tapering map-view geometries of the Qimen Tagh, Qaidam, and Qilian Shan thrust belts that link with the three major left-slip faults in northern Tibet. We refer to this specific non-rigid bookshelf-fault system as a passive bookshelf-fault system because the rotating bookshelf panels are detached from the rigid bounding domains. As a consequence, the wallrock of the strike-slip faults deforms to accommodate both the clockwise rotation of the left-slip faults and off-fault strain that arises at the fault ends. 
An important implication of our model is that the style and magnitude of Cenozoic deformation in northern Tibet vary considerably in the east-west direction. Thus, any single north-south cross section and its kinematic reconstruction through the region do not properly quantify the complex deformational processes of plateau formation.

  17. Signal conditioning units for vibration measurement in HUMS

    NASA Astrophysics Data System (ADS)

    Wu, Kaizhi; Liu, Tingting; Yu, Zirong; Chen, Lijuan; Huang, Xinjie

    2018-03-01

    A signal conditioning unit for vibration measurement in a helicopter health and usage monitoring system (HUMS) is proposed in this paper. Because the vibration frequencies produced by different helicopter components differ, a two-stage amplifier and a programmable anti-aliasing filter are designed to meet the measurement requirements of different helicopter types. Vibration signals are first converted into measurable electrical signals by an ICP driver. A pre-amplifier and a programmable gain amplifier then magnify the weak electrical signals, and the programmable anti-aliasing filter suppresses noise interference. The unit was tested using a function signal generator and an oscilloscope. The experimental results demonstrate the effectiveness of the proposed method both quantitatively and qualitatively, and the method can meet the measurement requirements of different helicopter types.
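    The role of a programmable anti-aliasing cutoff can be illustrated with a minimal sketch (the sample rate, guard band, and frequency values here are hypothetical; the paper does not give its filter parameters):

    ```python
    def select_cutoff(max_signal_hz, sample_rate_hz, guard=1.25):
        """Pick an anti-aliasing cutoff: above the highest vibration
        component of interest, but safely below the Nyquist frequency."""
        nyquist = sample_rate_hz / 2.0
        cutoff = max_signal_hz * guard  # small guard band over the signal
        if cutoff >= nyquist:
            raise ValueError("sample rate too low for this vibration band")
        return cutoff

    # e.g. a gearbox band up to 8 kHz sampled at 40 kHz
    print(select_cutoff(8000, 40000))  # -> 10000.0
    ```

    Making the cutoff programmable, as in the proposed unit, lets one conditioning chain cover helicopter types whose dominant vibration bands differ.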

  18. Striping artifact reduction in lunar orbiter mosaic images

    USGS Publications Warehouse

    Mlsna, P.A.; Becker, T.

    2006-01-01

    Photographic images of the moon from the 1960s Lunar Orbiter missions are being processed into maps for visual use. The analog nature of the images has produced numerous artifacts, the chief of which causes a vertical striping pattern in mosaic images formed from a series of filmstrips. Previous methods of stripe removal tended to introduce ringing and aliasing problems in the image data. This paper describes a recently developed alternative approach that succeeds at greatly reducing the striping artifacts while avoiding the creation of ringing and aliasing artifacts. The algorithm uses a one-dimensional frequency-domain step to deal with the periodic component of the striping artifact and a spatial-domain step to handle the aperiodic residue. Several variations of the algorithm have been explored. Results, strengths, and remaining challenges are presented. © 2006 IEEE.
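    As a rough illustration of the destriping idea (my sketch under simplifying assumptions, not the authors' algorithm, which combines a frequency-domain step for the periodic component with a spatial-domain step), a purely spatial version removes each column's offset relative to the image-wide level:

    ```python
    def destripe(image):
        """Subtract each column's mean offset from the global mean level.
        image: list of rows (lists of pixel values)."""
        rows, cols = len(image), len(image[0])
        col_means = [sum(image[r][c] for r in range(rows)) / rows
                     for c in range(cols)]
        global_mean = sum(col_means) / cols
        return [[image[r][c] - (col_means[c] - global_mean)
                 for c in range(cols)]
                for r in range(rows)]

    # A flat image with one bright column: the stripe offset is removed.
    img = [[10, 10, 14, 10], [10, 10, 14, 10]]
    print(destripe(img)[0])  # -> [11.0, 11.0, 11.0, 11.0]
    ```

    A column-mean correction like this is exactly the kind of operation that can ring or alias real scene content, which is why the paper separates the periodic and aperiodic stripe components.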

  19. A multiple fault rupture model of the November 13 2016, M 7.8 Kaikoura earthquake, New Zealand

    NASA Astrophysics Data System (ADS)

    Benites, R. A.; Francois-Holden, C.; Langridge, R. M.; Kaneko, Y.; Fry, B.; Kaiser, A. E.; Caldwell, T. G.

    2017-12-01

    The rupture history of the 13 November 2016 Mw 7.8 Kaikoura earthquake, recorded by near- and intermediate-field strong-motion seismometers and two high-rate GPS stations, reveals a complex cascade of rupture across multiple crustal faults. In spite of this complexity, we show that the rupture history of each fault is well approximated by a simple kinematic model with uniform slip and rupture velocity. Using 9 faults embedded in a 19-km-thick crustal layer, each with a prescribed slip vector and rupture velocity, the model accurately reproduces the displacement waveforms recorded at the near-field strong-motion and GPS stations. The model includes the `Papatea Fault' with a mixed thrust and strike-slip mechanism based on in-situ geological observations, with up to 8 m of uplift observed. Although the kinematic model fits the ground motion at the nearest strong-motion station, it does not reproduce the one-sided nature of the static deformation field observed geodetically, suggesting that a dislocation-based approach does not completely capture the mechanical response of the Papatea Fault. The fault system as a whole extends for approximately 150 km along the eastern side of the Marlborough fault system in the South Island of New Zealand, and the total duration of the rupture was 74 seconds. The timing and location of each fault's rupture suggest fault interaction and triggering, resulting in a northward cascade of crustal ruptures. Our model does not require rupture of the underlying subduction interface to explain the data.

  20. Three-dimensional curved grid finite-difference modelling for non-planar rupture dynamics

    NASA Astrophysics Data System (ADS)

    Zhang, Zhenguo; Zhang, Wei; Chen, Xiaofei

    2014-11-01

    In this study, we present a new method for simulating the 3-D dynamic rupture process on a non-planar fault. The method is based on the curved-grid finite-difference method (CG-FDM) proposed by Zhang & Chen and Zhang et al. for simulating seismic wave propagation in media with arbitrarily irregular surface topography. While keeping the advantages of conventional FDM, namely computational efficiency and easy implementation, the CG-FDM flexibly handles complex fault models through general curvilinear grids, and is thus able to model the rupture dynamics of faults with complex geometry, such as obliquely dipping faults, non-planar faults, faults with step-overs, and fault branching, even in the presence of irregular topography. The accuracy and robustness of this new method have been validated by comparison with the previous results of Day et al. and with benchmarks for rupture dynamics simulations. Finally, two simulations of rupture dynamics with complex fault geometry are presented: a non-planar fault, and a fault rupturing a free surface with topography. Interestingly, we observe that topography can weaken the tendency for a supershear transition to occur when rupture breaks out at a free surface. This new method thus provides an effective, or at least an alternative, tool for simulating the rupture dynamics of complex non-planar faults, and can be applied to model the rupture dynamics of real earthquakes with complex geometry.

  1. The influence of fault geometry and frictional contact properties on slip surface behavior and off-fault damage: insights from quasi-static modeling of small strike-slip faults from the Sierra Nevada, CA

    NASA Astrophysics Data System (ADS)

    Ritz, E.; Pollard, D. D.

    2011-12-01

    Geological and geophysical investigations demonstrate that faults are geometrically complex structures, and that the nature and intensity of off-fault damage is spatially correlated with geometric irregularities of the slip surfaces. Geologic observations of exhumed meter-scale strike-slip faults in the Bear Creek drainage, central Sierra Nevada, CA, provide insight into the relationship between non-planar fault geometry and frictional slip at depth. We investigate natural fault geometries in an otherwise homogeneous and isotropic elastic material with a two-dimensional displacement discontinuity method (DDM). Although the DDM is a powerful tool, frictional contact problems are beyond the scope of the elementary implementation because it allows interpenetration of the crack surfaces. By incorporating a complementarity algorithm, we are able to enforce appropriate contact boundary conditions along the model faults and include variable friction and frictional strength. This tool allows us to model quasi-static slip on non-planar faults and the resulting deformation of the surrounding rock. Both field observations and numerical investigations indicate that sliding along geometrically discontinuous or irregular faults may lead to opening of the fault and the formation of new fractures, affecting permeability in the nearby rock mass and consequently impacting pore fluid pressure. Numerical simulations of natural fault geometries provide local stress fields that are correlated to the style and spatial distribution of off-fault damage. We also show how varying the friction and frictional strength along the model faults affects slip surface behavior and consequently influences the stress distributions in the adjacent material.

  2. Advanced Ground Systems Maintenance Functional Fault Models For Fault Isolation Project

    NASA Technical Reports Server (NTRS)

    Perotti, Jose M. (Compiler)

    2014-01-01

    This project implements functional fault models (FFM) to automate the isolation of failures during ground systems operations. FFMs will also be used to recommend sensor placement to improve fault isolation capabilities. The project enables the delivery of system health advisories to ground system operators.

  3. Surface Morphology of Active Normal Faults in Hard Rock: Implications for the Mechanics of the Asal Rift, Djibouti

    NASA Astrophysics Data System (ADS)

    Pinzuti, P.; Mignan, A.; King, G. C.

    2009-12-01

    Mechanical stretching models have previously been proposed to explain the process of continental break-up through the example of the Asal Rift, Djibouti, one of the few places where the early stages of seafloor spreading can be observed. In these models, deformation is distributed starting at the base of a shallow seismogenic zone, in which sub-vertical normal faults are responsible for subsidence whereas cracks accommodate extension. Alternative models suggest that extension results from localized magma injection, with normal faults accommodating extension and subsidence above the maximum reach of the magma column. In these magmatic intrusion models, normal faults have dips of 45-55° and root into dikes. Using mechanical and kinematic concepts and vertical profiles of normal fault scarps from an Asal Rift field campaign, where normal faults are sub-vertical at the surface, we discuss the creation and evolution of normal faults in massive fractured rocks (basalt). We suggest that the observed fault scarps correspond to sub-vertical en echelon structures and that, at greater depth, these scarps combine and give birth to dipping normal faults. Finally, the geometry of faulting between the Fieale volcano and Lake Asal in the Asal Rift can be simply related to the depth of diking, which in turn can be related to magma supply. This new view supports the magmatic intrusion model of the early stages of continental break-up.

  4. How quickly do earthquakes get locked in the landscape? One year of erosion on El Mayor-Cucapah rupture scarps imaged by repeat terrestrial lidar scans

    NASA Astrophysics Data System (ADS)

    Elliott, A. J.; Oskin, M. E.; Banesh, D.; Gold, P. O.; Hinojosa-Corona, A.; Styron, R. H.; Taylor, M. H.

    2012-12-01

    Differencing repeat terrestrial lidar scans of the 2010 M7.2 El Mayor-Cucapah (EMC) earthquake rupture reveals the rapid onset of surface processes that simultaneously degrade and preserve evidence of coseismic fault rupture in the landscape and paleoseismic record. We surveyed fresh fault rupture two weeks after the 4 April 2010 earthquake, then repeated these surveys one year later. We imaged fault rupture through four substrates varying in degree of consolidation and scarp facing direction, recording modification due to a range of aeolian, fluvial, and hillslope processes. Using lidar-derived DEM rasters to calculate the topographic differences between years results in aliasing errors, because the GPS uncertainty between years (~1.5 cm) exceeds the lidar point spacing (<1.0 cm), shifting the raster sampling of the point cloud. Instead, we coregister each year's scans by iteratively minimizing the horizontal and vertical misfit between neighborhoods of points in each raw point cloud. With the misfit between datasets minimized, we compute the vertical difference between points in each scan within a specified neighborhood. The differencing results reveal two variables controlling the type and extent of erosion: the cohesion of the substrate controls the degree to which hillslope processes affect the scarp, while the scarp facing direction controls whether more effective fluvial erosion can act on the scarp. In poorly consolidated materials, large portions (>50% of the along-strike distance) of the scarp crest are eroded by up to 5 cm through a combination of aeolian abrasion and diffusive hillslope processes, such as rainsplash and mass wasting, while in firmer substrate (i.e., bedrock mantled by fault gouge) there is no detectable hillslope erosion. On the other hand, where small gullies cross downhill-facing scarps (<5% of the along-strike distance), fluvial erosion has caused 5-50 cm of headward scarp retreat in bedrock. 
Thus, although aeolian and hillslope processes operate over a greater along-strike distance, fluvial processes concentrated in pre-existing bedrock gullies transport a far greater volume of material across the scarp. Substrate cohesiveness dictates the degree to which erosive processes act to relax the scarp (e.g., gravels erode more easily than bedrock). However, scarp locations that favor fluvial processes suffer rapid, localized erosion of vertical scarp faces, regardless of substrate. Differential lidar also reveals debris cones formed at the base of the scarp below locations of scarp crest erosion. These indicate the rapid growth of a colluvial wedge. Where a fissure occupies the base of the scarp we observe nearly complete in-filling by silt and sand moved by both mass wasting and fluvial deposition, indicating that fissure fills observed in paleoseismic trenches likely bracket the age of an earthquake to within one year. We find no evidence of differential postseismic tectonic deformation across the fault within the ~100m aperture of our surveys.
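    The neighborhood vertical-differencing step described above can be sketched roughly as follows (a simplified grid-binning stand-in for the authors' point-cloud neighborhood comparison; the cell size and coordinates are hypothetical):

    ```python
    from collections import defaultdict

    def grid_mean_z(points, cell=0.05):
        """points: iterable of (x, y, z) in metres.
        Returns {(i, j): mean z} on a horizontal grid of the given cell size."""
        acc = defaultdict(lambda: [0.0, 0])
        for x, y, z in points:
            key = (int(x // cell), int(y // cell))
            acc[key][0] += z
            acc[key][1] += 1
        return {k: s / n for k, (s, n) in acc.items()}

    def vertical_difference(scan_a, scan_b, cell=0.05):
        """Mean elevation change per cell, for cells present in both scans."""
        za, zb = grid_mean_z(scan_a, cell), grid_mean_z(scan_b, cell)
        return {k: zb[k] - za[k] for k in za.keys() & zb.keys()}

    year1 = [(0.01, 0.01, 1.00), (0.02, 0.03, 1.02)]
    year2 = [(0.01, 0.02, 0.97), (0.03, 0.02, 0.95)]
    diff = vertical_difference(year1, year2)
    print(round(diff[(0, 0)], 3))  # -> -0.05, i.e. ~5 cm of crest lowering
    ```

    Working on the raw point clouds, rather than rasterizing first, is what lets the authors avoid the aliasing errors introduced when sub-centimetre point spacing is resampled onto a grid shifted by centimetre-level GPS uncertainty.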

  5. Unilateral contact induced blade/casing vibratory interactions in impellers: Analysis for rigid casings

    NASA Astrophysics Data System (ADS)

    Batailly, Alain; Meingast, Markus; Legrand, Mathias

    2015-02-01

    This contribution addresses the vibratory analysis of unilateral-contact induced structural interactions between a bladed impeller and its surrounding rigid casing. Such assemblies can be found in helicopter or small aircraft engines for instance and the interactions of interest shall arise due to the always tighter operating clearances between the rotating and stationary components. The investigation is conducted by extending to cyclically symmetric structures an in-house time-marching based tool dedicated to unilateral contact occurrences in turbomachines. The main components of the considered impeller together with the associated assumptions and modelling principles considered in this work are detailed. Typical dynamical features of cyclically symmetric structures, such as the aliasing effect and frequency clustering are explored in this nonlinear framework by means of thorough frequency-domain analyses and harmonic trackings of the numerically predicted impeller displacements. Additional contact maps highlight the existence of critical rotational velocities at which displacements potentially reach high amplitudes due to the synchronization of the bladed assembly vibratory pattern with the shape of the rigid casing. The proposed numerical investigations are also compared to a simpler and (almost) empirical criterion: it is suggested, based on nonlinear numerical simulations with a linear reduced order model of the impeller and a rigid casing, that this criterion may miss important critical velocities emanating from the unfavorable combination of aliasing and contact-induced higher harmonics in the vibratory response of the impeller. Overall, this work suggests a way to enhance guidelines to improve the design of impellers in the context of nonlinear and nonsmooth dynamics.
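    The aliasing effect in cyclically symmetric structures mentioned above can be illustrated with a short sketch (a standard relation for bladed assemblies, not code from the paper): on an N-sector assembly, a spatial harmonic h is observed folded into the nodal-diameter range [0, N/2].

    ```python
    def aliased_nodal_diameter(h, n_sectors):
        """Fold spatial harmonic h of an n_sectors-cyclic structure
        into the observable nodal-diameter range [0, n_sectors // 2]."""
        r = h % n_sectors
        return min(r, n_sectors - r)

    # On a hypothetical 12-sector impeller, a 17th spatial harmonic
    # (e.g. from contact-induced higher harmonics) appears as ND 5:
    print(aliased_nodal_diameter(17, 12))  # -> 5
    ```

    This folding is why contact-induced higher harmonics can combine unfavorably with aliasing: distinct excitation harmonics land on the same observable nodal diameter and can synchronize with the casing shape.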

  6. DAGAN: Deep De-Aliasing Generative Adversarial Networks for Fast Compressed Sensing MRI Reconstruction.

    PubMed

    Yang, Guang; Yu, Simiao; Dong, Hao; Slabaugh, Greg; Dragotti, Pier Luigi; Ye, Xujiong; Liu, Fangde; Arridge, Simon; Keegan, Jennifer; Guo, Yike; Firmin, David

    2018-06-01

    Compressed sensing magnetic resonance imaging (CS-MRI) enables fast acquisition, which is highly desirable for numerous clinical applications. It can not only reduce scanning cost and ease patient burden, but also potentially reduce motion artefacts and the effect of contrast washout, thus yielding better image quality. Unlike parallel-imaging-based fast MRI, which uses multiple coils to simultaneously receive MR signals, CS-MRI breaks the Nyquist-Shannon sampling barrier to reconstruct MR images from far less raw data. This paper provides a deep learning-based strategy for CS-MRI reconstruction, bridging a substantial gap between conventional non-learning methods, which work only on data from a single image, and prior knowledge from large training data sets. In particular, a novel conditional generative adversarial network-based model (DAGAN) is proposed to reconstruct CS-MRI. In our DAGAN architecture, we have designed a refinement learning method to stabilize our U-Net-based generator, which provides an end-to-end network to reduce aliasing artefacts. To better preserve texture and edges in the reconstruction, we have coupled the adversarial loss with an innovative content loss. In addition, we incorporate frequency-domain information to enforce similarity in both the image and frequency domains. We have performed comprehensive comparison studies with both conventional CS-MRI reconstruction methods and newly investigated deep learning approaches. Compared with these methods, our DAGAN method provides superior reconstruction with preserved perceptual image details. Furthermore, each image is reconstructed in about 5 ms, which is suitable for real-time processing.

  7. GRACE AOD1B Product Release 06: Long-Term Consistency and the Treatment of Atmospheric Tides

    NASA Astrophysics Data System (ADS)

    Dobslaw, Henryk; Bergmann-Wolf, Inga; Dill, Robert; Poropat, Lea; Flechtner, Frank

    2017-04-01

    The GRACE satellites orbiting the Earth at very low altitudes are affected by rapid changes in the Earth's gravity field caused by mass redistribution in the atmosphere and oceans. To avoid temporal aliasing of such high-frequency variability into the final monthly-mean gravity fields, those effects are typically modelled during the numerical orbit integration by applying the 6-hourly GRACE Atmosphere and Ocean De-Aliasing Level-1B (AOD1B) a priori model. In preparation for the next GRACE gravity field reprocessing currently performed by the GRACE Science Data System, a new version of AOD1B has been calculated. The data-set is based on 3-hourly surface pressure anomalies from ECMWF that have been mapped to a common reference orography by means of ECMWF's mean sea-level pressure diagnostic. Atmospheric tides, as well as the corresponding oceanic response at the S1, S2, S3, and L2 frequencies and its annual modulations, have been fitted and removed in order to retain the non-tidal variability only. The data-set is expanded into spherical harmonics complete up to degree and order 180. In this contribution, we will demonstrate that AOD1B RL06 is now free from spurious jumps in the time-series related to occasional changes in ECMWF's operational numerical weather prediction system. We will also highlight the rationale for separating tidal signals from the AOD1B coefficients, and will finally discuss the current quality of the AOD1B forecasts that have been introduced very recently for GRACE quick-look and near-real-time applications.

  8. Source model and Coulomb stress change of 2017 Mw 6.5 Philippine (Ormoc) Earthquake revealed by SAR interferometry

    NASA Astrophysics Data System (ADS)

    Tsai, M. C.; Hu, J. C.; Yang, Y. H.; Hashimoto, M.; Aurelio, M.; Su, Z.; Escudero, J. A.

    2017-12-01

    Multi-sight, high-spatial-resolution interferometric SAR data enhance our ability to map detailed coseismic deformation, to estimate fault rupture models, and to infer the Coulomb stress change associated with a large earthquake. Here, we use multi-sight coseismic interferograms acquired by the ALOS-2 and Sentinel-1A satellites to estimate the fault geometry and the slip distribution on the fault plane of the 2017 Mw 6.5 Ormoc earthquake on Leyte Island, Philippines. The best-fitting model predicts that the coseismic rupture occurred along a fault plane with a strike of 325.8° and a dip of 78.5°E. The model indicates that the rupture of the 2017 Ormoc earthquake was dominated by left-lateral slip with minor dip-slip motion, consistent with the left-lateral strike-slip Philippine fault system. The fault tip propagated to the ground surface, and the predicted coseismic slip at the surface is about 1 m, located 6.5 km northeast of Kananga. Significant slip is concentrated on fault patches at depths of 0-8 km over an along-strike distance of 20 km, with slip magnitudes varying from 0.3 m to 2.3 m along the southwest segment of this seismogenic fault. Two minor coseismic fault patches are predicted beneath the Tongonan geothermal field and the creeping segment of the northwest portion of the seismogenic fault, implying that the high geothermal gradient beneath the Tongonan geothermal field could prevent the heated rock mass from coseismic failure. The seismic moment release of our preferred fault model is 7.78×10^18 N·m, equivalent to an Mw 6.6 event. The Coulomb failure stress (CFS) change calculated from the preferred fault model is significantly positive on the northwest segment of the Philippine fault on Leyte Island, which has a coseismic slip deficit and is devoid of aftershocks. Consequently, this segment should be considered to carry an increased risk of future seismic hazard.
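    The Coulomb failure stress change referred to here follows the standard form ΔCFS = Δτ + μ′Δσn. A minimal sketch (the generic formula with an assumed effective friction coefficient, not the authors' implementation):

    ```python
    def coulomb_stress_change(d_shear, d_normal, mu_eff=0.4):
        """Coulomb failure stress change on a receiver fault.
        d_shear:  shear stress change in the slip direction (Pa);
        d_normal: normal stress change, tension (unclamping) positive (Pa);
        mu_eff:   assumed effective friction coefficient."""
        return d_shear + mu_eff * d_normal

    # A receiver fault loaded by +0.1 MPa of shear and 0.05 MPa of unclamping:
    print(coulomb_stress_change(1e5, 5e4))  # -> 120000.0 (i.e. +0.12 MPa)
    ```

    A positive ΔCFS, as predicted here for the northwest Philippine fault segment, moves the receiver fault toward failure; negative values inhibit it.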

  9. Distributed deformation and block rotation in 3D

    NASA Technical Reports Server (NTRS)

    Scotti, Oona; Nur, Amos; Estevez, Raul

    1990-01-01

    The authors address how block rotation and complex distributed deformation in the Earth's shallow crust may be explained within a stationary regional stress field. Distributed deformation is characterized by domains of sub-parallel fault-bounded blocks. In response to the contemporaneous activity of neighboring domains, some domains rotate, as suggested by both structural and paleomagnetic evidence. Rotations within domains are achieved through the contemporaneous slip and rotation of the faults and of the blocks they bound. Thus, in regions of distributed deformation, faults must remain active in spite of their poor orientation in the stress field. The authors developed a model that tracks the orientation of blocks and their bounding faults during rotation in a 3D stress field. In the model, the effective stress magnitudes of the principal stresses (σ1, σ2, and σ3) are controlled by the orientation of fault sets in each domain. Therefore, adjacent fault sets with differing orientations may be active and may display differing faulting styles, and a given set of faults may change its style of motion as it rotates within a stationary stress regime. The style of faulting predicted by the model depends on a dimensionless parameter φ = (σ2 − σ3)/(σ1 − σ3). Thus, the authors present a model for complex distributed deformation and complex offset history requiring neither geographical nor temporal changes in the stress regime. They apply the model to the Western Transverse Range domain of southern California. There, it is mechanically feasible for blocks and faults to have experienced up to 75 degrees of clockwise rotation in a φ = 0.1 strike-slip stress regime. The results of the model suggest that this domain may first have accommodated deformation along preexisting NNE-SSW faults, reactivated as normal faults. After rotation, these same faults became strike-slip in nature.
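    The stress-shape parameter used in the abstract follows the standard definition φ = (σ2 − σ3)/(σ1 − σ3) with σ1 ≥ σ2 ≥ σ3. A minimal sketch with hypothetical stress values:

    ```python
    def stress_shape(s1, s2, s3):
        """Dimensionless stress-shape parameter phi in [0, 1].
        Requires the principal stress ordering s1 >= s2 >= s3."""
        assert s1 >= s2 >= s3
        return (s2 - s3) / (s1 - s3)

    # phi near 0 means s2 ~ s3; phi near 1 means s2 ~ s1.
    # Hypothetical magnitudes (MPa) giving the paper's phi = 0.1 regime:
    print(stress_shape(100.0, 37.0, 30.0))  # -> 0.1
    ```

    Low φ, as in the authors' 0.1 strike-slip regime, means the intermediate and least principal stresses are nearly equal, so modest fault rotation can swap their roles and change the predicted faulting style.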

  10. Using a coupled hydro-mechanical fault model to better understand the risk of induced seismicity in deep geothermal projects

    NASA Astrophysics Data System (ADS)

    Abe, Steffen; Krieger, Lars; Deckert, Hagen

    2017-04-01

    The changes in fluid pressure related to the injection of fluids into the deep underground, for example during geothermal energy production, can potentially reactivate faults and thus cause induced seismic events. Therefore, an important aspect in the planning and operation of such projects, in particular in densely populated regions such as the Upper Rhine Graben in Germany, is the estimation and mitigation of the induced seismic risk. The occurrence of induced seismicity depends on a combination of hydraulic properties of the underground, mechanical and geometric parameters of the fault, and the fluid injection regime. In this study we therefore employ a numerical model to investigate the impact of fluid pressure changes on the dynamics of faults and the resulting seismicity. The approach combines a model of the fluid flow around a geothermal well, based on a 3D finite-difference discretisation of the Darcy equation, with a 2D block-slider model of a fault. The models are coupled so that the evolving pore pressure at the relevant locations of the hydraulic model is taken into account in the calculation of the stick-slip dynamics of the fault model. Our modelling approach uses two subsequent modelling steps. Initially, the fault model is run by applying a fixed deformation rate for a given duration, without the influence of the hydraulic model, in order to generate the background event statistics. Initial tests have shown that the response of the fault to hydraulic loading depends on the timing of the fluid injection relative to the seismic cycle of the fault. Therefore, multiple snapshots of the fault's stress and displacement state are generated from the fault model. In a second step, these snapshots are then used as initial conditions in a set of coupled hydro-mechanical model runs including the effects of the fluid injection. 
This set of models is then compared with the background event statistics to evaluate the change in the probability of seismic events. The event data such as location, magnitude, and source characteristics can be used as input for numerical wave propagation models. This allows the translation of seismic event statistics generated by the model into ground shaking probabilities.
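    As a toy analogue of the hydraulic half of such a coupled model, here is a 1-D explicit finite-difference sketch of pore-pressure diffusion around an injection node (my sketch under assumed, dimensionless parameters, not the authors' 3-D Darcy model):

    ```python
    def diffuse(p, d_coeff, dx, dt, steps):
        """Explicit finite-difference update of dp/dt = D * d2p/dx2,
        with fixed zero-pressure boundaries. Stable for D*dt/dx**2 <= 0.5."""
        for _ in range(steps):
            nxt = p[:]
            for i in range(1, len(p) - 1):
                nxt[i] = p[i] + d_coeff * dt / dx**2 * (p[i+1] - 2*p[i] + p[i-1])
            p = nxt
        return p

    p = [0.0] * 11
    p[5] = 1.0  # injection-induced pressure pulse at the centre node
    p = diffuse(p, d_coeff=1.0, dx=1.0, dt=0.2, steps=50)
    # The pulse spreads symmetrically and decays toward the held boundaries.
    ```

    In the coupled scheme described above, pressures like these would be sampled at the fault location each time step and fed into the block-slider model's stick-slip criterion.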

  11. Simultaneous Sensor and Process Fault Diagnostics for Propellant Feed System

    NASA Technical Reports Server (NTRS)

    Cao, J.; Kwan, C.; Figueroa, F.; Xu, R.

    2006-01-01

    The main objective of this research is to extract fault features from sensor faults and process faults by using advanced fault detection and isolation (FDI) algorithms. A tank system sharing common characteristics with a NASA testbed at Stennis Space Center was used to verify the proposed algorithms. First, a generic tank system was modeled. Second, a mathematical model suitable for FDI was derived for the tank system. Third, a new and general FDI procedure was designed to distinguish process faults from sensor faults. Extensive simulations clearly demonstrated the advantages of the new design.

  12. Diagnosing a Strong-Fault Model by Conflict and Consistency

    PubMed Central

    Zhou, Gan; Feng, Wenquan

    2018-01-01

    The diagnosis method for a weak-fault model, which specifies only the normal behavior of each component, has evolved over decades. However, many systems now demand a strong-fault model, whose fault modes have specific behaviors as well. Diagnosing a strong-fault model is difficult due to its non-monotonicity. Current diagnosis methods usually employ conflicts to isolate possible faults, and the process can be expedited when some observed output is consistent with the model's prediction, since such consistency indicates probably normal components. This paper addresses the problem of efficiently diagnosing a strong-fault model by proposing a novel logic-based truth maintenance system (LTMS) with two search approaches based on conflict and consistency. First, the original strong-fault model is encoded with Boolean variables and converted into conjunctive normal form (CNF). The proposed LTMS then reasons over the CNF to find multiple minimal conflicts and maximal consistencies when a fault exists. The search approaches efficiently offer the best candidates based on the reasoning result until the diagnosis results are obtained. The completeness, coverage, correctness, and complexity of the proposals are analyzed theoretically to show their strengths and weaknesses. Finally, the proposed approaches are demonstrated by applying them to a real-world domain, the heat control unit of a spacecraft, where they perform significantly better than best-first and conflict-directed A* search methods. PMID:29596302

  13. M ≥ 7.0 earthquake recurrence on the San Andreas fault from a stress renewal model

    USGS Publications Warehouse

    Parsons, Thomas E.

    2006-01-01

     Forecasting M ≥ 7.0 San Andreas fault earthquakes requires an assessment of their expected frequency. I used a three-dimensional finite element model of California to calculate volumetric static stress drops from scenario M ≥ 7.0 earthquakes on three San Andreas fault sections. The ratio of stress drop to tectonic stressing rate derived from geodetic displacements yielded recovery times at points throughout the model volume. Under a renewal model, stress recovery times on ruptured fault planes can be a proxy for earthquake recurrence. I show curves of magnitude versus stress recovery time for three San Andreas fault sections. When stress recovery times were converted to expected M ≥ 7.0 earthquake frequencies, they fit Gutenberg-Richter relationships well matched to observed regional rates of M ≤ 6.0 earthquakes. Thus a stress-balanced model permits large earthquake Gutenberg-Richter behavior on an individual fault segment, though it does not require it. Modeled slip magnitudes and their expected frequencies were consistent with those observed at the Wrightwood paleoseismic site if strict time predictability does not apply to the San Andreas fault.
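    The conversion from an expected event frequency to a recurrence interval via a Gutenberg-Richter relation can be sketched as follows (generic formula; the a- and b-values here are hypothetical, not fitted to the San Andreas data):

    ```python
    def annual_rate(mag, a=4.0, b=1.0):
        """Cumulative annual number of events with magnitude >= mag,
        from the Gutenberg-Richter relation log10 N = a - b*M."""
        return 10 ** (a - b * mag)

    def recurrence_years(mag, a=4.0, b=1.0):
        """Expected recurrence interval for events of magnitude >= mag."""
        return 1.0 / annual_rate(mag, a, b)

    # With these hypothetical a/b values, an M >= 7.0 event recurs
    # roughly every thousand years:
    print(round(recurrence_years(7.0)))  # -> 1000
    ```

    In the paper's scheme, the stress recovery time on a ruptured fault plane plays the role of this recurrence interval, and the consistency check is whether the implied M ≥ 7.0 frequencies extrapolate the observed regional rates of smaller earthquakes.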

  14. A mechanical model of the San Andreas fault and SAFOD Pilot Hole stress measurements

    USGS Publications Warehouse

    Chery, J.; Zoback, M.D.; Hickman, S.

    2004-01-01

    Stress measurements made in the SAFOD pilot hole provide an opportunity to study the relation between crustal stress outside the fault zone and the stress state within it, using an integrated mechanical model of a transform fault loaded in transpression. The results of this modeling indicate that only a fault model in which the effective friction is very low (<0.1) through the seismogenic thickness of the crust is capable of matching stress measurements made both in the far field and in the SAFOD pilot hole. The stress rotation measured with depth in the SAFOD pilot hole (~28°) appears to be a typical feature of a weak fault embedded in a strong crust and a weak upper mantle with laterally variable heat flow, although our best model predicts less rotation (15°) than observed. Stress magnitudes predicted by our model within the fault zone indicate low shear stress on planes parallel to the fault but a very anomalous mean stress, approximately twice the lithostatic stress. Copyright 2004 by the American Geophysical Union.

  15. Predeployment validation of fault-tolerant systems through software-implemented fault insertion

    NASA Technical Reports Server (NTRS)

    Czeck, Edward W.; Siewiorek, Daniel P.; Segall, Zary Z.

    1989-01-01

    The fault injection-based automated testing (FIAT) environment, which can be used to experimentally characterize and evaluate distributed real-time systems under fault-free and faulted conditions, is described. A survey of validation methodologies is presented, and the need for fault insertion within these methodologies is demonstrated. The origins and models of faults, and the motivation for the FIAT concept, are reviewed. FIAT employs a validation methodology that builds confidence in the system by first providing a baseline of fault-free performance data and then characterizing the behavior of the system with faults present. Fault insertion is accomplished through software and allows faults, or the manifestations of faults, to be inserted either by seeding faults into memory or by triggering error detection mechanisms. FIAT is capable of emulating a variety of fault-tolerant strategies and architectures, can monitor system activity, and can automatically orchestrate experiments involving the insertion of faults. A common system interface provides ease of use and decreases experiment development and run time. Fault models chosen for experiments on FIAT have generated system responses that parallel those observed in real systems under faulty conditions. These capabilities are demonstrated by two example experiments, each using a different fault-tolerance strategy.

  16. Knowledge representation requirements for model sharing between model-based reasoning and simulation in process flow domains

    NASA Technical Reports Server (NTRS)

    Throop, David R.

    1992-01-01

    The paper examines the requirements for the reuse of computational models employed in model-based reasoning (MBR) to support automated inference about mechanisms. Areas in which the theory of MBR is not yet completely adequate for using the information that simulations can yield are identified, and recent work in these areas is reviewed. It is argued that using MBR along with simulations forces the use of specific fault models. Fault models are used so that a particular fault can be instantiated into the model and run. This in turn implies that the component specification language needs to be capable of encoding any fault that might need to be sensed or diagnosed. It also means that the simulation code must anticipate all these faults at the component level.

  17. Functional Fault Modeling of a Cryogenic System for Real-Time Fault Detection and Isolation

    NASA Technical Reports Server (NTRS)

    Ferrell, Bob; Lewis, Mark; Perotti, Jose; Oostdyk, Rebecca; Brown, Barbara

    2010-01-01

    The purpose of this paper is to present the model development process used to create a Functional Fault Model (FFM) of a liquid hydrogen (LH2) system that will be used for real-time fault isolation in a Fault Detection, Isolation and Recovery (FDIR) system. The paper explains the steps in the model development process and the data products required at each step, including examples of how the steps were performed for the LH2 system. It also shows the relationship between the FDIR requirements and the steps in the model development process. The paper concludes with a description of a demonstration of the LH2 model developed using the process and future steps for integrating the model in a live operational environment.

  18. Fault Geometry and Slip Distribution at Depth of the 1997 Mw 7.2 Zirkuh Earthquake: Contribution of Near-Field Displacement Data

    NASA Astrophysics Data System (ADS)

    Marchandon, Mathilde; Vergnolle, Mathilde; Sudhaus, Henriette; Cavalié, Olivier

    2018-02-01

    In this study, we reestimate the source model of the 1997 Mw 7.2 Zirkuh earthquake (northeastern Iran) by jointly optimizing intermediate-field Interferometric Synthetic Aperture Radar (InSAR) data and near-field optical correlation data using a two-step fault modeling procedure. First, we estimate the geometry of the multisegmented Abiz fault using a genetic algorithm. Then, we discretize the fault segments into subfaults and invert the data to image the slip distribution on the fault. Our joint-data model, although similar to the InSAR-based model to first order, highlights differences in the fault dip and slip distribution. Our preferred model is ˜80° west dipping in the northern part of the fault and ˜75° east dipping in the southern part, and it shows three disconnected high-slip zones separated by low-slip zones. The low-slip zones are located where the Abiz fault shows geometric complexities and where the aftershocks are located. We interpret this rough slip distribution as three asperities separated by geometrical barriers that impede rupture propagation. Finally, no shallow slip deficit is found for the overall rupture except on the central segment, where it could be due to off-fault deformation in Quaternary deposits.
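The second step of such a procedure, imaging slip on discretized subfaults once the geometry is fixed, is a linear inverse problem. A minimal damped least-squares sketch with a synthetic Green's function matrix (all values hypothetical, not the study's data):

```python
import numpy as np

rng = np.random.default_rng(0)
n_data, n_subfaults = 12, 4

# Hypothetical Green's functions: surface displacement per unit slip on each subfault.
G = rng.normal(size=(n_data, n_subfaults))
true_slip = np.array([0.0, 1.5, 2.0, 0.5])   # metres; two high-slip patches
d = G @ true_slip                            # noise-free synthetic observations

# Damped least squares: minimise ||G m - d||^2 + eps^2 ||m||^2.
eps = 1e-6
m = np.linalg.solve(G.T @ G + eps ** 2 * np.eye(n_subfaults), G.T @ d)
print(np.round(m, 3))                        # recovers the synthetic slip pattern
```

With real, noisy data the damping term (and often a smoothing operator) controls the trade-off between fitting the data and keeping the slip model rough, which is exactly where joint data sets help.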

  19. How does damage affect rupture propagation across a fault stepover?

    NASA Astrophysics Data System (ADS)

    Cooke, M. L.; Savage, H. M.

    2011-12-01

    We investigate the potential for fault damage to influence earthquake rupture at fault step-overs using a mechanical numerical model that explicitly includes the generation of cracks around faults. We compare the off-fault fracture patterns and slip profiles generated along faults with a variety of frictional slip-weakening distances and step-over geometries. Models with greater damage facilitate the transfer of slip to the second fault. Increasing the separation and decreasing the overlap distance reduce the transfer of slip across the step-over. This is consistent with observations of ruptures stopping at step-over separations greater than 4 km (Wesnousky, 2006). In cases of slip transfer, rupture is often passed to the second fault before the damage-zone cracks of the first fault reach the second fault. This implies that stresses from the damage fracture tips are transmitted elastically to the second fault to trigger the onset of slip along it. Consequently, the growth of damage facilitates the transfer of rupture from one fault to another across the step-over. In addition, the rupture propagates faster along the damage-producing fault than along the rougher fault that does not produce damage. While this result seems counter to our understanding that damage slows rupture propagation, which is documented in our models with pre-existing damage, these model results suggest an additional process: the slip along the newly created damage may unclamp portions of the fault ahead of the rupture and promote faster rupture. We simulate the M7.1 Hector Mine earthquake and compare the generated fracture patterns to maps of surface damage. Because, along with the detailed damage pattern, we also know the stress drop during the earthquake, we can begin to constrain parameters such as the slip-weakening distance along portions of the faults that ruptured in the Hector Mine earthquake.

  20. A dynamic integrated fault diagnosis method for power transformers.

    PubMed

    Gao, Wensheng; Bai, Cuifen; Liu, Tong

    2015-01-01

    In order to diagnose transformer fault efficiently and accurately, a dynamic integrated fault diagnosis method based on Bayesian network is proposed in this paper. First, an integrated fault diagnosis model is established based on the causal relationship among abnormal working conditions, failure modes, and failure symptoms of transformers, aimed at obtaining the most possible failure mode. And then considering the evidence input into the diagnosis model is gradually acquired and the fault diagnosis process in reality is multistep, a dynamic fault diagnosis mechanism is proposed based on the integrated fault diagnosis model. Different from the existing one-step diagnosis mechanism, it includes a multistep evidence-selection process, which gives the most effective diagnostic test to be performed in next step. Therefore, it can reduce unnecessary diagnostic tests and improve the accuracy and efficiency of diagnosis. Finally, the dynamic integrated fault diagnosis method is applied to actual cases, and the validity of this method is verified.
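The multistep evidence-selection idea, choosing the diagnostic test expected to be most informative next, can be sketched with a two-mode Bayesian update. The failure modes, tests, and probabilities below are hypothetical, not taken from the paper:

```python
import math

# Hypothetical failure modes and candidate diagnostic tests for a transformer.
prior = {"winding_fault": 0.5, "insulation_aging": 0.5}
# P(test result positive | failure mode)
likelihood = {
    "oil_gas_test":  {"winding_fault": 0.9, "insulation_aging": 0.2},
    "furfural_test": {"winding_fault": 0.5, "insulation_aging": 0.5},  # uninformative
}

def entropy(p):
    return -sum(v * math.log2(v) for v in p.values() if v > 0)

def posterior(prior, lik, positive):
    num = {m: prior[m] * (lik[m] if positive else 1 - lik[m]) for m in prior}
    z = sum(num.values())
    return {m: v / z for m, v in num.items()}

def expected_posterior_entropy(prior, lik):
    p_pos = sum(prior[m] * lik[m] for m in prior)
    return (p_pos * entropy(posterior(prior, lik, True))
            + (1 - p_pos) * entropy(posterior(prior, lik, False)))

# Select the test whose outcome is expected to reduce uncertainty the most.
best = min(likelihood, key=lambda t: expected_posterior_entropy(prior, likelihood[t]))
print(best)  # oil_gas_test
```

Repeating this select-observe-update loop until one failure mode dominates is the essence of a multistep diagnosis mechanism, as opposed to running every test up front.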

  2. Morphologic dating of fault scarps using airborne laser swath mapping (ALSM) data

    USGS Publications Warehouse

    Hilley, G.E.; Delong, S.; Prentice, C.; Blisniuk, K.; Arrowsmith, J.R.

    2010-01-01

    Models of fault scarp morphology have been previously used to infer the relative age of different fault scarps in a fault zone using labor-intensive ground surveying. We present a method for automatically extracting scarp morphologic ages within high-resolution digital topography. Scarp degradation is modeled as a diffusive mass transport process in the across-scarp direction. The second derivative of the modeled degraded fault scarp was normalized to yield the best-fitting (in a least-squared sense) scarp height at each point, and the signal-to-noise ratio identified those areas containing scarp-like topography. We applied this method to three areas along the San Andreas Fault and found correspondence between the mapped geometry of the fault and that extracted by our analysis. This suggests that the spatial distribution of scarp ages may be revealed by such an analysis, allowing the recent temporal development of a fault zone to be imaged along its length.
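Diffusive scarp degradation admits the classical error-function profile z(x) = (H/2) erf(x / sqrt(4κt)), so the morphologic age κt can be fit by least squares, in the spirit of the method described. A minimal sketch with a synthetic profile (all parameters hypothetical):

```python
import math
import numpy as np

def scarp_profile(x, half_height, kappa_t):
    """Diffusion-degraded scarp: z(x) = (H/2) * erf(x / sqrt(4 * kappa * t))."""
    return np.array([half_height * math.erf(xi / math.sqrt(4.0 * kappa_t)) for xi in x])

x = np.linspace(-20.0, 20.0, 81)                           # metres across the scarp
observed = scarp_profile(x, half_height=1.5, kappa_t=9.0)  # synthetic "ALSM" profile

# Grid search for the best-fitting (least-squares) morphologic age kappa*t (m^2).
candidates = np.arange(1.0, 20.0, 0.5)
misfit = [np.sum((scarp_profile(x, 1.5, kt) - observed) ** 2) for kt in candidates]
best_kt = candidates[int(np.argmin(misfit))]
print(best_kt)  # 9.0
```

Dividing the fitted κt by an independently calibrated diffusivity κ converts the morphologic age to a calendar age; applying the fit in a sliding window along strike yields the along-fault age distribution the abstract describes.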

  3. A fault isolation method based on the incidence matrix of an augmented system

    NASA Astrophysics Data System (ADS)

    Chen, Changxiong; Chen, Liping; Ding, Jianwan; Wu, Yizhong

    2018-03-01

    In this paper, a new approach is proposed for isolating faults and quickly identifying the redundant sensors of a system. By introducing the fault signal as an additional state variable, an augmented system model is constructed from the original system model, the fault signals, and the sensor measurement equations. The structural properties of the augmented system model are provided. From the viewpoint of evaluating fault variables, the calculating correlations of the fault variables in the system can be found, which imply the fault isolation properties of the system. Compared with previous isolation approaches, the highlights of the new approach are that it can quickly find the faults that can be isolated using exclusive residuals and, at the same time, can identify the redundant sensors in the system, which is useful for the design of a diagnosis system. The simulation of a four-tank system is reported to validate the proposed method.
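The fault isolation properties derived from such an incidence analysis are often summarized in a fault signature matrix: two faults are mutually isolable if their residual signatures differ. A minimal sketch (the matrix and fault names are hypothetical):

```python
import numpy as np

# Hypothetical residual/fault incidence (signature) matrix:
# rows = residuals, columns = faults; 1 means the fault affects that residual.
S = np.array([
    [1, 0, 1],
    [0, 1, 1],
    [1, 1, 0],
])
faults = ["f_leak", "f_pump", "f_sensor"]

def isolable_pairs(S, names):
    """Two faults are isolable from each other iff their signature columns differ."""
    pairs = []
    for i in range(len(names)):
        for j in range(i + 1, len(names)):
            if not np.array_equal(S[:, i], S[:, j]):
                pairs.append((names[i], names[j]))
    return pairs

print(isolable_pairs(S, faults))  # all three pairs are mutually isolable
```

Columns that coincide flag faults that no residual set can distinguish, which is also where adding or removing (redundant) sensors changes the isolability of the system.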

  4. Fault latency in the memory - An experimental study on VAX 11/780

    NASA Technical Reports Server (NTRS)

    Chillarege, Ram; Iyer, Ravishankar K.

    1986-01-01

    Fault latency is the time between the physical occurrence of a fault and its corruption of data, causing an error. This time is difficult to measure because the moment of occurrence of a fault and the exact moment of generation of an error are not known. This paper describes an experiment to accurately study fault latency in the memory subsystem. The experiment employs real memory data from a VAX 11/780 at the University of Illinois. Fault latency distributions are generated for s-a-0 (stuck-at-0) and s-a-1 (stuck-at-1) permanent fault models. Results show that the mean fault latency of an s-a-0 fault is nearly 5 times that of an s-a-1 fault. Large variations in fault latency are found for different regions of memory. An analysis-of-variance model quantifying the relative influence of various workload measures on the evaluated latency is also given.
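The latency asymmetry can be illustrated with a toy read trace: an s-a-0 fault produces an error only when the true bit is 1, so it stays latent longer if the bit is usually 0. A sketch (the trace is synthetic, not the VAX data):

```python
def fault_latency(trace, fault_time, stuck_value):
    """Cycles from fault occurrence until a read first sees a corrupted value.

    trace: list of (time, true_bit) reads of the faulted memory bit; an error
    manifests only when the true bit differs from the stuck-at value."""
    for t, bit in trace:
        if t >= fault_time and bit != stuck_value:
            return t - fault_time
    return None                              # fault never activated (stays latent)

# Synthetic read trace: the bit is 1 only every 13th cycle, so a stuck-at-0
# fault hides longer than a stuck-at-1 fault, mirroring the paper's asymmetry.
trace = [(t, 1 if t % 13 == 0 else 0) for t in range(1000)]
print(fault_latency(trace, fault_time=100, stuck_value=0))  # 4 (first 1 at t = 104)
print(fault_latency(trace, fault_time=100, stuck_value=1))  # 0 (t = 100 reads a 0)
```

Since most memory words are zero-dominated in practice, this simple mechanism is one way to rationalize the observed 5x mean-latency ratio between s-a-0 and s-a-1 faults.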

  5. A Discrete Element Modeling Approach to Exploring the Transition Between Fault-related Folding Styles

    NASA Astrophysics Data System (ADS)

    Hughes, A. N.; Benesh, N. P.; Alt, R. C., II; Shaw, J. H.

    2011-12-01

    Contractional fault-related folds form as stratigraphic layers of rock are deformed due to displacement on an underlying fault. Specifically, fault-bend folds form as rock strata are displaced over non-planar faults, and fault-propagation folds form at the tips of faults as they propagate upward through sedimentary layers. Both types of structures are commonly observed in fold and thrust belts and passive margin settings throughout the world. Fault-bend and fault-propagation folds are often seen in close proximity to each other, and kinematic analysis of some fault-related folds suggests that they have undergone a transition in structural style from fault-bend to fault-propagation folding during their deformational history. Because of the similarity in conditions in which both fault-bend and fault-propagation folds are found, the circumstances that promote the formation of one of these structural styles over the other is not immediately evident. In an effort to better understand this issue, we have investigated the role of mechanical and geometric factors in the transition between fault-bend folding and fault-propagation folding using a series of models developed with the discrete element method (DEM). The DEM models employ an aggregate of circular, frictional disks that incorporate bonding at particle contacts to represent the numerical stratigraphy. A vertical wall moving at a fixed velocity drives displacement of the hanging-wall section along a pre-defined fault ramp and detachment. We utilize this setup to study the transition between fault-bend and fault-propagation folding by varying mechanical strength, stratigraphic layering, fault geometries, and boundary conditions of the model. In most circumstances, displacement of the hanging-wall leads to the development of an emergent fold as the hanging-wall material passes across the fault bend. 
However, in other cases, an emergent fault propagates upward through the sedimentary section, associated with the development of a steep, narrow front limb characteristic of fault-propagation folding. We find that the boundary conditions imposed on the far wall of the model have the strongest influence on structural style, while other factors, such as fault dip and mechanical strengths, play secondary roles. By testing a range of values for each of the parameters, we identify the range of values under which the transition occurs. Additionally, we find that the transition between fault-bend and fault-propagation folding is gradual, with structures in the transitional regime showing evidence of each structural style during a portion of their history. The primary role that boundary conditions play in determining fault-related folding style implies that the growth of natural structures may be affected by the emergence of adjacent structures or by distal variations in detachment strength. We explore these relationships using natural examples from various fold-and-thrust belts.

  6. Achieving Agreement in Three Rounds With Bounded-Byzantine Faults

    NASA Technical Reports Server (NTRS)

    Malekpour, Mahyar R.

    2015-01-01

    A three-round algorithm is presented that guarantees agreement in a system of K ≥ 3F + 1 nodes, provided each faulty node induces no more than F faults and each good node experiences no more than F faults, where F is the maximum number of simultaneous faults in the network. The algorithm is based on the Oral Messages algorithm of Lamport et al., is scalable with respect to the number of nodes in the system, and applies equally to the traditional node-fault model and the link-fault model. We also present a mechanical verification of the algorithm, focusing on verifying the correctness of a bounded model of the algorithm as well as confirming claims of determinism.
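One round of Oral-Messages-style voting can be sketched as a majority over relayed values: with K = 4 and F = 1, the good lieutenants agree despite one liar. This is a toy illustration of the classic OM(1) exchange, not the paper's three-round algorithm:

```python
from collections import Counter

def majority(values):
    """Deterministic majority vote; ties fall back to a fixed default value."""
    top = Counter(values).most_common()
    if len(top) > 1 and top[0][1] == top[1][1]:
        return 0
    return top[0][0]

# OM(1)-style exchange: K = 4 nodes (commander + 3 lieutenants), F = 1 fault.
# Each good lieutenant votes over the values all lieutenants claim to have
# received from the commander; lieutenant L2 is Byzantine and relays lies.
relayed = {
    "L1": {"L1": 1, "L2": 0, "L3": 1},   # L2 lied to L1 about the commander's 1
    "L2": None,                           # faulty node: behaviour arbitrary
    "L3": {"L1": 1, "L2": 0, "L3": 1},
}
decisions = {name: majority(list(msgs.values()))
             for name, msgs in relayed.items() if msgs is not None}
print(decisions)  # {'L1': 1, 'L3': 1}: the good nodes agree on the commander's value
```

The K ≥ 3F + 1 bound is exactly what makes the honest values outnumber the lies in each such vote.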

  7. Surveillance system and method having an operating mode partitioned fault classification model

    NASA Technical Reports Server (NTRS)

    Bickford, Randall L. (Inventor)

    2005-01-01

    A system and method which partitions a parameter estimation model, a fault detection model, and a fault classification model for a process surveillance scheme into two or more coordinated submodels together providing improved diagnostic decision making for at least one determined operating mode of an asset.
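Partitioning a fault classification model by operating mode can be as simple as dispatching to a per-mode submodel, so the same residual is judged against mode-appropriate limits. A minimal sketch (the modes and thresholds are hypothetical, not the patent's):

```python
# Hypothetical per-mode submodels: the classification limit applied to a
# residual depends on the asset's current operating mode.
submodels = {
    "startup":  {"limit": 8.0},
    "steady":   {"limit": 3.0},
    "shutdown": {"limit": 6.0},
}

def classify(mode: str, residual: float) -> str:
    limit = submodels[mode]["limit"]
    return "fault" if abs(residual) > limit else "normal"

print(classify("startup", 5.0))  # normal: wide limits during transients
print(classify("steady", 5.0))   # fault: the same residual is anomalous at steady state
```

Coordinating the submodels this way avoids the false alarms a single monolithic model would raise during transient modes.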

  8. A Dynamic Finite Element Method for Simulating the Physics of Fault Systems

    NASA Astrophysics Data System (ADS)

    Saez, E.; Mora, P.; Gross, L.; Weatherley, D.

    2004-12-01

    We introduce a dynamic finite element method using a novel high-level scripting language to describe the physical equations, boundary conditions, and time integration scheme. The library we use is the parallel Finley library: a finite element kernel library designed for solving large-scale problems. It is incorporated as a differential equation solver into a more general library called escript, based on the scripting language Python. This library has been developed to facilitate the rapid development of 3D parallel codes, and is optimised for the Australian Computational Earth Systems Simulator Major National Research Facility (ACcESS MNRF) supercomputer, a 208-processor SGI Altix with a peak performance of 1.1 TFlops. Using the scripting approach we obtain a parallel FE code able to take advantage of the computational efficiency of the Altix 3700. We consider faults as material discontinuities (the displacement, velocity, and acceleration fields are discontinuous at the fault) with elastic behavior. Stress continuity at the fault is achieved naturally through the expression of the fault interactions in the weak formulation. The elasticity problem is solved explicitly in time using a Verlet scheme. Finally, we specify a suitable frictional constitutive relation and numerical scheme to simulate fault behaviour. Our model is based on previous work on modelling fault friction and multi-fault systems using lattice solid-like models. We adapt the 2D model for simulating the dynamics of parallel fault systems, previously described for the lattice solid model, to the finite element method. The approach uses a frictional relation along faults that is slip and slip-rate dependent, and the numerical integration approach introduced by Mora and Place in the lattice solid model. To illustrate the new finite element model, single- and multi-fault simulation examples are presented.
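Explicit time stepping of an elastic problem can be illustrated on a 1-D mass-spring chain using the velocity Verlet scheme, a toy analogue of the dynamic finite element update (all parameters hypothetical):

```python
import numpy as np

# Velocity Verlet time stepping for a 1-D elastic chain with fixed ends.
n, k, m, dt = 50, 1.0, 1.0, 0.05          # nodes, stiffness, mass, time step
u = np.zeros(n)                            # displacements
v = np.zeros(n)                            # velocities
u[n // 2] = 1.0                            # initial displacement pulse

def accel(u):
    a = np.zeros_like(u)
    a[1:-1] = (k / m) * (u[2:] - 2.0 * u[1:-1] + u[:-2])  # elastic restoring force
    return a                               # end nodes held fixed (zero acceleration)

def energy(u, v):
    return 0.5 * m * (v @ v) + 0.5 * k * np.sum(np.diff(u) ** 2)

a = accel(u)
e0 = energy(u, v)
for _ in range(400):
    u += v * dt + 0.5 * a * dt ** 2        # position update
    a_new = accel(u)
    v += 0.5 * (a + a_new) * dt            # velocity update with averaged acceleration
    a = a_new
e1 = energy(u, v)
print(abs(e1 - e0) / e0 < 0.05)  # True: the symplectic scheme nearly conserves energy
```

The explicit update needs no global solve per step, which is what makes this family of schemes attractive for large parallel fault-system simulations, at the price of a stability limit on dt.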

  9. Two-Dimensional Boundary Element Method Application for Surface Deformation Modeling around Lembang and Cimandiri Fault, West Java

    NASA Astrophysics Data System (ADS)

    Mahya, M. J.; Sanny, T. A.

    2017-04-01

    The Lembang and Cimandiri faults are active faults in West Java that threaten people living nearby with earthquake and surface deformation hazards. To determine the deformation, GPS measurements around the Lembang and Cimandiri faults were conducted, and the data were processed to obtain the horizontal velocity at each GPS station, by the Graduate Research of Earthquake and Active Tectonics (GREAT) Department of Geodesy and Geomatics Engineering Study Program, ITB. The purpose of this study is to model the displacement distribution, as a deformation parameter, in the area along the Lembang and Cimandiri faults using a 2-dimensional boundary element method (BEM), taking as input the horizontal velocities corrected for the effect of Sunda plate horizontal movement. The assumptions used in the modeling stage are that deformation occurs in a homogeneous and isotropic medium and that the stresses acting on the faults are elastostatic. The results of the modeling show that the Lembang fault has a left-lateral slip component and is divided into two segments. A lineament oriented in a southwest-northeast direction is observed near Tangkuban Perahu Mountain, separating the eastern and western segments of the Lembang fault. The displacement pattern of the Cimandiri fault shows that it is divided into an eastern segment with a right-lateral slip component and a western segment with a left-lateral slip component, separated by a northwest-southeast oriented lineament at the western part of Gede Pangrango Mountain. The displacement between the Lembang and Cimandiri faults is nearly zero, indicating that the two faults are not connected to each other and that this area is relatively safe for infrastructure development.
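A classic analytic building block for 2-D deformation modeling of strike-slip faults, and a useful sanity check on BEM results, is the screw-dislocation (Savage-Burford) profile v(x) = (s/π) arctan(x/D). A sketch with hypothetical parameters, not the study's values:

```python
import math

def interseismic_velocity(x_km, slip_rate, locking_depth_km):
    """Savage-Burford screw-dislocation profile across a strike-slip fault:
    v(x) = (s / pi) * arctan(x / D), with deep slip rate s and locking depth D."""
    return (slip_rate / math.pi) * math.atan(x_km / locking_depth_km)

# Hypothetical parameters: 4 mm/yr deep slip rate, 15 km locking depth.
for x in (-50.0, -15.0, 0.0, 15.0, 50.0):
    print(f"{x:6.1f} km: {interseismic_velocity(x, 4.0, 15.0):+.2f} mm/yr")
```

The arctangent shape is what fault-parallel GPS velocities across a locked strike-slip fault should roughly follow; systematic departures from it are one signal of segmentation of the kind described for both faults.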

  10. Comparison of fault-related folding algorithms to restore a fold-and-thrust-belt

    NASA Astrophysics Data System (ADS)

    Brandes, Christian; Tanner, David

    2017-04-01

    Fault-related folding means the contemporaneous evolution of folds as a consequence of fault movement. It is a common deformation process in the upper crust that occurs worldwide in accretionary wedges, fold-and-thrust belts, and intra-plate settings, in either strike-slip, compressional, or extensional regimes. Over the last 30 years different algorithms have been developed to simulate the kinematic evolution of fault-related folds. All these models of fault-related folding include similar simplifications and limitations and use the same kinematic behaviour throughout the model (Brandes & Tanner, 2014). We used a natural example of fault-related folding from the Limón fold-and-thrust belt in eastern Costa Rica to test two different algorithms and to compare the resulting geometries. A thrust fault and its hanging-wall anticline were restored using both the trishear method (Allmendinger, 1998; Zehnder & Allmendinger, 2000) and the fault-parallel flow approach (Ziesch et al. 2014); both methods are widely used in academia and industry. The resulting hanging-wall folds above the thrust fault are restored in substantially different fashions. This is largely a function of the propagation-to-slip ratio of the thrust, which controls the geometry of the related anticline. Understanding the controlling factors for anticline evolution is important for the evaluation of potential hydrocarbon reservoirs and the characterization of fault processes. References: Allmendinger, R.W., 1998. Inverse and forward numerical modeling of trishear fault propagation folds. Tectonics, 17, 640-656. Brandes, C., Tanner, D.C. 2014. Fault-related folding: a review of kinematic models and their application. Earth Science Reviews, 138, 352-370. Zehnder, A.T., Allmendinger, R.W., 2000. Velocity field for the trishear model. Journal of Structural Geology, 22, 1009-1014. Ziesch, J., Tanner, D.C., Krawczyk, C.M. 2014. 
Strain associated with the fault-parallel flow algorithm during kinematic fault displacement. Mathematical Geosciences, 46(1), 59-73.
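The trishear restoration discussed above is built on a velocity field in a triangular zone ahead of the fault tip. A sketch of the linear field of Zehnder & Allmendinger (2000), with vy obtained from incompressibility; the behind-tip handling is a simplification of my own:

```python
import math

def trishear_velocity(x, y, v0=1.0, phi_deg=30.0):
    """Linear trishear velocity field (after Zehnder & Allmendinger, 2000):
    fault tip at the origin, hanging wall moving at v0 parallel to x,
    apical half-angle phi."""
    m = math.tan(math.radians(phi_deg))
    if x <= 0.0:
        return (v0, 0.0) if y > 0.0 else (0.0, 0.0)   # behind the tip (simplified)
    if y >= m * x:
        return (v0, 0.0)                              # hanging wall: rigid translation
    if y <= -m * x:
        return (0.0, 0.0)                             # footwall: fixed
    vx = 0.5 * v0 * (y / (m * x) + 1.0)               # linear interpolation across zone
    vy = 0.25 * v0 * m * ((y / (m * x)) ** 2 - 1.0)   # from incompressibility
    return (vx, vy)

print(trishear_velocity(10.0, 0.0))  # zone centre: vx = v0/2, downward vy
```

Advecting marker horizons through this field (forward) or its reverse (for restoration) produces the characteristic tight trishear front limbs; the propagation-to-slip ratio enters by moving the tip between slip increments, which is exactly the parameter the abstract identifies as controlling anticline geometry.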

  11. Deformation pattern during normal faulting: A sequential limit analysis

    NASA Astrophysics Data System (ADS)

    Yuan, X. P.; Maillot, B.; Leroy, Y. M.

    2017-02-01

    We model in 2-D the formation and development of half-graben faults above a low-angle normal detachment fault. The model, based on a "sequential limit analysis" accounting for mechanical equilibrium and energy dissipation, simulates the incremental deformation of a frictional, cohesive, and fluid-saturated rock wedge above the detachment. Two modes of deformation, gravitational collapse and tectonic collapse, are revealed, which compare well with the results of critical Coulomb wedge theory. We additionally show that the fault and the axial surface of the half-graben rotate as topographic subsidence increases. This progressive rotation causes some of the footwall material to be sheared and to enter the hanging wall, creating a specific region called the foot-to-hanging wall (FHW). The model allows us to introduce additional effects, such as weakening of the faults once they have slipped and sedimentation in their hanging wall. These processes are shown to control the size of the FHW region and the number of fault-bounded blocks it eventually contains. Fault weakening tends to make fault rotation more discontinuous, and this results in the FHW zone containing multiple blocks of intact material separated by faults. By compensating the topographic subsidence of the half-graben, sedimentation tends to slow the fault rotation, and this results in a reduction of the size of the FHW zone and of its number of fault-bounded blocks. We apply the new approach to reproduce the faults observed along a seismic line in the Southern Jeanne d'Arc Basin, Grand Banks, offshore Newfoundland, where a single block exists in the hanging wall of the principal fault. The model explains this situation well, provided that sedimentation was slow in the Lower Jurassic and its rate increased over time as the main detachment fault grew.

  12. Study on the evaluation method for fault displacement based on characterized source model

    NASA Astrophysics Data System (ADS)

    Tonagi, M.; Takahama, T.; Matsumoto, Y.; Inoue, N.; Irikura, K.; Dalguer, L. A.

    2016-12-01

    IAEA Specific Safety Guide (SSG) 9 states that probabilistic methods for evaluating fault displacement should be used if no sufficient basis is provided to decide conclusively, using the deterministic methodology, that the fault is not capable. In addition, the International Seismic Safety Centre has compiled an ANNEX on realizing the seismic hazard assessments for nuclear facilities described in SSG-9, which shows the utility of deterministic and probabilistic evaluation methods for fault displacement. In Japan, it is required that important nuclear facilities be established on ground where fault displacement will not arise when earthquakes occur in the future. Under these circumstances, and based on these requirements, we need to develop evaluation methods for fault displacement to enhance the safety of nuclear facilities. We are studying deterministic and probabilistic methods with tentative analyses using observed records, such as surface fault displacements and near-fault strong ground motions, of inland crustal earthquakes in which fault displacement arose. In this study, we introduce the concept of evaluation methods for fault displacement. We then show parts of the tentative analysis results for the deterministic method as follows: (1) For the 1999 Chi-Chi earthquake, referring to the slip distribution estimated by waveform inversion, we construct a characterized source model (Miyake et al., 2003, BSSA) which can explain the observed near-fault broadband strong ground motions. (2) Referring to the characterized source model constructed in (1), we study an evaluation method for surface fault displacement using a hybrid method, which combines the particle method and the distinct element method. Finally, we suggest a deterministic method to evaluate fault displacement based on a characterized source model. This research was part of the 2015 research project `Development of evaluating method for fault displacement` by the Secretariat of the Nuclear Regulation Authority (S/NRA), Japan.

  13. Detection and diagnosis of bearing and cutting tool faults using hidden Markov models

    NASA Astrophysics Data System (ADS)

    Boutros, Tony; Liang, Ming

    2011-08-01

    Over the last few decades, research on new fault detection and diagnosis techniques for machining processes and rotating machinery has attracted increasing interest worldwide. This development was mainly stimulated by the rapid advance of industrial technologies and the increasing complexity of machining and machinery systems. In this study, the discrete hidden Markov model (HMM) is applied to detect and diagnose mechanical faults. The technique is tested and validated successfully using two scenarios: tool wear/fracture and bearing faults. In the first case the model correctly detected the state of the tool (i.e., sharp, worn, or broken), whereas in the second application the model classified the severity of the fault seeded in two different engine bearings. The success rate obtained in our tests for fault severity classification was above 95%. In addition to the fault severity, a location index was developed to determine the fault location. This index has been applied to determine the location (inner race, ball, or outer race) of a bearing fault with an average success rate of 96%. The training time required to develop the HMMs was less than 5 s in both monitoring cases.
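Classifying a machine condition with discrete HMMs amounts to scoring an observation sequence under each trained model with the forward algorithm and picking the most likely. A minimal sketch with hypothetical two-state models (the symbols stand in for quantised vibration features):

```python
import numpy as np

def log_likelihood(obs, pi, A, B):
    """Scaled forward algorithm for a discrete HMM: log P(obs | model)."""
    alpha = pi * B[:, obs[0]]
    ll = np.log(alpha.sum())
    alpha /= alpha.sum()
    for o in obs[1:]:
        alpha = (alpha @ A) * B[:, o]       # propagate, then weight by emission
        ll += np.log(alpha.sum())
        alpha /= alpha.sum()                # rescale to avoid underflow
    return ll

# Two hypothetical condition models: "healthy" emits mostly symbol 0,
# "faulty" emits mostly symbol 1.
pi = np.array([0.5, 0.5])
A = np.array([[0.9, 0.1], [0.1, 0.9]])
models = {
    "healthy": np.array([[0.9, 0.1], [0.8, 0.2]]),
    "faulty":  np.array([[0.2, 0.8], [0.1, 0.9]]),
}

obs = [1, 1, 0, 1, 1, 1]                    # mostly high-energy symbols
best = max(models, key=lambda name: log_likelihood(obs, pi, A, models[name]))
print(best)  # faulty
```

Training one HMM per condition (sharp/worn/broken, or per fault severity) and taking the argmax over log-likelihoods is the standard recipe this study's classifier follows.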

  14. Thin‐ or thick‐skinned faulting in the Yakima fold and thrust belt (WA)? Constraints from kinematic modeling of the saddle mountains anticline

    USGS Publications Warehouse

    Casale, Gabriele; Pratt, Thomas L.

    2015-01-01

    The Yakima fold and thrust belt (YFTB) deforms the Columbia River Basalt Group flows of Washington State. The YFTB fault geometries and slip rates are crucial parameters for seismic-hazard assessments of nearby dams and nuclear facilities, yet there are competing models for the subsurface fault geometry involving shallowly rooted versus deeply rooted fault systems. The YFTB is also thought to be analogous to the evenly spaced wrinkle ridges found on other terrestrial planets. Using seismic reflection data, borehole logs, and surface geologic data, we tested two proposed kinematic end-member thick- and thin-skinned fault models beneath the Saddle Mountains anticline of the YFTB. The observed subsurface geometry can be produced by 600–800 m of heave along a single listric reverse fault or by ∼3.5 km of slip along two superposed low-angle thrust faults. Both models require decollement slip between 7 and 9 km depth, resulting in greater fault areas than sometimes assumed in hazard assessments. Both models also require initial slip much earlier than previously thought and may provide insight into the subsurface geometry of analogous wrinkle ridges observed on other planets.

  15. Fault Detection for Automotive Shock Absorber

    NASA Astrophysics Data System (ADS)

    Hernandez-Alcantara, Diana; Morales-Menendez, Ruben; Amezquita-Brooks, Luis

    2015-11-01

    Fault detection for automotive semi-active shock absorbers is a challenge due to the non-linear dynamics and the strong influence of disturbances such as the road profile. The first obstacle for this task is the modeling of the fault, which has been shown to be multiplicative in nature, whereas many of the most widespread fault detection schemes consider additive faults. Two model-based fault detection algorithms for semi-active shock absorbers are compared: an observer-based approach and a parameter identification approach. The performance of these schemes is validated and compared using a commercial vehicle model that was experimentally validated. Early results show that the parameter identification approach is more accurate, whereas the observer-based approach is less sensitive to parametric uncertainty.
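An observer-based residual generator of the kind compared here can be sketched on a first-order damper model: the residual stays at zero for the nominal plant and grows under a multiplicative damping fault. All parameters are hypothetical, not the paper's vehicle model:

```python
# Observer-based residual generation for a hypothetical first-order damper model
# x' = -a*x + b*u, y = x; a multiplicative fault changes the damping coefficient a.
a, b, L, dt = 2.0, 1.0, 5.0, 0.01          # nominal model, observer gain, time step

def simulate(a_true, steps=2000):
    x = x_hat = 0.0
    max_residual = 0.0
    for kstep in range(steps):
        u = 1.0 if (kstep // 200) % 2 == 0 else -1.0   # square-wave excitation
        r = x - x_hat                       # residual: measured minus estimated output
        max_residual = max(max_residual, abs(r))
        x += dt * (-a_true * x + b * u)                 # plant (possibly faulty)
        x_hat += dt * (-a * x_hat + b * u + L * r)      # Luenberger observer
    return max_residual

print(simulate(a_true=2.0) < 1e-9)   # True: healthy plant, residual stays at zero
print(simulate(a_true=1.0) > 0.05)   # True: halved damping leaves a clear residual
```

Note that the observer's correction term partially absorbs the fault, which is one reason a parameter identification scheme, estimating a directly, can localize a multiplicative fault more accurately.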

  16. Ductile bookshelf faulting: A new kinematic model for Cenozoic deformation in northern Tibet

    NASA Astrophysics Data System (ADS)

    Zuza, A. V.; Yin, A.

    2013-12-01

    It has long been recognized that the most dominant features on the northern Tibetan Plateau are the >1000 km left-slip strike-slip faults (e.g., the Altyn Tagh, Kunlun, and Haiyuan faults). Early workers used the presence of these faults, especially the Kunlun and Haiyuan faults, as evidence for eastward lateral extrusion of the plateau, but their low documented offsets (hundreds of kilometers or less) cannot account for the 2500 km of convergence between India and Asia. Instead, these faults may result from north-south right-lateral simple shear due to the northward indentation of India, which leads to the clockwise rotation of the strike-slip faults and left-lateral slip (i.e., bookshelf faulting). In this view, deformation is still localized on discrete fault planes, and 'microplates' or blocks rotate and/or translate with little internal deformation. Because significant internal deformation occurs across northern Tibet within strike-slip-bounded domains, there is need for a coherent model that describes all of the deformational features. We also note the following: (1) geologic offsets and Quaternary slip rates of both the Kunlun and Haiyuan faults vary along strike and appear to diminish to the east; (2) the faults appear to be kinematically linked with thrust belts (e.g., Qilian Shan, Liupan Shan, Longmen Shan, and Qimen Tagh) and extensional zones (e.g., Shanxi, Yinchuan, and Qinling grabens); and (3) there are temporal relationships between the major deformation zones and the strike-slip faults (e.g., simultaneous enhanced deformation and offset in the Qilian Shan, Liupan Shan, and Haiyuan fault at 8 Ma). We propose a new kinematic model to describe the active deformation in northern Tibet: a ductile bookshelf-faulting model. In this model, right-lateral simple shear leads to clockwise vertical-axis rotation of the Qaidam and Qilian blocks and left-slip faulting. 
This motion creates regions of compression and extension, depending on the local boundary conditions (e.g., rigid Tarim vs. eastern China moving eastward relative to Eurasia), which results in the development of thrust and extensional belts. These zones heterogeneously deform the wall rock of the major strike-slip faults, causing the faults to stretch (an idea described by Means, 1989, Geology). This effect is further enhanced by differential fault rotation, leading to more slip in the west, where the effect of India's indentation is more pronounced, than in the east. To investigate the feasibility of this model, we have examined geologic offsets, Quaternary fault slip rates, and GPS velocities, both from the existing literature and from our own observations. We compare offsets with the estimated shortening and extensional strain in the wall rocks of the strike-slip faults. For example, if this model is valid, slip on the eastern segment of the Haiyuan fault (i.e., ~25 km) should be compatible with shortening in the Liupan Shan and extension in the Yinchuan graben. We also present simple analogue experiments to document the strain accumulated in bookshelf fault systems under different initial and boundary conditions (e.g., rigid vs. free vs. moving boundaries, heterogeneous or homogeneous materials, variable strain rates). Comparing these experimentally derived strain distributions with those observed within the plateau can help elucidate which factors dominantly control regional deformation.

  17. Model-based diagnosis through Structural Analysis and Causal Computation for automotive Polymer Electrolyte Membrane Fuel Cell systems

    NASA Astrophysics Data System (ADS)

    Polverino, Pierpaolo; Frisk, Erik; Jung, Daniel; Krysander, Mattias; Pianese, Cesare

    2017-07-01

The present paper proposes an advanced approach to fault detection and isolation for Polymer Electrolyte Membrane Fuel Cell (PEMFC) systems through a model-based diagnostic algorithm. The algorithm is developed upon a lumped-parameter model simulating a whole PEMFC system oriented towards automotive applications. This model is inspired by other models available in the literature, with particular attention to stack thermal dynamics and water management. The developed model is analysed by means of Structural Analysis to identify the correlations among the involved physical variables, the defined equations, and a set of faults which may occur in the system (related both to auxiliary component malfunctions and to stack degradation phenomena). Residual generators are designed by means of Causal Computation analysis, and the maximum theoretical fault isolability achievable with a minimal number of installed sensors is investigated. The results prove the capability of the algorithm to theoretically detect and isolate almost all faults using only stack voltage and temperature sensors, a significant advantage from an industrial point of view. The effective fault isolability is demonstrated through fault simulations at a specific fault magnitude with an advanced residual evaluation technique that considers quantitative residual deviations from normal conditions and achieves univocal fault isolation.
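The isolation step described above can be sketched as a boolean fault-signature match: each residual generator is sensitive to a known subset of faults, and a fault is isolated when the set of triggered residuals exactly matches its column of the signature matrix. The matrix, fault names, and thresholds below are invented for illustration, not taken from the paper.

```python
import numpy as np

# Hypothetical fault signature matrix: rows are residual generators,
# columns are faults; True means the residual reacts to that fault.
SIGNATURE = np.array([
    [True,  False, True ],   # r1
    [False, True,  True ],   # r2
    [True,  True,  False],   # r3
])
FAULTS = ["f1", "f2", "f3"]

def isolate(residuals, thresholds):
    """Return the faults whose signature column matches the set of
    triggered residuals exactly (univocal isolation if one match)."""
    triggered = np.abs(residuals) > thresholds
    return [f for j, f in enumerate(FAULTS)
            if np.array_equal(SIGNATURE[:, j], triggered)]

# r1 and r3 exceed their thresholds -> the pattern matches fault f1 only.
print(isolate(np.array([0.9, 0.01, 1.2]), np.full(3, 0.5)))
```

In practice the thresholds come from the residual evaluation step; here they simply binarize the residuals before the signature comparison.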

  18. Fault geometry and slip distribution of the 2008 Mw 7.9 Wenchuan, China earthquake, inferred from GPS and InSAR measurements

    NASA Astrophysics Data System (ADS)

    Wan, Yongge; Shen, Zheng-Kang; Bürgmann, Roland; Sun, Jianbao; Wang, Min

    2017-02-01

We revisit the problem of coseismic rupture of the 2008 Mw7.9 Wenchuan earthquake. Precise determination of the fault structure and slip distribution provides critical information about the mechanical behaviour of the fault system and earthquake rupture. We use all the geodetic data available, craft a more realistic Earth structure and fault model compared to previous studies, and employ a nonlinear inversion scheme to optimally solve for the fault geometry and slip distribution. Compared to a homogeneous elastic half-space model and laterally uniform layered models, adopting separate layered elastic structure models on both sides of the Beichuan fault significantly improved data fitting. Our results reveal that: (1) The Beichuan fault is listric in shape, with near surface fault dip angles increasing from ~36° at the southwest end to ~83° at the northeast end of the rupture. (2) The fault rupture style changes from predominantly thrust at the southwest end to dextral at the northeast end of the fault rupture. (3) Fault slip peaks near the surface for most parts of the fault, with ~8.4 m thrust and ~5 m dextral slip near Hongkou and ~6 m thrust and ~8.4 m dextral slip near Beichuan, respectively. (4) The peak slips are located around fault geometric complexities, suggesting that earthquake style and rupture propagation were determined by fault zone geometric barriers. Such barriers exist primarily along restraining left stepping discontinuities of the dextral-compressional fault system. (5) The seismic moment released on the fault above 20 km depth is 8.2×10^21 N m, corresponding to an Mw7.9 event. The seismic moments released on the local slip concentrations are equivalent to events of Mw7.5 at Yingxiu-Hongkou, Mw7.3 at Beichuan-Pingtong, Mw7.2 near Qingping, Mw7.1 near Qingchuan, and Mw6.7 near Nanba, respectively. 
(6) The fault geometry and kinematics are consistent with a model in which crustal deformation at the eastern margin of the Tibetan plateau is decoupled by differential motion across a decollement in the mid crust, above which deformation is dominated by brittle reverse faulting and below which deformation occurs by viscous horizontal shortening and vertical thickening.

  19. Quantifying Vertical Exhumation in Intracontinental Strike-Slip Faults: the Garlock fault zone, southern California

    NASA Astrophysics Data System (ADS)

    Chinn, L.; Blythe, A. E.; Fendick, A.

    2012-12-01

New apatite fission-track ages show varying rates of vertical exhumation at the eastern terminus of the Garlock fault zone. The Garlock fault zone is a 260-km-long, east-northeast-striking strike-slip fault with as much as 64 km of sinistral offset. The Garlock fault zone terminates in the east in the Avawatz Mountains, at the intersection with the dextral Southern Death Valley fault zone. Although motion along the Garlock fault west of the Avawatz Mountains is considered purely strike-slip, uplift and exhumation of bedrock in the Avawatz Mountains south of the Garlock fault, as recently as 5 Ma, indicate that transpression plays an important role at this location, perhaps related to a restraining bend where the fault wraps around and terminates southeastward along the Avawatz Mountains. In this study we complement existing thermochronometric ages from the Avawatz core with new low-temperature fission-track ages from samples collected within the adjacent Garlock and Southern Death Valley fault zones. These thermochronometric data indicate that vertical exhumation rates vary within the fault zone. Two Miocene ages (10.2 (+5.0/-3.4) Ma and 9.0 (+2.2/-1.8) Ma) indicate at least ~3.3 km of vertical exhumation at ~0.35 mm/yr, assuming a 30°C/km geothermal gradient, along a 2 km transect parallel and adjacent to the Mule Spring fault. An older Eocene age (42.9 (+8.7/-7.3) Ma) indicates ~3.3 km of vertical exhumation at ~0.08 mm/yr. These results are consistent with published exhumation rates of 0.35 mm/yr between ~7 and ~4 Ma and 0.13 mm/yr between ~15 and ~9 Ma, as determined by apatite fission-track and U-Th/He thermochronometry in the hanging wall of the Mule Spring fault. 
Similar exhumation rates on both sides of the Mule Spring fault support three separate models: 1) thrusting is no longer active along the Mule Spring fault; 2) faulting is dominantly strike-slip at the sample locations; or 3) Miocene-to-present uplift and exhumation is below the detection level of apatite fission-track thermochronometry. In model #1, slip on the Mule Spring fault may have propagated towards the range front and may be responsible for the fault-propagation folding currently observed along the northern branch of the Southern Death Valley fault zone. In model #2, faulting east of the sample locations may historically have included a component of thrust faulting. In model #3, total offset along the Mule Spring fault from the Miocene to the present could be further constrained. Anticipated fission-track and U-Th/He data will help distinguish between these alternative models.
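The exhumation arithmetic in this abstract can be checked quickly: the depth to the apatite fission-track closure isotherm follows from a closure temperature and the stated 30°C/km gradient, and the mean rate from dividing by the cooling age. The closure (~110°C) and surface (~10°C) temperatures below are assumed values, not stated in the abstract.

```python
# Back-of-envelope check of the abstract's exhumation numbers.
# Closure and surface temperatures are assumptions for illustration.

def closure_depth_km(closure_c=110.0, surface_c=10.0, grad_c_per_km=30.0):
    """Depth (km) to the closure isotherm for a linear geotherm."""
    return (closure_c - surface_c) / grad_c_per_km

def rate_mm_per_yr(depth_km, age_ma):
    """Mean exhumation rate; km/Myr is numerically equal to mm/yr."""
    return depth_km / age_ma

depth = closure_depth_km()
print(round(depth, 1))                        # 3.3 km
print(round(rate_mm_per_yr(depth, 9.6), 2))   # 0.35 (mean Miocene age)
print(round(rate_mm_per_yr(depth, 42.9), 2))  # 0.08 (Eocene age)
```

Under these assumptions the ~3.3 km depth and the ~0.35 and ~0.08 mm/yr rates quoted above are mutually consistent.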

  20. Investigation of growth fault bend folding using discrete element modeling: Implications for signatures of active folding above blind thrust faults

    NASA Astrophysics Data System (ADS)

    Benesh, N. P.; Plesch, A.; Shaw, J. H.; Frost, E. K.

    2007-03-01

    Using the discrete element modeling method, we examine the two-dimensional nature of fold development above an anticlinal bend in a blind thrust fault. Our models were composed of numerical disks bonded together to form pregrowth strata overlying a fixed fault surface. This pregrowth package was then driven along the fault surface at a fixed velocity using a vertical backstop. Additionally, new particles were generated and deposited onto the pregrowth strata at a fixed rate to produce sequential growth layers. Models with and without mechanical layering were used, and the process of folding was analyzed in comparison with fold geometries predicted by kinematic fault bend folding as well as those observed in natural settings. Our results show that parallel fault bend folding behavior holds to first order in these models; however, a significant decrease in limb dip is noted for younger growth layers in all models. On the basis of comparisons to natural examples, we believe this deviation from kinematic fault bend folding to be a realistic feature of fold development resulting from an axial zone of finite width produced by materials with inherent mechanical strength. These results have important implications for how growth fold structures are used to constrain slip and paleoearthquake ages above blind thrust faults. Most notably, deformation localized about axial surfaces and structural relief across the fold limb seem to be the most robust observations that can readily constrain fault activity and slip. In contrast, fold limb width and shallow growth layer dips appear more variable and dependent on mechanical properties of the strata.

  1. Learning in the model space for cognitive fault diagnosis.

    PubMed

    Chen, Huanhuan; Tino, Peter; Rodan, Ali; Yao, Xin

    2014-01-01

    The emergence of large sensor networks has facilitated the collection of large amounts of real-time data to monitor and control complex engineering systems. However, in many cases the collected data may be incomplete or inconsistent, while the underlying environment may be time-varying or unformulated. In this paper, we develop an innovative cognitive fault diagnosis framework that tackles the above challenges. This framework investigates fault diagnosis in the model space instead of the signal space. Learning in the model space is implemented by fitting a series of models using a series of signal segments selected with a sliding window. By investigating the learning techniques in the fitted model space, faulty models can be discriminated from healthy models using a one-class learning algorithm. The framework enables us to construct a fault library when unknown faults occur, which can be regarded as cognitive fault isolation. This paper also theoretically investigates how to measure the pairwise distance between two models in the model space and incorporates the model distance into the learning algorithm in the model space. The results on three benchmark applications and one simulated model for the Barcelona water distribution network confirm the effectiveness of the proposed framework.
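A minimal sketch of the model-space idea, assuming a toy AR(1) model family: each sliding window of a signal is mapped to its fitted model parameters, and a signal whose model falls far outside the cloud of healthy models is flagged. This stands in for the one-class learning algorithm and model-distance measure of the paper; the window length, model order, and distance rule are illustrative.

```python
import numpy as np

# Toy "learning in the model space": fit an AR(1) model per window,
# then detect faults by distance from the healthy model cloud.

def ar1_fit(window):
    """Return [a, b] of the least-squares fit y_t = a*y_{t-1} + b."""
    x, y = window[:-1], window[1:]
    A = np.column_stack([x, np.ones_like(x)])
    coef, *_ = np.linalg.lstsq(A, y, rcond=None)
    return coef

def model_space(signal, win=100):
    """Map a signal to the sequence of window-wise model parameters."""
    return np.array([ar1_fit(signal[i:i + win])
                     for i in range(0, len(signal) - win + 1, win)])

rng = np.random.default_rng(0)
healthy = model_space(rng.normal(size=3000))    # uncorrelated sensor noise
center = healthy.mean(axis=0)
radius = np.linalg.norm(healthy - center, axis=1).max()

faulty = ar1_fit(np.sin(0.1 * np.arange(200)))  # strongly correlated signal
is_faulty = np.linalg.norm(faulty - center) > radius
print(is_faulty)
```

The Euclidean distance between parameter vectors is a crude stand-in for the pairwise model distance the paper develops, but it illustrates why discrimination happens in the model space rather than the signal space.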

  2. The Role of Coseismic Coulomb Stress Changes in Shaping the Hard Link Between Normal Fault Segments

    NASA Astrophysics Data System (ADS)

    Hodge, M.; Fagereng, Å.; Biggs, J.

    2018-01-01

    The mechanism and evolution of fault linkage is important in the growth and development of large faults. Here we investigate the role of coseismic stress changes in shaping the hard links between parallel normal fault segments (or faults), by comparing numerical models of the Coulomb stress change from simulated earthquakes on two en echelon fault segments to natural observations of hard-linked fault geometry. We consider three simplified linking fault geometries: (1) fault bend, (2) breached relay ramp, and (3) strike-slip transform fault. We consider scenarios where either one or both segments rupture and vary the distance between segment tips. Fault bends and breached relay ramps are favored where segments underlap or when the strike-perpendicular distance between overlapping segments is less than 20% of their total length, matching all 14 documented examples. Transform fault linkage geometries are preferred when overlapping segments are laterally offset at larger distances. Few transform faults exist in continental extensional settings, and our model suggests that propagating faults or fault segments may first link through fault bends or breached ramps before reaching sufficient overlap for a transform fault to develop. Our results suggest that Coulomb stresses arising from multisegment ruptures or repeated earthquakes are consistent with natural observations of the geometry of hard links between parallel normal fault segments.
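The Coulomb failure stress change underlying such models is conventionally written ΔCFS = Δτ + μ′Δσn, with Δτ the shear stress change in the slip direction, Δσn the fault-normal stress change (unclamping positive), and μ′ an effective friction coefficient. A minimal sketch with illustrative stress values (not from this study):

```python
# Coulomb failure stress change on a receiver fault:
#   dCFS = d_tau + mu_eff * d_sigma_n
# Positive dCFS moves the fault toward failure. Values illustrative.

def coulomb_stress_change(d_tau_mpa, d_sigma_n_mpa, mu_eff=0.4):
    """dCFS in MPa; d_sigma_n is tension/unclamping positive."""
    return d_tau_mpa + mu_eff * d_sigma_n_mpa

# Increased shear stress plus unclamping promotes failure:
print(round(coulomb_stress_change(0.1, 0.05), 3))   # 0.12 MPa
# Clamping can offset a shear stress increase:
print(round(coulomb_stress_change(0.1, -0.5), 3))   # -0.1 MPa
```

Mapping dCFS over candidate linking-fault geometries is the kind of calculation the study repeats for each rupture scenario and segment separation.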

  3. A New Kinematic Model for Polymodal Faulting: Implications for Fault Connectivity

    NASA Astrophysics Data System (ADS)

    Healy, D.; Rizzo, R. E.

    2015-12-01

    Conjugate, or bimodal, fault patterns dominate the geological literature on shear failure. Based on Anderson's (1905) application of the Mohr-Coulomb failure criterion, these patterns have been interpreted from all tectonic regimes, including normal, strike-slip and thrust (reverse) faulting. However, a fundamental limitation of the Mohr-Coulomb failure criterion - and others that assume faults form parallel to the intermediate principal stress - is that only plane strain can result from slip on the conjugate faults. However, deformation in the Earth is widely accepted as being three-dimensional, with truly triaxial stresses and strains. Polymodal faulting, with three or more sets of faults forming and slipping simultaneously, can generate three-dimensional strains from truly triaxial stresses. Laboratory experiments and outcrop studies have verified the occurrence of the polymodal fault patterns in nature. The connectivity of polymodal fault networks differs significantly from conjugate fault networks, and this presents challenges to our understanding of faulting and an opportunity to improve our understanding of seismic hazards and fluid flow. Polymodal fault patterns will, in general, have more connected nodes in 2D (and more branch lines in 3D) than comparable conjugate (bimodal) patterns. The anisotropy of permeability is therefore expected to be very different in rocks with polymodal fault patterns in comparison to conjugate fault patterns, and this has implications for the development of hydrocarbon reservoirs, the genesis of ore deposits and the management of aquifers. In this contribution, I assess the published evidence and models for polymodal faulting before presenting a novel kinematic model for general triaxial strain in the brittle field.

  4. Stresses, deformation, and seismic events on scaled experimental faults with heterogeneous fault segments and comparison to numerical modeling

    NASA Astrophysics Data System (ADS)

    Buijze, Loes; Guo, Yanhuang; Niemeijer, André R.; Ma, Shengli; Spiers, Christopher J.

    2017-04-01

Faults in the upper crust cross-cut many different lithologies, causing the composition of the fault rocks to vary. Each fault rock segment may have specific mechanical properties, e.g. there may be stronger and weaker segments, and segments prone to unstable slip or to creep. This leads to heterogeneous deformation and stresses along such faults, and to a heterogeneous distribution of seismic events. We address the influence of fault variability on stress, deformation, and seismicity using a combination of scaled laboratory faulting and numerical modeling. A vertical fault was created along the diagonal of a 30 x 20 x 5 cm block of PMMA, along which a 2 mm thick gouge layer was deposited. Gouge materials with different characteristics were used to create various segments along the fault: quartz (average strength, stable sliding), kaolinite (weak, stable sliding), and gypsum (average strength, unstable sliding). The sample assembly was placed in a horizontal biaxial deformation apparatus, and shear displacement was enforced along the vertical fault. Multiple observations were made: 1) acoustic emissions were continuously recorded at 3 MHz to observe the occurrence of stick-slips (micro-seismicity), 2) photo-elastic effects (indicative of the differential stress) were recorded in the transparent set of PMMA wall rocks using a high-speed camera, and 3) particle tracking was conducted on a speckle-painted set of PMMA wall rocks to study the deformation in the wall rocks flanking the fault. All three observation methods show how the heterogeneous fault gouge exerts a strong control on the fault behavior. For example, a strong, unstable gypsum segment flanked by two weaker kaolinite segments develops strong stress concentrations near the edges of the strong segment, while most acoustic emissions are located at those edges. 
The measurements of differential stress, strain, and acoustic emissions provide a strong means to compare the scaled experiment with modeling results. In a finite-element model we reproduce the laboratory experiments, compare the modeled stresses and strains to the observations, and compare the nucleation of seismic instability with the location of acoustic emissions. The model aids in understanding how the stresses and strains may vary as a result of fault heterogeneity, but also as a result of the boundary conditions inherent to a laboratory setup. The scaled experimental setup and modeling results also provide a means to explain and compare with observations made at a larger scale, for example geodetic and seismological measurements along crustal-scale faults.

  5. A-Priori Rupture Models for Northern California Type-A Faults

    USGS Publications Warehouse

    Wills, Chris J.; Weldon, Ray J.; Field, Edward H.

    2008-01-01

This appendix describes how a-priori rupture models were developed for the northern California Type-A faults. As described in the main body of this report, and in Appendix G, "a-priori" models represent an initial estimate of the rate of single- and multi-segment surface ruptures on each fault. Whether or not a given model is moment balanced (i.e., satisfies section slip-rate data) depends on assumptions made regarding the average slip on each segment in each rupture (which in turn depends on the chosen magnitude-area relationship). Therefore, for a given set of assumptions, or branch on the logic tree, the methodology of the present Working Group (WGCEP-2007) is to find a final model that is as close as possible to the a-priori model, in the least-squares sense, but that also satisfies slip-rate and perhaps other data. This is analogous to the WGCEP-2002 approach of effectively voting on the relative rate of each possible rupture, and then finding the closest moment-balanced model (under a more limiting set of assumptions than adopted by the present WGCEP, as described in detail in Appendix G). The 2002 Working Group Report (WGCEP, 2003, referred to here as WGCEP-2002) created segmented earthquake rupture forecast models for all faults in the region, including some that had been designated as Type B faults in the 1996 NSHMP, and one that had not previously been considered. The 2002 National Seismic Hazard Maps used the values from WGCEP-2002 for all the faults in the region, essentially treating all the listed faults as Type A faults. As discussed in Appendix A, the current WGCEP found that there are a number of faults with little or no data on slip per event or dates of previous earthquakes. 
As a result, the WGCEP recommends that faults with minimal available earthquake recurrence data (the Greenville, Mount Diablo, San Gregorio, Monte Vista-Shannon, and Concord-Green Valley faults) be modeled as Type B faults, to be consistent with similarly poorly known faults statewide. The modified segmented models discussed here therefore only concern the San Andreas, Hayward-Rodgers Creek, and Calaveras faults. Given the extensive level of effort by the recent Bay-Area WGCEP-2002, our approach has been to adopt their final average models as our preferred a-priori models. We have modified the WGCEP-2002 models where necessary to match data that were not available to, or not used by, that WGCEP, and where the models needed by WGCEP-2007 for a uniform statewide model require different assumptions and/or logic-tree branch weights. In these cases we have made what are usually slight modifications to the WGCEP-2002 model. This appendix presents the minor changes needed to accommodate updated information and model construction. We do not attempt to reproduce here the extensive documentation of data, model parameters, and earthquake probabilities in the WGCEP-2002 report.
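The step of finding the model closest to the a-priori model in the least-squares sense, while satisfying slip-rate data, can be sketched as an equality-constrained least-squares problem solved via its KKT system. The matrices and rates below are invented toy values, not WGCEP data.

```python
import numpy as np

# Find rupture rates f closest to a-priori rates f0 while exactly
# satisfying linear slip-rate constraints G f = s (toy example).

def closest_model(f0, G, s):
    """Minimize ||f - f0||^2 subject to G f = s via the KKT system."""
    n, m = len(f0), len(s)
    K = np.block([[np.eye(n), G.T],
                  [G, np.zeros((m, m))]])
    rhs = np.concatenate([f0, s])
    return np.linalg.solve(K, rhs)[:n]    # drop Lagrange multipliers

f0 = np.array([1.0, 2.0, 3.0])     # a-priori rupture rates
G = np.array([[1.0, 1.0, 1.0]])    # toy constraint: rates sum to slip rate
s = np.array([9.0])
print(closest_model(f0, G, s))     # [2. 3. 4.]: each rate shifted by +1
```

The real problem additionally enforces non-negative rates and weights the misfit terms, but the "closest moment-balanced model" idea is the same.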

  6. Statistical tests of simple earthquake cycle models

    NASA Astrophysics Data System (ADS)

    DeVries, Phoebe M. R.; Evans, Eileen L.

    2016-12-01

A central goal of observing and modeling the earthquake cycle is to forecast when a particular fault may generate an earthquake: a fault late in its earthquake cycle may be more likely to generate an earthquake than a fault early in its earthquake cycle. Models that can explain geodetic observations throughout the entire earthquake cycle may be required to gain a more complete understanding of relevant physics and phenomenology. Previous efforts to develop unified earthquake models for strike-slip faults have largely focused on explaining both preseismic and postseismic geodetic observations available across a few faults in California, Turkey, and Tibet. An alternative approach leverages the global distribution of geodetic and geologic slip rate estimates on strike-slip faults worldwide. Here we use the Kolmogorov-Smirnov test for similarity of distributions to infer, in a statistically rigorous manner, viscoelastic earthquake cycle models that are inconsistent with 15 sets of observations across major strike-slip faults. We reject a large subset of two-layer models incorporating Burgers rheologies at a significance level of α = 0.05 (those with long-term Maxwell viscosities ηM < 4.0 × 10^19 Pa s and ηM > 4.6 × 10^20 Pa s) but cannot reject models on the basis of transient Kelvin viscosity ηK. Finally, we examine the implications of these results for the predicted earthquake cycle timing of the 15 faults considered and compare these predictions to the geologic and historical record.
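The two-sample Kolmogorov-Smirnov statistic used here is simply the maximum vertical distance between two empirical CDFs, compared against an asymptotic critical value at level α. A pure-Python sketch with toy samples:

```python
import math

# Two-sample Kolmogorov-Smirnov statistic and asymptotic critical value.

def ks_statistic(a, b):
    """Max distance between the empirical CDFs of samples a and b."""
    a, b = sorted(a), sorted(b)
    pts = sorted(set(a) | set(b))
    cdf = lambda s, x: sum(v <= x for v in s) / len(s)
    return max(abs(cdf(a, x) - cdf(b, x)) for x in pts)

def critical_value(n, m, alpha=0.05):
    """Asymptotic two-sample rejection threshold at level alpha."""
    c = math.sqrt(-0.5 * math.log(alpha / 2.0))
    return c * math.sqrt((n + m) / (n * m))

print(ks_statistic([1, 2, 3, 4], [1, 2, 3, 4]))      # 0.0: identical samples
print(ks_statistic([1, 2, 3, 4], [11, 12, 13, 14]))  # 1.0: disjoint samples
print(round(critical_value(15, 15), 2))              # threshold for n = m = 15
```

A model is rejected when the statistic between predicted and observed distributions exceeds the critical value, which is how candidate viscosity ranges are excluded here.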

  7. Implementation of a model based fault detection and diagnosis for actuation faults of the Space Shuttle main engine

    NASA Technical Reports Server (NTRS)

    Duyar, A.; Guo, T.-H.; Merrill, W.; Musgrave, J.

    1992-01-01

    In a previous study, Guo, Merrill and Duyar, 1990, reported a conceptual development of a fault detection and diagnosis system for actuation faults of the space shuttle main engine. This study, which is a continuation of the previous work, implements the developed fault detection and diagnosis scheme for the real time actuation fault diagnosis of the space shuttle main engine. The scheme will be used as an integral part of an intelligent control system demonstration experiment at NASA Lewis. The diagnosis system utilizes a model based method with real time identification and hypothesis testing for actuation, sensor, and performance degradation faults.

  8. 3D Model of the Neal Hot Springs Geothermal Area

    DOE Data Explorer

    Faulds, James E.

    2013-12-31

    The Neal Hot Springs geothermal system lies in a left-step in a north-striking, west-dipping normal fault system, consisting of the Neal Fault to the south and the Sugarloaf Butte Fault to the north (Edwards, 2013). The Neal Hot Springs 3D geologic model consists of 104 faults and 13 stratigraphic units. The stratigraphy is sub-horizontal to dipping <10 degrees and there is no predominant dip-direction. Geothermal production is exclusively from the Neal Fault south of, and within the step-over, while geothermal injection is into both the Neal Fault to the south of the step-over and faults within the step-over.

  9. Application Research of Fault Tree Analysis in Grid Communication System Corrective Maintenance

    NASA Astrophysics Data System (ADS)

    Wang, Jian; Yang, Zhenwei; Kang, Mei

    2018-01-01

This paper applies the fault tree analysis method to the corrective maintenance of grid communication systems. Through the establishment of a fault tree model of a typical system, combined with engineering experience, fault tree analysis theory is used to analyze the model, covering the structure function, probability importance, and related measures. The results show that fault tree analysis enables fast fault localization and effective repair of the system. The analysis method also has guiding significance for reliability research on, and upgrading of, the system.
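The quantities mentioned can be illustrated on a toy fault tree: the structure function combines independent basic-event probabilities through AND/OR gates to give the top-event probability, and the Birnbaum (probability) importance of an event is the sensitivity of that probability to the event. The tree and probabilities below are invented for illustration.

```python
# Toy fault tree: TOP = (e1 AND e2) OR e3, independent basic events.

def or_gate(*p):   # at least one input fails
    q = 1.0
    for x in p:
        q *= (1.0 - x)
    return 1.0 - q

def and_gate(*p):  # all inputs fail
    q = 1.0
    for x in p:
        q *= x
    return q

def top(p):
    """Top-event probability from the structure function."""
    return or_gate(and_gate(p["e1"], p["e2"]), p["e3"])

def birnbaum(p, event):
    """Birnbaum importance: P(top | event) - P(top | no event)."""
    hi, lo = dict(p, **{event: 1.0}), dict(p, **{event: 0.0})
    return top(hi) - top(lo)

p = {"e1": 0.1, "e2": 0.2, "e3": 0.05}
print(round(top(p), 4))             # 0.069
print(round(birnbaum(p, "e3"), 4))  # 0.98: e3 dominates, check it first
```

Ranking events by importance is what lets maintenance crews localize the most likely fault sources quickly.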

  10. Receptoral and Neural Aliasing.

    DTIC Science & Technology

    1993-01-30

standard psychophysical methods. Stereoscopic capability makes VisionWorks ideal for investigating and simulating strabismus and amblyopia, or developing... amblyopia. Electrophysiological and psychophysical responses to spatio-temporal and novel stimuli for investigation of visual field deficits

  11. On the implementation of faults in finite-element glacial isostatic adjustment models

    NASA Astrophysics Data System (ADS)

    Steffen, Rebekka; Wu, Patrick; Steffen, Holger; Eaton, David W.

    2014-01-01

    Stresses induced in the crust and mantle by continental-scale ice sheets during glaciation have triggered earthquakes along pre-existing faults, commencing near the end of the deglaciation. In order to get a better understanding of the relationship between glacial loading/unloading and fault movement due to the spatio-temporal evolution of stresses, a commonly used model for glacial isostatic adjustment (GIA) is extended by including a fault structure. Solving this problem is enabled by development of a workflow involving three cascaded finite-element simulations. Each step has identical lithospheric and mantle structure and properties, but evolving stress conditions along the fault. The purpose of the first simulation is to compute the spatio-temporal evolution of rebound stress when the fault is tied together. An ice load with a parabolic profile and simple ice history is applied to represent glacial loading of the Laurentide Ice Sheet. The results of the first step describe the evolution of the stress and displacement induced by the rebound process. The second step in the procedure augments the results of the first, by computing the spatio-temporal evolution of total stress (i.e. rebound stress plus tectonic background stress and overburden pressure) and displacement with reaction forces that can hold the model in equilibrium. The background stress is estimated by assuming that the fault is in frictional equilibrium before glaciation. The third step simulates fault movement induced by the spatio-temporal evolution of total stress by evaluating fault stability in a subroutine. If the fault remains stable, no movement occurs; in case of fault instability, the fault displacement is computed. We show an example of fault motion along a 45°-dipping fault at the ice-sheet centre for a two-dimensional model. Stable conditions along the fault are found during glaciation and the initial part of deglaciation. 
Before deglaciation ends, the fault starts to move, and fault offsets of up to 22 m are obtained, producing a fault scarp of 19.74 m at the surface. The fault is stable in the following time steps, with high stress accumulation at the fault tip. Along the upper part of the fault, GIA stresses are released in one earthquake.
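The fault-stability evaluation in the third step amounts to resolving the total stress state onto the fault plane and testing a Mohr-Coulomb criterion. A 2-D sketch with illustrative stresses and friction (not the paper's values):

```python
import math

# Resolve a 2-D stress state (vertical sigma_v, horizontal sigma_h,
# compression positive) onto a plane of given dip and test the
# cohesionless Mohr-Coulomb criterion tau >= mu * sigma_n.

def resolve(sigma_h, sigma_v, dip_deg):
    """Normal and shear stress (MPa) on a plane dipping dip_deg."""
    t = math.radians(dip_deg)
    sigma_n = sigma_h * math.sin(t) ** 2 + sigma_v * math.cos(t) ** 2
    tau = abs(sigma_v - sigma_h) * math.sin(t) * math.cos(t)
    return sigma_n, tau

def unstable(sigma_h, sigma_v, dip_deg, mu=0.6):
    sigma_n, tau = resolve(sigma_h, sigma_v, dip_deg)
    return tau >= mu * sigma_n

# Large differential stress on a 45-degree fault -> slip:
print(unstable(10.0, 60.0, 45.0))   # True
# Smaller differential stress -> the fault stays locked:
print(unstable(40.0, 60.0, 45.0))   # False
```

In the GIA workflow this check is re-run at every time step as rebound stresses evolve, which is why instability appears only late in deglaciation.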

  12. The role of elasticity in simulating long-term tectonic extension

    NASA Astrophysics Data System (ADS)

    Olive, Jean-Arthur; Behn, Mark D.; Mittelstaedt, Eric; Ito, Garrett; Klein, Benjamin Z.

    2016-05-01

    While elasticity is a defining characteristic of the Earth's lithosphere, it is often ignored in numerical models of long-term tectonic processes in favour of a simpler viscoplastic description. Here we assess the consequences of this assumption on a well-studied geodynamic problem: the growth of normal faults at an extensional plate boundary. We conduct 2-D numerical simulations of extension in elastoplastic and viscoplastic layers using a finite difference, particle-in-cell numerical approach. Our models simulate a range of faulted layer thicknesses and extension rates, allowing us to quantify the role of elasticity on three key observables: fault-induced topography, fault rotation, and fault life span. In agreement with earlier studies, simulations carried out in elastoplastic layers produce rate-independent lithospheric flexure accompanied by rapid fault rotation and an inverse relationship between fault life span and faulted layer thickness. By contrast, models carried out with a viscoplastic lithosphere produce results that may qualitatively resemble the elastoplastic case, but depend strongly on the product of extension rate and layer viscosity U × ηL. When this product is high, fault growth initially generates little deformation of the footwall and hanging wall blocks, resulting in unrealistic, rigid block-offset in topography across the fault. This configuration progressively transitions into a regime where topographic decay associated with flexure is fully accommodated within the numerical domain. In addition, high U × ηL favours the sequential growth of multiple short-offset faults as opposed to a large-offset detachment. We interpret these results by comparing them to an analytical model for the fault-induced flexure of a thin viscous plate. 
The key to understanding the viscoplastic model results lies in the rate-dependence of the flexural wavelength of a viscous plate, and the strain rate dependence of the force increase associated with footwall and hanging wall bending. This behaviour produces unrealistic deformation patterns that can hinder the geological relevance of long-term rifting models that assume a viscoplastic rheology.
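For the elastic reference case invoked above, the rate-independent flexural decay scale is set by the flexural parameter α = (4D/(Δρ g))^(1/4), with rigidity D = E Te³/(12(1−ν²)). The parameter values below (elastic thickness, modulus, Poisson ratio, density contrast) are generic lithospheric numbers assumed for illustration, not taken from the paper.

```python
# Elastic thin-plate flexural parameter; all inputs are assumed
# generic values, not the study's model parameters.

def flexural_parameter(te_m, e=70e9, nu=0.25, drho=3300.0, g=9.81):
    """Flexural decay scale (m) for elastic thickness te_m."""
    d = e * te_m ** 3 / (12.0 * (1.0 - nu ** 2))   # flexural rigidity
    return (4.0 * d / (drho * g)) ** 0.25

# A 10-km elastic layer gives a decay scale of a few tens of km,
# independent of extension rate (unlike the viscous-plate case):
print(round(flexural_parameter(10e3) / 1e3, 1), "km")
```

The contrast with the viscous plate, whose effective wavelength depends on loading rate, is exactly the U × ηL dependence the abstract describes.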

  13. Three-dimensional characterization of microporosity and permeability in fault zones hosted in heterolithic succession

    NASA Astrophysics Data System (ADS)

    Riegel, H. B.; Zambrano, M.; Jablonska, D.; Emanuele, T.; Agosta, F.; Mattioni, L.; Rustichelli, A.

    2017-12-01

The hydraulic properties of fault zones depend upon the individual contributions of the damage zone and the fault core. The damage zone is generally characterized by means of fracture analysis and modelling implementing multiple approaches, for instance the discrete fracture network model, the continuum model, and the channel network model. Conversely, the fault core is more difficult to characterize because it is normally composed of fine-grained material generated by friction and wear. If the dimensions of the fault core allow it, porosity and permeability are normally studied by means of laboratory analysis; otherwise, by two-dimensional microporosity analysis and in situ measurements of permeability (e.g. with a micro-permeameter). In this study, a combined approach consisting of fracture modeling, three-dimensional microporosity analysis, and computational fluid dynamics was applied to characterize the hydraulic properties of fault zones. The studied fault zones crosscut a well-cemented heterolithic succession (sandstones and mudstones) and vary in terms of fault core thickness and composition, fracture properties, kinematics (normal or strike-slip), and displacement. These characteristics produce various splay and fault core behaviors. The alternation of sandstone and mudstone layers is responsible for the concurrent occurrence of brittle (fracturing) and ductile (clay smearing) deformation. When these alternating layers are faulted, they produce fault cores which act as conduits or barriers for fluid migration. When analyzing damage zones, accurate field data acquisition and stochastic modeling were used to determine the hydraulic properties of the rock volume relative to the surrounding, undamaged host rock. In the fault cores, three-dimensional pore-network quantitative analysis based on X-ray microtomography images includes porosity, pore connectivity, and specific surface area. 
In addition, the images were used to perform computational fluid simulations (lattice-Boltzmann multiple-relaxation-time method) and estimate the permeability. These results will be useful for understanding the deformation processes and hydraulic properties across meter-scale damage zones.
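As a toy illustration of the voxel-based pore-network quantification described above, the total porosity of a segmented micro-CT volume is simply the void-voxel fraction. The sketch below is a hypothetical simplification (not the authors' code), using a synthetic random volume in place of real microtomography data:

```python
import random

def porosity(volume):
    """Fraction of pore (True) voxels in a nested-list 3-D binary volume."""
    total = pores = 0
    for plane in volume:
        for row in plane:
            for voxel in row:
                total += 1
                pores += bool(voxel)
    return pores / total

# Synthetic 40^3 volume with ~10% randomly placed pore voxels.
random.seed(0)
n = 40
volume = [[[random.random() < 0.10 for _ in range(n)] for _ in range(n)]
          for _ in range(n)]
print(round(porosity(volume), 3))
```

Real workflows additionally need segmentation of the grayscale images and connectivity analysis of the pore network, which are beyond this fragment.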

  14. An Ontology for Identifying Cyber Intrusion Induced Faults in Process Control Systems

    NASA Astrophysics Data System (ADS)

    Hieb, Jeffrey; Graham, James; Guan, Jian

    This paper presents an ontological framework that permits formal representations of process control systems, including elements of the process being controlled and the control system itself. A fault diagnosis algorithm based on the ontological model is also presented. The algorithm can identify traditional process elements as well as control system elements (e.g., IP network and SCADA protocol) as fault sources. When these elements are identified as a likely fault source, the possibility exists that the process fault is induced by a cyber intrusion. A laboratory-scale distillation column is used to illustrate the model and the algorithm. Coupled with a well-defined statistical process model, this fault diagnosis approach provides cyber security enhanced fault diagnosis information to plant operators and can help identify that a cyber attack is underway before a major process failure is experienced.

  15. Faulting mechanism of the El Asnam (Algeria) 1954 and 1980 earthquakes from modelling of vertical movements

    NASA Astrophysics Data System (ADS)

    Bezzeghoud, M.; Dimitrov, D.; Ruegg, J. C.; Lammali, K.

    1995-09-01

    Since 1980, most of the papers published on the El Asnam earthquake concern the geological and seismological aspects of the fault zone. Only one paper, published by Ruegg et al. (1982), constrains the faulting mechanism with geodetic measurements. The purpose of this paper is to reexamine the faulting mechanism of the 1954 and 1980 events by modelling the associated vertical movements. For this purpose we used all available data, and particularly those of the levelling profiles along the Algiers-Oran railway, which was remeasured after each event. The comparison between the 1905 and 1976 levelling data shows observed vertical displacements that could have been induced by the 1954 earthquake. On the basis of the 1954 and 1980 levelling data, we propose a possible model for the 1954 and 1980 fault systems. Our 1954 fault model is parallel to the 1980 main thrust fault, with an offset of 6 km towards the west. The 1980 dislocation model proposed in this study is based on a variable slip dislocation model and explains the observed surface break displacements given by Yielding et al. (1981). The Dewey (1991) and Avouac et al. (1992) models are compared with our dislocation model and discussed in this paper.

  16. Fault tolerant control of multivariable processes using auto-tuning PID controller.

    PubMed

    Yu, Ding-Li; Chang, T K; Yu, Ding-Wen

    2005-02-01

    Fault tolerant control of dynamic processes is investigated in this paper using an auto-tuning PID controller. A fault tolerant control scheme is proposed comprising an auto-tuning PID controller based on an adaptive neural network model. The model is trained online using the extended Kalman filter (EKF) algorithm to learn the system's post-fault dynamics. Based on this model, the PID controller adjusts its parameters to compensate for the effects of the faults, so that the control performance recovers from the degradation. The auto-tuning algorithm for the PID controller is derived with the Lyapunov method, and the model-predicted tracking error is therefore guaranteed to converge asymptotically. The method is applied to a simulated two-input two-output continuous stirred tank reactor (CSTR) with various faults, which demonstrates the applicability of the developed scheme to industrial processes.
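The auto-tuning machinery itself (the EKF-trained neural model and the Lyapunov-derived update law) is not reproduced here, but a minimal discrete PID controller of the kind being tuned can be sketched as follows. The gains and the first-order plant are illustrative assumptions, not the paper's CSTR:

```python
class PID:
    """Minimal discrete PID controller (positional form).

    kp, ki, kd are the tunable gains; in an auto-tuning scheme such as the
    one described above they would be updated online from a process model,
    which is not reproduced here.
    """
    def __init__(self, kp, ki, kd, dt):
        self.kp, self.ki, self.kd, self.dt = kp, ki, kd, dt
        self.integral = 0.0
        self.prev_error = 0.0

    def step(self, setpoint, measurement):
        error = setpoint - measurement
        self.integral += error * self.dt
        derivative = (error - self.prev_error) / self.dt
        self.prev_error = error
        return self.kp * error + self.ki * self.integral + self.kd * derivative

# Drive a toy first-order plant x' = -x + u towards the setpoint 1.0.
pid = PID(kp=2.0, ki=1.0, kd=0.1, dt=0.01)
x = 0.0
for _ in range(2000):
    u = pid.step(1.0, x)
    x += (-x + u) * 0.01     # forward-Euler integration of the plant
print(round(x, 3))
```

The integral term is what removes steady-state error after a fault changes the plant; retuning the gains online, as in the paper, restores the transient performance as well.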

  17. Use of fault striations and dislocation models to infer tectonic shear stress during the 1995 Hyogo-Ken Nanbu (Kobe) earthquake

    USGS Publications Warehouse

    Spudich, P.; Guatteri, Mariagiovanna; Otsuki, K.; Minagawa, J.

    1998-01-01

    Dislocation models of the 1995 Hyogo-ken Nanbu (Kobe) earthquake derived by Yoshida et al. (1996) show substantial changes in the direction of slip with time at specific points on the Nojima and Rokko fault systems, as do striations we observed on exposures of the Nojima fault surface on Awaji Island. Spudich (1992) showed that the initial stress, that is, the shear traction on the fault before the earthquake origin time, can be derived at points on the fault where the slip rake rotates with time, if slip velocity and stress change are known at these points. From Yoshida's slip model, we calculated dynamic stress changes on the ruptured fault surfaces. To estimate errors, we compared the slip velocities and dynamic stress changes of several published models of the earthquake. The differences between these models had an exponential distribution, not a Gaussian one. We developed a Bayesian method to estimate the probability density function (PDF) of initial stress from the striations and from Yoshida's slip model. Striations near Toshima and Hirabayashi give initial stresses of about 13 and 7 MPa, respectively. We obtained initial stresses of about 7 to 17 MPa at depths of 2 to 10 km on a subset of points on the Nojima and Rokko fault systems. Our initial stresses and coseismic stress changes agree well with postearthquake stresses measured by hydrofracturing in deep boreholes near Hirabayashi and Ogura on Awaji Island. Our results indicate that the Nojima fault slipped at very low shear stress, and fractional stress drop was complete near the surface and about 32% below depths of 2 km. Our results at depth depend on the accuracy of the rake rotations in Yoshida's model, which are probably correct on the Nojima fault but debatable on the Rokko fault. Our results imply that curved or cross-cutting fault striations can be formed in a single earthquake, contradicting a common assumption of structural geology.

  18. Kinematics of the New Madrid seismic zone, central United States, based on stepover models

    USGS Publications Warehouse

    Pratt, Thomas L.

    2012-01-01

    Seismicity in the New Madrid seismic zone (NMSZ) of the central United States is generally attributed to a stepover structure in which the Reelfoot thrust fault transfers slip between parallel strike-slip faults. However, some arms of the seismic zone do not fit this simple model. Comparison of the NMSZ with an analog sandbox model of a restraining stepover structure explains all of the arms of seismicity as only part of the extensive pattern of faults that characterizes stepover structures. Computer models show that the stepover structure may form because differences in the trends of lower crustal shearing and inherited upper crustal faults make a step between en echelon fault segments the easiest path for slip in the upper crust. The models predict that the modern seismicity occurs only on a subset of the faults in the New Madrid stepover structure, that only the southern part of the stepover structure ruptured in the A.D. 1811–1812 earthquakes, and that the stepover formed because the trends of older faults are not the same as the current direction of shearing.

  19. Forecast model for great earthquakes at the Nankai Trough subduction zone

    USGS Publications Warehouse

    Stuart, W.D.

    1988-01-01

    An earthquake instability model is formulated for recurring great earthquakes at the Nankai Trough subduction zone in southwest Japan. The model is quasistatic, two-dimensional, and has a displacement- and velocity-dependent constitutive law applied at the fault plane. A constant rate of fault slip at depth represents forcing due to relative motion of the Philippine Sea and Eurasian plates. The model simulates fault slip and stress for all parts of repeated earthquake cycles, including post-, inter-, pre- and coseismic stages. Calculated ground uplift is in agreement with most of the main features of elevation changes observed before and after the M=8.1 1946 Nankaido earthquake. In model simulations, accelerating fault slip has two time-scales. The first time-scale is several years long and is interpreted as an intermediate-term precursor. The second time-scale is a few days long and is interpreted as a short-term precursor. Accelerating fault slip on both time-scales causes anomalous elevation changes of the ground surface over the fault plane of 100 mm or less within 50 km of the fault trace. © 1988 Birkhäuser Verlag.

  20. Structural and petrophysical characterization: from outcrop rock analogue to reservoir model of deep geothermal prospect in Eastern France

    NASA Astrophysics Data System (ADS)

    Bertrand, Lionel; Géraud, Yves; Diraison, Marc; Damy, Pierre-Clément

    2017-04-01

    The Scientific Interest Group (GIS) GEODENERGIES, through the REFLET project, aims to develop a geological and reservoir model for fault zones that are the main targets for deep geothermal prospects in the West European Rift system. In this project, several areas are studied with an integrated methodology combining field studies, borehole and geophysical data acquisition, and 3D modelling. In this study, we present the results of reservoir rock analogue characterization for one of these prospects in the Valence Graben (Eastern France). The approach is a structural and petrophysical characterization of the rocks outcropping on the shoulders of the rift in order to model the buried target fault zone. The reservoir rocks are composed of fractured granites, gneisses and schists of the Hercynian basement of the graben. Matrix porosity, permeability, P-wave velocities and thermal conductivities have been characterized on hand samples from fault zones at the outcrop. Furthermore, fault organization has been mapped with the aim of identifying the characteristic fault orientation, spacing and width. Fracture statistics such as orientation, density, and length have been determined in the damage zones and unfaulted blocks with respect to the regional fault pattern. All these data have been included in a reservoir model with double porosity. The field study shows that the fault pattern in the outcrop area can be classified into different fault orders: the distribution of the larger, first-order faults controls the first-order structural and lithological organization. Between these faults, the first-order blocks are divided by smaller second- and third-order faults with characteristic spacing and width. Third-order fault zones in granitic rocks show significant porosity development in the fault cores, up to 25% in the most locally altered material, whereas the damage zones mostly develop fracture permeability. 
In the gneiss and schist units, matrix porosity and permeability development is mainly controlled by enhanced microcrack density in the fault zone, unlike the granites, where it is mostly controlled by mineral alteration. Because the grain size is much larger in the gneiss, crack opening is greater than in the schist samples; thus, the matrix permeability can be two orders of magnitude higher in the gneiss than in the schists (up to 10 mD for gneiss versus 0.1 mD for schists at the same porosity of around 5%). Combining the regional data with the fault pattern and the fracture and matrix porosity and permeability, we are able to construct a double-porosity model suitable for the prospected graben. This model, combined with seismic data acquisition, is a predictive tool for flow modelling in the buried reservoir and helps the prediction of borehole targets and design in the graben.

  1. An approach to secure weather and climate models against hardware faults

    NASA Astrophysics Data System (ADS)

    Düben, Peter D.; Dawson, Andrew

    2017-03-01

    Enabling Earth System models to run efficiently on future supercomputers is a serious challenge for model development. Many publications study efficient parallelization to allow better scaling of performance on an increasing number of computing cores. However, one of the most alarming threats for weather and climate predictions on future high performance computing architectures is widely ignored: the presence of hardware faults that will frequently hit large applications as we approach exascale supercomputing. Changes in the structure of weather and climate models that would allow them to be resilient against hardware faults are hardly discussed in the model development community. In this paper, we present an approach to secure the dynamical core of weather and climate models against hardware faults using a backup system that stores coarse resolution copies of prognostic variables. Frequent checks of the model fields on the backup grid allow the detection of severe hardware faults, and prognostic variables that are changed by hardware faults on the model grid can be restored from the backup grid to continue model simulations with no significant delay. To justify the approach, we perform model simulations with a C-grid shallow water model in the presence of frequent hardware faults. As long as the backup system is used, simulations do not crash and a high level of model quality can be maintained. The overhead due to the backup system is reasonable and additional storage requirements are small. Runtime is increased by only 13 % for the shallow water model.
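The detect-and-restore idea can be sketched in one dimension: keep a coarse backup copy of a prognostic field, flag fine-grid values that deviate grossly from it, and overwrite them with the backup value. This is a hypothetical simplification (the paper uses a full C-grid shallow-water model); the coarsening rule, tolerance, and 1-D field are all assumptions for illustration:

```python
def coarsen(field):
    """Average pairs of fine-grid points onto a half-resolution backup grid."""
    return [(field[i] + field[i + 1]) / 2 for i in range(0, len(field), 2)]

def check_and_restore(field, backup, tol):
    """Flag fine-grid points that deviate grossly from the backup copy and
    overwrite them with the backup value (1-D stand-in for the coarse-grid
    restore described above)."""
    repaired = list(field)
    for i, value in enumerate(field):
        expected = backup[i // 2]
        if abs(value - expected) > tol:
            repaired[i] = expected
    return repaired

field = [0.1 * i for i in range(8)]   # smooth prognostic field
backup = coarsen(field)               # stored before the time step
field[5] = 1.0e6                      # simulated hardware bit-flip
repaired = check_and_restore(field, backup, tol=1.0)
print(repaired)
```

The tolerance must be large enough that legitimate fine-scale variability is never "repaired", which is why the paper checks against a coarse copy rather than a bitwise duplicate.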

  2. Coulomb Stress Accumulation along the San Andreas Fault System

    NASA Technical Reports Server (NTRS)

    Smith, Bridget; Sandwell, David

    2003-01-01

    Stress accumulation rates along the primary segments of the San Andreas Fault system are computed using a three-dimensional (3-D) elastic half-space model with realistic fault geometry. The model is developed in the Fourier domain by solving for the response of an elastic half-space due to a point vector body force and analytically integrating the force from a locking depth to infinite depth. This approach is then applied to the San Andreas Fault system using published slip rates along 18 major fault strands of the fault zone. GPS-derived horizontal velocity measurements spanning the entire 1700 x 200 km region are then used to solve for the apparent locking depth along each primary fault segment. This simple model fits the GPS data remarkably well (2.43 mm/yr RMS misfit), although some discrepancies occur in the Eastern California Shear Zone. The model also predicts vertical uplift and subsidence rates that are in agreement with independent geologic and geodetic estimates. In addition, shear and normal stresses along the major fault strands are used to compute the Coulomb stress accumulation rate. As a result, we find earthquake recurrence intervals along the San Andreas Fault system to be inversely proportional to the Coulomb stress accumulation rate, in agreement with typical coseismic stress drops of 1-10 MPa. This 3-D deformation model can ultimately be extended to include both time-dependent forcing and viscoelastic response.
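The Coulomb stress (or its accumulation rate) used in such calculations combines the shear and normal stress components resolved on the fault. A minimal sketch of the standard formula, with an assumed effective friction coefficient:

```python
def coulomb_stress_change(d_shear, d_normal, mu_eff=0.4):
    """Coulomb failure stress change (MPa).

    d_shear  -- shear stress change resolved in the slip direction (MPa)
    d_normal -- normal stress change, tension (unclamping) positive (MPa)
    mu_eff   -- effective friction coefficient (0.4 is a common choice,
                an assumption here, not a value from the paper)
    """
    return d_shear + mu_eff * d_normal

print(coulomb_stress_change(0.5, 0.2))   # 0.5 + 0.4*0.2, i.e. about 0.58 MPa
```

Applied per unit time to the modeled stressing rates, the same expression gives the Coulomb stress accumulation rate, whose inverse scales the recurrence interval for a given stress drop.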

  3. Assessing active faulting by hydrogeological modeling and superconducting gravimetry: A case study for Hsinchu Fault, Taiwan

    NASA Astrophysics Data System (ADS)

    Lien, Tzuyi; Cheng, Ching-Chung; Hwang, Cheinway; Crossley, David

    2014-09-01

    We develop a new hydrology- and gravimetry-based method to assess whether or not a local fault may be active. We take advantage of an existing superconducting gravimeter (SG) station and a comprehensive groundwater network in Hsinchu to apply the method to the Hsinchu Fault (HF) across the Hsinchu Science Park, whose industrial output accounts for 10% of Taiwan's gross domestic product. The HF is suspected to pose seismic hazards to the park, but its existence and structure are not clear. The a priori geometry of the HF is translated into boundary conditions imposed in the hydrodynamic model. By varying the fault's location and depth, and by including a secondary wrench fault, we construct five hydrodynamic models to estimate groundwater variations, which are evaluated by comparing groundwater levels and SG observations. The results reveal that the HF contains a low hydraulic conductivity core and significantly impacts groundwater flow in the aquifers. Imposing the fault boundary conditions leads to about a 63-77% reduction in the differences between modeled and observed values (both water level and gravity). The test with fault depth shows that the HF's most recent slip occurred at the beginning of the Holocene, supplying a necessary (but not sufficient) condition that the HF is currently active. A portable SG can act as a virtual borehole for model assessment at critical locations of a suspected active fault.

  4. Effects induced by an earthquake on its fault plane:a boundary element study

    NASA Astrophysics Data System (ADS)

    Bonafede, Maurizio; Neri, Andrea

    2000-04-01

    Mechanical effects left by a model earthquake on its fault plane in the post-seismic phase are investigated employing the 'displacement discontinuity method'. Simple crack models, characterized by the release of a constant, unidirectional shear traction, are investigated first. Both slip components, parallel and normal to the traction direction, are found to be non-vanishing and to depend on fault depth, dip, aspect ratio and fault plane geometry. The rake of the slip vector is similarly found to depend on depth and dip. The fault plane is found to suffer some small rotation and bending, which may be responsible for the indentation of a transform tectonic margin, particularly if cumulative effects are considered. Very significant normal stress components are left over the shallow portion of the fault surface after an earthquake: these are tensile for thrust faults and compressive for normal faults, and are typically comparable in size to the stress drop. These normal stresses can easily be computed for more realistic seismic source models in which a variable slip is assigned; normal stresses are induced in these cases too, and positive shear stresses may even be induced on the fault plane in regions of high slip gradient. Several observations can be explained by the present model: low-dip thrust faults and high-dip normal faults are found to be facilitated, according to the Coulomb failure criterion, in repetitive earthquake cycles; the shape of dip-slip faults near the surface is predicted to be upward-concave; and the shallower aftershock activity generally found in the hanging block of a thrust event can be explained by 'unclamping' mechanisms.

  5. The impact of scatterometer wind data on global weather forecasting

    NASA Technical Reports Server (NTRS)

    Atlas, D.; Baker, W. E.; Kalnay, E.; Halem, M.; Woiceshyn, P. M.; Peteherych, S.

    1984-01-01

    The impact of SEASAT-A scatterometer (SASS) winds on coarse resolution atmospheric model forecasts was assessed. The scatterometer provides high resolution winds, but each wind can have up to four possible directions. One wind direction is correct; the remainder are ambiguous "aliases." In general, the effect of objectively dealiased SASS data was found to be negligible in the Northern Hemisphere. In the Southern Hemisphere, the impact was larger and primarily beneficial when vertical temperature profile radiometer (VTPR) data were excluded. However, the inclusion of VTPR data eliminated the positive impact, indicating some redundancy between the two data sets.

  6. UAV-based photogrammetry combination of the elevational outcrop and digital surface models: an example of Sanyi active fault in western Taiwan

    NASA Astrophysics Data System (ADS)

    Hsieh, Cheng-En; Huang, Wen-Jeng; Chang, Ping-Yu; Lo, Wei

    2016-04-01

    An unmanned aerial vehicle (UAV) with a digital camera is an efficient tool for geologists to investigate structural patterns in the field. By setting ground control points (GCPs), UAV-based photogrammetry provides high-quality, quantitative results such as a digital surface model (DSM) and orthomosaic and elevational images. We combine an elevational outcrop 3D model and a digital surface model to analyze the structural characteristics of the Sanyi active fault in the Houli-Fengyuan area, western Taiwan. Furthermore, we collect resistivity survey profiles and drilling core data in the Fengyuan District in order to build the subsurface fault geometry. The ground sample distance (GSD) of the elevational outcrop 3D model is 3.64 cm/pixel in this study. Our preliminary result shows that five fault branches are distributed across a width of 500 meters on the elevational outcrop, and the width of the Sanyi fault zone is likely much greater than this value. Together with our field observations, we propose a structural evolution model to demonstrate how the five fault branches developed. The resistivity survey profiles show that Holocene gravel was disturbed by the Sanyi fault in the Fengyuan area.

  7. Quasi-dynamic earthquake fault systems with rheological heterogeneity

    NASA Astrophysics Data System (ADS)

    Brietzke, G. B.; Hainzl, S.; Zoeller, G.; Holschneider, M.

    2009-12-01

    Seismic risk and hazard estimates mostly use purely empirical, stochastic models of earthquake fault systems, tuned specifically to the vulnerable areas of interest. Although such models allow for reasonable risk estimates, they cannot support physical statements about the seismicity they describe. In contrast to such empirical stochastic models, physics-based earthquake fault system models allow for physical reasoning and interpretation of the produced seismicity and system dynamics. Recently, different fault system earthquake simulators based on frictional stick-slip behavior have been used to study the effects of stress heterogeneity, rheological heterogeneity, or geometrical complexity on earthquake occurrence, spatial and temporal clustering of earthquakes, and system dynamics. Here we present a comparison of the characteristics of synthetic earthquake catalogs produced by two different formulations of quasi-dynamic fault system earthquake simulators. Both models are based on discretized frictional faults embedded in an elastic half-space. While one (1) is governed by rate- and state-dependent friction allowing three evolutionary stages of independent fault patches, the other (2) is governed by instantaneous frictional weakening with scheduled (and therefore causal) stress transfer. We analyze the spatial and temporal clustering of events and the characteristics of the system dynamics by means of the physical parameters of the two approaches.
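The rate- and state-dependent friction underlying formulation (1) is commonly written in the Dieterich form mu = mu0 + a*ln(V/V0) + b*ln(V0*theta/Dc). The sketch below evaluates it with illustrative laboratory-range parameters (assumptions, not values from the study) and shows steady-state velocity weakening when b > a:

```python
import math

def rate_state_mu(v, theta, mu0=0.6, a=0.010, b=0.015, v0=1e-6, dc=1e-5):
    """Dieterich rate- and state-dependent friction coefficient.

    v -- slip speed (m/s); theta -- state variable (s).
    Parameter values are illustrative laboratory-range numbers.
    """
    return mu0 + a * math.log(v / v0) + b * math.log(v0 * theta / dc)

# At steady state the state variable is theta = dc / v; because b > a here,
# steady-state friction decreases with slip speed (velocity weakening),
# which is the condition for stick-slip instability.
for v in (1e-6, 1e-4):
    theta_ss = 1e-5 / v
    print(v, round(rate_state_mu(v, theta_ss), 4))
```

Formulation (2) replaces this smooth evolution with an instantaneous drop in friction at failure, which is what makes its stress transfer schedulable.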

  8. Material and Stress Rotations: Anticipating the 1992 Landers, CA Earthquake

    NASA Astrophysics Data System (ADS)

    Nur, A. M.

    2014-12-01

    "Rotations make nonsense of the two-dimensional reconstructions that are still so popular among structural geologists". (McKenzie, 1990, p. 109-110) I present a comprehensive tectonic model for the strike-slip fault geometry, seismicity, material rotation, and stress rotation, in which new, optimally oriented faults can form when older ones have rotated about a vertical axis out of favorable orientations. The model was successfully tested in the Mojave region using stress rotation and three independent data sets: the alignment of epicenters and fault plane solutions from the six largest central Mojave earthquakes since 1947, material rotations inferred from paleomagnetic declination anomalies, and rotated dike strands of the Independence dike swarm. The model led not only to the anticipation of the 1992 M7.3 Landers, CA earthquake but also accounts for the great complexity of the faulting and seismicity of this event. The implication of this model for crustal deformation in general is that rotations of material (faults and the blocks between them) and of stress provide the key link between the complexity of faults systems in-situ and idealized mechanical theory of faulting. Excluding rotations from the kinematical and mechanical analysis of crustal deformation makes it impossible to explain the complexity of what geologists see in faults, or what seismicity shows us about active faults. However, when we allow for rotation of material and stress, Coulomb's law becomes consistent with the complexity of faults and faulting observed in situ.

  9. Point target detection utilizing super-resolution strategy for infrared scanning oversampling system

    NASA Astrophysics Data System (ADS)

    Wang, Longguang; Lin, Zaiping; Deng, Xinpu; An, Wei

    2017-11-01

    To improve the resolution of remote sensing infrared images, an infrared scanning oversampling system is employed, quadrupling the amount of information and thereby aiding target detection. Generally, the image data from the double-line detector of an infrared scanning oversampling system are interleaved into a whole oversampled image for post-processing, but aliasing between neighboring pixels leads to image degradation with a great impact on target detection. This paper formulates a point target detection method utilizing a super-resolution (SR) strategy for infrared scanning oversampling systems, with an accelerated SR strategy proposed to realize fast de-aliasing of the oversampled image and an adaptive MRF-based regularization designed to preserve and aggregate target energy. Extensive experiments demonstrate the superior detection performance, robustness and efficiency of the proposed method compared with other state-of-the-art approaches.

  10. Achieving Agreement in Three Rounds with Bounded-Byzantine Faults

    NASA Technical Reports Server (NTRS)

    Malekpour, Mahyar R.

    2017-01-01

    A three-round algorithm is presented that guarantees agreement in a system of K greater than or equal to 3F+1 nodes, provided each faulty node induces no more than F faults and each good node experiences no more than F faults, where F is the maximum number of simultaneous faults in the network. The algorithm is based on the Oral Message algorithm of Lamport, Shostak, and Pease, is scalable with respect to the number of nodes in the system, and applies equally to the traditional node-fault model and the link-fault model. We also present a mechanical verification of the algorithm, focusing on verifying the correctness of a bounded model of the algorithm as well as confirming claims of determinism.
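Algorithms in the Oral Message family mask faulty values by majority voting over relayed copies; with K >= 3F+1 nodes, the good majority always outvotes the faulty minority. The fragment below is a generic single-round majority vote, an illustration of the voting primitive rather than the paper's three-round algorithm:

```python
from collections import Counter

def majority(values, default="DEFAULT"):
    """Majority vote used to mask faulty values in Byzantine agreement
    protocols of the Oral Message family; without a strict majority,
    every good node falls back to the same agreed default."""
    value, count = Counter(values).most_common(1)[0]
    return value if count > len(values) / 2 else default

# Four receivers relay the value they claim to have got from the source.
# One Byzantine node lies; the three good nodes still agree on 'v'.
relayed = ["v", "v", "x", "v"]
print(majority(relayed))
```

Recursive application of this vote over several rounds of relaying is what lets the full algorithm tolerate F simultaneous faults.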

  11. Length-Displacement Scaling of Lunar Thrust Faults and the Formation of Uphill-Facing Scarps

    NASA Astrophysics Data System (ADS)

    Hiesinger, Harald; Roggon, Lars; Hetzel, Ralf; Clark, Jaclyn D.; Hampel, Andrea; van der Bogert, Carolyn H.

    2017-04-01

    Lobate scarps are straight to curvilinear positive-relief landforms that occur on all terrestrial bodies [e.g., 1-3]. They are the surface manifestation of thrust faults that cut through and offset the upper part of the crust. Fault scarps on planetary surfaces provide the opportunity to study the growth of faults under a wide range of environmental conditions (e.g., gravity, temperature, pore pressure) [4]. We studied four lunar thrust-fault scarps (Simpelius-1, Morozov (S1), Fowler, Racah X-1) ranging in length from 1.3 km to 15.4 km [5] and found that their maximum total displacements are linearly correlated with length over one order of magnitude. We propose that during the progressive accumulation of slip, lunar faults propagate laterally and increase in length. On the basis of our measurements, the ratio of maximum displacement, D, to fault length, L, ranges from 0.017 to 0.028 with a mean value of 0.023 (or 2.3%). This is an order of magnitude higher than the value of 0.1% derived from theoretical considerations [4], and about twice as large as the value of 0.012-0.013 estimated by [6,7]. Our results, in addition to recently published findings for other lunar scarps [2,8], indicate that the D/L ratios of lunar thrust faults are similar to those of faults on Mercury and Mars [e.g., 1, 9-11], and almost as high as the average D/L ratio of 3% for faults on Earth [16,23]. Three of the investigated thrust-fault scarps (Simpelius-1, Morozov (S1), Fowler) are uphill-facing scarps generated by slip on faults that dip in the same direction as the local topography. Thrust faults with such a geometry are common (about 60% of 97 studied scarps) on the Moon [e.g., 2,5,7]. 
To test our hypothesis that the surface topography plays an important role in the formation of uphill-facing fault scarps by controlling the vertical load on a fault plane, we simulated thrust faulting and its relation to topography with two-dimensional finite-element models using the commercial code ABAQUS (version 6.14). Our model results indicate that the onset of faulting in our 200-km-long model is a function of the surface topography [5]. The numerical model indicates that uphill-facing scarps form earlier and grow faster than downhill-facing scarps under otherwise similar conditions. Thrust faults that dip in the same general direction as the topography (forming an uphill-facing scarp) start to slip earlier (4.2 Ma after the onset of shortening) and reach a total slip of 5.8 m after 70 Ma. In contrast, slip on faults that leads to the generation of a downhill-facing scarp initiates much later (i.e., after 20 Ma of elapsed model time) and attains a total slip of only 1.8 m in 70 Ma. If the surface of the model is horizontal, faulting on both fault structures starts after 4.4 Ma, but proceeds at a lower rate than for the fault that generates the uphill-facing scarp. Although the absolute ages for fault initiation (as well as the total fault slip) depend on the arbitrarily chosen shortening rate (as well as on the size of the model and the elastic parameters), this relative timing of fault activation was consistently observed irrespective of the chosen shortening rate. Thus, the model results demonstrate that, all other factors being equal, the differing weight of the hanging wall above the two modeled faults is responsible for the different timing of fault initiation and the difference in total slip. 
In conclusion, we present new quantitative estimates of the maximum total displacements of lunar lobate scarps and offer a new model to explain the origin of uphill-facing scarps that is also of importance for understanding the formation of the Lee-Lincoln scarp at the Apollo 17 landing site. [1] Watters et al., 2000, Geophys. Res. Lett. 27; [2] Williams et al., 2013, J. Geophys. Res. 118; [3] Massironi et al., 2015, Encycl. Planet. Landf., pp. 1255-1262; [4] Schultz et al., 2006, J. Struct. Geol. 28; [5] Roggon et al. (2017) Icarus, in press; [6] Watters and Johnson, 2010, Planetary Tectonics, pp. 121-182; [7] Banks et al., 2012, J. Geophys. Res. 117; [8] Banks et al., 2013, LPSC 44, 3042; [9] Hauber and Kronberg, 2005, J. Geophys. Res. 110; [10] Hauber et al., 2013, EPSC2013-987; [11] Byrne et al., 2014, Nature Geosci. 7
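The displacement-length scaling discussed above reduces to a simple ratio; a worked example with hypothetical scarp dimensions (not one of the four measured scarps):

```python
def dl_ratio(max_displacement_m, length_m):
    """Maximum-displacement-to-length ratio of a fault (dimensionless)."""
    return max_displacement_m / length_m

# Hypothetical scarp: 120 m of maximum displacement on a 5.2-km-long fault.
print(round(dl_ratio(120.0, 5200.0), 4))   # 0.0231, inside the 0.017-0.028 range
```

Such a value, near the mean D/L of 2.3% reported for the lunar scarps, would be consistent with faults growing by lateral propagation as slip accumulates.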

  12. Comparison of Observed Spatio-temporal Aftershock Patterns with Earthquake Simulator Results

    NASA Astrophysics Data System (ADS)

    Kroll, K.; Richards-Dinger, K. B.; Dieterich, J. H.

    2013-12-01

    Due to the complex nature of faulting in southern California, knowledge of rupture behavior near fault step-overs is of critical importance to properly quantify and mitigate seismic hazards. Estimates of earthquake probability are complicated by the uncertainty of whether a rupture will stop at or jump a fault step-over, which affects both the magnitude and the frequency of occurrence of earthquakes. In recent years, earthquake simulators and dynamic rupture models have begun to address the effects of complex fault geometries on earthquake ground motions and rupture propagation. Early models incorporated vertical faults with highly simplified geometries. Many current studies examine the effects of varied fault geometry, fault step-overs, and fault bends on rupture patterns; however, these works are limited by the small numbers of integrated fault segments and simplified orientations. The previous work of Kroll et al. (2013) on the northern extent of the 2010 El Mayor-Cucapah rupture in the Yuha Desert region used precise aftershock relocations to show an area of complex conjugate faulting within the step-over region between the Elsinore and Laguna Salada faults. Here, we employ an innovative approach of incorporating this fine-scale fault structure, defined through seismological, geologic and geodetic means, in the physics-based earthquake simulator RSQSim to explore the effects of fine-scale structures on stress transfer and rupture propagation and to examine the mechanisms that control aftershock activity and local triggering of other large events. We run simulations with primary fault structures in the state of California and northern Baja California and incorporate complex secondary faults in the Yuha Desert region. These models produce aftershock activity that enables comparison between the observed and predicted distributions and allows for examination of the mechanisms that control them. 
We investigate how the spatial and temporal distribution of aftershocks are affected by changes to model parameters such as shear and normal stress, rate-and-state frictional properties, fault geometry, and slip rate.

  13. Vibration signal models for fault diagnosis of planet bearings

    NASA Astrophysics Data System (ADS)

    Feng, Zhipeng; Ma, Haoqun; Zuo, Ming J.

    2016-05-01

Rolling element bearings are key components of planetary gearboxes. Among them, the motion of planet bearings is very complex, encompassing both spinning and revolution. Planet bearing vibrations are therefore highly intricate, and their fault characteristics are completely different from those of the fixed-axis case, making planet bearing fault diagnosis a difficult topic. To address this issue, we derive explicit equations for calculating the characteristic frequencies of outer race, rolling element, and inner race faults, considering the complex motion of planet bearings. We also develop a planet bearing vibration signal model for each fault case, considering the modulation effects of load zone passing, the time-varying angle between the gear pair mesh and the fault-induced impact force, and the time-varying vibration transfer path. Based on the developed signal models, we derive explicit equations for the Fourier spectrum in each fault case and summarize the corresponding vibration spectral characteristics. The theoretical derivations are illustrated by numerical simulation and further validated experimentally; all three fault cases (i.e. outer race, rolling element, and inner race localized faults) are successfully diagnosed.
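The explicit planet-bearing equations the authors derive are not reproduced in this record. As a hedged illustration only, the classical fixed-axis versions of the characteristic fault frequencies (ball-pass frequencies of the outer and inner race, and the ball-spin frequency) can be computed as below; for a planet bearing, the shaft frequency would be replaced by the planet's spin frequency relative to the carrier, as the abstract indicates. Function and parameter names are assumptions for this sketch.

```python
from math import cos, radians

def bearing_fault_frequencies(n_balls, f_shaft, d_ball, d_pitch, contact_deg=0.0):
    """Classical fixed-axis bearing fault characteristic frequencies (Hz).

    n_balls     -- number of rolling elements
    f_shaft     -- shaft (relative spin) frequency in Hz
    d_ball      -- rolling element diameter
    d_pitch     -- pitch diameter
    contact_deg -- contact angle in degrees
    """
    ratio = (d_ball / d_pitch) * cos(radians(contact_deg))
    bpfo = 0.5 * n_balls * f_shaft * (1.0 - ratio)                 # outer race fault
    bpfi = 0.5 * n_balls * f_shaft * (1.0 + ratio)                 # inner race fault
    bsf = 0.5 * (d_pitch / d_ball) * f_shaft * (1.0 - ratio ** 2)  # rolling element fault
    return bpfo, bpfi, bsf
```

For a bearing with 8 balls spinning at 10 Hz, a 10 mm ball, and a 50 mm pitch diameter, the outer race fault frequency (32 Hz) is lower than the inner race fault frequency (48 Hz), reflecting the shorter contact dwell on the outer race.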

  14. A fault injection experiment using the AIRLAB Diagnostic Emulation Facility

    NASA Technical Reports Server (NTRS)

    Baker, Robert; Mangum, Scott; Scheper, Charlotte

    1988-01-01

The preparation for, conduct of, and results of a simulation-based fault injection experiment conducted using the AIRLAB Diagnostic Emulation facilities are described. One objective of this experiment was to determine the effectiveness of the diagnostic self-test sequences used to uncover latent faults in a logic network providing the key fault tolerance features for a flight control computer. Another objective was to develop methods, tools, and techniques for conducting the experiment. More than 1600 faults were injected into a logic gate level model of the Data Communicator/Interstage (C/I). For each fault injected, diagnostic self-test sequences consisting of over 300 test vectors were supplied to the C/I model as inputs. For each test vector within a test sequence, the outputs from the C/I model were compared to the outputs of a fault-free C/I. If the outputs differed, the fault was considered detectable for the given test vector. These results were then analyzed to determine the effectiveness of the test sequences. The results established the coverage of the self-test diagnostics, identified areas in the C/I logic where the tests did not locate faults, and suggested fault latency reduction opportunities.
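The per-vector detectability check described above (compare the faulty model's outputs against a fault-free reference for every test vector, then aggregate into a coverage figure) can be sketched in a few lines. This is a hypothetical miniature, not the AIRLAB tooling; the gate models and test vectors below are invented for illustration.

```python
def detectable_vectors(fault_free, faulty, test_vectors):
    """Indices of test vectors for which the injected fault is
    detectable, i.e. the faulty model's output differs from the
    fault-free reference for that vector."""
    return [i for i, v in enumerate(test_vectors)
            if fault_free(v) != faulty(v)]

def coverage(fault_models, fault_free, test_vectors):
    """Fraction of injected faults detected by at least one vector."""
    detected = sum(1 for f in fault_models
                   if detectable_vectors(fault_free, f, test_vectors))
    return detected / len(fault_models)
```

For example, with an AND gate as the fault-free reference and a stuck-at-0 output fault, only the vector (1, 1) exposes the fault, while a stuck-at-1 fault is exposed by the other three vectors; both faults are covered, so coverage is 1.0.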

  15. An approach to secure weather and climate models against hardware faults

    NASA Astrophysics Data System (ADS)

    Düben, Peter; Dawson, Andrew

    2017-04-01

    Enabling Earth System models to run efficiently on future supercomputers is a serious challenge for model development. Many publications study efficient parallelisation to allow better scaling of performance on an increasing number of computing cores. However, one of the most alarming threats for weather and climate predictions on future high performance computing architectures is widely ignored: the presence of hardware faults that will frequently hit large applications as we approach exascale supercomputing. Changes in the structure of weather and climate models that would allow them to be resilient against hardware faults are hardly discussed in the model development community. We present an approach to secure the dynamical core of weather and climate models against hardware faults using a backup system that stores coarse resolution copies of prognostic variables. Frequent checks of the model fields on the backup grid allow the detection of severe hardware faults, and prognostic variables that are changed by hardware faults on the model grid can be restored from the backup grid to continue model simulations with no significant delay. To justify the approach, we perform simulations with a C-grid shallow water model in the presence of frequent hardware faults. As long as the backup system is used, simulations do not crash and a high level of model quality can be maintained. The overhead due to the backup system is reasonable and additional storage requirements are small. Runtime is increased by only 13% for the shallow water model.
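A minimal sketch of the backup-system idea, assuming a 2:1 coarsening of the prognostic field and nearest-neighbour restoration; the paper's actual grids, tolerances, and restoration scheme may differ.

```python
import numpy as np

def coarsen(field):
    """Average 2x2 blocks of the model grid onto a half-resolution
    backup grid (coarse copy of a prognostic variable)."""
    return 0.25 * (field[::2, ::2] + field[1::2, ::2] +
                   field[::2, 1::2] + field[1::2, 1::2])

def check_and_restore(field, backup, tol):
    """Flag a hardware fault wherever the coarsened field deviates
    from the stored backup beyond tol, and restore the affected
    fine-grid cells from the backup (nearest-neighbour prolongation)."""
    bad = np.abs(coarsen(field) - backup) > tol
    if bad.any():
        fine_bad = bad.repeat(2, axis=0).repeat(2, axis=1)
        restored = backup.repeat(2, axis=0).repeat(2, axis=1)
        field[fine_bad] = restored[fine_bad]
    return field, bool(bad.any())
```

A simulated bit-flip that drives one grid cell to an absurd value is caught by the coarse-grid check and overwritten with the backup value, so the simulation can continue instead of crashing.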

  16. Evolution of Pull-Apart Basins and Their Scale Independence

    NASA Astrophysics Data System (ADS)

    Aydin, Atilla; Nur, Amos

    1982-02-01

Pull-apart basins or rhomb grabens and horsts along major strike-slip fault systems in the world are generally associated with horizontal slip along faults. A simple model suggests that the width of the rhombs is controlled by the initial fault geometry, whereas the length increases with increasing fault displacement. We have tested this model by analyzing the shapes of 70 well-defined rhomb-like pull-apart basins and pressure ridges, ranging from tens of meters to tens of kilometers in length, associated with several major strike-slip faults in the western United States, Israel, Turkey, Iran, Guatemala, Venezuela, and New Zealand. In conflict with the model, we find that the length to width ratio of these basins is a constant value of approximately 3; these basins become wider as they grow longer with increasing fault offset. Two possible mechanisms responsible for the increase in width are suggested: (1) coalescence of neighboring rhomb grabens as each graben increases its length, and (2) formation of fault strands parallel to the existing ones when large displacements need to be accommodated. The processes of formation and growth of new fault strands promote interaction among the new faults and between the new and preexisting faults on a larger scale. Increased displacement causes the width of the fault zone to increase, resulting in wider pull-apart basins.

  17. Robust Fault Detection for Aircraft Using Mixed Structured Singular Value Theory and Fuzzy Logic

    NASA Technical Reports Server (NTRS)

    Collins, Emmanuel G.

    2000-01-01

The purpose of fault detection is to identify when a fault or failure has occurred in a system such as an aircraft or expendable launch vehicle. The faults may occur in sensors, actuators, structural components, etc. One of the primary approaches to model-based fault detection relies on analytical redundancy. That is, the output of a computer-based model (actually a state estimator) is compared with the sensor measurements of the actual system to determine when a fault has occurred. Unfortunately, the state estimator is based on an idealized mathematical description of the underlying plant that is never totally accurate. As a result of these modeling errors, false alarms can occur. This research uses mixed structured singular value theory, a relatively recent and powerful robustness analysis tool, to develop robust estimators and demonstrates the use of these estimators in fault detection. To allow qualitative human experience to be effectively incorporated into the detection process, fuzzy logic is used to predict the seriousness of the fault that has occurred.

  18. Salt movements and faulting of the overburden - can numerical modeling predict the fault patterns above salt structures?

    NASA Astrophysics Data System (ADS)

    Clausen, O. R.; Egholm, D. L.; Wesenberg, R.

    2012-04-01

Salt deformation has been the topic of numerous studies through the 20th century and up until present because of the close relation between commercial hydrocarbons and the salt structure provinces of the world (Hudec & Jackson, 2007). The fault distribution in sediments above salt structures influences, among other things, the productivity, due to the segmentation of the reservoir (Stewart 2006). 3D seismic data above salt structures can map such fault patterns in great detail, and studies have shown that a variety of fault patterns exists. Yet, most patterns fall between two end members: concentric and radiating fault patterns. Here we use a modified version of the numerical spring-slider model introduced by Malthe-Sørenssen et al. (1998a) for simulating the emergence of small-scale faults and fractures above a rising salt structure. The three-dimensional spring-slider model enables us to control the rheology of the deforming overburden, the mechanical coupling between the overburden and the underlying salt, as well as the kinematics of the moving salt structure. In this presentation, we demonstrate how the horizontal component of the salt motion influences the fracture patterns within the overburden. The modeling shows that purely vertical movement of the salt introduces a mesh of concentric normal faults in the overburden, and that the frequency of radiating faults increases with the amount of lateral movement across the salt-overburden interface. The two end-member fault patterns (concentric vs. radiating) can thus be linked to two different styles of salt movement: i) the vertical rising of a salt indenter and ii) the inflation of a 'salt-balloon' beneath the deformed strata. The results are in accordance with published analogue and theoretical models, as well as natural systems, and the model may - when used appropriately - provide new insight into how the internal dynamics of the salt in a structure controls the generation of fault patterns above the structure. The model is thus an important contribution to the understanding of small-scale faults, which may be unresolved by seismic data, when optimizing hydrocarbon production from reservoirs located above salt structures.

  19. Along-strike variations of the partitioning of convergence across the Haiyuan fault system detected by InSAR

    NASA Astrophysics Data System (ADS)

    Daout, S.; Jolivet, R.; Lasserre, C.; Doin, M.-P.; Barbot, S.; Tapponnier, P.; Peltzer, G.; Socquet, A.; Sun, J.

    2016-04-01

Oblique convergence across Tibet leads to slip partitioning with the coexistence of strike-slip, normal and thrust motion on major fault systems. A key point is to understand and model how faults interact and accumulate strain at depth. Here, we extract ground deformation across the Haiyuan Fault restraining bend, at the northeastern boundary of the Tibetan plateau, from Envisat radar data spanning the 2001-2011 period. We show that the complexity of the surface displacement field can be explained by the partitioning of a uniform deep-seated convergence. Mountains and sand dunes in the study area make the radar data processing challenging and require the latest developments in processing procedures for Synthetic Aperture Radar interferometry. The processing strategy is based on a small baseline approach. Before unwrapping, we correct for atmospheric phase delays from global atmospheric models and digital elevation model errors. A series of filtering steps is applied to improve the signal-to-noise ratio across high ranges of the Tibetan plateau and the phase unwrapping capability across the fault, required for a reliable estimate of fault movement. We then jointly invert our InSAR time-series together with published GPS displacements to test a proposed long-term slip-partitioning model between the Haiyuan and Gulang left-lateral Faults and the Qilian Shan thrusts. We explore the geometry of the fault system at depth and associated slip rates using a Bayesian approach and test the consistency of present-day geodetic surface displacements with a long-term tectonic model. We determine a uniform convergence rate of 10 [8.6-11.5] mm yr-1 oriented N89 [81-97]°E across the whole fault system, with a variable partitioning west and east of a major extensional fault-jog (the Tianzhu pull-apart basin).
Our 2-D model of two profiles perpendicular to the fault system gives a quantitative understanding of how crustal deformation is accommodated by the various branches of this thrust/strike-slip fault system and demonstrates how the geometry of the Haiyuan fault system controls the partitioning of the deep secular motion.

  20. Development of kink bands in granodiorite: Effect of mechanical heterogeneities, fault geometry, and friction

    NASA Astrophysics Data System (ADS)

    Chheda, T. D.; Nevitt, J. M.; Pollard, D. D.

    2014-12-01

The formation of monoclinal right-lateral kink bands in Lake Edison granodiorite (central Sierra Nevada, CA) is investigated through field observations and mechanics-based numerical modeling. Vertical faults act as weak surfaces within the granodiorite, and vertical granodiorite slabs bounded by closely-spaced faults curve into a kink. Leucocratic dikes are observed in association with kinking. Measurements were made on maps of Hilgard, Waterfall, Trail Fork, Kip Camp (Pollard and Segall, 1983b) and Bear Creek kink bands (Martel, 1998). Outcrop-scale geometric parameters such as fault length and spacing, kink angle, and dike width are used to construct a representative geometry to be used in a finite element model. Three orders of faults were classified, with lengths of 1.8, 7.2, and 28.8 m and spacings of 0.3, 1.2, and 3.6 m, respectively. The model faults are oriented at 25° to the direction of shortening (horizontal most compressive stress), consistent with measurements of wing crack orientations in the field area. The model also includes a vertical leucocratic dike, oriented perpendicular to the faults and with material properties consistent with aplite. Curvature of the deformed faults across the kink band was used to compare the effects of material properties, strain, and fault and dike geometry. Model results indicate that the presence of the dike, which provides a mechanical heterogeneity, is critical to kinking in these rocks. Keeping properties of the model granodiorite constant, curvature increased with decreasing yield strength and Young's modulus of the dike. Curvature increased significantly as yield strength decreased from 95 to 90 MPa, and below this threshold value, limb rotation for the kink band was restricted to the dike. Changing Poisson's ratio had no significant effect. The addition of small faults between bounding faults, decreasing fault spacing, or increasing dike width increases the curvature. Increasing friction along the faults decreases slip, so shortening is accommodated by more kinking. Analysis of these parameters also gives insight into the kilometer-scale kink band in the Mount Abbot Quadrangle, where the Rosy Finch Shear Zone may provide the mechanical heterogeneity necessary to cause kinking.

  1. Data-driven fault mechanics: Inferring fault hydro-mechanical properties from in situ observations of injection-induced aseismic slip

    NASA Astrophysics Data System (ADS)

    Bhattacharya, P.; Viesca, R. C.

    2017-12-01

In the absence of in situ field-scale observations of quantities such as fault slip, shear stress and pore pressure, observational constraints on models of fault slip have mostly been limited to laboratory and/or remote observations. Recent controlled fluid-injection experiments on well-instrumented faults fill this gap by simultaneously monitoring fault slip and pore pressure evolution in situ [Guglielmi et al., 2015]. Such experiments can reveal interesting fault behavior, e.g., Guglielmi et al. report fluid-activated aseismic slip followed only subsequently by the onset of micro-seismicity. We show that the Guglielmi et al. dataset can be used to constrain the hydro-mechanical model parameters of a fluid-activated expanding shear rupture within a Bayesian framework. We assume that (1) pore pressure diffuses radially outward (from the injection well) within a permeable pathway along the fault bounded by a narrow damage zone about the principal slip surface; (2) the pore-pressure increase activates slip on a pre-stressed planar fault due to a reduction in frictional strength (expressed as a constant friction coefficient times the effective normal stress). Owing to efficient parallel numerical solutions to the axisymmetric fluid-diffusion and crack problems (under the imposed history of injection), we are able to jointly fit the observed history of pore pressure and slip using an adaptive Monte Carlo technique. Our hydrological model provides an excellent fit to the pore-pressure data without requiring any statistically significant permeability enhancement due to the onset of slip. Further, for realistic elastic properties of the fault, the crack model fits both the onset of slip and its early-time evolution reasonably well. However, our model requires unrealistic fault properties to fit the marked acceleration of slip observed later in the experiment (coinciding with the triggering of microseismicity).
Therefore, besides producing meaningful and internally consistent bounds on in-situ fault properties like permeability, storage coefficient, resolved stresses, friction and the shear modulus, our results also show that fitting the complete observed time history of slip requires alternative model considerations, such as variations in fault mechanical properties or friction coefficient with slip.

  2. Vertical tectonic deformation associated with the San Andreas fault zone offshore of San Francisco, California

    USGS Publications Warehouse

    Ryan, H.F.; Parsons, T.; Sliter, R.W.

    2008-01-01

A new fault map of the shelf offshore of San Francisco, California shows that faulting occurs as a distributed shear zone that involves many fault strands with the principal displacement taken up by the San Andreas fault and the eastern strand of the San Gregorio fault zone. Structures associated with the offshore faulting show compressive deformation near where the San Andreas fault goes offshore, but deformation becomes extensional several km to the north off of the Golden Gate. Our new fault map serves as the basis for a 3-D finite element model that shows that the block between the San Andreas and San Gregorio fault zone is subsiding at a long-term rate of about 0.2-0.3 mm/yr, with the maximum subsidence occurring northwest of the Golden Gate in the area of a mapped transtensional basin. Although the long-term rates of vertical displacement primarily show subsidence, the model of coseismic deformation associated with the 1906 San Francisco earthquake indicates that uplift on the order of 10-15 cm occurred in the block northeast of the San Andreas fault. Since 1906, 5-6 cm of regional subsidence has occurred in that block. One implication of our model is that the transfer of slip from the San Andreas fault to a fault 5 km to the east, the Golden Gate fault, is not required for the area offshore of San Francisco to be in extension. This has implications for both the deposition of thick Pliocene-Pleistocene sediments (the Merced Formation) observed east of the San Andreas fault, and the age of the Peninsula segment of the San Andreas fault.

  3. Gas Path On-line Fault Diagnostics Using a Nonlinear Integrated Model for Gas Turbine Engines

    NASA Astrophysics Data System (ADS)

    Lu, Feng; Huang, Jin-quan; Ji, Chun-sheng; Zhang, Dong-dong; Jiao, Hua-bin

    2014-08-01

Gas path fault diagnosis is a key technology that assists operators in managing gas turbine engine units. However, gradual performance degradation is inevitable with usage, and it results in model mismatch and hence misdiagnosis by popular model-based approaches. In this paper, an on-line integrated architecture based on a nonlinear model is developed for gas turbine engine anomaly detection and fault diagnosis over the course of the engine's life. The two engine models have different performance parameter update rates. One is a nonlinear real-time adaptive performance model with a spherical square-root unscented Kalman filter (SSR-UKF) producing performance estimates, and the other is a nonlinear baseline model for the measurement estimates. The fault detection and diagnosis logic is designed to discriminate between sensor faults and component faults. This integrated architecture is not only aware of long-term engine health degradation but is also effective in detecting gas path performance anomaly shifts while the engine continues to degrade. The benefits of the proposed approach over the existing architecture are demonstrated through experiment and analysis.

  4. Probabilistic seismic hazard in the San Francisco Bay area based on a simplified viscoelastic cycle model of fault interactions

    USGS Publications Warehouse

    Pollitz, F.F.; Schwartz, D.P.

    2008-01-01

We construct a viscoelastic cycle model of plate boundary deformation that includes the effect of time-dependent interseismic strain accumulation, coseismic strain release, and viscoelastic relaxation of the substrate beneath the seismogenic crust. For a given fault system, time-averaged stress changes at any point (not on a fault) are constrained to zero; that is, kinematic consistency is enforced for the fault system. The dates of last rupture, mean recurrence times, and the slip distributions of the (assumed) repeating ruptures are key inputs into the viscoelastic cycle model. This simple formulation allows construction of stress evolution at all points in the plate boundary zone for purposes of probabilistic seismic hazard analysis (PSHA). Stress evolution is combined with a Coulomb failure stress threshold at representative points on the fault segments to estimate the times of their respective future ruptures. In our PSHA we consider uncertainties in a four-dimensional parameter space: the rupture periodicities, slip distributions, time of last earthquake (for prehistoric ruptures), and Coulomb failure stress thresholds. We apply this methodology to the San Francisco Bay region using a recently determined fault chronology of area faults. Assuming single-segment rupture scenarios, we find that future rupture probabilities of area faults in the coming decades are the highest for the southern Hayward, Rodgers Creek, and northern Calaveras faults. This conclusion is qualitatively similar to that of the Working Group on California Earthquake Probabilities, but the probabilities derived here are significantly higher. Given that fault rupture probabilities are highly model-dependent, no single model should be used to assess time-dependent rupture probabilities. We suggest that several models, including the present one, be used in a comprehensive PSHA methodology, as was done by the Working Group on California Earthquake Probabilities.
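The final step, combining a stress-evolution series with a Coulomb failure threshold and uncertain parameter samples, can be sketched as follows; the function names and the scalar treatment of stress are illustrative assumptions, not the authors' implementation.

```python
def next_rupture_time(times, coulomb_stress, threshold):
    """First time in the stress-evolution series at which the Coulomb
    failure stress reaches the threshold; None if it never does."""
    for t, s in zip(times, coulomb_stress):
        if s >= threshold:
            return t
    return None

def rupture_probability(sampled_times, t_window):
    """Fraction of Monte Carlo parameter samples whose predicted
    rupture time falls within the forecast window."""
    hits = [t for t in sampled_times if t is not None and t <= t_window]
    return len(hits) / len(sampled_times)
```

Each Monte Carlo sample draws a periodicity, slip distribution, last-event date, and threshold, producing one predicted rupture time; the fraction of samples rupturing within the forecast window is the time-dependent probability.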

  5. Geomechanical Modeling for Improved CO2 Storage Security

    NASA Astrophysics Data System (ADS)

    Rutqvist, J.; Rinaldi, A. P.; Cappa, F.; Jeanne, P.; Mazzoldi, A.; Urpi, L.; Vilarrasa, V.; Guglielmi, Y.

    2017-12-01

This presentation summarizes recent modeling studies on geomechanical aspects related to Geologic Carbon Sequestration (GCS), including modeling potential fault reactivation, seismicity, and CO2 leakage. The model simulations demonstrate that the potential for fault reactivation, the resulting seismic magnitude, and the potential for creating a leakage path through overburden sealing layers (caprock) depend on a number of parameters such as fault orientation, stress field, and rock properties. The model simulations further demonstrate that seismic events large enough to be felt by humans require brittle fault properties as well as continuous fault permeability allowing the pressure to be distributed over a large fault patch that is ruptured at once. Heterogeneous fault properties, which are commonly encountered in faults intersecting multilayered shale/sandstone sequences, effectively reduce the likelihood of inducing felt seismicity and also effectively impede upward CO2 leakage. Site-specific model simulations of the In Salah CO2 storage site showed that deep fractured zone responses and associated seismicity occurred in the brittle fractured sandstone reservoir, but at a very substantial reservoir overpressure close to the magnitude of the least principal stress. It is suggested that coupled geomechanical modeling be used to guide site selection, to assist in the identification of locations most prone to unwanted and damaging geomechanical changes, and to evaluate the potential consequences of such changes. The geomechanical modeling can be used to better estimate the maximum sustainable injection rate or reservoir pressure and thereby provide for improved CO2 storage security. Whether damaging geomechanical changes could actually occur very much depends on the local stress field and local reservoir properties, such as the presence of ductile rock and faults (which can aseismically accommodate the stress and strain induced by the injection) or, on the contrary, the presence of more brittle faults that, if critically stressed for shear, might be more prone to induce felt seismicity.

  6. Dynamic Evolution Of Off-Fault Medium During An Earthquake: A Micromechanics Based Model

    NASA Astrophysics Data System (ADS)

    Thomas, Marion Y.; Bhat, Harsha S.

    2018-05-01

Geophysical observations show a dramatic drop of seismic wave speeds in the shallow off-fault medium following earthquake ruptures. Seismic ruptures generate, or reactivate, damage around faults that alters the constitutive response of the surrounding medium, which in turn modifies the earthquake itself, the seismic radiation, and the near-fault ground motion. We present a micromechanics-based constitutive model that accounts for dynamic evolution of elastic moduli at high strain rates. We consider 2D in-plane models with a 1D right-lateral fault governed by a slip-weakening friction law. The two scenarios studied here assume uniform initial off-fault damage and an observationally motivated exponential decay of initial damage with fault-normal distance. Both scenarios produce dynamic damage that is consistent with geological observations. A small difference in initial damage actively impacts the final damage pattern. The second numerical experiment, in particular, highlights the complex feedback that exists between the evolving medium and the seismic event. We show that there is a unique off-fault damage pattern associated with supershear transition of an earthquake rupture that could potentially be seen as a geological signature of this transition. The scenarios presented here underline the importance of incorporating the complex structure of fault zone systems in dynamic models of earthquakes.

  7. Dynamic Evolution Of Off-Fault Medium During An Earthquake: A Micromechanics Based Model

    NASA Astrophysics Data System (ADS)

    Thomas, M. Y.; Bhat, H. S.

    2017-12-01

Geophysical observations show a dramatic drop of seismic wave speeds in the shallow off-fault medium following earthquake ruptures. Seismic ruptures generate, or reactivate, damage around faults that alters the constitutive response of the surrounding medium, which in turn modifies the earthquake itself, the seismic radiation, and the near-fault ground motion. We present a micromechanics-based constitutive model that accounts for dynamic evolution of elastic moduli at high strain rates. We consider 2D in-plane models with a 1D right-lateral fault governed by a slip-weakening friction law. The two scenarios studied here assume uniform initial off-fault damage and an observationally motivated exponential decay of initial damage with fault-normal distance. Both scenarios produce dynamic damage that is consistent with geological observations. A small difference in initial damage actively impacts the final damage pattern. The second numerical experiment, in particular, highlights the complex feedback that exists between the evolving medium and the seismic event. We show that there is a unique off-fault damage pattern associated with supershear transition of an earthquake rupture that could potentially be seen as a geological signature of this transition. The scenarios presented here underline the importance of incorporating the complex structure of fault zone systems in dynamic models of earthquakes.

  8. How do horizontal, frictional discontinuities affect reverse fault-propagation folding?

    NASA Astrophysics Data System (ADS)

    Bonanno, Emanuele; Bonini, Lorenzo; Basili, Roberto; Toscani, Giovanni; Seno, Silvio

    2017-09-01

The development of new reverse faults and related folds is strongly controlled by the mechanical characteristics of the host rocks. In this study we analyze the impact of a specific kind of anisotropy, i.e. thin mechanical and frictional discontinuities, on the development of reverse faults and of the associated folds using scaled physical models. We perform analog modeling introducing one or two initially horizontal, thin discontinuities above an initially blind fault dipping at 30° in one case, and 45° in another, and then compare the results with those obtained from a fully isotropic model. The experimental results show that the occurrence of thin discontinuities affects both the development and the propagation of new faults and the shape of the associated folds. New faults 1) accelerate or decelerate their propagation depending on the location of the tips with respect to the discontinuities, 2) cross the discontinuities at a characteristic angle (∼90°), and 3) produce folds with different shapes, resulting not only from the dip of the new faults but also from their non-linear propagation history. Our results may have direct impact on future kinematic models, especially those aimed at reconstructing the tectonic history of faults that developed in layered rocks or in regions affected by pre-existing faults.

  9. Automated Generation of Fault Management Artifacts from a Simple System Model

    NASA Technical Reports Server (NTRS)

    Kennedy, Andrew K.; Day, John C.

    2013-01-01

Our understanding of off-nominal behavior - failure modes and fault propagation - in complex systems is often based purely on engineering intuition; specific cases are assessed in an ad hoc fashion as a (fallible) fault management engineer sees fit. This work is an attempt to provide a more rigorous approach to this understanding and assessment by automating the creation of a fault management artifact, the Failure Modes and Effects Analysis (FMEA), through querying a representation of the system in a SysML model. This work builds on the previous development of an off-nominal behavior model for the upcoming Soil Moisture Active-Passive (SMAP) mission at the Jet Propulsion Laboratory. We further developed the previous system model to more fully incorporate the ideas of State Analysis, and it was restructured in an organizational hierarchy that models the system as layers of control systems while also incorporating the concept of "design authority". We present software that was developed to traverse the elements and relationships in this model to automatically construct an FMEA spreadsheet. We further discuss extending this model to automatically generate other typical fault management artifacts, such as Fault Trees, to efficiently portray system behavior and to depend less on the intuition of fault management engineers to ensure complete examination of off-nominal behavior.
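The traversal that turns model elements and relationships into FMEA rows can be sketched as below. The miniature dictionary stands in for the SysML model; the component names, the "affects" relationship, and the CSV layout are all assumptions made for illustration, not the SMAP model's actual schema.

```python
import csv
import io

# Hypothetical stand-in for a queried SysML model: each component has
# failure modes and an "affects" relationship to downstream elements.
model = {
    "Battery": {"modes": ["cell short"], "affects": ["Power Bus"]},
    "Power Bus": {"modes": ["overvoltage"], "affects": ["Flight Computer"]},
    "Flight Computer": {"modes": ["reset"], "affects": []},
}

def fmea_rows(model):
    """Traverse the model and yield one FMEA row per failure mode:
    (component, failure mode, locally affected elements)."""
    for comp, info in model.items():
        for mode in info["modes"]:
            yield comp, mode, "; ".join(info["affects"]) or "none"

def fmea_csv(model):
    """Render the traversal as an FMEA spreadsheet in CSV form."""
    buf = io.StringIO()
    writer = csv.writer(buf)
    writer.writerow(["Component", "Failure Mode", "Local Effects"])
    writer.writerows(fmea_rows(model))
    return buf.getvalue()
```

Because the rows are generated mechanically from the model, adding a failure mode to any component automatically propagates into the spreadsheet, which is the point of replacing ad hoc engineering judgment with a queryable representation.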

  10. Development of an On-board Failure Diagnostics and Prognostics System for Solid Rocket Booster

    NASA Technical Reports Server (NTRS)

    Smelyanskiy, Vadim N.; Luchinsky, Dmitry G.; Osipov, Vyatcheslav V.; Timucin, Dogan A.; Uckun, Serdar

    2009-01-01

    We develop a case breach model for the on-board fault diagnostics and prognostics system for subscale solid-rocket boosters (SRBs). The model development was motivated by recent ground firing tests, in which a deviation of measured time-traces from the predicted time-series was observed. A modified model takes into account the nozzle ablation, including the effect of roughness of the nozzle surface, the geometry of the fault, and erosion and burning of the walls of the hole in the metal case. The derived low-dimensional performance model (LDPM) of the fault can reproduce the observed time-series data very well. To verify the performance of the LDPM we build a FLUENT model of the case breach fault and demonstrate good agreement between theoretical predictions, based on the analytical solution of the model equations, and the results of the FLUENT simulations. We then incorporate the derived LDPM into an inferential Bayesian framework and verify the performance of the Bayesian algorithm for diagnostics and prognostics of the case breach fault. It is shown that the obtained LDPM allows one to track parameters of the SRB in real time during flight, to diagnose a case breach fault, and to predict its evolution. The application of the method to fault diagnostics and prognostics (FD&P) of other SRB fault modes is discussed.

  11. Modeling of a latent fault detector in a digital system

    NASA Technical Reports Server (NTRS)

    Nagel, P. M.

    1978-01-01

    Methods of modeling the detection time or latency period of a hardware fault in a digital system are proposed that explain how a computer detects faults in a computational mode. The objectives were to study how software reacts to a fault, to account for as many variables as possible affecting detection, and to forecast a given program's detecting ability prior to computation. A series of experiments was conducted on a small emulated microprocessor with fault injection capability. Results indicate that the detecting capability of a program depends largely on the instruction subset used during computation and the frequency of its use, and has little direct dependence on such variables as fault mode, number set, degree of branching, and program length. A model is discussed which employs an analog with balls in an urn to explain the rate at which subsequent repetitions of an instruction or instruction set detect a given fault.
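    In its simplest form, the ball-and-urn analog described above behaves like repeated independent draws: if a fraction p of the executed instruction mix exposes the fault, detection latency follows a geometric distribution with mean 1/p repetitions. A minimal simulation of that assumption (the 5% detection fraction is invented for illustration, not taken from the paper's experiments):

```python
import random

def detection_latency(p_detect, rng, max_reps=10_000):
    """Repetitions until an instruction execution first exposes the fault.
    Each repetition independently 'draws a ball'; a detecting draw occurs
    with probability p_detect (the fraction of detecting instructions)."""
    for n in range(1, max_reps + 1):
        if rng.random() < p_detect:
            return n
    return None  # fault remained latent within max_reps

rng = random.Random(0)
latencies = [detection_latency(0.05, rng) for _ in range(5000)]
mean_latency = sum(latencies) / len(latencies)
# Geometric distribution: expected latency is about 1/p = 20 repetitions
print(round(mean_latency, 1))
```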

  12. Graph-based real-time fault diagnostics

    NASA Technical Reports Server (NTRS)

    Padalkar, S.; Karsai, G.; Sztipanovits, J.

    1988-01-01

    A real-time fault detection and diagnosis capability is absolutely crucial in the design of large-scale space systems. Some of the existing AI-based fault diagnostic techniques, like expert systems and qualitative modelling, are frequently ill-suited for this purpose. Expert systems are often inadequately structured, difficult to validate, and suffer from knowledge acquisition bottlenecks. Qualitative modelling techniques sometimes generate a large number of failure source alternatives, thus hampering speedy diagnosis. In this paper we present a graph-based technique which is well suited for real-time fault diagnosis, structured knowledge representation and acquisition, and testing and validation. A hierarchical fault model of the system to be diagnosed is developed. At each level of the hierarchy there exist fault propagation digraphs denoting causal relations between failure modes of subsystems. The edges of such a digraph are weighted with fault propagation time intervals. Efficient and restartable graph algorithms are used for speedy on-line identification of failure source components.
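    The time-weighted digraph idea above can be sketched directly: failure modes are nodes, edges carry [min, max] propagation-time intervals, and a candidate failure source is any root node whose predicted arrival windows cover the observed alarm times. Everything concrete below (node names, intervals, alarm times) is invented for illustration; the paper's actual algorithms are hierarchical and restartable, which this toy version omits.

```python
# Assumed data layout: edge (src, dst) -> (t_min, t_max) in seconds.
graph = {
    ("pump_cavitation", "low_flow"): (1, 3),
    ("low_flow", "overheat"): (5, 10),
    ("valve_stuck", "low_flow"): (2, 4),
}

def windows_from(source):
    """Earliest/latest alarm times for every node reachable from `source`
    (simple forward propagation; assumes an acyclic digraph)."""
    win = {source: (0.0, 0.0)}
    frontier = [source]
    while frontier:
        node = frontier.pop()
        for (u, v), (lo, hi) in graph.items():
            if u == node:
                a, b = win[node]
                nw = (a + lo, b + hi)
                if v not in win or nw[0] < win[v][0]:
                    win[v] = nw
                    frontier.append(v)
    return win

def diagnose(alarms):
    """Return root-cause candidates whose windows contain all observed alarms."""
    sources = {u for (u, _) in graph} - {v for (_, v) in graph}
    ok = []
    for s in sources:
        win = windows_from(s)
        if all(n in win and win[n][0] <= t <= win[n][1]
               for n, t in alarms.items()):
            ok.append(s)
    return ok

print(diagnose({"low_flow": 1.5, "overheat": 7.0}))  # → ['pump_cavitation']
```

Note how the timing intervals discriminate between sources: an alarm at 1.5 s is too early to have come through the `valve_stuck` edge, whose propagation takes at least 2 s.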

  13. The emergence of asymmetric normal fault systems under symmetric boundary conditions

    NASA Astrophysics Data System (ADS)

    Schöpfer, Martin P. J.; Childs, Conrad; Manzocchi, Tom; Walsh, John J.; Nicol, Andrew; Grasemann, Bernhard

    2017-11-01

    Many normal fault systems and, on a smaller scale, fracture boudinage often exhibit asymmetry, with one fault dip direction dominating. It is a common belief that the formation of domino and shear band boudinage with a monoclinic symmetry requires a component of layer-parallel shearing. Moreover, domains of parallel faults are frequently used to infer the presence of a décollement. Using Distinct Element Method (DEM) modelling we show that asymmetric fault systems can emerge under symmetric boundary conditions. A statistical analysis of DEM models suggests that the fault dip directions and system polarities can be explained by a random process if the strength contrast between the brittle layer and the surrounding material is high. The models indicate that domino and shear band boudinage are unreliable shear-sense indicators. Moreover, the presence of a décollement should not be inferred on the basis of a domain of parallel faults alone.

  14. Methodology for earthquake rupture rate estimates of fault networks: example for the western Corinth rift, Greece

    NASA Astrophysics Data System (ADS)

    Chartier, Thomas; Scotti, Oona; Lyon-Caen, Hélène; Boiselet, Aurélien

    2017-10-01

    Modeling the seismic potential of active faults is a fundamental step of probabilistic seismic hazard assessment (PSHA). An accurate estimation of the rate of earthquakes on the faults is necessary in order to obtain the probability of exceedance of a given ground motion. Most PSHA studies consider faults as independent structures and neglect the possibility of multiple faults or fault segments rupturing simultaneously (fault-to-fault, FtF, ruptures). The Uniform California Earthquake Rupture Forecast version 3 (UCERF-3) model takes this possibility into account by considering a system-level approach rather than an individual-fault-level approach, using geological, seismological and geodetic information to invert the earthquake rates. In many parts of the world, seismological and geodetic information along fault networks is not well constrained. There is therefore a need for a methodology relying on geological information alone to compute earthquake rates of the faults in the network. In the proposed methodology, a simple distance criterion is used to define FtF ruptures, and single-fault or FtF ruptures are treated as an aleatory uncertainty, similarly to UCERF-3. Rates of earthquakes on faults are then computed under two constraints: the magnitude frequency distribution (MFD) of earthquakes in the fault system as a whole must follow an a priori chosen shape, and the rate of earthquakes on each fault is determined by the slip rate of each segment, depending on the possible FtF ruptures.
The modeled earthquake rates are then compared to the available independent data (geodetic, seismological and paleoseismological data) in order to weight the different hypotheses explored in a logic tree. The methodology is tested on the western Corinth rift (WCR), Greece, where recent advances have been made in understanding the geological slip rates of the complex network of normal faults which accommodate the ∼15 mm yr-1 north-south extension. Modeling results show that geological, seismological and paleoseismological rates of earthquakes cannot be reconciled with single-fault-rupture scenarios alone and require hypothesizing a large spectrum of possible FtF rupture sets. In order to fit the imposed regional Gutenberg-Richter (GR) MFD target, some of the slip along certain faults needs to be accommodated either by interseismic creep or by post-seismic processes. Furthermore, computed individual-fault MFDs differ depending on the position of each fault in the system and the possible FtF ruptures associated with the fault. Finally, a comparison of modeled earthquake rupture rates with those deduced from the regional and local earthquake catalog statistics and local paleoseismological data indicates a better fit with the FtF rupture set constructed with a 5 km distance criterion rather than 3 km, suggesting a high connectivity of faults in the WCR fault system.
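    The distance criterion for building FtF rupture sets can be illustrated with a toy computation: enumerate fault combinations and keep those in which every member lies within the distance threshold of at least one other member. The fault traces, coordinates, and the vertex-to-vertex distance shortcut below are invented simplifications of the methodology, not the Corinth rift geometry.

```python
from itertools import combinations
from math import hypot

# Hypothetical fault traces as polylines of (x, y) coordinates in km.
faults = {
    "F1": [(0, 0), (5, 0)],
    "F2": [(6, 1), (11, 1)],   # ~1.4 km from F1's east tip
    "F3": [(30, 0), (35, 0)],  # isolated
}

def min_distance(a, b):
    """Minimum vertex-to-vertex distance (a coarse stand-in for the
    true trace-to-trace distance)."""
    return min(hypot(px - qx, py - qy) for px, py in a for qx, qy in b)

def ftf_ruptures(faults, dmax_km):
    """All rupture combinations in which every member fault lies within
    dmax_km of at least one other member (a simplified stand-in for the
    paper's distance criterion)."""
    names = sorted(faults)
    ruptures = [(n,) for n in names]  # single-fault ruptures always allowed
    for r in range(2, len(names) + 1):
        for combo in combinations(names, r):
            linked = all(any(min_distance(faults[a], faults[b]) <= dmax_km
                             for b in combo if b != a) for a in combo)
            if linked:
                ruptures.append(combo)
    return ruptures

print(ftf_ruptures(faults, dmax_km=5.0))
# F1 and F2 qualify as a joint rupture; F3 can only rupture alone
```

Raising `dmax_km` grows the rupture set, which is exactly the sensitivity the abstract explores with its 3 km versus 5 km criteria.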

  15. Fault detection of Tennessee Eastman process based on topological features and SVM

    NASA Astrophysics Data System (ADS)

    Zhao, Huiyang; Hu, Yanzhu; Ai, Xinbo; Hu, Yu; Meng, Zhen

    2018-03-01

    Fault detection in industrial processes is a popular research topic. Although distributed control systems (DCS) have been introduced to monitor the state of industrial processes, they still cannot satisfy all the requirements for fault detection across industrial systems. In this paper, we propose a novel method for fault detection of industrial processes based on topological features and support vector machines (SVM). The proposed method takes the global information of the measured variables into account through a complex network model and uses an SVM to predict whether a system has developed a fault. The method can be divided into four steps: network construction, network analysis, model training and model testing. Finally, we apply the model to the Tennessee Eastman process (TEP). The results show that the method works well and can be a useful supplement for fault detection of industrial processes.
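    The four steps above can be sketched end-to-end on synthetic data: build a correlation network per measurement window, extract simple topological features, then train and test a classifier. This sketch substitutes a nearest-centroid classifier for the paper's SVM to stay dependency-free, and the data generator is an invented stand-in for TEP measurements; only the pipeline shape is taken from the abstract.

```python
import numpy as np

rng = np.random.default_rng(0)

def topo_features(window, corr_threshold=0.5):
    """Steps 1-2: build a correlation network over the measured variables
    and extract topological features (edge density, normalized max degree)."""
    corr = np.corrcoef(window.T)
    adj = (np.abs(corr) > corr_threshold) & ~np.eye(corr.shape[0], dtype=bool)
    degrees = adj.sum(axis=0)
    n = corr.shape[0]
    return np.array([adj.sum() / (n * (n - 1)), degrees.max() / (n - 1)])

def make_window(faulty, n=200, nvar=6):
    """Synthetic stand-in for process data: a fault couples the variables."""
    base = rng.normal(size=(n, nvar))
    if faulty:
        common = rng.normal(size=(n, 1))
        base += 2.0 * common  # shared disturbance raises pairwise correlations
    return base

# Steps 3-4: train and test. A nearest-centroid rule replaces the SVM here.
train_X = np.array([topo_features(make_window(f))
                    for f in [False] * 20 + [True] * 20])
centroids = train_X[:20].mean(axis=0), train_X[20:].mean(axis=0)

def predict(window):
    """Return 1 (fault) if the window's features sit closer to the faulty
    centroid than to the normal one."""
    f = topo_features(window)
    return int(np.linalg.norm(f - centroids[1]) < np.linalg.norm(f - centroids[0]))

print(predict(make_window(True)), predict(make_window(False)))
```

The key intuition survives the simplification: a process-wide fault raises correlations among variables, densifying the network, so even crude topological features separate the two classes.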

  16. Kinematics of shallow backthrusts in the Seattle fault zone, Washington State

    USGS Publications Warehouse

    Pratt, Thomas L.; Troost, K.G.; Odum, Jackson K.; Stephenson, William J.

    2015-01-01

    Near-surface thrust fault splays and antithetic backthrusts at the tips of major thrust fault systems can distribute slip across multiple shallow fault strands, complicating earthquake hazard analyses based on studies of surface faulting. The shallow expression of the fault strands forming the Seattle fault zone of Washington State shows the structural relationships and interactions between such fault strands. Paleoseismic studies document an ∼7000 yr history of earthquakes on multiple faults within the Seattle fault zone, with some backthrusts inferred to rupture in small (M ∼5.5–6.0) earthquakes at times other than during earthquakes on the main thrust faults. We interpret seismic-reflection profiles to show three main thrust faults, one of which is a blind thrust fault directly beneath downtown Seattle, and four small backthrusts within the Seattle fault zone. We then model fault slip, constrained by shallow deformation, to show that the Seattle fault forms a fault propagation fold rather than the alternatively proposed roof thrust system. Fault slip modeling shows that back-thrust ruptures driven by moderate (M ∼6.5–6.7) earthquakes on the main thrust faults are consistent with the paleoseismic data. The results indicate that paleoseismic data from the back-thrust ruptures reveal the times of moderate earthquakes on the main fault system, rather than indicating smaller (M ∼5.5–6.0) earthquakes involving only the backthrusts. Estimates of cumulative shortening during known Seattle fault zone earthquakes support the inference that the Seattle fault has been the major seismic hazard in the northern Cascadia forearc in the late Holocene.

  17. Fault Mechanics and Post-seismic Deformation at Bam, SE Iran

    NASA Astrophysics Data System (ADS)

    Wimpenny, S. E.; Copley, A.

    2017-12-01

    The extent to which aseismic deformation relaxes co-seismic stress changes on a fault zone is fundamental to assessing the future seismic hazard following any earthquake, and to understanding the mechanical behaviour of faults. We used models of stress-driven afterslip and visco-elastic relaxation, in conjunction with a dense time series of post-seismic InSAR measurements, to show that there has been minimal release of co-seismic stress changes through post-seismic deformation following the 2003 Mw 6.6 Bam earthquake. Our modelling indicates that the faults at Bam may remain predominantly locked, and that the co- plus inter-seismically accumulated elastic strain stored down-dip of the 2003 rupture patch may be released in a future Mw 6 earthquake. Modelling also suggests that parts of the fault that experienced post-seismic creep between 2003 and 2009 overlapped with areas that also slipped co-seismically. Our observations and models also provide an opportunity to probe how aseismic fault slip leads to the growth of topography at Bam. We find that reproducing the sharp step in the local topography at Bam over repeated earthquake cycles with our modelled afterslip distribution, while remaining consistent with the geodetic observations, requires either (1) far-field tectonic loading equivalent to a 2-10 MPa deviatoric stress acting across the fault system, which suggests it supports stresses 60-100 times less than classical views of static fault strength, or (2) that the fault surface has some form of mechanical anisotropy, potentially related to corrugations on the fault plane, that controls the sense of slip.

  18. Statistical tests of simple earthquake cycle models

    USGS Publications Warehouse

    Devries, Phoebe M. R.; Evans, Eileen

    2016-01-01

    A central goal of observing and modeling the earthquake cycle is to forecast when a particular fault may generate an earthquake: a fault late in its earthquake cycle may be more likely to generate an earthquake than a fault early in its earthquake cycle. Models that can explain geodetic observations throughout the entire earthquake cycle may be required to gain a more complete understanding of relevant physics and phenomenology. Previous efforts to develop unified earthquake models for strike-slip faults have largely focused on explaining both preseismic and postseismic geodetic observations available across a few faults in California, Turkey, and Tibet. An alternative approach leverages the global distribution of geodetic and geologic slip rate estimates on strike-slip faults worldwide. Here we use the Kolmogorov-Smirnov test for similarity of distributions to infer, in a statistically rigorous manner, viscoelastic earthquake cycle models that are inconsistent with 15 sets of observations across major strike-slip faults. We reject a large subset of two-layer models incorporating Burgers rheologies at a significance level of α = 0.05 (those with long-term Maxwell viscosities ηM ≲ 4.0 × 10¹⁹ Pa s and ηM ≳ 4.6 × 10²⁰ Pa s) but cannot reject models on the basis of transient Kelvin viscosity ηK. Finally, we examine the implications of these results for the predicted earthquake cycle timing of the 15 faults considered and compare these predictions to the geologic and historical record.
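    The rejection machinery above rests on the two-sample Kolmogorov-Smirnov statistic: the maximum gap between two empirical CDFs, compared against a significance threshold. A self-contained sketch follows; the sample values are invented to make the rejection obvious and are not the paper's data.

```python
# Minimal two-sample Kolmogorov-Smirnov test: the statistic D is the
# maximum vertical gap between the two empirical CDFs.
def ks_statistic(sample_a, sample_b):
    a, b = sorted(sample_a), sorted(sample_b)
    points = sorted(set(a) | set(b))
    def ecdf(s, x):
        return sum(1 for v in s if v <= x) / len(s)
    return max(abs(ecdf(a, x) - ecdf(b, x)) for x in points)

# Illustrative only: "observed" slip-rate ratios versus ratios predicted
# by a candidate viscoelastic model (numbers invented).
observed = [0.8, 0.9, 1.0, 1.1, 1.2, 0.95, 1.05]
model_predicted = [1.6, 1.7, 1.8, 1.9, 2.0, 1.75, 1.85]
D = ks_statistic(observed, model_predicted)
n, m = len(observed), len(model_predicted)
# Large-sample rejection threshold at alpha = 0.05:
# c(alpha) * sqrt((n + m) / (n * m)), with c(0.05) ≈ 1.358
threshold = 1.358 * ((n + m) / (n * m)) ** 0.5
print(D, D > threshold)  # D = 1.0 here: the candidate model is rejected
```

In practice one would use `scipy.stats.ks_2samp`, which also returns an exact p-value; the hand-rolled version above just makes the statistic's definition explicit.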

  19. Taking apart the Big Pine fault: Redefining a major structural feature in southern California

    USGS Publications Warehouse

    Onderdonk, N.W.; Minor, S.A.; Kellogg, K.S.

    2005-01-01

    New mapping along the Big Pine fault trend in southern California indicates that this structural alignment is actually three separate faults, which exhibit different geometries, slip histories, and senses of offset since Miocene time. The easternmost fault, along the north side of Lockwood Valley, exhibits left-lateral reverse Quaternary displacement but was a north dipping normal fault in late Oligocene to early Miocene time. The eastern Big Pine fault that bounds the southern edge of the Cuyama Badlands is a south dipping reverse fault that is continuous with the San Guillermo fault. The western segment of the Big Pine fault trend is a north dipping thrust fault continuous with the Pine Mountain fault and delineates the northern boundary of the rotated western Transverse Ranges terrane. This redefinition of the Big Pine fault differs greatly from the previous interpretation and significantly alters regional tectonic models and seismic risk estimates. The outcome of this study also demonstrates that basic geologic mapping is still needed to support the development of geologic models. Copyright 2005 by the American Geophysical Union.

  20. Multi-Fault Rupture Scenarios in the Brawley Seismic Zone

    NASA Astrophysics Data System (ADS)

    Kyriakopoulos, C.; Oglesby, D. D.; Rockwell, T. K.; Meltzner, A. J.; Barall, M.

    2017-12-01

    Dynamic rupture complexity is strongly affected by both the geometric configuration of a network of faults and pre-stress conditions. Of the two, the geometric configuration is more likely to be anticipated prior to an event. An important factor in the unpredictability of the final rupture pattern of a group of faults is the time-dependent interaction between them. Dynamic rupture models provide a means to investigate this otherwise inscrutable process. The Brawley Seismic Zone in Southern California is an area in which this approach might be important for inferring potential earthquake sizes and rupture patterns. Dynamic modeling can illuminate how the main faults in this area, the Southern San Andreas (SSAF) and Imperial faults, might interact with the intersecting cross faults, and how the cross faults may modulate rupture on the main faults. We perform 3D finite element modeling of potential earthquakes in this zone assuming an extended array of faults (Figure). Our results include a wide range of ruptures and fault behaviors depending on assumptions about nucleation location, geometric setup, pre-stress conditions, and locking depth. For example, in the majority of our models the cross faults do not strongly participate in the rupture process, giving the impression that they are typically neither an aid nor an obstacle to rupture propagation. However, in some cases, particularly when rupture proceeds slowly on the main faults, the cross faults can indeed participate with significant slip, and can even cause rupture termination on one of the main faults. Furthermore, in a complex network of faults we should not preclude the possibility of a large event nucleating on a smaller fault (e.g. a cross fault) and eventually promoting rupture on the main structure. 
Recent examples include the 2010 Mw 7.1 Darfield (New Zealand) and Mw 7.2 El Mayor-Cucapah (Mexico) earthquakes, where rupture started on a smaller adjacent segment and later cascaded into a larger event. For that reason, we are investigating scenarios of a moderate rupture on a cross fault, and determining conditions under which the rupture will propagate onto the adjacent SSAF. Our investigation will provide fundamental insights that may help us interpret faulting behaviors in other areas, such as the complex Mw 7.8 2016 Kaikoura (New Zealand) earthquake.

  1. Modelling Active Faults in Probabilistic Seismic Hazard Analysis (PSHA) with OpenQuake: Definition, Design and Experience

    NASA Astrophysics Data System (ADS)

    Weatherill, Graeme; Garcia, Julio; Poggi, Valerio; Chen, Yen-Shin; Pagani, Marco

    2016-04-01

    The Global Earthquake Model (GEM) has, since its inception in 2009, made many contributions to the practice of seismic hazard modeling in different regions of the globe. The OpenQuake-engine (hereafter referred to simply as OpenQuake), GEM's open-source software for calculation of earthquake hazard and risk, has found application in many countries, spanning a diversity of tectonic environments. GEM itself has produced a database of national and regional seismic hazard models, harmonizing into OpenQuake's own definition the varied seismogenic sources found therein. The characterization of active faults in probabilistic seismic hazard analysis (PSHA) is at the centre of this process, motivating many of the developments in OpenQuake and presenting hazard modellers with the challenge of reconciling seismological, geological and geodetic information for the different regions of the world. Faced with these challenges, and from the experience gained in the process of harmonizing existing models of seismic hazard, four critical issues are addressed. The first challenge GEM has faced in the development of software is how to define a representation of an active fault (both in terms of geometry and earthquake behaviour) that is sufficiently flexible to adapt to different tectonic conditions and levels of data completeness. By exploring the different fault typologies supported by OpenQuake we illustrate how seismic hazard calculations can, and do, take into account complexities such as geometrical irregularity of faults in the prediction of ground motion, highlighting some of the potential pitfalls and inconsistencies that can arise. This exploration leads to the second main challenge in active fault modeling: what elements of the fault source model impact most upon the hazard at a site, and when does this matter? 
Through a series of sensitivity studies we show how different configurations of fault geometry, and the corresponding characterisation of near-fault phenomena (including hanging wall and directivity effects) within modern ground motion prediction equations, can have an influence on the seismic hazard at a site. Yet we also illustrate the conditions under which these effects may be partially tempered when considering the full uncertainty in rupture behaviour within the fault system. The third challenge is the development of efficient means for representing both aleatory and epistemic uncertainties from active fault models in PSHA. In implementing state-of-the-art seismic hazard models into OpenQuake, such as those recently undertaken in California and Japan, new modeling techniques are needed that redefine how we treat interdependence of ruptures within the model (such as mutual exclusivity), and the propagation of uncertainties emerging from geology. Finally, we illustrate how OpenQuake, and GEM's additional toolkits for model preparation, can be applied to address long-standing issues in active fault modeling in PSHA. These include constraining the seismogenic coupling of a fault and the partitioning of seismic moment between the active fault surfaces and the surrounding seismogenic crust. We illustrate some of the possible roles that geodesy can play in the process, but highlight where this may introduce new uncertainties and potential biases into the seismic hazard process, and how these can be addressed.

  2. Variations in creep rate along the Hayward Fault, California, interpreted as changes in depth of creep

    USGS Publications Warehouse

    Simpson, R.W.; Lienkaemper, J.J.; Galehouse, J.S.

    2001-01-01

    Variations in surface creep rate along the Hayward fault are modeled as changes in locking depth using 3D boundary elements. Model creep is driven by screw dislocations at 12 km depth under the Hayward and other regional faults. Inferred depth to locking varies along strike from 4 to 12 km (12 km implies no locking). Our models require locked patches under the central Hayward fault, consistent with a M6.8 earthquake in 1868, but the geometry and extent of locking under the north and south ends depend critically on assumptions regarding continuity and creep behavior of the fault at its ends. For the northern onshore part of the fault, our models contain 1.4-1.7 times more stored moment than the model of Bürgmann et al. [2000]; 45-57% of this stored moment resides in creeping areas. It is important for seismic hazard estimation to know how much of this moment is released coseismically or as aseismic afterslip.
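    A much-simplified cousin of the paper's 3D boundary element model is the classical screw-dislocation profile of Savage and Burford (1973), in which fault-parallel surface velocity follows an arctangent of distance over locking depth. The slip rate and distances below are illustrative values, not the Hayward-specific numbers:

```python
from math import atan, pi

def surface_velocity(x_km, slip_rate_mm_yr, locking_depth_km):
    """Fault-parallel surface velocity at distance x from a strike-slip
    fault driven by a screw dislocation slipping below the locking depth
    (Savage & Burford, 1973): v(x) = (s / pi) * arctan(x / D)."""
    return (slip_rate_mm_yr / pi) * atan(x_km / locking_depth_km)

# A shallow locking depth concentrates the velocity gradient at the fault
# trace (strong surface creep); deep locking spreads it over a wide zone.
for D in (4, 12):
    v = surface_velocity(10.0, 9.0, D) - surface_velocity(-10.0, 9.0, D)
    print(f"locking depth {D} km: {v:.1f} mm/yr across +/-10 km")
```

This is how changes in inferred locking depth map onto the along-strike creep-rate variations the abstract describes, albeit without the 3D fault interactions the boundary element models capture.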

  3. Theoretical investigation of the formation of basal plane stacking faults in heavily nitrogen-doped 4H-SiC crystals

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Taniguchi, Chisato; Ichimura, Aiko; Ohtani, Noboru, E-mail: ohtani.noboru@kwansei.ac.jp

    The formation of basal plane stacking faults in heavily nitrogen-doped 4H-SiC crystals was theoretically investigated. A novel theoretical model based on the so-called quantum well action mechanism was proposed; the model considers several factors that were overlooked in a previously proposed model and provides a detailed explanation of the annealing-induced formation of double-layer Shockley-type stacking faults in heavily nitrogen-doped 4H-SiC crystals. We further revised the model to consider the carrier distribution in the depletion regions adjacent to the stacking fault and successfully explained the shrinkage of stacking faults during annealing at even higher temperatures. The model also succeeded in accounting for the aluminum co-doping effect in heavily nitrogen-doped 4H-SiC crystals, in that stacking fault formation is suppressed when aluminum acceptors are co-doped into the crystals.

  4. Constraining the Distribution of Vertical Slip on the South Heli Shan Fault (Northeastern Tibet) From High-Resolution Topographic Data

    NASA Astrophysics Data System (ADS)

    Bi, Haiyun; Zheng, Wenjun; Ge, Weipeng; Zhang, Peizhen; Zeng, Jiangyuan; Yu, Jingxing

    2018-03-01

    Reconstruction of the along-fault slip distribution provides an insight into the long-term rupture patterns of a fault, thereby enabling more accurate assessment of its future behavior. The increasing wealth of high-resolution topographic data, such as Light Detection and Ranging and photogrammetric digital elevation models, allows us to better constrain the slip distribution, thus greatly improving our understanding of fault behavior. The South Heli Shan Fault is a major active fault on the northeastern margin of the Tibetan Plateau. In this study, we built a 2 m resolution digital elevation model of the South Heli Shan Fault based on high-resolution GeoEye-1 stereo satellite imagery and then measured 302 vertical displacements along the fault, which increased the measurement density of previous field surveys by a factor of nearly 5. The cumulative displacements show an asymmetric distribution along the fault, comprising three major segments. An increasing trend from west to east indicates that the fault has likely propagated westward over its lifetime. The topographic relief of Heli Shan shows an asymmetry similar to the measured cumulative slip distribution, suggesting that the uplift of Heli Shan may result mainly from the long-term activity of the South Heli Shan Fault. Furthermore, the cumulative displacements divide into discrete clusters along the fault, indicating that the fault has ruptured in several large earthquakes. By constraining the slip-length distribution of each rupture, we found that the events do not support a characteristic recurrence model for the fault.

  5. Extensional Fault Evolution and its Flexural Isostatic Response During Iberia-Newfoundland Rifted Margin Formation

    NASA Astrophysics Data System (ADS)

    Gómez-Romeu, J.; Kusznir, N.; Manatschal, G.; Roberts, A.

    2017-12-01

    During the formation of magma-poor rifted margins, upper lithosphere thinning and stretching is achieved by extensional faulting; however, there is still debate and uncertainty about how faults evolve during rifting leading to breakup. Seismic data provide an image of the present-day structural and stratigraphic configuration, but the initial fault geometry is unknown. To understand the geometric evolution of extensional faults at rifted margins it is extremely important to also consider the flexural response of the lithosphere produced by fault displacement, resulting in footwall uplift and hangingwall subsidence. We investigate how the flexural isostatic response to extensional faulting controls the structural development of rifted margins. To achieve our aim, we use a kinematic forward model (RIFTER) which incorporates the flexural isostatic response to extensional faulting, crustal thinning, lithosphere thermal loads, sedimentation and erosion. Inputs for RIFTER are derived from seismic reflection interpretation, and outputs are the predicted structural and stratigraphic consequences of recursive sequential faulting and sedimentation. Using RIFTER we model the simultaneous tectonic development of the Iberia-Newfoundland conjugate rifted margins along the ISE01-SCREECH1 and TGS/LG12-SCREECH2 seismic lines. We quantitatively test and calibrate the model against observed target data restored to breakup time. Two quantitative methods are used to obtain this target data: (i) gravity anomaly inversion, which predicts Moho depth and continental lithosphere thinning, and (ii) reverse post-rift subsidence modelling, which gives water and Moho depths at breakup time. We show that extensional faulting occurs on steep (∼60°) normal faults in both proximal and distal parts of rifted margins. Extensional faults together with their flexural isostatic response produce not only sub-horizontal exhumed footwall surfaces (i.e. 
the rolling hinge model) and highly rotated (60° or more) pre- and syn-rift stratigraphy, but also extensional allochthons underlain by apparent horizontal detachments. These detachment faults were never active in this sub-horizontal geometry; they were only active as steep faults which were isostatically rotated to their present sub-horizontal position.

  6. Earthquake cycle modeling of multi-segmented faults: dynamic rupture and ground motion simulation of the 1992 Mw 7.3 Landers earthquake.

    NASA Astrophysics Data System (ADS)

    Petukhin, A.; Galvez, P.; Somerville, P.; Ampuero, J. P.

    2017-12-01

    We perform earthquake cycle simulations to study the characteristics of source scaling relations and strong ground motions in multi-segmented fault ruptures. For earthquake cycle modeling, a quasi-dynamic solver (QDYN; Luo et al., 2016) is used to nucleate events and a fully dynamic solver (SPECFEM3D; Galvez et al., 2014, 2016) is used to simulate earthquake ruptures. The Mw 7.3 Landers earthquake has been chosen as a target earthquake to validate our methodology. The SCEC fault geometry for the three-segmented Landers rupture is included and extended at both ends to a total length of 200 km. We follow the 2-D spatially correlated Dc distributions of Hillers et al. (2007), which associate the Dc distribution with different degrees of fault maturity. The fault maturity is related to the variability of Dc on a microscopic scale: large variations of Dc represent immature faults and lower variations of Dc represent mature faults. Moreover, we impose a taper (a-b) at the fault edges and limit the fault depth to 15 km. Using these settings, earthquake cycle simulations are performed to nucleate seismic events on different sections of the fault, and dynamic rupture modeling is used to propagate the ruptures. The fault segmentation brings complexity into the rupture process. For instance, the change of strike between fault segments enhances strong variations of stress. In fact, Oglesby and Mai (2012) show that the normal stress varies from positive (clamping) to negative (unclamping) between fault segments, which leads to favorable or unfavorable conditions for rupture growth. To replicate these complexities and the effect of fault segmentation in the rupture process, we perform earthquake cycles with dynamic rupture modeling and generate events similar to the Mw 7.3 Landers earthquake. We extract the asperities of these events and analyze the scaling relations between rupture area, average slip and combined area of asperities versus moment magnitude. 
Finally, the simulated ground motions will be validated by comparison of simulated response spectra with recorded response spectra and with response spectra from ground motion prediction models. This research is sponsored by the Japan Nuclear Regulation Authority.

  7. Insights into the relationship between surface and subsurface activity from mechanical modeling of the 1992 Landers M7.3 earthquake

    NASA Astrophysics Data System (ADS)

    Madden, E. H.; Pollard, D. D.

    2009-12-01

    Multi-fault, strike-slip earthquakes have proved hard to incorporate into seismic hazard analyses due to the difficulty of determining the probability of these ruptures, despite collection of extensive data associated with such events. Modeling the mechanical behavior of these complex ruptures contributes to a better understanding of their occurrence by elucidating the relationship between surface and subsurface earthquake activity along transform faults. This insight is especially important for hazard mitigation, as multi-fault systems can produce earthquakes larger than those associated with any one fault involved. We present a linear elastic, quasi-static model of the southern portion of the 28 June 1992 Landers earthquake built in the boundary element software program Poly3D. This event did not rupture the extent of any one previously mapped fault, but trended 80 km N and NW across segments of five sub-parallel, N-S and NW-SE striking faults. At M7.3, the earthquake was larger than the potential earthquakes associated with the individual faults that ruptured. The model extends from the Johnson Valley Fault, across the Landers-Kickapoo Fault, to the Homestead Valley Fault, using data associated with a six-week time period following the mainshock. It honors the complex surface deformation associated with this earthquake, which was well exposed in the desert environment and mapped extensively in the field and from aerial photos in the days immediately following the earthquake. Thus, the model incorporates the non-linearity and segmentation of the main rupture traces, the irregularity of fault slip distributions, and the associated secondary structures such as strike-slip splays and thrust faults. Interferometric Synthetic Aperture Radar (InSAR) images of the Landers event provided the first satellite images of ground deformation caused by a single seismic event and provide constraints on off-fault surface displacement in this six-week period. 
Insight is gained by comparing the density, magnitudes and focal plane orientations of relocated aftershocks for this time frame with the magnitude and orientation of planes of maximum Coulomb shear stress around the fault planes at depth.

  8. Network Connectivity for Permanent, Transient, Independent, and Correlated Faults

    NASA Technical Reports Server (NTRS)

    White, Allan L.; Sicher, Courtney; Henry, Courtney

    2012-01-01

    This paper develops a method for the quantitative analysis of network connectivity in the presence of both permanent and transient faults. Even though transient noise is considered a common occurrence in networks, a survey of the literature reveals an emphasis on permanent faults. Transient faults introduce a time element into the analysis of network reliability. With permanent faults it is sufficient to consider the faults that have accumulated by the end of the operating period. With transient faults the arrival and recovery times must be included: the number and location of faults in the system are dynamic variables. Transient faults also introduce system recovery into the analysis. The goal is the quantitative assessment of network connectivity in the presence of both permanent and transient faults. The approach is to construct a global model that includes all classes of faults: permanent, transient, independent, and correlated. A theorem is derived about this model that gives distributions for (1) the number of fault occurrences, (2) the type of fault occurrence, (3) the time of the fault occurrences, and (4) the location of the fault occurrences. These results are applied to compare and contrast the connectivity of different network architectures in the presence of permanent, transient, independent, and correlated faults. The examples below use a Monte Carlo simulation, but the theorem mentioned above could be used to guide fault injections in a laboratory.
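
    The fault-injection approach described in this abstract lends itself to a compact Monte Carlo illustration. The sketch below is not the paper's model: the topology, per-step fault probabilities, and recovery time are invented for the example; it only shows the shape of such a simulation (permanent faults persist, transient faults recover after a delay).

```python
import random
from collections import deque

def connected(nodes, edges):
    """BFS connectivity check over the surviving edges."""
    if not nodes:
        return True
    adj = {n: [] for n in nodes}
    for a, b in edges:
        adj[a].append(b)
        adj[b].append(a)
    start = next(iter(nodes))
    seen, q = {start}, deque([start])
    while q:
        n = q.popleft()
        for m in adj[n]:
            if m not in seen:
                seen.add(m)
                q.append(m)
    return len(seen) == len(nodes)

def mc_connectivity(edges, t_end=100.0, dt=1.0, p_perm=1e-4,
                    p_trans=5e-3, recover=2.0, trials=500, seed=1):
    """Estimate the probability that the network stays connected for the
    whole operating period, under per-step permanent and transient
    link-fault probabilities (illustrative values, not from the paper)."""
    rng = random.Random(seed)
    nodes = {n for e in edges for n in e}
    ok = 0
    for _ in range(trials):
        down = {}            # edge -> remaining outage (None = permanent)
        alive, t = True, 0.0
        while t < t_end and alive:
            for e in edges:
                if e in down:
                    continue
                if rng.random() < p_perm:
                    down[e] = None       # permanent fault: never recovers
                elif rng.random() < p_trans:
                    down[e] = recover    # transient fault: recovers later
            if not connected(nodes, [e for e in edges if e not in down]):
                alive = False
            for e in [e for e, r in down.items() if r is not None]:
                down[e] -= dt
                if down[e] <= 0:
                    del down[e]
            t += dt
        ok += alive
    return ok / trials
```

    Comparing, say, a six-node ring against the same ring with extra chord links exhibits the kind of architecture contrast the paper draws.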

  9. Global Sampling for Integrating Physics-Specific Subsystems and Quantifying Uncertainties of CO2 Geological Sequestration

    DOE PAGES

    Sun, Y.; Tong, C.; Trainor-Guitten, W. J.; ...

    2012-12-20

    The risk of CO2 leakage from a deep storage reservoir into a shallow aquifer through a fault is assessed and studied using physics-specific computer models. The hypothetical CO2 geological sequestration system is composed of three subsystems: a deep storage reservoir, a fault in caprock, and a shallow aquifer, which are modeled respectively by considering sub-domain-specific physics. Supercritical CO2 is injected into the reservoir subsystem with uncertain permeabilities of reservoir, caprock, and aquifer, uncertain fault location, and injection rate (as a decision variable). The simulated pressure and CO2/brine saturation are connected to the fault-leakage model as a boundary condition. CO2 and brine fluxes from the fault-leakage model at the fault outlet are then imposed in the aquifer model as a source term. Moreover, uncertainties are propagated from the deep reservoir model, to the fault-leakage model, and eventually to the geochemical model in the shallow aquifer, thus contributing to risk profiles. To quantify the uncertainties and assess leakage-relevant risk, we propose a global sampling-based method to allocate sub-dimensions of uncertain parameters to sub-models. The risk profiles are defined and related to CO2 plume development for pH value and total dissolved solids (TDS) below the EPA's Maximum Contaminant Levels (MCL) for drinking water quality. A global sensitivity analysis is conducted to select the parameters most sensitive to the risk profiles. The resulting uncertainty of pH- and TDS-defined aquifer volume, which is impacted by CO2 and brine leakage, mainly results from the uncertainty of fault permeability. Subsequently, high-resolution, reduced-order models of risk profiles are developed as functions of all the decision variables and uncertain parameters in all three subsystems.

  11. Sliding mode fault tolerant control dealing with modeling uncertainties and actuator faults.

    PubMed

    Wang, Tao; Xie, Wenfang; Zhang, Youmin

    2012-05-01

    In this paper, two sliding mode control algorithms are developed for nonlinear systems with both modeling uncertainties and actuator faults. The first algorithm is developed under the assumption that the uncertainty bounds are known; different design parameters are utilized to deal with modeling uncertainties and actuator faults, respectively. The second algorithm is an adaptive version of the first one, developed to accommodate uncertainties and faults without exact bound information. The stability of the overall control systems is proved by using a Lyapunov function. The effectiveness of the developed algorithms has been verified on a nonlinear longitudinal model of the Boeing 747-100/200. Copyright © 2012 ISA. Published by Elsevier Ltd. All rights reserved.
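
    The two designs can be illustrated on a toy scalar plant (a hedged sketch, not the paper's Boeing 747 longitudinal model): the first controller assumes a known disturbance bound D, while the adaptive variant grows its gain until it dominates the disturbance without being told D.

```python
import math

def simulate_smc(x0=1.0, D=0.3, k=0.8, dt=1e-3, t_end=5.0):
    """Euler simulation of a scalar uncertain plant xdot = u + d(t),
    |d| <= D, under the switching law u = -k*sign(x).  With k > D the
    state reaches the sliding surface s = x = 0 in finite time.
    (Toy stand-in for a sliding mode fault-tolerant design.)"""
    x, t = x0, 0.0
    while t < t_end:
        d = D * math.sin(7.0 * t)            # bounded uncertainty + fault
        u = -k * (1 if x > 0 else -1 if x < 0 else 0)
        x += dt * (u + d)
        t += dt
    return x

def simulate_adaptive_smc(x0=1.0, D=0.3, gamma=2.0, dt=1e-3, t_end=5.0):
    """Adaptive variant: the gain k is not given the bound D but grows
    with |s| until it dominates the disturbance (k_dot = gamma*|s|)."""
    x, k, t = x0, 0.0, 0.0
    while t < t_end:
        d = D * math.sin(7.0 * t)
        u = -k * (1 if x > 0 else -1 if x < 0 else 0)
        x += dt * (u + d)
        k += dt * gamma * abs(x)             # adaptation law
        t += dt
    return x
```

    Both runs end chattering near the surface; the discrete-time switching leaves a residual of order dt*(k+D), which is why practical designs smooth the sign function.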

  12. Coseismic source model of the 2003 Mw 6.8 Chengkung earthquake, Taiwan, determined from GPS measurements

    USGS Publications Warehouse

    Ching, K.-E.; Rau, R.-J.; Zeng, Y.

    2007-01-01

    A coseismic source model of the 2003 Mw 6.8 Chengkung, Taiwan, earthquake was well determined with 213 GPS stations, providing a unique opportunity to study the characteristics of coseismic displacements of a high-angle buried reverse fault. Horizontal coseismic displacements show fault-normal shortening across the fault trace. Displacements on the hanging wall reveal fault-parallel and fault-normal lengthening. The largest horizontal and vertical GPS displacements reached 153 and 302 mm, respectively, in the middle part of the network. Fault geometry and slip distribution were determined by inverting GPS data using a three-dimensional (3-D) layered-elastic dislocation model. The slip is mainly concentrated within a 44 × 14 km slip patch centered at 15 km depth with a peak amplitude of 126.6 cm. Results from 3-D forward-elastic model tests indicate that the dome-shaped folding on the hanging wall is reproduced with fault dips greater than 40°. Compared with the rupture area and average slip from slow slip earthquakes and a compilation of finite source models of 18 earthquakes, the Chengkung earthquake generated a larger rupture area and a lower stress drop, suggesting lower than average friction. Hence the Chengkung earthquake seems to be a transitional example between regular and slow slip earthquakes. The coseismic source model of this event indicates that the Chihshang fault is divided into a creeping segment in the north and a locked segment in the south. An average recurrence interval of 50 years for a magnitude 6.8 earthquake was estimated for the southern fault segment. Copyright 2007 by the American Geophysical Union.
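
    The linear step behind such geodetic slip inversions is damped least squares on d = Gm. The sketch below uses random placeholder Green's functions rather than the study's 3-D layered-elastic kernels; only the inversion algebra is illustrated.

```python
import numpy as np

def invert_slip(G, d, alpha=0.1):
    """Damped least-squares slip inversion:
    m = argmin ||G m - d||^2 + alpha^2 ||m||^2,
    solved via the normal equations.  G maps patch slip to surface
    displacement; here G is a made-up matrix, not a real dislocation kernel."""
    n = G.shape[1]
    A = G.T @ G + (alpha ** 2) * np.eye(n)
    return np.linalg.solve(A, G.T @ d)

# Toy problem: 3 fault patches observed by 12 GPS displacement components.
rng = np.random.default_rng(0)
G = rng.normal(size=(12, 3))                 # hypothetical Green's functions
m_true = np.array([1.2, 0.4, 0.0])           # metres of slip per patch
d = G @ m_true + 0.01 * rng.normal(size=12)  # noisy synthetic observations
m_est = invert_slip(G, d, alpha=0.05)
```

    With mild damping and low noise the recovered slip tracks the synthetic input; in real inversions the damping weight trades roughness against data fit.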

  13. Fault geometries in basement-induced wrench faulting under different initial stress states

    NASA Astrophysics Data System (ADS)

    Naylor, M. A.; Mandl, G.; Supesteijn, C. H. K.

    Scaled sandbox experiments were used to generate models for relative ages, dip, strike and three-dimensional shape of faults in basement-controlled wrench faulting. The basic fault sequence runs from early en échelon Riedel shears and splay faults through 'lower-angle' shears to P shears. The Riedel shears are concave upwards and define a tulip structure in cross-section. In three dimensions, each Riedel shear has a helicoidal form. The sequence of faults and three-dimensional geometry are rationalized in terms of the prevailing stress field and Coulomb-Mohr theory of shear failure. The stress state in the sedimentary overburden before wrenching begins has a substantial influence on the fault geometries and on the final complexity of the fault zone. With the maximum compressive stress (σ1) initially parallel to the basement fault (transtension), Riedel shears are only slightly en échelon, sub-parallel to the basement fault, steeply dipping with a reduced helicoidal aspect. Conversely, with σ1 initially perpendicular to the basement fault (transpression), Riedel shears are strongly oblique to the basement fault strike, have lower dips and an exaggerated helicoidal form; the final fault zone is both wide and complex. We find good agreement between the models and both mechanical theory and natural examples of wrench faulting.

  14. Modeling Of Spontaneous Multiscale Roughening And Branching of Ruptures Propagating On A Slip-Weakening Frictional Fault

    NASA Astrophysics Data System (ADS)

    Elbanna, A. E.

    2013-12-01

    Numerous field and experimental observations suggest that fault surfaces are rough at multiple scales and tend to produce a wide range of branch sizes, from micro-branching to large-scale secondary faults. The development and evolution of fault roughness and branching is believed to play an important role in rupture dynamics and energy partitioning. Previous work by several groups has succeeded in determining conditions under which a main rupture may branch into a secondary fault. Recently, great progress has been made in investigating rupture propagation on rough faults with and without off-fault plasticity. Nonetheless, in most of these models the heterogeneity, whether the roughness profile or the secondary fault orientation, was built into the system from the beginning, and consequently the final outcome depends strongly on the initial conditions. Here we introduce an adaptive mesh technique for modeling mode-II crack propagation on slip-weakening frictional interfaces. We use a finite element framework with random mesh topology that adapts to crack dynamics through element splitting and sequential insertion of frictional interfaces dictated by the failure criterion. This allows the crack path to explore non-planar paths and develop the roughness profile that is most compatible with the dynamical constraints. It also enables crack branching at different scales. We quantify energy dissipation due to the roughening process and small-scale branching. We compare the results of our model to a reference case of propagation on a planar fault. We show that the small-scale processes of roughening and branching influence many characteristics of the rupture propagation, including the energy partitioning, rupture speed and peak slip rates. We also estimate the fracture energy that propagation on a planar fault would require to produce comparable results.
We anticipate that this modeling approach provides an attractive methodology that complements the current efforts in modeling off-fault plasticity and damage.

  15. Application of digital image processing techniques to astronomical imagery 1980

    NASA Technical Reports Server (NTRS)

    Lorre, J. J.

    1981-01-01

    Topics include: (1) polar coordinate transformations (M83); (2) multispectral ratios (M82); (3) maximum entropy restoration (M87); (4) automated computation of stellar magnitudes in nebulosity; (5) color and polarization; (6) aliasing.

  16. A Power Transformers Fault Diagnosis Model Based on Three DGA Ratios and PSO Optimization SVM

    NASA Astrophysics Data System (ADS)

    Ma, Hongzhe; Zhang, Wei; Wu, Rongrong; Yang, Chunyan

    2018-03-01

    In order to make up for the shortcomings of existing transformer fault diagnosis methods in dissolved gas-in-oil analysis (DGA) feature selection and parameter optimization, a transformer fault diagnosis model based on three DGA ratios and a particle swarm optimization (PSO) optimized support vector machine (SVM) is proposed. The SVM is extended into a nonlinear, multi-class classifier, PSO is applied to optimize the parameters of the multi-class SVM model, and transformer fault diagnosis is conducted in combination with the cross-validation principle. The fault diagnosis results show that the average accuracy of the proposed method is better than that of the standard SVM and the genetic-algorithm-optimized SVM, demonstrating that the proposed method can effectively improve the accuracy of transformer fault diagnosis.
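
    The PSO search loop used for this kind of parameter tuning can be sketched as follows. For self-containment the cross-validated SVM error over (C, gamma) is replaced with a stand-in quadratic objective, so only the optimizer itself is shown; all swarm parameters are conventional defaults, not values from the paper.

```python
import random

def pso(objective, bounds, n_particles=20, iters=60,
        w=0.7, c1=1.5, c2=1.5, seed=0):
    """Minimal particle swarm optimizer over a box.  In the paper's
    setting `objective` would be the cross-validated SVM error as a
    function of (C, gamma); any callable over the box works here."""
    rng = random.Random(seed)
    dim = len(bounds)
    pos = [[rng.uniform(lo, hi) for lo, hi in bounds]
           for _ in range(n_particles)]
    vel = [[0.0] * dim for _ in range(n_particles)]
    pbest = [p[:] for p in pos]
    pbest_val = [objective(p) for p in pos]
    g_val = min(pbest_val)
    g = pbest[pbest_val.index(g_val)][:]
    for _ in range(iters):
        for i in range(n_particles):
            for k in range(dim):
                r1, r2 = rng.random(), rng.random()
                vel[i][k] = (w * vel[i][k]
                             + c1 * r1 * (pbest[i][k] - pos[i][k])
                             + c2 * r2 * (g[k] - pos[i][k]))
                # clamp the particle back into the search box
                pos[i][k] = min(max(pos[i][k] + vel[i][k], bounds[k][0]),
                                bounds[k][1])
            v = objective(pos[i])
            if v < pbest_val[i]:
                pbest[i], pbest_val[i] = pos[i][:], v
                if v < g_val:
                    g, g_val = pos[i][:], v
    return g, g_val
```

    In practice the objective would wrap a k-fold cross-validation of the multi-class SVM, and the returned position would be the (C, gamma) pair used for the final model.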

  17. Fault creep and strain partitioning in Trinidad-Tobago: Geodetic measurements, models, and origin of creep

    NASA Astrophysics Data System (ADS)

    La Femina, P.; Weber, J. C.; Geirsson, H.; Latchman, J. L.; Robertson, R. E. A.; Higgins, M.; Miller, K.; Churches, C.; Shaw, K.

    2017-12-01

    We studied active faults in Trinidad and Tobago in the Caribbean-South American (CA-SA) transform plate boundary zone using episodic GPS (eGPS) data from 19 sites and continuous GPS (cGPS) data from 8 sites, and then modeled these data using a series of simple screw dislocation models. Our best-fit model for interseismic (between major earthquakes) fault slip requires: 12-15 mm/yr of right-lateral movement and very shallow locking (0.2 ± 0.2 km; essentially creep) across the Central Range Fault (CRF); 3.4 +0.3/-0.2 mm/yr across the Soldado Fault in south Trinidad; and 3.5 +0.3/-0.2 mm/yr of dextral shear on fault(s) between Trinidad and Tobago. The upper-crustal faults in Trinidad show very little seismicity (1954-present from the local network) and do not appear to have generated significant historic earthquakes. However, paleoseismic studies indicate that the CRF ruptured between 2710 and 500 yr B.P. and thus was recently capable of storing elastic strain. Together, these data suggest spatial and/or temporal fault segmentation on the CRF. The CRF marks a physical boundary between rocks associated with thermogenically generated petroleum and over-pressured fluids in south and central Trinidad, and rocks containing only biogenic gas to the north, and a long string of active mud volcanoes aligns with the trace of the Soldado Fault along Trinidad's south coast. Fluid (oil and gas) overpressure, as an alternative or in addition to weak mineral phases in the fault zone, may thus cause the CRF fault creep and the lack of seismicity that we observe.

  18. Earthquake Clusters and Spatio-temporal Migration of earthquakes in Northeastern Tibetan Plateau: a Finite Element Modeling

    NASA Astrophysics Data System (ADS)

    Sun, Y.; Luo, G.

    2017-12-01

    Seismicity in a region is usually characterized by earthquake clusters and earthquake migration along its major fault zones. However, we do not fully understand why and how earthquake clusters and spatio-temporal migration of earthquakes occur. The northeastern Tibetan Plateau is a good example for investigating these problems. In this study, we construct and use a three-dimensional viscoelastoplastic finite-element model to simulate earthquake cycles and spatio-temporal migration of earthquakes along major fault zones in the northeastern Tibetan Plateau. We calculate stress evolution and fault interactions, and explore the effects of topographic loading and the viscosity of the middle-lower crust and upper mantle on model results. Model results show that earthquakes and fault interactions increase Coulomb stress on neighboring faults or segments, accelerating future earthquakes in this region. Thus, earthquakes occur sequentially in a short time, leading to regional earthquake clusters. Through long-term evolution, stresses on some seismogenic faults that are far apart may almost simultaneously reach the critical state of fault failure, probably also leading to regional earthquake clusters and earthquake migration. Based on our model's synthetic seismic catalog and paleoseismic data, we analyze the probability of earthquake migration between major faults in the northeastern Tibetan Plateau. We find that following the 1920 M 8.5 Haiyuan earthquake and the 1927 M 8.0 Gulang earthquake, the next big event (M≥7) in the northeastern Tibetan Plateau would be most likely to occur on the Haiyuan fault.

  19. Observations, models, and mechanisms of failure of surface rocks surrounding planetary surface loads

    NASA Technical Reports Server (NTRS)

    Schultz, R. A.; Zuber, M. T.

    1994-01-01

    Geophysical models of flexural stresses in an elastic lithosphere due to an axisymmetric surface load typically predict a transition with increased distance from the center of the load of radial thrust faults to strike-slip faults to concentric normal faults. These model predictions are in conflict with the absence of annular zones of strike-slip faults around prominent loads such as lunar maria, Martian volcanoes, and the Martian Tharsis rise. We suggest that this paradox arises from difficulties in relating failure criteria for brittle rocks to the stress models. Indications that model stresses are inappropriate for use in fault-type prediction include (1) tensile principal stresses larger than realistic values of rock tensile strength, and/or (2) stress differences significantly larger than those allowed by rock-strength criteria. Predictions of surface faulting that are consistent with observations can be obtained instead by using tensile and shear failure criteria, along with calculated stress differences and trajectories, with model stress states not greatly in excess of the maximum allowed by rock fracture criteria.
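
    The Coulomb shear-failure test invoked in this abstract can be made concrete with a small sketch (compression taken positive; the friction coefficient, cohesion, and stress values are illustrative, not from the paper):

```python
import math

def coulomb_margin(s1, s3, beta_deg, mu=0.6, cohesion=10.0):
    """Shear failure margin tau - (c + mu*sigma_n) on a plane whose
    normal makes angle beta with the most compressive stress s1
    (compression positive; units MPa).  Margin >= 0 means Coulomb
    shear failure is predicted on that plane."""
    b = math.radians(beta_deg)
    sm, sd = 0.5 * (s1 + s3), 0.5 * (s1 - s3)
    sn = sm + sd * math.cos(2 * b)       # normal stress on the plane
    tau = sd * math.sin(2 * b)           # shear stress on the plane
    return tau - (cohesion + mu * sn)

def optimal_beta(mu=0.6):
    """Orientation maximizing the margin: 2*beta = 90 + atan(mu) degrees."""
    return 0.5 * (90.0 + math.degrees(math.atan(mu)))
```

    For mu = 0.6 the most failure-prone plane lies about 30° from the sigma-1 axis (plane normal at about 60°), the familiar Andersonian geometry; planes far from that orientation can have negative margins even when the stress difference is large.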

  20. Oceanic transform faults: how and why do they form? (Invited)

    NASA Astrophysics Data System (ADS)

    Gerya, T.

    2013-12-01

    Oceanic transform faults at mid-ocean ridges are often considered to be the direct product of the plate breakup process (cf. review by Gerya, 2012). In contrast, recent 3D thermomechanical numerical models suggest that transform faults are plate growth structures, which develop gradually on a timescale of a few million years (Gerya, 2010, 2013a,b). Four successive stages are predicted for the transition from rifting to spreading (Gerya, 2013b): (1) crustal rifting, (2) multiple spreading center nucleation and propagation, (3) proto-transform fault initiation and rotation, and (4) mature ridge-transform spreading. The geometry of the mature ridge-transform system is governed by geometrical requirements for simultaneous accretion and displacement of new plate material within two offset spreading centers connected by a sustaining, rheologically weak transform fault. According to these requirements, the characteristic spreading-parallel orientation of oceanic transform faults is the only thermomechanically consistent steady-state orientation. Comparison of modeling results with the Woodlark Basin suggests that the development of this incipient spreading region (Taylor et al., 2009) closely matches numerical predictions (Gerya, 2013b). The model reproduces well the characteristic 'rounded' contours of the spreading centers as well as the presence of a remnant of the broken continental crustal bridge observed in the Woodlark Basin. As in the model, the Moresby (proto)transform terminates in the oceanic rather than in the continental crust. Transform margins and the truncated tip of one spreading center present in the model are documented in nature. In addition, numerical experiments suggest that transform faults can develop gradually at mature linear mid-ocean ridges as the result of dynamical instability (Gerya, 2010).
    Boundary instability from asymmetric plate growth can spontaneously start in alternate directions along successive ridge sections; the resultant curved ridges become transform faults. Offsets along the transform faults change continuously with time by asymmetric plate growth and discontinuously by ridge jumps. The ridge instability is governed by rheological weakening of active fault structures. The instability is most efficient for slow to intermediate spreading rates, whereas ultraslow and (ultra)fast spreading rates tend to destabilize transform faults (Gerya, 2010; Püthe and Gerya, 2013). References: Gerya, T. (2010) Dynamical instability produces transform faults at mid-ocean ridges. Science, 329, 1047-1050. Gerya, T. (2012) Origin and models of oceanic transform faults. Tectonophys., 522-523, 34-56. Gerya, T.V. (2013a) Three-dimensional thermomechanical modeling of oceanic spreading initiation and evolution. Phys. Earth Planet. Interiors, 214, 35-52. Gerya, T.V. (2013b) Initiation of transform faults at rifted continental margins: 3D petrological-thermomechanical modeling and comparison to the Woodlark Basin. Petrology, 21, 1-10. Püthe, C., Gerya, T.V. (2013) Dependence of mid-ocean ridge morphology on spreading rate in numerical 3-D models. Gondwana Res., DOI: http://dx.doi.org/10.1016/j.gr.2013.04.005. Taylor, B., Goodliffe, A., Martinez, F. (2009) Initiation of transform faults at rifted continental margins. Comptes Rendus Geosci., 341, 428-438.

  1. Study on Fault Diagnostics of a Turboprop Engine Using Inverse Performance Model and Artificial Intelligent Methods

    NASA Astrophysics Data System (ADS)

    Kong, Changduk; Lim, Semyeong

    2011-12-01

    Recently, health monitoring systems for the major gas path components of gas turbines have mostly used model-based methods such as Gas Path Analysis (GPA). This method finds quantitative changes in component performance characteristic parameters, such as isentropic efficiency and mass flow parameter, by comparing measured engine performance parameters (temperatures, pressures, rotational speeds, fuel consumption, etc.) with clean engine performance parameters, free of any engine faults, calculated by the base engine performance model. Currently, expert engine diagnostic systems using artificial intelligence methods such as Neural Networks (NNs), Fuzzy Logic and Genetic Algorithms (GAs) have been studied to improve on the model-based method. Among them, NNs are most often used in engine fault diagnostic systems due to their good learning performance, but they suffer from low accuracy and long training times when the learning database must be built from a large amount of learning data. In addition, they require a very complex structure to effectively identify single-type or multiple-type faults of gas path components. This work inversely builds a base performance model of a turboprop engine, to be used for a high-altitude operation UAV, from measured performance data, and proposes a fault diagnostic system using the base engine performance model and artificial intelligence methods, namely Fuzzy Logic and a Neural Network. The proposed diagnostic system first isolates the faulted components using Fuzzy Logic, then quantifies the faults of the identified components using the NN trained with a fault learning database, which is obtained from the developed base performance model. In training the NN, the Feed Forward Back Propagation (FFBP) method is used. Finally, it is verified through several test examples that component faults implanted arbitrarily in the engine are well isolated and quantified by the proposed diagnostic system.

  2. Progress on Fault Mechanisms for Gear Transmissions in Coal Cutting Machines: From Macro to Nano Models.

    PubMed

    Jiang, Yu; Zhang, Xiaogang; Zhang, Chao; Li, Zhixiong; Sheng, Chenxing

    2017-04-01

    Numerical modeling has been recognized as an indispensable tool for mechanical fault mechanism analysis. Techniques, ranging from the macro to the nano level, include finite element modeling, boundary element modeling, modular dynamic modeling, nano dynamic modeling, and so forth. This work first reviews the progress on fault mechanism analysis for gear transmissions from the tribological and dynamic aspects. The literature review indicates that the tribological and dynamic properties have been investigated separately to explore the fault mechanisms in gear transmissions. However, very limited work has been done to address the links between the tribological and dynamic properties, and scarce research exists for coal cutting machines. For this reason, the tribo-dynamic coupled model was introduced to bridge the gap between the tribological and dynamic models in fault mechanism analysis for gear transmissions in coal cutting machines. The modular dynamic modeling and nano dynamic modeling techniques are expected to establish the links between the tribological and dynamic models. Possible future research directions using the tribo-dynamic coupled model are summarized to provide potential references for researchers in the field.

  3. Strain Accumulation and Release of the Gorkha, Nepal, Earthquake (M w 7.8, 25 April 2015)

    NASA Astrophysics Data System (ADS)

    Morsut, Federico; Pivetta, Tommaso; Braitenberg, Carla; Poretti, Giorgio

    2017-08-01

    Near-fault GNSS records of strong ground movement are the most sensitive data for defining the fault rupture. Here, two unpublished GNSS records are studied: a near-fault strong-motion station (NAGA) and a distant station in a poorly covered area (PYRA). The station NAGA, located above the Gorkha fault, sensed a southward displacement of almost 1.7 m. The PYRA station, positioned at a distance of about 150 km from the fault near the Pyramid station in the Everest region, showed static displacements on the order of some millimeters. The observed displacements were compared with the calculated displacements of a finite fault model in an elastic halfspace. We evaluated two fault-slip models derived from seismological and geodetic studies: the comparison of the observed and modelled fields reveals that our displacements are in better accordance with the geodetically derived fault model than with the seismological one. Finally, we evaluate the yearly strain rate from four GNSS stations in the area that recorded the deformation field continuously for at least 5 years. The strain rate is then compared with the strain released by the Gorkha earthquake, leading to an interval of 235 years to store a comparable amount of elastic energy. The three near-fault GNSS stations require a slightly wider fault than published, in the case of an equivalent homogeneous rupture, with an average uniform slip of 3.5 m occurring over an area of 150 km × 60 km.

  4. Impact of device level faults in a digital avionic processor

    NASA Technical Reports Server (NTRS)

    Suk, Ho Kim

    1989-01-01

    This study describes an experimental analysis of the impact of gate- and device-level faults in the processor of a Bendix BDX-930 flight control system. Via mixed-mode simulation, faults were injected at the gate (stuck-at) and transistor levels, and their propagation through the chip to the output pins was measured. The results show that there is little correspondence between a stuck-at and a device-level fault model as far as error activity or detection within a functional unit is concerned. Insofar as error activity outside the injected unit and at the output pins is concerned, the stuck-at and device models track each other. The stuck-at model, however, overestimates, by over 100 percent, the probability of fault propagation to the output pins. An evaluation of the Mean Error Durations and Mean Times Between Errors at the output pins shows that the stuck-at model significantly underestimates (by 62 percent) the impact of an internal chip fault on the output pins. Finally, the study also quantifies the impact of device faults by location, both internally and at the output pins.

  5. Subsurface geometry and evolution of the Seattle fault zone and the Seattle Basin, Washington

    USGS Publications Warehouse

    ten Brink, Uri S.; Molzer, P.C.; Fisher, M.A.; Blakely, R.J.; Bucknam, R.C.; Parsons, T.; Crosson, R.S.; Creager, K.C.

    2002-01-01

    The Seattle fault, a large, seismically active, east-west-striking fault zone under Seattle, is the best-studied fault within the tectonically active Puget Lowland in western Washington, yet its subsurface geometry and evolution are not well constrained. We combine several analysis and modeling approaches to study the fault geometry and evolution, including depth-converted, deep-seismic-reflection images, P-wave-velocity field, gravity data, elastic modeling of shoreline uplift from a late Holocene earthquake, and kinematic fault restoration. We propose that the Seattle thrust or reverse fault is accompanied by a shallow, antithetic reverse fault that emerges south of the main fault. The wedge enclosed by the two faults is subject to an enhanced uplift, as indicated by the boxcar shape of the shoreline uplift from the last major earthquake on the fault zone. The Seattle Basin is interpreted as a flexural basin at the footwall of the Seattle fault zone. Basin stratigraphy and the regional tectonic history lead us to suggest that the Seattle fault zone initiated as a reverse fault during the middle Miocene, concurrently with changes in the regional stress field, to absorb some of the north-south shortening of the Cascadia forearc. Kingston Arch, 30 km north of the Seattle fault zone, is interpreted as a more recent disruption arising within the basin, probably due to the development of a blind reverse fault.

  6. Strike-Slip Fault Patterns on Europa: Obliquity or Polar Wander?

    NASA Technical Reports Server (NTRS)

    Rhoden, Alyssa Rose; Hurford, Terry A.; Manga, Michael

    2011-01-01

    Variations in diurnal tidal stress due to Europa's eccentric orbit have been considered as the driver of strike-slip motion along pre-existing faults, but obliquity and physical libration have not been taken into account. The first objective of this work is to examine the effects of obliquity on the predicted global pattern of fault slip directions based on a tidal-tectonic formation model. Our second objective is to test the hypothesis that incorporating obliquity can reconcile theory and observations without requiring polar wander, which was previously invoked to explain the mismatch found between the slip directions of 192 faults on Europa and the global pattern predicted using the eccentricity-only model. We compute predictions for individual, observed faults at their current latitude, longitude, and azimuth with four different tidal models: eccentricity only, eccentricity plus obliquity, eccentricity plus physical libration, and a combination of all three effects. We then determine whether longitude migration, presumably due to non-synchronous rotation, is indicated in observed faults by repeating the comparisons with and without obliquity, this time also allowing longitude translation. We find that a tidal model including an obliquity of 1.2°, along with longitude migration, can predict the slip directions of all observed features in the survey. However, all but four faults can be fit with only 1° of obliquity, so the value we find may represent the maximum departure from a lower time-averaged obliquity value. Adding physical libration to the obliquity model improves the accuracy of predictions at the current locations of the faults, but fails to predict the slip directions of six faults and requires additional degrees of freedom. The obliquity model with longitude migration is therefore our preferred model.
Although the polar wander interpretation cannot be ruled out from these results alone, the obliquity model accounts for all observations with a value consistent with theoretical expectations and cycloid modeling.

  7. Analysis of single ion channel data incorporating time-interval omission and sampling

    PubMed Central

    The, Yu-Kai; Timmer, Jens

    2005-01-01

Hidden Markov models are widely used to describe single channel currents from patch-clamp experiments. The inevitable anti-aliasing filter limits the time resolution of the measurements, so the standard hidden Markov model is no longer adequate. The notion of time-interval omission has been introduced, whereby brief events are not detected. Existing exact solutions to this problem do not take into account that the measured intervals are limited by the sampling time. In this case the dead-time that specifies the minimal detectable interval length is not defined unambiguously. We show that a wrong choice of the dead-time leads to considerably biased estimates and present the appropriate equations to describe sampled data. PMID:16849220
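The dead-time bias described above can be illustrated with a toy simulation. For exponentially distributed dwell times, memorylessness implies that intervals surviving a detection dead-time have mean equal to the true mean plus the dead-time, so a naive average is biased upward by exactly the dead-time. A minimal sketch, not the authors' estimator; the rate and dead-time below are invented for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)
true_mean = 1.0e-3      # hypothetical true mean open time: 1 ms
dead_time = 0.5e-3      # hypothetical minimal detectable interval length (s)

# Exponential dwell times; the recording system misses brief events.
dwells = rng.exponential(true_mean, size=200_000)
detected = dwells[dwells > dead_time]

naive_mean = detected.mean()             # biased upward by ~dead_time
corrected_mean = naive_mean - dead_time  # memorylessness: E[T | T > td] = 1/rate + td
```

For non-exponential dwell-time distributions (e.g. aggregated Markov states) this simple subtraction no longer holds, which is why exact sampled-data equations matter.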

  8. Chasing the Garlock: A study of tectonic response to vertical axis rotation

    NASA Astrophysics Data System (ADS)

    Guest, Bernard; Pavlis, Terry L.; Golding, Heather; Serpa, Laura

    2003-06-01

    Vertical-axis, clockwise block rotations in the Northeast Mojave block are well documented by numerous authors. However, the effects of these rotations on the crust to the north of the Northeast Mojave block have remained unexplored. In this paper we present a model that results from mapping and geochronology conducted in the north and central Owlshead Mountains. The model suggests that some or all of the transtension and rotation observed in the Owlshead Mountains results from tectonic response to a combination of clockwise block rotation in the Northeast Mojave block and Basin and Range extension. The Owlshead Mountains are effectively an accommodation zone that buffers differential extension between the Northeast Mojave block and the Basin and Range. In addition, our model explores the complex interactions that occur between faults and fault blocks at the junction of the Garlock, Brown Mountain, and Owl Lake faults. We hypothesize that the bending of the Garlock fault by rotation of the Northeast Mojave block resulted in a misorientation of the Garlock that forced the Owl Lake fault to break in order to accommodate slip on the western Garlock fault. Subsequent sinistral slip on the Owl Lake fault offset the Garlock, creating the now possibly inactive Mule Springs strand of the Garlock fault. Dextral slip on the Brown Mountain fault then locked the Owl Lake fault, forcing the active Leach Lake strand of the Garlock fault to break.

  9. Structural and numerical modeling of fluid flow and evolving stress fields at a transtensional stepover: A Miocene Andean porphyry copper system as a case study.

    NASA Astrophysics Data System (ADS)

    Nuñez, R. C.; Griffith, W. A.; Mitchell, T. M.; Marquardt, C.; Iturrieta, P. C.; Cembrano, J. M.

    2017-12-01

Obliquely convergent subduction orogens show both margin-parallel and margin-oblique fault systems that are spatially and temporally associated with ore deposits and geothermal systems within the volcanic arc. Fault orientation and mechanical interaction among different fault systems influence the stress field in these arrangements, thus exerting a first-order control on the regional- to local-scale fluid migration paths as documented by the spatial distribution of fault-vein arrays. Our selected case study is a Miocene porphyry copper-type system that crops out in the precordillera of the Maule region along the Teno river Valley (ca. 35°S). Several regional to local faults were recognized in the field: (1) two first-order, N-striking subvertical dextral faults overlapping at a right stepover; (2) second-order, N60°E-striking, steeply dipping, dextral-normal faults located at the stepover; and (3) N40°-60°W-striking subvertical, sinistral faults crossing the stepover zone. The regional and local scale geology is characterized by volcano-sedimentary rocks (Upper Eocene-Lower Miocene), intruded by Miocene granodioritic plutons (U-Pb zircon age of 18.2 ± 0.11 Ma) and coeval dikes. We implement a 2D boundary element displacement discontinuity method (BEM) model to test the mechanical feasibility of a kinematic model of the structural development of the porphyry copper-type system in the stepover between N-striking faults. The model yields the stress field within the stepover region and shows slip and potential opening distribution along the N-striking master faults under a regionally imposed stress field. The model shows that σ1 rotates clockwise where the main faults approach each other, becoming EW when they overlap. This, in turn, leads to the generation of both NE- and NW-striking faults within the stepover area.
Model results are consistent with the structural and kinematic data collected in the field, attesting to enhanced permeability and fluid flow transport and arrest spatially associated with the stepover.

  10. Daily estimates of the migrating tide and zonal mean temperature in the mesosphere and lower thermosphere derived from SABER data

    NASA Astrophysics Data System (ADS)

    Ortland, David A.

    2017-04-01

    Satellites provide a global view of the structure in the fields that they measure. In the mesosphere and lower thermosphere, the dominant features in these fields at low zonal wave number are contained in the zonal mean, quasi-stationary planetary waves, and tide components. Due to the nature of the satellite sampling pattern, stationary, diurnal, and semidiurnal components are aliased and spectral methods are typically unable to separate the aliased waves over short time periods. This paper presents a data processing scheme that is able to recover the daily structure of these waves and the zonal mean state. The method is validated by using simulated data constructed from a mechanistic model, and then applied to Sounding of the Atmosphere using Broadband Emission Radiometry (SABER) temperature measurements. The migrating diurnal tide extracted from SABER temperatures for 2009 has a seasonal variability with peak amplitude (20 K at 95 km) in February and March and minimum amplitude (less than 5 K at 95 km) in early June and early December. Higher frequency variability includes a change in vertical structure and amplitude during the major stratospheric warming in January. The migrating semidiurnal tide extracted from SABER has variability on a monthly time scale during January through March, minimum amplitude in April, and largest steady amplitudes from May through September. Modeling experiments were performed that show that much of the variability on seasonal time scales in the migrating tides is due to changes in the mean flow structure and the superposition of the tidal responses to water vapor heating in the troposphere and ozone heating in the stratosphere and lower mesosphere.
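The sampling difficulty the method addresses, that stationary, diurnal, and semidiurnal components alias to indistinguishable apparent frequencies, is generic undersampling behaviour: a wave sampled below its Nyquist rate is indistinguishable, at the sample times, from a lower-frequency wave. A generic illustration with frequencies chosen for convenience, not SABER's actual sampling pattern:

```python
import numpy as np

n = np.arange(100)           # samples at unit intervals (sampling rate 1)
f_true, f_alias = 0.8, 0.2   # a 0.8-cycle signal folds to |0.8 - 1.0| = 0.2 cycles

x_true = np.cos(2 * np.pi * f_true * n)
x_alias = np.cos(2 * np.pi * f_alias * n)
# The two series agree at every sample: a purely spectral method cannot separate them.
```

Separating such aliased components therefore requires extra structure, such as the slowly precessing local-time sampling exploited by the estimation scheme in the paper.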

  11. Artificial Neural Network Based Fault Diagnostics of Rotating Machinery Using Wavelet Transforms as a Preprocessor

    NASA Astrophysics Data System (ADS)

    Paya, B. A.; Esat, I. I.; Badi, M. N. M.

    1997-09-01

The purpose of condition monitoring and fault diagnostics is to detect and distinguish faults occurring in machinery, in order to provide a significant improvement in plant economy, reduce operational and maintenance costs and improve the level of safety. The condition of a model drive-line, consisting of various interconnected rotating parts, including an actual vehicle gearbox, two bearing housings, and an electric motor, all connected via flexible couplings and loaded by a disc brake, was investigated. This model drive-line was run in its normal condition, and then single and multiple faults were introduced intentionally to the gearbox and to one of the bearing housings. These single and multiple faults studied on the drive-line were typical bearing and gear faults which may develop during normal and continuous operation of this kind of rotating machinery. This paper presents the investigation carried out in order to study both bearing and gear faults introduced first separately as a single fault and then together as multiple faults to the drive-line. The real time domain vibration signals obtained for the drive-line were preprocessed by wavelet transforms for the neural network to perform fault detection and identify the exact kinds of fault occurring in the model drive-line. It is shown that by using multilayer artificial neural networks on the sets of data preprocessed by wavelet transforms, single and multiple faults were successfully detected and classified into distinct groups.
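The preprocessing idea, wavelet subband energies as compact input features for a classifier, can be sketched with a plain Haar decomposition. This is illustrative only: the paper does not specify the wavelet, and the signals below are synthetic stand-ins for vibration records:

```python
import numpy as np

def haar_step(x):
    """One level of the Haar DWT: approximation and detail coefficients."""
    s = np.sqrt(2.0)
    return (x[0::2] + x[1::2]) / s, (x[0::2] - x[1::2]) / s

def subband_energies(x, levels=4):
    """Detail-band energies, a typical feature vector for fault classifiers."""
    feats = []
    for _ in range(levels):
        x, d = haar_step(x)
        feats.append(float(np.sum(d ** 2)))
    return feats

t = np.arange(1024) / 1024.0
healthy = np.sin(2 * np.pi * 5 * t)   # smooth shaft-rotation component
faulty = healthy.copy()
faulty[::64] += 2.0                   # periodic impacts mimicking a bearing defect

e_h = subband_energies(healthy)
e_f = subband_energies(faulty)
# Impacts concentrate energy in the fine-scale detail bands, separating the classes.
```

A feature vector like `e_f` would then be fed to the multilayer network; here the fine-scale energy alone already separates the two conditions.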

  12. On-line diagnosis of inter-turn short circuit fault for DC brushed motor.

    PubMed

    Zhang, Jiayuan; Zhan, Wei; Ehsani, Mehrdad

    2018-06-01

Extensive research effort has been made in fault diagnosis of motors and related components such as windings and ball bearings. In this paper, a new concept of inter-turn short circuit fault for DC brushed motors is proposed that includes the short circuit ratio and the short circuit resistance. A first-principle model is derived for motors with inter-turn short circuit faults. A statistical model based on a Hidden Markov Model is developed for fault diagnosis purposes. This new method not only allows detection of a motor winding short circuit fault, it can also estimate the fault severity via the estimated short circuit ratio and short circuit resistance. The estimated fault severity can be used for making appropriate decisions in response to the fault condition. The feasibility of the proposed methodology is studied for inter-turn short circuits of DC brushed motors using simulation in the MATLAB/Simulink environment. In addition, it is shown that the proposed methodology is reliable in the presence of small random noise in the system parameters and measurements.
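As a hedged illustration of why an inter-turn short is observable in the terminal quantities, consider a deliberately simplified steady-state armature model in which a shorted fraction `mu` of turns no longer contributes back-EMF or resistance. This neglects the short-circuit resistance path and all dynamics, and is not the paper's first-principle model; every number is invented:

```python
def armature_current(v, omega, r_a, k_e, mu=0.0):
    """Steady-state armature current of a DC brushed motor.

    mu is the shorted fraction of turns; shorting is modeled here (an
    assumption) as proportionally reducing back-EMF constant and resistance.
    """
    r_eff = (1.0 - mu) * r_a
    k_eff = (1.0 - mu) * k_e
    return (v - k_eff * omega) / r_eff

i_healthy = armature_current(12.0, 100.0, 1.0, 0.05)          # -> 7.0 A
i_faulted = armature_current(12.0, 100.0, 1.0, 0.05, mu=0.2)  # -> 10.0 A
# At fixed speed the fault raises the armature current: a measurable signature
# that a Hidden Markov Model can classify, and whose size tracks severity.
```

The point of the sketch is only that fault severity maps monotonically onto an observable, which is what makes severity estimation (and not just detection) possible.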

  13. Probabilistic seismic hazard study based on active fault and finite element geodynamic models

    NASA Astrophysics Data System (ADS)

    Kastelic, Vanja; Carafa, Michele M. C.; Visini, Francesco

    2016-04-01

We present a probabilistic seismic hazard analysis (PSHA) that is exclusively based on active faults and geodynamic finite element input models, whereas seismic catalogues were used only in a posterior comparison. We applied the developed model in the External Dinarides, a slow deforming thrust-and-fold belt at the contact between Adria and Eurasia. Our method consists of establishing two earthquake rupture forecast models: (i) a geological active fault input (GEO) model and (ii) a finite element (FEM) model. The GEO model is based on an active fault database that provides information on fault location and its geometric and kinematic parameters together with estimations of its slip rate. By default in this model all deformation is set to be released along the active faults. The FEM model is based on a numerical geodynamic model developed for the region of study. In this model the deformation is released not only along the active faults but also in the volumetric continuum elements. From both models we calculated the corresponding activity rates, earthquake rates and final expected peak ground accelerations. We investigated both the source model and the earthquake model uncertainties by varying the main active fault and earthquake rate calculation parameters, constructing corresponding branches of the seismic hazard logic tree. Hazard maps and UHS curves have been produced for horizontal ground motion on bedrock conditions (VS30 ≥ 800 m/s), thereby not considering local site amplification effects. The hazard was computed over a 0.2° spaced grid considering 648 branches of the logic tree and the mean value of the 10% probability of exceedance in 50 years hazard level, while the 5th and 95th percentiles were also computed to investigate the model limits. We conducted a sensitivity analysis to determine which of the input parameters influence the final hazard results, and to what extent.
The results of this comparison show that the deformation model, with its internal variability, and the choice of the ground motion prediction equations (GMPEs) are the most influential parameters; both have a significant effect on the hazard results. Thus good knowledge of the existence of active faults and of their geometric and activity characteristics is of key importance. We also show that PSHA models based exclusively on active faults and geodynamic inputs, which are thus not dependent on past earthquake occurrences, provide a valid method for seismic hazard calculation.
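The hazard level quoted above, 10% probability of exceedance in 50 years, maps to a mean return period through the standard Poisson assumption P = 1 − exp(−rate·t). A quick check of that arithmetic:

```python
import math

p_exceed = 0.10   # target probability of exceedance
t_window = 50.0   # exposure time, years

# Poisson occurrence: p = 1 - exp(-rate * t)  =>  rate = -ln(1 - p) / t
rate = -math.log(1.0 - p_exceed) / t_window
return_period = 1.0 / rate   # ~475 years, the familiar design-code value
```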

  14. Development of the self-learning machine for creating models of microprocessor of single-phase earth fault protection devices in networks with isolated neutral voltage above 1000 V

    NASA Astrophysics Data System (ADS)

    Utegulov, B. B.; Utegulov, A. B.; Meiramova, S.

    2018-02-01

The paper proposes the development of a self-learning machine for creating models of microprocessor-based single-phase earth fault protection devices in networks with an isolated neutral voltage above 1000 V. This approach makes it possible to effectively implement mathematical models that automatically adjust the settings of single-phase earth fault protection devices.

  15. Modulation transfer function cascade model for a sampled IR imaging system.

    PubMed

    de Luca, L; Cardone, G

    1991-05-01

    The performance of the infrared scanning radiometer (IRSR) is strongly stressed in convective heat transfer applications where high spatial frequencies in the signal that describes the thermal image are present. The need to characterize more deeply the system spatial resolution has led to the formulation of a cascade model for the evaluation of the actual modulation transfer function of a sampled IR imaging system. The model can yield both the aliasing band and the averaged modulation response for a general sampling subsystem. For a line scan imaging system, which is the case of a typical IRSR, a rule of thumb that states whether the combined sampling-imaging system is either imaging-dependent or sampling-dependent is proposed. The model is tested by comparing it with other noncascade models as well as by ad hoc measurements performed on a commercial digitized IRSR.
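The cascade idea, multiplying the MTFs of the serial stages and then reading the response left beyond the sampling Nyquist frequency as the aliasing band, can be sketched generically. A Gaussian optics MTF and a sinc detector MTF are common textbook stand-ins, not the paper's measured components, and the pitch is invented:

```python
import numpy as np

pitch = 25e-6                        # assumed detector sampling pitch (m)
f_nyq = 0.5 / pitch                  # sampling Nyquist frequency (cycles/m)
f = np.linspace(0.0, 2.0 * f_nyq, 2001)

mtf_optics = np.exp(-(f / (1.2 * f_nyq)) ** 2)   # assumed Gaussian optics blur
mtf_detector = np.abs(np.sinc(f * pitch))        # detector footprint averaging
mtf_system = mtf_optics * mtf_detector           # cascade: product of the stages

# Response remaining above f_nyq folds back into the passband: the aliasing band.
aliased_fraction = mtf_system[f > f_nyq].sum() / mtf_system.sum()
```

Comparing `aliased_fraction` against the in-band response is one way to judge whether a combined sampling-imaging system is imaging-limited or sampling-limited.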

  16. A grid-doubling finite-element technique for calculating dynamic three-dimensional spontaneous rupture on an earthquake fault

    USGS Publications Warehouse

    Barall, Michael

    2009-01-01

    We present a new finite-element technique for calculating dynamic 3-D spontaneous rupture on an earthquake fault, which can reduce the required computational resources by a factor of six or more, without loss of accuracy. The grid-doubling technique employs small cells in a thin layer surrounding the fault. The remainder of the modelling volume is filled with larger cells, typically two or four times as large as the small cells. In the resulting non-conforming mesh, an interpolation method is used to join the thin layer of smaller cells to the volume of larger cells. Grid-doubling is effective because spontaneous rupture calculations typically require higher spatial resolution on and near the fault than elsewhere in the model volume. The technique can be applied to non-planar faults by morphing, or smoothly distorting, the entire mesh to produce the desired 3-D fault geometry. Using our FaultMod finite-element software, we have tested grid-doubling with both slip-weakening and rate-and-state friction laws, by running the SCEC/USGS 3-D dynamic rupture benchmark problems. We have also applied it to a model of the Hayward fault, Northern California, which uses realistic fault geometry and rock properties. FaultMod implements fault slip using common nodes, which represent motion common to both sides of the fault, and differential nodes, which represent motion of one side of the fault relative to the other side. We describe how to modify the traction-at-split-nodes method to work with common and differential nodes, using an implicit time stepping algorithm.

  17. Evolution of fault zones in carbonates with mechanical stratigraphy - Insights from scale models using layered cohesive powder

    NASA Astrophysics Data System (ADS)

    van Gent, Heijn W.; Holland, Marc; Urai, Janos L.; Loosveld, Ramon

    2010-09-01

We present analogue models of the formation of dilatant normal faults and fractures in carbonate fault zones, using cohesive hemihydrate powder (CaSO₄·½H₂O). The evolution of these dilatant fault zones involves a range of processes such as fragmentation, gravity-driven breccia transport and the formation of dilatant jogs. To allow scaling to natural prototypes, extensive material characterisation was done. This showed that tensile strength and cohesion depend on the state of compaction, whereas the friction angle remains approximately constant. In our models, tensile strength of the hemihydrate increases with depth from 9 to 50 Pa, while cohesion increases from 40 to 250 Pa. We studied homogeneous and layered material sequences, using sand as a relatively weak layer and hemihydrate/graphite mixtures as a slightly stronger layer. Deformation was analyzed by time-lapse photography and Particle Image Velocimetry (PIV) to calculate the evolution of the displacement field. With PIV the initial, predominantly elastic deformation and progressive localization of deformation are observed in detail. We observed near-vertical opening-mode fractures near the surface. With increasing depth, dilational shear faults were dominant, with releasing jogs forming at fault-dip variations. A transition to non-dilatant shear faults was observed near the bottom of the model. In models with mechanical stratigraphy, fault zones are more complex. The inferred stress states and strengths in different parts of the model agree with the observed transitions in the mode of deformation.

  18. Updating the USGS seismic hazard maps for Alaska

    USGS Publications Warehouse

    Mueller, Charles; Briggs, Richard; Wesson, Robert L.; Petersen, Mark D.

    2015-01-01

    The U.S. Geological Survey makes probabilistic seismic hazard maps and engineering design maps for building codes, emergency planning, risk management, and many other applications. The methodology considers all known earthquake sources with their associated magnitude and rate distributions. Specific faults can be modeled if slip-rate or recurrence information is available. Otherwise, areal sources are developed from earthquake catalogs or GPS data. Sources are combined with ground-motion estimates to compute the hazard. The current maps for Alaska were developed in 2007, and included modeled sources for the Alaska-Aleutian megathrust, a few crustal faults, and areal seismicity sources. The megathrust was modeled as a segmented dipping plane with segmentation largely derived from the slip patches of past earthquakes. Some megathrust deformation is aseismic, so recurrence was estimated from seismic history rather than plate rates. Crustal faults included the Fairweather-Queen Charlotte system, the Denali–Totschunda system, the Castle Mountain fault, two faults on Kodiak Island, and the Transition fault, with recurrence estimated from geologic data. Areal seismicity sources were developed for Benioff-zone earthquakes and for crustal earthquakes not associated with modeled faults. We review the current state of knowledge in Alaska from a seismic-hazard perspective, in anticipation of future updates of the maps. Updated source models will consider revised seismicity catalogs, new information on crustal faults, new GPS data, and new thinking on megathrust recurrence, segmentation, and geometry. Revised ground-motion models will provide up-to-date shaking estimates for crustal earthquakes and subduction earthquakes in Alaska.

  19. Tectonic stressing in California modeled from GPS observations

    USGS Publications Warehouse

    Parsons, T.

    2006-01-01

What happens in the crust as a result of geodetically observed secular motions? In this paper we find out by distorting a finite element model of California using GPS-derived displacements. A complex model was constructed using spatially varying crustal thickness, geothermal gradient, topography, and creeping faults. GPS velocity observations were interpolated and extrapolated across the model and boundary condition areas, and the model was loaded according to 5-year displacements. Results map highest differential stressing rates in a 200-km-wide band along the Pacific-North American plate boundary, coinciding with regions of greatest seismic energy release. Away from the plate boundary, GPS-derived crustal strain reduces modeled differential stress in some places, suggesting that some crustal motions are related to topographic collapse. Calculated stressing rates can be resolved onto fault planes: useful for addressing fault interactions and necessary for calculating earthquake advances or delays. As an example, I examine seismic quiescence on the Garlock fault despite a calculated minimum 0.1-0.4 MPa static stress increase from the 1857 M 7.8 Fort Tejon earthquake. Results from finite element modeling show very low to negative secular Coulomb stress growth on the Garlock fault, suggesting that the stress state may have been too low for large earthquake triggering. Thus the Garlock fault may only be stressed by San Andreas fault slip, a loading pattern that could explain its erratic rupture history.
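Resolving a stress change onto a fault plane, as used above to assess triggering, is a few lines of linear algebra: the traction is t = Δσ·n, its normal and shear components come from projections, and ΔCFS = Δτ + μΔσn (tension positive). A 2-D sketch with uniaxial compression on a 45° plane; the values are purely illustrative:

```python
import numpy as np

def coulomb_change(dsigma, normal, slip_dir, mu=0.6):
    """Coulomb stress change on a plane (tension-positive convention)."""
    t = dsigma @ normal
    sigma_n = t @ normal    # normal stress change (tension positive)
    tau = t @ slip_dir      # shear stress change along the slip direction
    return tau + mu * sigma_n

theta = np.deg2rad(45.0)
n = np.array([np.cos(theta), np.sin(theta)])    # fault-plane normal
s = np.array([-np.sin(theta), np.cos(theta)])   # slip direction in the plane
dsigma = np.array([[-1.0, 0.0],                 # 1 MPa uniaxial compression along x
                   [0.0, 0.0]])

dcfs = coulomb_change(dsigma, n, s)   # tau = 0.5, sigma_n = -0.5 -> dCFS = 0.2 MPa
```

The same projection, applied to modeled secular stressing rates, gives the per-fault loading rates discussed in the abstract.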

  20. Dislocation model for aseismic fault slip in the transverse ranges of Southern California

    NASA Technical Reports Server (NTRS)

    Cheng, A.; Jackson, D. D.; Matsuura, M.

    1985-01-01

Geodetic data at a plate boundary can reveal the pattern of subsurface displacements that accompany plate motion. These displacements are modelled as the sum of rigid block motion and the elastic effects of frictional interaction between blocks. The frictional interactions are represented by uniform dislocation on each of several rectangular fault patches. The block velocities and fault parameters are then estimated from geodetic data. A Bayesian inversion procedure employs prior estimates based on geological and seismological data. The method is applied to the Transverse Ranges, using prior geological and seismological data and geodetic data from the USGS trilateration networks. Geodetic data imply a displacement rate of about 20 mm/yr across the San Andreas Fault, while the geologic estimates exceed 30 mm/yr. The prior model and the final estimates both imply about 10 mm/yr crustal shortening normal to the trend of the San Andreas Fault. Aseismic fault motion is a major contributor to plate motion. The geodetic data can help to identify faults that are suffering rapid stress accumulation; in the Transverse Ranges those faults are the San Andreas and the Santa Susana.
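For a linear problem, the Bayesian strategy above, combining geodetic data with prior geologic and seismologic estimates, reduces to the standard Gaussian update m̂ = (GᵀCd⁻¹G + Cm⁻¹)⁻¹(GᵀCd⁻¹d + Cm⁻¹m₀). A scalar toy example (all numbers invented, not the paper's inversion) shows the posterior pulled between the prior slip rate and the value the data alone would give:

```python
import numpy as np

def bayes_linear(G, d, Cd, m0, Cm):
    """Posterior mean of a linear Gaussian inverse problem with prior N(m0, Cm)."""
    A = G.T @ np.linalg.inv(Cd) @ G + np.linalg.inv(Cm)
    b = G.T @ np.linalg.inv(Cd) @ d + np.linalg.inv(Cm) @ m0
    return np.linalg.solve(A, b)

G = np.array([[1.0]])    # trivial forward model: the datum observes slip rate directly
d = np.array([20.0])     # geodetic estimate, mm/yr
Cd = np.array([[4.0]])   # data variance
m0 = np.array([30.0])    # prior geologic slip rate, mm/yr
Cm = np.array([[9.0]])   # prior variance

m_post = bayes_linear(G, d, Cd, m0, Cm)
# Posterior = (20/4 + 30/9) / (1/4 + 1/9) ~= 23.08 mm/yr, between data and prior
```

The precision-weighted compromise is exactly the behaviour described in the abstract, where geodetic (~20 mm/yr) and geologic (>30 mm/yr) rates disagree.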

  1. A formally verified algorithm for interactive consistency under a hybrid fault model

    NASA Technical Reports Server (NTRS)

    Lincoln, Patrick; Rushby, John

    1993-01-01

Consistent distribution of single-source data to replicated computing channels is a fundamental problem in fault-tolerant system design. The 'Oral Messages' (OM) algorithm solves this problem of Interactive Consistency (Byzantine Agreement) assuming that all faults are worst-case. Thambidurai and Park introduced a 'hybrid' fault model that distinguished three fault modes: asymmetric (Byzantine), symmetric, and benign; they also exhibited, along with an informal 'proof of correctness', a modified version of OM. Unfortunately, their algorithm is flawed. The discipline of mechanically checked formal verification eventually enabled us to develop a correct algorithm for Interactive Consistency under the hybrid fault model. This algorithm withstands $a$ asymmetric, $s$ symmetric, and $b$ benign faults simultaneously, using $m+1$ rounds, provided $n > 2a + 2s + b + m$ and $m \geq a$. We present this algorithm, discuss its subtle points, and describe its formal specification and verification in PVS. We argue that formal verification systems such as PVS are now sufficiently effective that their application to fault-tolerance algorithms should be considered routine.
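For context, the classical OM(m) recursion that the hybrid-fault algorithm refines can be simulated in a few dozen lines. This is the original worst-case (all-Byzantine) version of Lamport, Shostak, and Pease, not Lincoln and Rushby's hybrid variant, and the adversary strategy here is one arbitrary choice:

```python
def send(sender, receiver, value, faulty):
    """A faulty sender may transmit arbitrary values (here: receiver parity)."""
    return receiver % 2 if sender in faulty else value

def majority(values):
    """Boolean majority vote over 0/1 values (ties resolve to 0)."""
    return int(2 * sum(values) > len(values))

def om(m, commander, lieutenants, value, faulty):
    """Classical OM(m); returns each lieutenant's decided value."""
    received = {l: send(commander, l, value, faulty) for l in lieutenants}
    if m == 0:
        return received
    relayed = {}
    for j in lieutenants:                     # each lieutenant relays recursively
        others = [i for i in lieutenants if i != j]
        relayed[j] = om(m - 1, j, others, received[j], faulty)
    return {i: majority([received[i]] + [relayed[j][i] for j in lieutenants if j != i])
            for i in lieutenants}

# n = 4, m = 1 tolerates one Byzantine fault.
decisions = om(1, 0, [1, 2, 3], 1, faulty={3})           # faulty lieutenant
decisions_bad_cmdr = om(1, 0, [1, 2, 3], 1, faulty={0})  # faulty commander
```

With a loyal commander the loyal lieutenants decide the commander's value; with a faulty commander they still agree with each other, which is the Interactive Consistency property the paper's hybrid algorithm preserves under weaker fault assumptions.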

  2. Nitsche Extended Finite Element Methods for Earthquake Simulation

    NASA Astrophysics Data System (ADS)

    Coon, Ethan T.

Modeling earthquakes and geologically short-time-scale events on fault networks is a difficult problem with important implications for human safety and design. These problems demonstrate a rich physical behavior, in which distributed loading localizes both spatially and temporally into earthquakes on fault systems. This localization is governed by two aspects: friction and fault geometry. Computationally, these problems provide a stern challenge for modelers: static and dynamic equations must be solved on domains with discontinuities on complex fault systems, and frictional boundary conditions must be applied on these discontinuities. The most difficult aspect of modeling physics on complicated domains is the mesh. Most numerical methods involve meshing the geometry; nodes are placed on the discontinuities, and edges are chosen to coincide with faults. The resulting mesh is highly unstructured, making the derivation of finite difference discretizations difficult. Therefore, most models use the finite element method. Standard finite element methods place requirements on the mesh for the sake of stability, accuracy, and efficiency. The formation of a mesh which both conforms to fault geometry and satisfies these requirements is an open problem, especially for three-dimensional, physically realistic fault geometries. In addition, if the fault system evolves over the course of a dynamic simulation (i.e. in the case of growing cracks or breaking new faults), the geometry must be re-meshed at each time step. This can be expensive computationally. The fault-conforming approach is undesirable when complicated meshes are required, and impossible to implement when the geometry is evolving. Therefore, meshless and hybrid finite element methods that handle discontinuities without placing them on element boundaries are a desirable and natural way to discretize these problems.
Several such methods are being actively developed for use in engineering mechanics involving crack propagation and material failure. While some theory and application of these methods exist, implementations for the simulation of networks of many cracks have not yet been considered. For my thesis, I implement and extend one such method, the eXtended Finite Element Method (XFEM), for use in static and dynamic models of fault networks. Once this machinery is developed, it is applied to open questions regarding the behavior of networks of faults, including questions of distributed deformation in fault systems and ensembles of magnitude, location, and frequency in repeat ruptures. The theory of XFEM is augmented to allow for solution of problems with alternating regimes of static solves for elastic stress conditions and short, dynamic earthquakes on networks of faults. This is accomplished using Nitsche's approach for implementing boundary conditions. Finally, an optimization problem is developed to determine tractions along the fault, enabling the calculation of frictional constraints and the rupture front. This method is verified via a series of static, quasistatic, and dynamic problems. Armed with this technique, we look at several problems regarding geometry within the earthquake cycle in which geometry is crucial. We first look at quasistatic simulations on a community fault model of Southern California, and model slip distribution across that system. We find the distribution of deformation across faults compares reasonably well with slip rates across the region, as constrained by geologic data. We find geometry can provide constraints for friction, and consider the minimization of shear strain across the zone as a function of friction and plate loading direction, and infer bounds on fault strength in the region. Then we consider the repeated rupture problem, modeling the full earthquake cycle over the course of many events on several fault geometries. 
In this work, we look at distributions of events, studying the effect of geometry on statistical metrics of event ensembles. Finally, this thesis is a proof of concept for the XFEM on earthquake cycle models on fault systems. We identify strengths and weaknesses of the method, and identify places for future improvement. We discuss the feasibility of the method's use in three dimensions, and find the method to be a strong candidate for future crustal deformation simulations.

  3. Effects of faults as barriers or conduits to displaced brine flow on a putative CO2 storage site in the Southern North Sea

    NASA Astrophysics Data System (ADS)

    Hannis, Sarah; Bricker, Stephanie; Williams, John

    2013-04-01

    The Bunter Sandstone Formation in the Southern North Sea is a potential reservoir being considered for carbon dioxide storage as a climate change mitigation option. A geological model of a putative storage site within this saline aquifer was built from 3D seismic and well data to investigate potential reservoir pressure changes and their effects on fault movement, brine and CO2 migration as a result of CO2 injection. The model is located directly beneath the Dogger Bank Special Area of Conservation, close to the UK-Netherlands median line. Analysis of the seismic data reveals two large fault zones, one in each of the UK and Netherlands sectors, many tens of kilometres in length, extending from reservoir level to the sea bed. Although it has been shown that similar faults compartmentalise gas fields elsewhere in the Netherlands sector, significant uncertainty remains surrounding the properties of the faults in our model area; in particular their cross- and along-fault permeability and geomechanical behaviour. Despite lying outside the anticipated CO2 plume, these faults could provide potential barriers to pore fluid migration and pressure dissipation, until, under elevated pressures, they provide vertical migration pathways for brine. In this case, the faults will act to enhance injectivity, but potential environmental impacts, should the displaced brine be expelled at the sea bed, will require consideration. Pressure gradients deduced from regional leak-off test data have been input into a simple geomechanical model to estimate the threshold pressure gradient at which faults cutting the Mesozoic succession will fail, assuming reactivation of fault segments will cause an increase in vertical permeability. Various 4D scenarios were run using a single-phase groundwater modelling code, calibrated to results from a multi-phase commercial simulator. 
Possible end-member ranges of fault parameters were input to investigate the pressure change with time and quantify brine flux to the seabed in potentially reactivated sections of each fault zone. Combining the modelled pressure field with the calculated fault failure criterion suggests that only the fault in the Netherlands sector reactivates, allowing brine displacement at a maximum rate of 800-900 m³/d. Model results indicate that the extent of brine displacement is most sensitive to the fault reactivation pressure gradient and fault zone thickness. In conclusion, CO2 injection into a saline aquifer results in a significant increase in pore-fluid pressure gradients. In this case, brine displacement along faults acting as pressure relief valves could increase injectivity in a similar manner to pressure management wells, thereby facilitating the storage operation. However, if the faults act as brine migration pathways, an understanding of seabed flux rates and environmental impacts will need to be demonstrated to regulators prior to injection. This study, close to an international border, also highlights the need to inform neighbouring countries' authorities of proposed operations and, potentially, to obtain licences to increase reservoir pressure and/or displace brine across international borders.

  4. Accurate reconstruction in digital holographic microscopy using antialiasing shift-invariant contourlet transform

    NASA Astrophysics Data System (ADS)

    Zhang, Xiaolei; Zhang, Xiangchao; Xu, Min; Zhang, Hao; Jiang, Xiangqian

    2018-03-01

    The measurement of microstructured components is a challenging task in optical engineering. Digital holographic microscopy has attracted intensive attention due to its remarkable capability of measuring complex surfaces. However, speckles arise in the recorded interferometric holograms, and they will degrade the reconstructed wavefronts. Existing speckle removal methods suffer from the problems of frequency aliasing and phase distortions. A reconstruction method based on the antialiasing shift-invariant contourlet transform (ASCT) is developed. Salient edges and corners have sparse representations in the transform domain of ASCT, and speckles can be recognized and removed effectively. As subsampling in the scale and directional filtering schemes is avoided, the problems of frequency aliasing and phase distortions occurring in the conventional multiscale transforms can be effectively overcome, thereby improving the accuracy of wavefront reconstruction. As a result, the proposed method is promising for the digital holographic measurement of complex structures.
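The shift variance of critically sampled multiscale transforms, which ASCT is designed to avoid, can be illustrated with a minimal NumPy Haar example (purely illustrative, not the paper's transform): shifting an edge by a single sample completely changes the level-1 detail coefficients.

```python
# Toy demonstration of DWT shift variance: a step edge aligned with the
# decimation grid produces zero Haar detail energy, while the same edge
# shifted by one sample produces a large detail coefficient.
import numpy as np

def haar_detail(x):
    """Level-1 Haar detail coefficients of a critically sampled DWT."""
    x = np.asarray(x, dtype=float)
    return (x[0::2] - x[1::2]) / np.sqrt(2.0)

def step(n0, n=64):
    """A step edge turning on at sample n0."""
    x = np.zeros(n)
    x[n0:] = 1.0
    return x

d_even = haar_detail(step(20))  # edge aligned with a decimation pair
d_odd = haar_detail(step(21))   # the same edge shifted by one sample
# The aligned edge produces NO detail energy; the shifted edge does.
print(np.abs(d_even).max(), np.abs(d_odd).max())
```

Shift-invariant designs such as the DTCWT and ASCT avoid this dependence of the representation on sub-sample alignment.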

  5. Cosine beamforming

    NASA Astrophysics Data System (ADS)

    Ruigrok, Elmer; Wapenaar, Kees

    2014-05-01

In various application areas, e.g., seismology, astronomy and geodesy, arrays of sensors are used to characterize incoming wavefields due to distant sources. Beamforming is a general term for phase-adjusted summations over the different array elements, used to untangle the directionality and elevation angle of the incoming waves. For characterizing noise sources, beamforming is conventionally applied with a temporal Fourier and a 2D spatial Fourier transform, possibly with additional weights. These transforms become aliased for higher frequencies and sparser array-element distributions. As a partial remedy, we derive a kernel for beamforming crosscorrelated data and call it cosine beamforming (CBF). By applying beamforming not directly to the data but to crosscorrelated data, the sampling is effectively increased. We show that CBF, owing to this better sampling, suffers less from aliasing and yields higher resolution than conventional beamforming. As the flip side of the coin, the CBF output shows more smearing for spherical waves than conventional beamforming.
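Conventional beamforming, the baseline that CBF improves upon, is a phase-adjusted summation over array elements. A minimal NumPy sketch, with an illustrative array geometry and plane wave (not the paper's data or kernel), recovers the slowness vector of a synthetic arrival by scanning beam power over trial slownesses:

```python
# Conventional frequency-domain (delay-and-sum) beamforming for a small 2D
# array: beam power B(s) = |sum_i d_i * exp(i*omega*s.r_i)|^2 peaks at the
# incoming wave's horizontal slowness.
import numpy as np

rng = np.random.default_rng(0)
coords = rng.uniform(-5.0, 5.0, size=(12, 2))   # station positions, km
freq = 0.5                                      # Hz
s_true = np.array([0.2, -0.1])                  # true slowness vector, s/km
omega = 2 * np.pi * freq

# Spectra of a unit-amplitude plane wave: d_i = exp(-i*omega*s_true.r_i)
d = np.exp(-1j * omega * coords @ s_true)

# Scan a grid of trial slownesses; beam power peaks near the true slowness.
trial = np.linspace(-0.4, 0.4, 81)
sx, sy = np.meshgrid(trial, trial, indexing="ij")
best, s_est = -1.0, None
for i in range(sx.shape[0]):
    for j in range(sx.shape[1]):
        steer = np.exp(1j * omega * coords @ np.array([sx[i, j], sy[i, j]]))
        power = np.abs(np.sum(d * steer)) ** 2
        if power > best:
            best, s_est = power, (sx[i, j], sy[i, j])

print(s_est)   # close to (0.2, -0.1) on this grid
```

With fewer stations or higher frequencies, secondary power maxima (aliases) approach the main peak, which is the sampling problem CBF mitigates by beamforming crosscorrelations instead of raw data.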

  6. Investigation of the applicability of a functional programming model to fault-tolerant parallel processing for knowledge-based systems

    NASA Technical Reports Server (NTRS)

    Harper, Richard

    1989-01-01

    In a fault-tolerant parallel computer, a functional programming model can facilitate distributed checkpointing, error recovery, load balancing, and graceful degradation. Such a model has been implemented on the Draper Fault-Tolerant Parallel Processor (FTPP). When used in conjunction with the FTPP's fault detection and masking capabilities, this implementation results in a graceful degradation of system performance after faults. Three graceful degradation algorithms have been implemented and are presented. A user interface has been implemented which requires minimal cognitive overhead by the application programmer, masking such complexities as the system's redundancy, distributed nature, variable complement of processing resources, load balancing, fault occurrence and recovery. This user interface is described and its use demonstrated. The applicability of the functional programming style to the Activation Framework, a paradigm for intelligent systems, is then briefly described.

  7. Velocity Gradient Across the San Andreas Fault and Changes in Slip Behavior as Outlined by Full non Linear Tomography

    NASA Astrophysics Data System (ADS)

    Chiarabba, C.; Giacomuzzi, G.; Piana Agostinetti, N.

    2017-12-01

The San Andreas Fault (SAF) near Parkfield is the best-known fault section exhibiting a clear transition in slip behavior from stable to unstable. Intensive monitoring and decades of study have identified details of these processes, with well-defined fault structure and subsurface models. Tomographic models computed so far revealed the existence of large velocity contrasts, yielding physical insight into fault rheology. In this study, we applied a recently developed full non-linear tomography method to compute Vp and Vs models focused on the section of the fault that exhibits the slip transition. Unlike linearized codes, the new tomographic code does not impose a vertical seismic discontinuity at the fault position: any lateral velocity contrast found is dictated directly by the data themselves rather than imposed by subjective choices. Using the same dataset as previous tomographic studies allows a proper comparison of results. We use a total of 861 earthquakes, 72 blasts and 82 shots; the overall arrival-time dataset consists of 43948 P- and 29158 S-wave arrival times, accurately selected to account for seismic anisotropy. The computed Vp and Vp/Vs models, which bypass the main problems of linearized LET algorithms, match the independent available constraints well and image crustal heterogeneities at high resolution. The high resolution obtained in the fault surroundings permits inference of lateral changes of Vp and Vp/Vs across the fault (velocity gradients). We observe that the stable and unstable sliding sections of the SAF have different velocity gradients: small to negligible in the stable-slip segment, but larger than 15% in the unstable-slip segment. Our results suggest that Vp and Vp/Vs gradients across the fault control fault rheology and the style of fault slip behavior.

  8. Tidal Fluctuations in a Deep Fault Extending Under the Santa Barbara Channel, California

    NASA Astrophysics Data System (ADS)

    Garven, G.; Stone, J.; Boles, J. R.

    2013-12-01

Faults are known to strongly affect deep groundwater flow, and exert a profound control on petroleum accumulation, migration, and natural seafloor seepage from coastal reservoirs within the young sedimentary basins of southern California. In this paper we focus on major fault structure permeability and compressibility in the Santa Barbara Basin, where unique submarine and subsurface instrumentation provides the hydraulic characterization of faults in a structurally complex system. Subsurface geologic logs, geophysical logs, fluid P-T-X data, seafloor seep discharge patterns, fault mineralization petrology, isotopic data, fluid inclusions, and structural models help characterize the hydrogeological nature of faults in this seismically active and young geologic terrain. Unique submarine gas flow data from a natural submarine seep area of the Santa Barbara Channel help constrain fault permeability k ~ 30 millidarcys for large-scale upward migration of methane-bearing formation fluids along one of the major fault zones. At another offshore site near Platform Holly, pressure-transducer time-series data from a 1.5 km deep exploration well in the South Ellwood Field demonstrate a strong ocean tidal component, due to vertical fault connectivity to the seafloor. Analytical models from classic hydrologic papers by Jacob, Ferris, Bredehoeft, van der Kamp, and Wang can be used to extract large-scale fault permeability and compressibility parameters, based on tidal signal amplitude attenuation and phase shift at depth. For the South Ellwood Fault, we estimate k ~ 38 millidarcys (hydraulic conductivity K ~ 3.6E-07 m/s) and a specific storage coefficient Ss ~ 5.5E-08 m-1. The tidal-derived hydraulic properties also suggest a low effective porosity for the fault zone, n ~ 1 to 3%. Results of forward modeling with 2-D finite element models illustrate significant lateral propagation of the tidal signal into the highly permeable Monterey Formation.
The results have important practical implications for fault characterization, petroleum migration, structural diagenesis, and carbon sequestration.
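The tidal analysis referenced here follows the classic Ferris-type solution, in which the amplitude decay and the phase lag of a sinusoidal boundary signal each yield an independent estimate of hydraulic diffusivity D = T/S. A hedged sketch with illustrative numbers (not the South Ellwood values):

```python
# Ferris (1951) tidal-propagation relations: for a boundary fluctuation of
# period t0, amplitude decays as exp(-x*sqrt(pi*S/(t0*T))) and the lag is
# x*sqrt(t0*S/(4*pi*T)). Each observable inverts for diffusivity D = T/S.
import math

def diffusivity_from_attenuation(x, period, amp_ratio):
    """D from A(x)/A0 = exp(-x * sqrt(pi / (period * D)))."""
    return math.pi * x**2 / (period * math.log(1.0 / amp_ratio) ** 2)

def diffusivity_from_lag(x, period, lag):
    """D from lag(x) = x * sqrt(period / (4 * pi * D))."""
    return x**2 * period / (4.0 * math.pi * lag**2)

# Consistency check on synthetic data: pick D, generate the attenuation and
# lag a sensor at x = 1500 m would see for the M2 tide, then invert.
D_true = 5.0                         # m^2/s
x, period = 1500.0, 12.42 * 3600.0   # m; M2 tidal period in s
amp_ratio = math.exp(-x * math.sqrt(math.pi / (period * D_true)))
lag = x * math.sqrt(period / (4.0 * math.pi * D_true))
print(diffusivity_from_attenuation(x, period, amp_ratio),
      diffusivity_from_lag(x, period, lag))   # both recover 5.0
```

Agreement between the two independent estimates, as in this synthetic check, is the usual internal consistency test before converting D to permeability and storage.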

  9. Detection of CMOS bridging faults using minimal stuck-at fault test sets

    NASA Technical Reports Server (NTRS)

    Ijaz, Nabeel; Frenzel, James F.

    1993-01-01

The performance of minimal stuck-at fault test sets at detecting bridging faults is evaluated. New functional models of circuit primitives are presented which allow accurate representation of bridging faults under switch-level simulation. The effectiveness of the patterns is evaluated using both voltage and current testing.
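The core question of the abstract, whether a test set derived for stuck-at faults happens to detect bridging faults, can be illustrated with a toy two-level circuit and a wired-AND bridge model. Both the circuit and the test set below are hypothetical examples, not the paper's switch-level models:

```python
# Toy bridging-fault detection check for out = (a & b) | (c & d): a
# wired-AND bridge shorts internal nets n1 and n2 so both carry n1 & n2.
# A vector detects the bridge when the good and bridged outputs differ.
def good(a, b, c, d):
    n1, n2 = a & b, c & d
    return n1 | n2

def bridged(a, b, c, d):
    n1, n2 = a & b, c & d
    n1 = n2 = n1 & n2        # wired-AND bridge shorts the two nets
    return n1 | n2

# An illustrative stuck-at test set for this circuit (assumed, not minimal
# in any formal sense). The bridge is visible exactly when n1 != n2.
tests = [(1, 1, 0, 1), (1, 1, 1, 0), (0, 1, 1, 1), (1, 0, 1, 1), (1, 1, 1, 1)]
detected = [v for v in tests if good(*v) != bridged(*v)]
print(detected)
```

On this circuit, four of the five vectors expose the bridge; the all-ones vector drives both nets to the same value and cannot, which is the kind of coverage gap the paper quantifies.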

  10. Strike-slip faulting in the Inner California Borderlands, offshore Southern California.

    NASA Astrophysics Data System (ADS)

    Bormann, J. M.; Kent, G. M.; Driscoll, N. W.; Harding, A. J.; Sahakian, V. J.; Holmes, J. J.; Klotsko, S.; Kell, A. M.; Wesnousky, S. G.

    2015-12-01

    In the Inner California Borderlands (ICB), offshore of Southern California, modern dextral strike-slip faulting overprints a prominent system of basins and ridges formed during plate boundary reorganization 30-15 Ma. Geodetic data indicate faults in the ICB accommodate 6-8 mm/yr of Pacific-North American plate boundary deformation; however, the hazard posed by the ICB faults is poorly understood due to unknown fault geometry and loosely constrained slip rates. We present observations from high-resolution and reprocessed legacy 2D multichannel seismic (MCS) reflection datasets and multibeam bathymetry to constrain the modern fault architecture and tectonic evolution of the ICB. We use a sequence stratigraphy approach to identify discrete episodes of deformation in the MCS data and present the results of our mapping in a regional fault model that distinguishes active faults from relict structures. Significant differences exist between our model of modern ICB deformation and existing models. From east to west, the major active faults are the Newport-Inglewood/Rose Canyon, Palos Verdes, San Diego Trough, and San Clemente fault zones. Localized deformation on the continental slope along the San Mateo, San Onofre, and Carlsbad trends results from geometrical complexities in the dextral fault system. Undeformed early to mid-Pleistocene age sediments onlap and overlie deformation associated with the northern Coronado Bank fault (CBF) and the breakaway zone of the purported Oceanside Blind Thrust. Therefore, we interpret the northern CBF to be inactive, and slip rate estimates based on linkage with the Holocene active Palos Verdes fault are unwarranted. In the western ICB, the San Diego Trough fault (SDTF) and San Clemente fault have robust linear geomorphic expression, which suggests that these faults may accommodate a significant portion of modern ICB slip in a westward temporal migration of slip. 
The SDTF offsets young sediments between the US/Mexico border and the eastern margin of Avalon Knoll, where the fault is spatially coincident and potentially linked with the San Pedro Basin fault (SPBF). Kinematic linkage between the SDTF and the SPBF increases the potential rupture length for earthquakes on either fault and may allow events nucleating on the SDTF to propagate much closer to the LA Basin.

  11. Wastewater injection and slip triggering: Results from a 3D coupled reservoir/rate-and-state model

    NASA Astrophysics Data System (ADS)

    Babazadeh, M.; Olson, J. E.; Schultz, R.

    2017-12-01

    Seismicity induced by fluid injection is controlled by parameters related to injection conditions, reservoir properties, and fault frictional behavior. We present results from a combined model that brings together injection physics, reservoir dynamics, and fault physics to better explain the primary controls on induced seismicity. We created a 3D fluid flow simulator using the embedded discrete fracture technique and then coupled it with a 3D displacement discontinuity model that uses rate and state friction to model slip events. The model is composed of three layers, including the top-seal, the injection reservoir, and the basement. Permeability is anisotropic (vertical vs horizontal) and along with porosity varies by layer. Injection control can be either rate or pressure. Fault properties include size, 2D permeability, and frictional properties. Several suites of simulations were run to evaluate the relative importance of each of the factors from all three parameter groups. We find that the injection parameters interact with the reservoir parameters in the context of the fault physics and these relations change for different reservoir and fault characteristics, leading to the need to examine the injection parameters only within the context of a particular faulted reservoir. For a reservoir with no flow boundaries, low permeability (5 md), and a fault with high fault-parallel permeability and critical stress, injection rate exerts the strongest control on magnitude and frequency of earthquakes. However, for a higher permeability reservoir (80 md), injection volume becomes the more important factor. Fault permeability structure is a key factor in inducing earthquakes in basement rocks below the injection reservoir. The initial failure state of the fault, which is challenging to assess, can have a big effect on the size and timing of events. 
For a fault 2 MPa below the critical state, we were able to induce a slip event, but it occurred late in the injection history and was limited to a subset of the fault extent. A case starting at critical stress resulted in a rupture that propagated throughout the entire physical extent of the fault and generated a larger-magnitude earthquake. This physics-based model can contribute to assessing the risk associated with injection activities and to providing guidelines for hazard mitigation.
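The interplay of injection pressure, reservoir diffusivity, and the fault's initial stress deficit described above can be caricatured with a one-dimensional diffusion-plus-Coulomb sketch. This is a deliberately simplified stand-in for the paper's coupled reservoir/rate-and-state model, with all parameter values assumed:

```python
# Hedged sketch: 1D pressure diffusion from an injector held at constant
# overpressure toward a fault a distance x away, with a simple Coulomb
# criterion -- slip once the pore-pressure rise on the fault, times the
# friction coefficient, exceeds the fault's initial stress deficit
# (compare the "2 MPa below critical" case in the abstract).
import math

def overpressure(x, t, dp0, D):
    """erfc solution for a constant-pressure boundary, MPa."""
    return dp0 * math.erfc(x / (2.0 * math.sqrt(D * t)))

def days_to_failure(x, dp0, D, deficit, mu=0.6, max_days=10_000):
    """First day at which mu * dP(x, t) >= the stress deficit, or None."""
    for day in range(1, max_days + 1):
        if mu * overpressure(x, day * 86400.0, dp0, D) >= deficit:
            return day
    return None

# Illustrative numbers: 5 MPa injection overpressure, fault 500 m away,
# diffusivity 0.1 m^2/s, fault 2 MPa below Coulomb failure.
print(days_to_failure(500.0, 5.0, 0.1, 2.0))
```

The delay grows with stress deficit and shrinks with diffusivity, echoing the abstract's finding that sub-critical faults slip late, and it never triggers if the maximum achievable pressure change cannot close the deficit.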

  12. The 2014 update to the National Seismic Hazard Model in California

    USGS Publications Warehouse

    Powers, Peter; Field, Edward H.

    2015-01-01

The 2014 update to the U.S. Geological Survey National Seismic Hazard Model in California introduces a new earthquake rate model and new ground motion models (GMMs) that give rise to numerous changes to seismic hazard throughout the state. The updated earthquake rate model is the third version of the Uniform California Earthquake Rupture Forecast (UCERF3), wherein the rates of all ruptures are determined via a self-consistent inverse methodology. This approach accommodates multifault ruptures and reduces the overprediction of moderate earthquake rates exhibited by the previous model (UCERF2). UCERF3 introduces new faults, changes to slip or moment rates on existing faults, and adaptively smoothed gridded seismicity source models, all of which contribute to significant changes in hazard. New GMMs increase ground motion near large strike-slip faults and reduce hazard over dip-slip faults. The addition of very large strike-slip ruptures and decreased reverse fault rupture rates in UCERF3 further enhances these effects.

  13. Software-implemented fault insertion: An FTMP example

    NASA Technical Reports Server (NTRS)

    Czeck, Edward W.; Siewiorek, Daniel P.; Segall, Zary Z.

    1987-01-01

    This report presents a model for fault insertion through software; describes its implementation on a fault-tolerant computer, FTMP; presents a summary of fault detection, identification, and reconfiguration data collected with software-implemented fault insertion; and compares the results to hardware fault insertion data. Experimental results show detection time to be a function of time of insertion and system workload. For the fault detection time, there is no correlation between software-inserted faults and hardware-inserted faults; this is because hardware-inserted faults must manifest as errors before detection, whereas software-inserted faults immediately exercise the error detection mechanisms. In summary, the software-implemented fault insertion is able to be used as an evaluation technique for the fault-handling capabilities of a system in fault detection, identification and recovery. Although the software-inserted faults do not map directly to hardware-inserted faults, experiments show software-implemented fault insertion is capable of emulating hardware fault insertion, with greater ease and automation.
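The report's distinction, that software-inserted faults exercise detection mechanisms immediately while hardware faults must first manifest as errors, can be illustrated with a toy duplicated-computation checker (a hypothetical sketch, not the FTMP implementation):

```python
# Toy software fault insertion: force a bit of a data word to a fixed value
# (a stuck-at emulation) on one of two redundant computation "channels";
# a comparator-style check detects the mismatch immediately.
def insert_stuck_at(word, bit, value):
    """Force bit `bit` of `word` to `value` (0 or 1)."""
    mask = 1 << bit
    return (word | mask) if value else (word & ~mask)

def voted_sum(a, b, fault_bit=None):
    """Compute a + b on two channels; flag disagreement like a comparator."""
    chan1 = a + b
    chan2 = a + b
    if fault_bit is not None:
        chan2 = insert_stuck_at(chan2, fault_bit, 1)
    return chan1, chan1 != chan2      # (result, error_detected)

print(voted_sum(2, 3))                # fault-free: no error flagged
print(voted_sum(2, 3, fault_bit=4))   # bit 4 forced to 1: mismatch detected
```

Because the inserted fault lands directly on a compared value, detection latency is zero by construction; a hardware fault in, say, an unused register would have to propagate into a compared value first, which is the asymmetry the report measures.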

  14. A new model for the initiation, crustal architecture, and extinction of pull-apart basins

    NASA Astrophysics Data System (ADS)

    van Wijk, J.; Axen, G. J.; Abera, R.

    2015-12-01

We present a new model for the origin, crustal architecture, and evolution of pull-apart basins. The model is based on results of three-dimensional upper-crustal numerical models of deformation, field observations, and fault theory, and answers many of the outstanding questions related to these rifts. In our model, geometric differences between pull-apart basins are inherited from the initial geometry of the step in the strike-slip fault system. As strike-slip motion accumulates, pull-apart basins remain stationary with respect to the underlying basement and the fault tips may propagate beyond the rift basin. Our model thus predicts that sediment source areas may migrate over time. It also implies that, although pull-apart basins lengthen over time, lengthening is accommodated by extension within the pull-apart basin rather than by formation of new faults outside of the rift zone. In this respect pull-apart basins behave as narrow rifts: with increasing strike-slip the basins deepen, but there is no significant outward younging. We explain why pull-apart basins do not pass through the previously proposed geometric evolutionary stages, a progression that has not been documented in nature. Field studies predict that pull-apart basins become extinct when an active basin-crossing fault forms; this is the most likely fate of pull-apart basins, because strike-slip systems tend to straighten. The model predicts which step dimensions favor the formation of such a fault system, and for which a pull-apart basin may instead develop into a short seafloor-spreading ridge. The model also shows that rift-shoulder uplift is enhanced if the strike-slip rate is larger than the fault-propagation rate; crustal compression then contributes to uplift of the rift flanks.

  15. A New Paradigm For Modeling Fault Zone Inelasticity: A Multiscale Continuum Framework Incorporating Spontaneous Localization and Grain Fragmentation.

    NASA Astrophysics Data System (ADS)

    Elbanna, A. E.

    2015-12-01

The brittle portion of the crust contains structural features such as faults, jogs, joints, bends and cataclastic zones that span a wide range of length scales. These features may have a profound effect on earthquake nucleation, propagation and arrest. Incorporating these existing features in modeling, along with the ability to spontaneously generate new ones in response to earthquake loading, is crucial for predicting seismicity patterns, the distribution of aftershocks and nucleation sites, earthquake arrest mechanisms, and topological changes in the structure of the seismogenic zone. Here, we report on our efforts in modeling two important mechanisms contributing to the evolution of fault zone topology: (1) grain comminution at the submeter scale, and (2) secondary faulting/plasticity at the scale of a few to hundreds of meters. We use the finite element software Abaqus to model the dynamic rupture. The constitutive response of the fault zone is modeled using the Shear Transformation Zone theory, a non-equilibrium statistical thermodynamic framework for modeling plastic deformation and localization in amorphous materials such as fault gouge. The gouge layer is modeled as a 2D plane-strain region with a finite thickness and a heterogeneous distribution of porosity. By coupling the amorphous gouge with the surrounding elastic bulk, the model introduces a set of novel features that go beyond the state of the art. These include: (1) self-consistent rate-dependent plasticity with a physically motivated set of internal variables, (2) non-locality that alleviates mesh dependence of shear band formation, (3) spontaneous evolution of fault roughness and strike, which affects ground motion generation and the local stress fields, and (4) spontaneous evolution of grain size and fault zone fabric.

  16. QuakeSim: a Web Service Environment for Productive Investigations with Earth Surface Sensor Data

    NASA Astrophysics Data System (ADS)

    Parker, J. W.; Donnellan, A.; Granat, R. A.; Lyzenga, G. A.; Glasscoe, M. T.; McLeod, D.; Al-Ghanmi, R.; Pierce, M.; Fox, G.; Grant Ludwig, L.; Rundle, J. B.

    2011-12-01

The QuakeSim science gateway environment includes a visually rich portal interface, web service access to data and data processing operations, and the QuakeTables ontology-based database of fault models and sensor data. The integrated tools and services are designed to assist investigators by covering the entire earthquake cycle of strain accumulation and release. The Web interface now includes Drupal-based access to diverse and changing content, with a new ability to access data and data processing directly from the public page, as well as the traditional project management areas that require password access. The system is designed to make initial browsing of fault models and deformation data particularly engaging for new users. Popular data and data processing include GPS time series with data mining techniques to find anomalies in time and space, experimental forecasting methods based on catalogue seismicity, faulted deformation models (both half-space and finite element), and model-based inversion of sensor data. The fault models include the CGS and UCERF 2.0 faults of California and are easily augmented with self-consistent fault models from other regions. The QuakeTables deformation data include the comprehensive set of UAVSAR interferograms as well as a growing collection of satellite InSAR data. Fault interaction simulations are also being incorporated in the web environment, based on Virtual California. A sample usage scenario is presented which follows an investigation of UAVSAR data from viewing as an overlay in Google Maps, to selection of an area of interest via a polygon tool, to fast extraction of the relevant correlation and phase information from large data files, to a model inversion of fault slip followed by calculation and display of a synthetic model interferogram.

  17. A kinematic model for the evolution of the Eastern California Shear Zone and Garlock Fault, Mojave Desert, California

    NASA Astrophysics Data System (ADS)

    Dixon, Timothy H.; Xie, Surui

    2018-07-01

    The Eastern California shear zone in the Mojave Desert, California, accommodates nearly a quarter of Pacific-North America plate motion. In south-central Mojave, the shear zone consists of six active faults, with the central Calico fault having the fastest slip rate. However, faults to the east of the Calico fault have larger total offsets. We explain this pattern of slip rate and total offset with a model involving a crustal block (the Mojave Block) that migrates eastward relative to a shear zone at depth whose position and orientation is fixed by the Coachella segment of the San Andreas fault (SAF), southwest of the transpressive "big bend" in the SAF. Both the shear zone and the Garlock fault are assumed to be a direct result of this restraining bend, and consequent strain redistribution. The model explains several aspects of local and regional tectonics, may apply to other transpressive continental plate boundary zones, and may improve seismic hazard estimates in these zones.

  18. The western limits of the Seattle fault zone and its interaction with the Olympic Peninsula, Washington

    USGS Publications Warehouse

    A.P. Lamb,; L.M. Liberty,; Blakely, Richard J.; Pratt, Thomas L.; Sherrod, B.L.; Van Wijk, K.

    2012-01-01

We present evidence that the Seattle fault zone of Washington State extends to the west edge of the Puget Lowland and is kinematically linked to active faults that border the Olympic Massif, including the Saddle Mountain deformation zone. Newly acquired high-resolution seismic reflection and marine magnetic data suggest that the Seattle fault zone extends west beyond the Seattle Basin to form a >100-km-long active fault zone. We provide evidence for a strain transfer zone, expressed as a broad set of faults and folds connecting the Seattle and Saddle Mountain deformation zones near Hood Canal. This connection provides an explanation for the apparent synchroneity of M7 earthquakes on the two fault systems ~1100 yr ago. We redefine the boundary of the Tacoma Basin to include the previously termed Dewatto basin and show that the Tacoma fault, the southern part of which is a backthrust of the Seattle fault zone, links with a previously unidentified fault along the western margin of the Seattle uplift. We model this north-south fault, termed the Dewatto fault, along the western margin of the Seattle uplift as a low-angle thrust that initiated with exhumation of the Olympic Massif and today accommodates north-directed motion. The Tacoma and Dewatto faults likely control both the southern and western boundaries of the Seattle uplift. The inferred strain transfer zone linking the Seattle fault zone and Saddle Mountain deformation zone defines the northern margin of the Tacoma Basin, and the Saddle Mountain deformation zone forms the northwestern boundary of the Tacoma Basin. Our observations and model suggest that the western portions of the Seattle fault zone and Tacoma fault are complex, require temporal variations in principal strain directions, and cannot be modeled as a simple thrust and/or backthrust system.

  19. Interaction Behavior between Thrust Faulting and the National Highway No. 3 - Tianliao III bridge as Determined using Numerical Simulation

    NASA Astrophysics Data System (ADS)

    Li, C. H.; Wu, L. C.; Chan, P. C.; Lin, M. L.

    2016-12-01

The National Highway No. 3 - Tianliao III Bridge is located in the mudstone area of southwestern Taiwan and crosses the Chekualin fault. Since the bridge was opened to traffic, it has been repaired 11 times. To understand the interaction behavior between thrust faulting and the bridge, a discrete element method-based software program, PFC, was applied to conduct a numerical analysis. A 3D model simulating the thrust faulting and the bridge was established, as shown in Fig. 1. In this conceptual model, the length and width were 50 and 10 m, respectively. Part of the box bottom was moveable, simulating the displacement of the thrust fault. The overburden stratum had a height of 5 m with a fault dip angle of 20° (Fig. 2). From bottom to top, the strata were mudstone, clay, and sand. The uplift was 1 m, which was 20% of the stratum thickness. In accordance with the investigation, the position of the fault tip was set depending on the fault zone, and the bridge deformation was observed (Fig. 3). By setting "Monitoring Balls" in the numerical model to analyze bridge displacement, we determined that the bridge deck deflection increased as the uplift distance increased. Furthermore, the force caused by the loading of the bridge deck and the fault dislocation was determined to cause a downward deflection of the P1 and P2 bridge piers. Finally, the fault deflection trajectory of the P4 pier displayed the maximum displacement (Fig. 4). Similar behavior has been observed through numerical simulation as well as in field monitoring data. Use of the discrete element model (PFC3D) to simulate the deformation behavior between thrust faulting and the bridge provided feedback for the design and improved planning of the bridge.

  20. Active tectonics of the Imperial Valley, southern California: fault damage zones, complex basins and buried faults

    NASA Astrophysics Data System (ADS)

    Persaud, P.; Ma, Y.; Stock, J. M.; Hole, J. A.; Fuis, G. S.; Han, L.

    2016-12-01

    Ongoing oblique slip at the Pacific-North America plate boundary in the Salton Trough produced the Imperial Valley. Deformation in this seismically active area is distributed across a complex network of exposed and buried faults resulting in a largely unmapped seismic hazard beneath the growing population centers of El Centro, Calexico and Mexicali. To better understand the shallow crustal structure in this region and the connectivity of faults and seismicity lineaments, we used data primarily from the Salton Seismic Imaging Project (SSIP) to construct a P-wave velocity profile to 15 km depth, and a 3-D velocity model down to 8 km depth including the Brawley Geothermal area. We obtained detailed images of a complex wedge-shaped basin at the southern end of the San Andreas Fault system. Two deep subbasins (VP <5.65 km/s) are located in the western part of the larger Imperial Valley basin, where seismicity trends and active faults play a significant role in shaping the basin edge. Our 3-D VP model reveals previously unrecognized NE-striking cross faults that are interacting with the dominant NW-striking faults to control deformation. New findings in our profile include localized regions of low VP (thickening of a 5.65-5.85 km/s layer) near faults or seismicity lineaments interpreted as possibly faulting-related. Our 3-D model and basement map reveal velocity highs associated with the geothermal areas in the eastern valley. The improved seismic velocity model from this study, and the identification of important unmapped faults or buried interfaces will help refine the seismic hazard for parts of Imperial County, California.

  1. Evolving transpressional strain fields along the San Andreas fault in southern California: implications for fault branching, fault dip segmentation and strain partitioning

    NASA Astrophysics Data System (ADS)

    Bergh, Steffen; Sylvester, Arthur; Damte, Alula; Indrevær, Kjetil

    2014-05-01

The San Andreas fault in southern California records only a few large-magnitude earthquakes in historic time, and recent activity is confined primarily to irregular and discontinuous strike-slip and thrust fault strands at shallow depths of ~5-20 km. Despite this, slip along the San Andreas fault is calculated at c. 35 mm/yr, based on c. 160 km of total right-lateral displacement on the southern segment of the fault over the last c. 8 Ma. Field observations also reveal complex fault strands and multiple events of deformation. The presently diffuse high-magnitude crustal movements may be explained by deformation being largely distributed along more gently dipping reverse faults in fold-thrust belts, in contrast to regions to the north where deformation is less partitioned and is localized to narrow strike-slip fault zones. In the Mecca Hills of the Salton trough, transpressional deformation of an uplifted segment of the San Andreas fault over the last ca. 4.0 My is expressed by very complex fault-oblique and fault-parallel (en echelon) folding; zones of uplift (fold-thrust belts); basement-involved reverse and strike-slip faults; and accompanying multiple and pervasive cataclasis and conjugate fracturing of Miocene to Pleistocene sedimentary strata. Our structural analysis of the Mecca Hills addresses the kinematic nature of the San Andreas fault and the mechanisms of uplift and strain-stress distribution along bent fault strands. The San Andreas fault and subsidiary faults define a wide spectrum of kinematic styles, from steep localized strike-slip faults, to moderately dipping faults related to oblique en echelon folds, to gently dipping faults distributed in fold-thrust belt domains. The San Andreas fault is therefore not a through-going, steep strike-slip crustal structure, which is commonly the basis for crustal modeling and earthquake rupture models. 
The fault trace was steep initially, but was later deformed and modified in multiple phases by oblique en echelon folding, renewed strike-slip movements, and contractile fold-thrust belt structures. Notably, the strike-slip movements on the San Andreas fault were transferred outward into the surrounding rocks as oblique-reverse faults that link up with the subsidiary Skeleton Canyon fault in the Mecca Hills. Instead of a classic flower-structure model for this transpressional uplift, the San Andreas fault strands were segmented into domains that record: (i) early strike-slip motion, (ii) later oblique shortening with distributed deformation (en echelon fold domains), followed by (iii) localized fault-parallel deformation (strike-slip) and (iv) superposed out-of-sequence faulting and fault-normal, partitioned deformation (fold-thrust belt domains). These results bear on the question of whether spatial and temporal fold-fault branching and migration patterns evolving along non-vertical strike-slip fault segments can play a role in the localization of earthquakes along the San Andreas fault.

  2. Artificial neural network application for space station power system fault diagnosis

    NASA Technical Reports Server (NTRS)

    Momoh, James A.; Oliver, Walter E.; Dias, Lakshman G.

    1995-01-01

This study presents a methodology for fault diagnosis using a Two-Stage Artificial Neural Network Clustering Algorithm. Previously, SPICE models of a 5-bus DC power distribution system, with assumed constant output power from the DDCU during contingencies, were used to evaluate the ANN's fault diagnosis capabilities. This ongoing study uses EMTP models of the components (distribution lines, SPDU, TPDU, loads) and power sources (DDCU) of Space Station Alpha's electrical power distribution system as a basis for the ANN fault diagnostic tool. The results from the two studies are contrasted. In the event of a major fault, ground controllers need the ability to identify the type of fault, isolate the fault to the orbital replaceable unit level, and provide the necessary information for the power management expert system to optimally determine a degraded-mode load schedule. To accomplish these goals, the electrical power distribution system's architecture can be subdivided into three major classes: DC-DC converter to loads, DC Switching Unit (DCSU) to Main Bus Switching Unit (MBSU), and power sources to DCSU. Each class, which has its own electrical characteristics and operations, requires a unique fault analysis philosophy. This study identifies these philosophies as Riddles 1, 2 and 3, respectively. The results of the ongoing study address Riddle 1. It is concluded in this study that the combination of EMTP models of the DDCU, distribution cables, and electrical loads yields a more accurate model of system behavior and, in addition, yields more accurate ANN fault diagnosis than the results obtained with the SPICE models.

  3. Using Magnetics and Topography to Model Fault Splays of the Hilton Creek Fault System within the Long Valley Caldera

    NASA Astrophysics Data System (ADS)

    De Cristofaro, J. L.; Polet, J.

    2017-12-01

The Hilton Creek Fault (HCF) is a range-bounding extensional fault that forms the eastern escarpment of California's Sierra Nevada mountain range, near the town of Mammoth Lakes. The fault is well mapped along its main trace south of the Long Valley Caldera (LVC), but the location and nature of its northern terminus are poorly constrained. The fault terminates as a series of left-stepping splays within the LVC, an area of active volcanism that most notably erupted 760 ka and currently experiences continuous geothermal activity and sporadic earthquake swarms. The timing of the most recent motion on these fault splays is debated, as is the threat posed by this section of the Hilton Creek Fault. The Third Uniform California Earthquake Rupture Forecast (UCERF3) model depicts the HCF as a single strand projecting up to 12 km into the LVC. However, Bailey (1989) and Hill and Montgomery-Brown (2015) have argued against this model, suggesting that extensional faulting within the caldera has been accommodated by the ongoing volcanic uplift and thus that the intracaldera section of the HCF has not experienced motion since 760 ka. We intend to map the intracaldera fault splays and model their subsurface characteristics to better assess their rupture history and potential. This will be accomplished using high-resolution topography and subsurface geophysical methods, including ground-based magnetics. Preliminary work was performed using high-precision Nikon Nivo 5.C total stations to generate elevation profiles and a backpack-mounted GEM GS-19 proton precession magnetometer. The initial results reveal a correlation between magnetic anomalies and topography. East-west topographic profiles show terrace-like steps, sub-meter in height, which correlate with changes in the magnetic data. Continued study of the magnetic data using Oasis Montaj 3D modeling software is planned. 
Additionally, we intend to prepare a high-resolution terrain model using structure-from-motion techniques applied to imagery acquired by an unmanned aerial vehicle, with ground control points measured with real-time kinematic GPS receivers. This terrain model will be combined with subsurface geophysical data to form a comprehensive model of the subsurface.

  4. Finite element models of earthquake cycles in mature strike-slip fault zones

    NASA Astrophysics Data System (ADS)

    Lynch, John Charles

The research presented in this dissertation is on the subject of strike-slip earthquakes and the stresses that build and release in the Earth's crust during earthquake cycles. Numerical models of these cycles in a layered elastic/viscoelastic crust are produced using the finite element method. A fault that alternately sticks and slips poses a particularly challenging problem for numerical implementation, and a new contact element dubbed the "Velcro" element was developed to address this problem (Appendix A). Additionally, the finite element code used in this study was benchmarked against analytical solutions for some simplified problems (Chapter 2), and the resolving power was tested for the fault region of the models (Appendix B). With the modeling method thus developed, two main questions are posed. First, in Chapter 3, the effect of a finite-width shear zone is considered. By defining a viscoelastic shear zone beneath a periodically slipping fault, it is found that shear stress concentrates at the edges of the shear zone and thus causes the stress tensor to rotate into non-Andersonian orientations. Several methods are used to examine the stress patterns, including the plunge angles of the principal stresses and a new method that plots the stress tensor in a manner analogous to seismic focal mechanism diagrams. In Chapter 4, a simple San Andreas-like model is constructed, consisting of two great-earthquake-producing faults separated by a freely slipping shorter fault. The model inputs of lower crustal viscosity, fault separation distance, and relative breaking strengths are examined for their effect on fault communication. It is found that with a lower crustal viscosity of 10^18 Pa s (in the lower range of estimates for California), the two faults tend to synchronize their earthquake cycles, even in cases where the faults have asymmetric breaking strengths. 
These models imply that postseismic stress transfer over hundreds of kilometers may play a significant role in the variability of earthquake repeat times. Specifically, small perturbations in the model parameters can lead to results similar to such observed phenomena as earthquake clustering and disruptions to so-called "characteristic" earthquake cycles.
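The viscosity quoted above sets the characteristic timescale over which the lower crust relaxes. As a rough order-of-magnitude check, the Maxwell relaxation time can be computed from that viscosity and an assumed crustal shear modulus (the 30 GPa used here is a typical value, not one taken from the dissertation):

```python
# Maxwell relaxation time tau = eta / G for a viscoelastic lower crust.
# eta = 1e18 Pa s comes from the text; G = 30e9 Pa is an assumed typical
# crustal shear modulus.

def maxwell_relaxation_time(eta_pa_s, shear_modulus_pa):
    """Return the Maxwell relaxation time in seconds."""
    return eta_pa_s / shear_modulus_pa

SECONDS_PER_YEAR = 365.25 * 86400.0

tau = maxwell_relaxation_time(1e18, 30e9)
print(f"tau = {tau:.2e} s = {tau / SECONDS_PER_YEAR:.1f} yr")
```

With these values the relaxation time comes out on the order of one year, far shorter than typical earthquake repeat times, which is consistent with postseismic stress transfer acting well within each cycle.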

  5. Wayside Bearing Fault Diagnosis Based on a Data-Driven Doppler Effect Eliminator and Transient Model Analysis

    PubMed Central

    Liu, Fang; Shen, Changqing; He, Qingbo; Zhang, Ao; Liu, Yongbin; Kong, Fanrang

    2014-01-01

A fault diagnosis strategy based on the wayside acoustic monitoring technique is investigated for locomotive bearing fault diagnosis. Inspired by the transient modeling analysis method based on correlation filtering analysis, a so-called Parametric-Mother-Doppler-Wavelet (PMDW) is constructed with six parameters, including a center characteristic frequency and five kinematic model parameters. A Doppler effect eliminator containing a PMDW generator, a correlation filtering analysis module, and a signal resampler is developed to eliminate the Doppler effect embedded in the recorded acoustic signal of the bearing. Through the Doppler effect eliminator, the five kinematic model parameters can be identified from the signal itself. The signal resampler is then applied to eliminate the Doppler effect using the identified parameters. With its ability to detect early bearing faults, the transient model analysis method is employed to detect localized bearing faults after the embedded Doppler effect is eliminated. The effectiveness of the proposed fault diagnosis strategy is verified via simulation studies and applications to the diagnosis of locomotive roller bearing defects. PMID:24803197
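The resampling idea can be sketched generically. The snippet below is not the paper's PMDW/correlation-filtering implementation; it only illustrates how, once the kinematic parameters (source speed, closest distance, passing time) are known, a recorded wayside signal can be interpolated back onto a uniform emission-time grid to undo the Doppler distortion. All parameter values are illustrative:

```python
import numpy as np

# Generic sketch of Doppler-effect removal by resampling. NOT the paper's
# PMDW/correlation-filtering method; it only shows the resampling step once
# the kinematics are known. A source moves at constant speed v along a
# straight track past a microphone at closest distance r, with closest
# approach at time t_closest; c is the speed of sound.

def undoppler(signal, fs, v, r, t_closest, c=340.0):
    """Resample a recorded signal onto a uniform emission-time grid."""
    t_rec = np.arange(len(signal)) / fs            # reception-time axis
    t_emit = t_rec                                 # uniform emission grid
    dist = np.sqrt(r**2 + (v * (t_emit - t_closest))**2)
    t_arrival = t_emit + dist / c                  # when each emission is heard
    # read the recording at those arrival times (linear interpolation)
    return np.interp(t_arrival, t_rec, signal)

fs = 8000.0
t = np.arange(int(fs)) / fs                        # one second of "recording"
recorded = np.sin(2 * np.pi * 440.0 * t)           # stand-in for bearing noise
corrected = undoppler(recorded, fs, v=20.0, r=3.0, t_closest=0.5)
```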

  6. Fault management for data systems

    NASA Technical Reports Server (NTRS)

    Boyd, Mark A.; Iverson, David L.; Patterson-Hine, F. Ann

    1993-01-01

    Issues related to automating the process of fault management (fault diagnosis and response) for data management systems are considered. Substantial benefits are to be gained by successful automation of this process, particularly for large, complex systems. The use of graph-based models to develop a computer assisted fault management system is advocated. The general problem is described and the motivation behind choosing graph-based models over other approaches for developing fault diagnosis computer programs is outlined. Some existing work in the area of graph-based fault diagnosis is reviewed, and a new fault management method which was developed from existing methods is offered. Our method is applied to an automatic telescope system intended as a prototype for future lunar telescope programs. Finally, an application of our method to general data management systems is described.
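The abstract does not detail the graph-based method, but one common idea in graph-based diagnosis is reverse reachability on a failure-propagation graph: starting from an observed symptom, walk the edges backwards to enumerate candidate root causes. A minimal sketch, with a purely hypothetical propagation graph:

```python
from collections import deque

# Minimal sketch of graph-based fault diagnosis (illustrative only; the
# report's actual method is not reproduced here). Edges point from a fault
# to the symptoms it can cause; diagnosis walks the edges backwards from
# an observed symptom to enumerate candidate root causes.

# hypothetical failure-propagation graph: cause -> list of effects
PROPAGATION = {
    "power_supply_fault": ["bus_undervoltage"],
    "bus_undervoltage": ["telemetry_dropout", "drive_stall"],
    "software_hang": ["telemetry_dropout"],
}

def candidate_causes(symptom, graph):
    """Return every node from which `symptom` is reachable."""
    reverse = {}
    for cause, effects in graph.items():
        for effect in effects:
            reverse.setdefault(effect, []).append(cause)
    seen, queue = set(), deque([symptom])
    while queue:
        for cause in reverse.get(queue.popleft(), []):
            if cause not in seen:
                seen.add(cause)
                queue.append(cause)
    return seen

print(candidate_causes("telemetry_dropout", PROPAGATION))
```

Here a telemetry dropout implicates both the software hang and the entire power chain upstream of the undervoltage, which is the kind of candidate set a fault-management system would then rank or test.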

  7. Testing Pixel Translation Digital Elevation Models to Reconstruct Slip Histories: An Example from the Agua Blanca Fault, Baja California, Mexico

    NASA Astrophysics Data System (ADS)

    Wilson, J.; Wetmore, P. H.; Malservisi, R.; Ferwerda, B. P.; Teran, O.

    2012-12-01

We use recently collected slip vector and total offset data from the Agua Blanca fault (ABF) to constrain a pixel-translation digital elevation model (DEM) reconstruction of the slip history of this fault. This model was constructed using a Perl script that reads a DEM file (Easting, Northing, Elevation) and a configuration file with coordinates that define the boundary of each fault segment. A pixel translation vector is defined as a magnitude of lateral offset in an azimuthal direction. The program translates pixels north of the fault and prints their pre-faulting positions to a new DEM file that can be gridded and displayed. This analysis, in which multiple DEMs are created with different translation vectors, allows us to identify areas of transtension or transpression while seeing the topographic expression in those areas. The benefit of this technique, in contrast to a simple block model, is that the DEM gives us a valuable graphic which can be used to pose new research questions. We have found that many topographic features, e.g. valleys and ridges, correlate across the fault, which likely has implications for the age of the ABF and long-term landscape evolution rates, and potentially provides confirmation of total slip assessments. The ABF of northern Baja California, Mexico is an active, dextral strike-slip fault that transfers Pacific-North American plate boundary strain out of the Gulf of California and around the "Big Bend" of the San Andreas Fault. Total displacement on the ABF in the central and eastern parts of the fault is 10 +/- 2 km based on offset Early Cretaceous features such as terrane boundaries and intrusive bodies (plutons and dike swarms). Where the fault bifurcates to the west, the northern strand (northern Agua Blanca fault or NABF) is constrained to 7 +/- 1 km. 
We have not yet identified piercing points on the southern strand, the Santo Tomas fault (STF), but displacement is inferred to be ~4 km, assuming that the sum of slip on the NABF and STF is approximately equal to that to the east. The ABF has varying kinematics along strike due to changes in the trend of the fault with respect to the nearly east-trending displacement vector of the Ensenada Block north of the fault relative to a stable Baja Microplate to the south. These kinematics include nearly pure strike slip in the central portion of the ABF, where the fault trends nearly E-W, and minor components of normal dip-slip motion on the NABF and the eastern sections of the fault, where the trends become more northerly. A pixel translation vector parallel to the trend of the ABF in the central segment (290 deg, 10.5 km) produces kinematics consistent with those described above. The block between the NABF and STF has a pixel translation vector parallel to the STF (291 deg, 3.5 km). We find these vectors are consistent with the kinematic variability of the fault system and realign several major drainages and ridges across the fault. This suggests these features formed prior to faulting, and they yield preferred offset values of 10.5 km on the ABF, 7 km on the NABF, and 3.5 km on the STF. This model is consistent with the kinematic model proposed by Hamilton (1971), in which the ABF is a transform fault linking extensional regions of Valle San Felipe and the Continental Borderlands.
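The Perl script itself is not reproduced in the abstract; a minimal numpy sketch of the core pixel-translation step, assuming point rows of (easting, northing, elevation) and a caller-supplied "north of the fault" mask (the fault-trace test and sign convention here are illustrative assumptions), might look like:

```python
import numpy as np

# Sketch of the pixel-translation step (the original is a Perl script; this
# numpy version is an illustration). Rows are [easting, northing, elevation];
# the caller supplies a boolean mask for points north of the fault. Whether
# the vector restores or applies the slip is a sign-convention choice.

def translate_north_block(points, north_of_fault, azimuth_deg, magnitude_m):
    """Translate the points north of the fault by (azimuth, magnitude)."""
    az = np.radians(azimuth_deg)
    shift = magnitude_m * np.array([np.sin(az), np.cos(az), 0.0])
    moved = points.copy()
    moved[north_of_fault] += shift
    return moved

pts = np.array([[500000.0, 3500000.0, 120.0],     # south of the fault
                [500000.0, 3510000.0, 140.0]])    # north of the fault
mask = pts[:, 1] > 3505000.0                      # assumed fault-trace northing
pre = translate_north_block(pts, mask, azimuth_deg=290.0, magnitude_m=10500.0)
```

With the (290 deg, 10.5 km) vector from the abstract, the northern point moves ~9.87 km west and ~3.59 km north, while the southern point is untouched.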

  8. The hydraulic structure of the Gole Larghe Fault Zone (Italian Southern Alps) through the seismic cycle

    NASA Astrophysics Data System (ADS)

    Bistacchi, A.; Mittempergher, S.; Di Toro, G.; Smith, S. A. F.; Garofalo, P. S.

    2017-12-01

The 600 m-thick, strike-slip Gole Larghe Fault Zone (GLFZ) experienced several hundred seismic slip events at c. 8 km depth, well documented by numerous pseudotachylytes; it was then exhumed and is now exposed in beautiful and very continuous outcrops. The fault zone was also characterized by hydrous fluid flow during the seismic cycle, demonstrated by alteration halos and the precipitation of hydrothermal minerals in veins and cataclasites. We have characterized the GLFZ with > 2 km of scanlines and semi-automatic mapping of faults and fractures on several photogrammetric 3D Digital Outcrop Models (3D DOMs). This allowed us to obtain 3D Discrete Fracture Network (DFN) models, based on robust probability density functions for the parameters of fault and fracture sets, and to simulate the fault zone's hydraulic properties. In addition, the correlation between evidence of fluid flow and the fault/fracture network parameters has been studied with a geostatistical approach, allowing us to generate more realistic time-varying permeability models of the fault zone. Based on this dataset, we have developed a FEM hydraulic model of the GLFZ for a period of some tens of years, covering one seismic event and a postseismic period. The highest permeability is attained in the syn- to early post-seismic period, when fractures are (re)opened by off-fault deformation; permeability then decreases in the postseismic period due to fracture sealing. The flow model yields a flow pattern consistent with the observed alteration/mineralization pattern and a marked channelling of fluid flow in the inner part of the fault zone, due to permeability anisotropy related to the spatial arrangement of the different fracture sets. Among the possible seismological applications of our study, we will discuss the possibility of evaluating the coseismic fracture intensity due to off-fault damage, and the heterogeneity and evolution of mechanical parameters due to fluid-rock interaction.
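The abstract does not give the fitted probability density functions, but a common DFN ingredient, assumed here purely for illustration, is a truncated power law for fracture size, which can be sampled by inverse transform:

```python
import numpy as np

# Inverse-transform sampling of fracture radii from a truncated power law,
# p(r) ~ r**(-a) on [r_min, r_max]. A common DFN ingredient, used here as an
# assumed illustration; the abstract does not state the actual fitted PDFs.

def sample_powerlaw_radii(n, a, r_min, r_max, rng):
    """Draw n radii from a power law with exponent a, truncated to [r_min, r_max]."""
    u = rng.random(n)
    k = 1.0 - a
    return (r_min**k + u * (r_max**k - r_min**k)) ** (1.0 / k)

rng = np.random.default_rng(0)
radii = sample_powerlaw_radii(10_000, a=2.5, r_min=0.5, r_max=50.0, rng=rng)
print(radii.min(), radii.max(), radii.mean())
```

A DFN generator such as dfnWorks combines size samples like these with orientation and intensity statistics per fracture set; this sketch covers only the size draw.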

  9. Impact of different detachment topographies on pull-apart basin evolution - analog modelling and computer visualisation

    NASA Astrophysics Data System (ADS)

    Hoprich, M.; Decker, K.; Grasemann, B.; Sokoutis, D.; Willingshofer, E.

    2009-04-01

Previous analog modeling of pull-apart basins dealt with different sidestep geometries, the symmetry of and ratio between the velocities of the moving blocks, the ratio between ductile base and model thickness, the ratio between fault stepover and model thickness, and their influence on basin evolution. In all these models the pull-apart basin is deformed over a flat detachment. The Vienna basin, however, is considered a classical thin-skinned pull-apart basin with a rather peculiar basement structure. Deformation and basin evolution are believed to be limited to the brittle upper crust above the Alpine-Carpathian floor thrust. The latter is not a planar detachment surface, but has a ramp-shaped topography draping the underlying former passive continental margin. In order to estimate the effects of this special geometry, nine experiments were performed and the resulting structures were compared with the Vienna basin. The key parameters for the models (fault and basin geometry, detachment depth and topography) were inferred from a 3D GoCad model of the natural Vienna basin, which was compiled from seismic data, wells, and geological cross sections. The experiments were scaled 1:100,000 ("Ramberg scaling" for brittle rheology) and built of quartz sand (300 µm grain size). An average depth of 6 km (6 cm) was calculated for the basal detachment, distances between the bounding strike-slip faults of 40 km (40 cm) were used, and a finite length of the natural basin of 200 km was estimated (initial model length: 100 cm). 
The following parameters were changed through the experimental process: (1) syntectonic sedimentation; (2) the stepover angle between the bounding strike-slip faults and the basal velocity discontinuity; (3) movement of one or both fault blocks (producing an asymmetrical or symmetrical basin); (4) inclination of the basal detachment surface by 5°; (5) installation of 2- and 3-ramp systems at the detachment; (6) simulation of a ductile detachment through a 0.4 cm thick PDMS layer at the basin floor. The surface of the model was photographed after each deformation increment through the experiment. Pictures of serial cross sections, cut through the models in their final state every 4 cm, were also taken and interpreted. The formation of en-echelon normal faults with relay ramps is observed in all models. These faults are arranged at an acute angle to the basin borders, according to a Riedel geometry. In the case of an asymmetric basin they emerge within the non-moving fault block. Substantial differences between the models are the number, spacing, and angle of these Riedel faults, the length of the bounding strike-slip faults, and the cross-basin symmetry. A flat detachment produces straight fault traces, whereas inclined detachments (or inclined ramps) lead to "bending" of the normal faults, rollover, and growth-strata thickening towards the faults. The positions and sizes of depocenters also vary, with depocenters preferentially developing above ramp-flat transitions. Depocenter thicknesses increase with ramp heights. A similar relation apparently exists in the natural Vienna basin, which shows ramp-like structures in the detachment just underneath large faults like the Steinberg normal fault and the associated depocenters. The 3-ramp model also reveals segmentation of the basin above the lowermost ramp. The evolving structure is comparable to the Wiener Neustadt sub-basin in the southern part of the Vienna basin, which is underlain by a topographic high of the detachment. 
Cross sections through the ductile model show a strong disintegration into a horst-and-graben basin. The thin silicone putty base influences the overlying strata in such a way that the basin - unlike the "dry" sand models - becomes very flat and shallow. The top view shows an irregular basin shape and none of the rhombohedral geometry that characterises the Vienna basin. The ductile base also leads to a symmetrical distribution of deformation on both fault blocks, even though only one fault block is moved. The stepover angle, the influence of gravitation in a ramp or inclined system, and the strain accommodation by a viscous silicone layer can be summarized as the factors controlling the characteristics of the models.
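The 1:100,000 geometric scaling quoted above amounts to a one-line conversion between natural and model lengths, e.g. the 6 km detachment depth maps to 6 cm and the 40 km fault spacing to 40 cm:

```python
# Geometric scaling of the models: 1 cm in the model represents 1 km in
# nature (a length scale factor of 1:100,000).

SCALE = 1.0 / 100_000

def natural_to_model_cm(natural_m):
    """Convert a natural length in metres to a model length in centimetres."""
    return natural_m * SCALE * 100.0

print(natural_to_model_cm(6_000))    # detachment depth: 6 km -> 6 cm
print(natural_to_model_cm(40_000))   # fault spacing:   40 km -> 40 cm
```

Ramberg-style scaling also requires matching dimensionless ratios (density, cohesion, gravity), not just lengths; this sketch covers only the geometric part.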

  10. Using the 3D active fault model to estimate the surface deformation, a study on HsinChu area, Taiwan.

    NASA Astrophysics Data System (ADS)

    Lin, Y. K.; Ke, M. C.; Ke, S. S.

    2016-12-01

A fault is commonly considered active if it has moved one or more times in the last 10,000 years and is likely to produce another earthquake in the future. The relationship between fault reactivation and surface deformation has been a concern since the Chi-Chi earthquake (M=7.2) in 1999. Investigations of well-known disastrous earthquakes in recent years indicate that surface deformation is controlled by the 3D geometry of the fault. Because surface deformation can severely damage critical infrastructure, buildings, roads, and power, water, and gas lines, pre-disaster risk assessment with a 3D active fault model is important for reducing the economic losses, injuries, and deaths caused by large earthquakes. The approaches to building the 3D active fault model can be categorized as (1) field investigation, (2) profile digitization, and (3) 3D model construction. In this research, we first tracked the location of the fault scarp in the field, then combined balanced seismic profiles and historical earthquake data to build a subsurface fault-plane model using the SKUA-GOCAD program. Finally, we compared results from a trishear model (written by Richard W. Allmendinger, 2012) and the PFC-3D program (Itasca) to calculate the extent of the deformation area. From analysis of the surface deformation produced by the Hsin-Chu Fault, we concluded that the damage zone approaches 68 286 m, the magnitude is 6.43, and the offset is 0.6 m; on that basis we estimated the population casualties and building damage for an M=6.43 earthquake in the Hsin-Chu area, Taiwan. In the future, for accurate application to earthquake disaster prevention, we need to further consider groundwater effects and the soil-structure interaction induced by faulting.
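The trishear comparison above uses Allmendinger's (2012) code; as a generic illustration (not the authors' Hsin-Chu model), one simple member of the trishear velocity-field family, linear in the normalized coordinate and divergence-free, can be written down directly:

```python
import numpy as np

# One simple member of the trishear velocity-field family (linear in the
# normalized coordinate zeta, chosen so the field is divergence-free).
# Fault-fixed coordinates: tip at the origin, x along the fault, phi is the
# trishear half-apical angle. Generic illustration only; not the authors'
# Hsin-Chu model.

def trishear_velocity(x, y, slip_rate, phi_deg):
    """Velocity (vx, vy) at a point ahead of the fault tip (x > 0)."""
    m = np.tan(np.radians(phi_deg))
    zeta = y / (x * m)            # +1 at hanging-wall edge, -1 at footwall edge
    if zeta >= 1.0:               # hanging wall moves rigidly with the fault
        return slip_rate, 0.0
    if zeta <= -1.0:              # footwall is fixed
        return 0.0, 0.0
    vx = 0.5 * slip_rate * (zeta + 1.0)
    vy = 0.25 * slip_rate * m * (zeta**2 - 1.0)   # makes div(v) = 0
    return vx, vy

print(trishear_velocity(100.0, 0.0, 1.0, 30.0))   # mid-zone velocity
```

On the zone boundaries the field matches the rigid hanging wall (vx equal to the slip rate) and the fixed footwall (vx = 0), and the quadratic vy term conserves area inside the triangular zone.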

  11. Discretized Streams: A Fault-Tolerant Model for Scalable Stream Processing

    DTIC Science & Technology

    2012-12-14

Discretized Streams: A Fault-Tolerant Model for Scalable Stream Processing. Matei Zaharia, Tathagata Das, Haoyuan Li, Timothy Hunter, Scott Shenker, Ion Stoica. ... current programming models for distributed stream processing are relatively low-level, often leaving the user to worry about consistency of ...

  12. The role of thin, mechanical discontinuities on the propagation of reverse faults: insights from analogue models

    NASA Astrophysics Data System (ADS)

    Bonanno, Emanuele; Bonini, Lorenzo; Basili, Roberto; Toscani, Giovanni; Seno, Silvio

    2016-04-01

Fault-related folding kinematic models are widely used to explain how crustal shortening is accommodated. These models, however, include simplifications such as the assumption of a constant fault growth rate. This rate is sometimes not constant even in isotropic materials, and it is even more variable in naturally anisotropic geological systems, which means that these simplifications could lead to incorrect interpretations of reality. In this study, we use analogue models to evaluate how thin mechanical discontinuities, such as bedding surfaces or thin weak layers, influence the propagation of reverse faults and related folds. The experiments are performed with two different settings to simulate initially blind master faults dipping at 30° and 45°. The 30° dip represents one of the Andersonian conjugate faults, and a 45° dip is very common in the positive reactivation of normal faults. The experimental apparatus consists of a clay layer placed above two plates: one plate, the footwall, is fixed; the other, the hanging wall, is mobile. Motor-controlled sliding of the hanging wall plate along an inclined plane reproduces the reverse fault movement. We ran thirty-six experiments: eighteen with a dip of 30° and eighteen with a dip of 45°. For each dip-angle setting, we initially ran isotropic experiments that serve as a reference. Then we ran the other experiments with one or two discontinuities (horizontal precuts made in the clay layer). We monitored the experiments by collecting side photographs every 1.0 mm of displacement on the master fault. These images were analyzed with the PIVlab software, a tool based on the Digital Image Correlation method. With the "displacement field analysis" (one of the PIVlab tools) we evaluated the variation of the trishear zone shape and how the master-fault tip and newly formed faults propagate into the clay medium. 
With the "strain distribution analysis", we observed the amounts of on-fault and off-fault deformation with respect to the faulting pattern and its evolution. Secondly, using the MOVE software, we extracted the positions of fault tips and folds every 5 mm of displacement on the master fault. Analyzing these positions in all of the experiments, we found that the growth rate of the faults and the related fold shape vary depending on the number of discontinuities in the clay medium. Other results can be summarized as follows: 1) the fault growth rate is not constant, but varies especially while the new faults interact with the precuts; 2) the new faults tend to crosscut the discontinuities when the angle between them is approximately 90°; 3) the trishear zone changes its shape during the experiments, especially when the master fault interacts with the discontinuities.

  13. Transient cnoidal waves explain the formation and geometry of fault damage zones

    NASA Astrophysics Data System (ADS)

    Veveakis, Manolis; Schrank, Christoph

    2017-04-01

    The spatial footprint of a brittle fault is usually dominated by a wide area of deformation bands and fractures surrounding a narrow, highly deformed fault core. This diffuse damage zone relates to the deformation history of a fault, including its seismicity, and has a significant impact on flow and mechanical properties of faulted rock. Here, we propose a new mechanical model for damage-zone formation. It builds on a novel mathematical theory postulating fundamental material instabilities in solids with internal mass transfer associated with volumetric deformation due to elastoviscoplastic p-waves termed cnoidal waves. We show that transient cnoidal waves triggered by fault slip events can explain the characteristic distribution and extent of deformation bands and fractures within natural fault damage zones. Our model suggests that an overpressure wave propagating away from the slipping fault and the material properties of the host rock control damage-zone geometry. Hence, cnoidal-wave theory may open a new chapter for predicting seismicity, material and geometrical properties as well as the location of brittle faults.

  14. Palaeostress perturbations near the El Castillo de las Guardas fault (SW Iberian Massif)

    NASA Astrophysics Data System (ADS)

    García-Navarro, Encarnación; Fernández, Carlos

    2010-05-01

Use of stress inversion methods on faults measured at 33 sites in the northwestern part of the South Portuguese Zone (Variscan Iberian Massif), together with analysis of basic dyke attitudes in the same region, has revealed a prominent perturbation of the stress trajectories around some large, crustal-scale faults, like the El Castillo de las Guardas fault. The results are compared with the predictions of theoretical models of palaeostress deviations near master faults. According to this comparison, the El Castillo de las Guardas fault, an old structure that probably reversed its slip sense several times, can be considered a sinistral strike-slip fault during the Moscovian. These results also point out the main shortcomings that still hinder a rigorous quantitative use of theoretical models of stress perturbations around major faults: the spatial variation in the parameters governing the brittle behaviour of the continental crust, and the possibility of oblique slip along outcrop-scale faults in regions subjected to general, non-plane strain.

  15. Numerical modeling of fluid flow in a fault zone: a case of study from Majella Mountain (Italy).

    NASA Astrophysics Data System (ADS)

    Romano, Valentina; Battaglia, Maurizio; Bigi, Sabina; De'Haven Hyman, Jeffrey; Valocchi, Albert J.

    2017-04-01

The study of fluid flow in fractured rocks plays a key role in reservoir management, including CO2 sequestration and waste isolation. We present a numerical model of fluid flow in a fault zone, based on field data acquired on Majella Mountain in the Central Apennines (Italy). This fault zone is considered a good analogue because of the massive fluid migration it records in the form of tar. Faults are mechanical features that cause permeability heterogeneities in the upper crust, so they strongly influence fluid flow. The distribution of the main components (core, damage zone) can lead a fault zone to act as a conduit, a barrier, or a combined conduit-barrier system. We integrated existing information and our own structural surveys of the area to better identify the major fault features (e.g., type of fractures, statistical properties, geometrical and petrophysical characteristics). In our model the damage zones of the fault are described as a discretely fractured medium, while the core of the fault is described as a porous medium. Our model utilizes the dfnWorks code, a parallelized computational suite developed at Los Alamos National Laboratory (LANL), which generates a three-dimensional Discrete Fracture Network (DFN) of the damage zones of the fault and characterizes its hydraulic parameters. The challenge of the study is the coupling between the discrete domain of the damage zones and the continuum domain of the core. The field investigations and the basic computational workflow are described, along with preliminary results of a fluid flow simulation at the scale of the fault.

  16. Postseismic deformation associated with the 2008 Mw 7.9 Wenchuan earthquake, China: Constraining fault geometry and investigating a detailed spatial distribution of afterslip

    NASA Astrophysics Data System (ADS)

    Jiang, Zhongshan; Yuan, Linguo; Huang, Dingfa; Yang, Zhongrong; Chen, Weifeng

    2017-12-01

We reconstruct two types of fault models associated with the 2008 Mw 7.9 Wenchuan earthquake: one is a listric fault connecting to a sub-horizontal detachment below ∼20 km depth (fault model one, FM1), and the other is a group of more steeply dipping planes extending down to the Moho at ∼60 km depth (fault model two, FM2). Through comparative analysis of the coseismic inversion results, we confirm that the coseismic models are insensitive to the choice between these two fault geometries. We therefore turn our attention to the postseismic deformation obtained from GPS observations, which can not only impose effective constraints on the fault geometry but also, more importantly, provide valuable insights into the postseismic afterslip. FM1 performs outstandingly in the near, mid, and far field, whether or not the viscoelastic influence is considered. FM2 performs more poorly, especially in the data-model consistency in the near field, which mainly results from the trade-off of the sharp contrast of the postseismic deformation on the two sides of the Longmen Shan fault zone. Accordingly, we propose a listric fault connecting to a sub-horizontal detachment as the optimal fault geometry for the Wenchuan earthquake. Based on the inferred optimal fault geometry, we analyse two characteristic postseismic deformation phenomena that differ from the coseismic patterns: (1) the opposite senses of postseismic deformation between the Beichuan fault (BCF) and the Pengguan fault (PGF), and (2) the slightly left-lateral strike-slip motions in the southwestern Longmen Shan range. The former is attributed to the local left-lateral strike-slip and normal dip-slip components on the shallow BCF. The latter places constraints on the afterslip on the southwestern BCF and reproduces three afterslip concentration areas with slightly left-lateral strike-slip motions. 
The decrease in Coulomb failure stress (CFS) of ∼0.322 kPa at the hypocentre of the Lushan earthquake, derived from the afterslip with the viscoelastic influence removed, indicates that the postseismic left-lateral strike-slip and normal dip-slip motions may have a mitigating effect on fault loading in the southwestern Longmen Shan range. Nevertheless, this decrease is much smaller than the total CFS increase (∼8.368 kPa) derived from the coseismic and viscoelastic deformations.
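The CFS bookkeeping above follows the standard Coulomb stress-transfer relation, ΔCFS = Δτ + μ′Δσn. A minimal sketch (the effective friction coefficient and the input stress changes below are illustrative assumptions, not values from the study):

```python
# Coulomb failure stress (CFS) change resolved on a receiver fault:
#   dCFS = d_tau + mu_eff * d_sigma_n
# d_tau: shear-stress change in the slip direction (kPa)
# d_sigma_n: normal-stress change, unclamping positive (kPa)
# mu_eff: effective friction coefficient (assumed value)

def delta_cfs(d_tau_kpa: float, d_sigma_n_kpa: float, mu_eff: float = 0.4) -> float:
    """CFS change in kPa; positive values load the receiver fault toward failure."""
    return d_tau_kpa + mu_eff * d_sigma_n_kpa
```

A positive result moves the receiver fault toward failure, a negative one (as for the afterslip contribution at the Lushan hypocentre) unloads it.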

  17. Fault-tolerant continuous flow systems modelling

    NASA Astrophysics Data System (ADS)

    Tolbi, B.; Tebbikh, H.; Alla, H.

    2017-01-01

This paper presents a structural modelling of faults with hybrid Petri nets (HPNs) for the analysis of a particular class of hybrid dynamic systems, continuous flow systems. HPNs are first used for the behavioural description of continuous flow systems without faults. Fault modelling is then considered using a structural method, without rebuilding the model from scratch. A hierarchical translation method is given that derives a hybrid automaton (HA) from an elementary HPN. This translation preserves the behavioural semantics (timed bisimilarity) and reflects the temporal behaviour by giving each model a semantics in terms of timed transition systems. The approach thus combines the modelling power of HPNs with the analysis capabilities of HA. A simple example illustrates the ideas.

  18. Study on Practical Application of Turboprop Engine Condition Monitoring and Fault Diagnostic System Using Fuzzy-Neuro Algorithms

    NASA Astrophysics Data System (ADS)

    Kong, Changduk; Lim, Semyeong; Kim, Keunwoo

    2013-03-01

Neural networks are widely used in engine fault diagnostic systems because of their good learning performance, but they suffer from limited accuracy and the long learning time needed to build a training database. This work inversely builds a base performance model of a turboprop engine intended for a high-altitude UAV from measured performance data, and proposes a fault diagnostic system combining the base performance model with artificial-intelligence methods, namely fuzzy logic and neural networks. Each real engine's performance model, named the base performance model because it can simulate a new engine's performance, is built inversely from that engine's performance test data; condition monitoring of each engine can therefore be carried out more precisely by comparison with measured performance data. The proposed diagnostic system first identifies the faulted components using fuzzy logic, and then quantifies the faults of the identified components using neural networks trained on a fault-learning database generated from the base performance model. Feed-forward back-propagation (FFBP) is used to learn the measured performance data of the faulted components. For ease of use, the diagnostic program is implemented as a MATLAB GUI.

  19. Fault model of the 2014 Cephalonia seismic sequence - Evidence of spatiotemporal fault segmentation along the NW edge of Aegean Arc

    NASA Astrophysics Data System (ADS)

    Saltogianni, Vasso; Moschas, Fanis; Stiros, Stathis

    2017-04-01

Finite fault models (FFM) are presented for the two main shocks of the 2014 Cephalonia (Ionian Sea, Greece) seismic sequence (M ∼6.0), which produced extreme peak ground accelerations (∼0.7g) at the west edge of the Aegean Arc, an area where the poor coverage by seismological and GPS/InSAR data makes FFM a real challenge. Modeling was based on co-seismic GPS data and on the recently introduced TOPological INVersion algorithm, a novel uniform grid-search technique in n-dimensional spaces based on the concept of stochastic variables, which can identify multiple unconstrained ("free") solutions in a specified search space. The derived FFMs for the 2014 earthquakes correspond to an essentially strike-slip fault and to part of a shallow thrust, the surface projections of which both run roughly along the west coast of Cephalonia. Both faults correlate with pre-existing faults. The 2014 faults, in combination with the faults of the 2003 and 2015 Leucas earthquakes farther NE, form a string of oblique-slip, partly overlapping fault segments with variable geometric and kinematic characteristics along the NW edge of the Aegean Arc. This composite fault, usually regarded as the Cephalonia Transform Fault, accommodates shear along this part of the Arc. Because of the highly fragmented crust, dominated by major thrusts in this area, fault activity is associated with ∼20 km long segments and magnitude 6.0-6.5 earthquakes recurring in intervals of a few seconds to 10 years.
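The idea of scanning a uniform grid and retaining every acceptable node, rather than converging on a single optimum, can be sketched as follows. This is a generic illustration in the spirit of the abstract, not the TOPological INVersion algorithm itself; the forward model and tolerance below are hypothetical:

```python
import itertools

# Uniform grid search that keeps *every* parameter combination whose
# misfit to the observations is within tolerance, so multiple solution
# regions can be identified instead of a single best-fit point.

def grid_search(predict, observed, axes, tol):
    """predict: forward model mapping a parameter tuple to predictions;
    axes: one sequence of grid values per parameter; tol: misfit bound."""
    solutions = []
    for params in itertools.product(*axes):
        misfit = sum((p - o) ** 2 for p, o in zip(predict(params), observed))
        if misfit <= tol:
            solutions.append(params)
    return solutions
```

Because every grid node is tested independently, the search space can be explored exhaustively and disconnected families of acceptable models fall out naturally.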

  20. Modelling Fault Zone Evolution: Implications for fluid flow.

    NASA Astrophysics Data System (ADS)

    Moir, H.; Lunn, R. J.; Shipton, Z. K.

    2009-04-01

Flow simulation models are of major interest to many industries, including hydrocarbon production, nuclear waste disposal, carbon dioxide sequestration, and mining. One of the major uncertainties in these models lies in predicting the permeability of faults, principally in the detailed structure of the fault zone. Studying the detailed structure of a fault zone is difficult because sub-surface faults are inaccessible and highly complex: fault zones show a high degree of spatial and temporal heterogeneity, i.e. the properties of a fault change as you move along it, and they also change with time. It is well understood that faults influence fluid-flow characteristics. They may act as a conduit, as a barrier, or even as both, blocking flow across the fault while promoting flow along it. Controls on fault hydraulic properties include cementation, stress-field orientation, fault zone components, and fault zone geometry. Within brittle rocks, such as granite, fracture networks are limited but provide the dominant pathway for flow in this rock type. Research at the EU's Soultz-sous-Forêts Hot Dry Rock test site [Evans et al., 2005] showed that 95% of flow into the borehole was associated with a single fault zone at 3490 m depth, and that 10 open fractures account for the majority of flow within the zone. These data underline the critical role of faults in deep flow systems and the importance of achieving a predictive understanding of fault hydraulic properties. To improve estimates of fault zone permeability, it is important to understand the underlying hydro-mechanical processes of fault zone formation. In this research, we explore the spatial and temporal evolution of fault zones in brittle rock through the development and application of a 2D hydro-mechanical finite element model, MOPEDZ.
The authors have previously presented numerical simulations of the development of fault linkage structures from two or three pre-existing joints, the results of which compare well with features observed in mapped exposures. For these simple simulations with a small number of pre-existing joints the fault zone evolves in a predictable way; fault linkage is governed by three key factors: the ratio of σ1 (maximum compressive stress) to σ3 (minimum compressive stress), the original geometry of the pre-existing structures (contractional vs. dilational geometries), and the orientation of the principal stress direction (σ1) relative to the pre-existing structures. In this paper we present numerical simulations of the temporal and spatial evolution of fault linkage structures from many pre-existing joints. The initial locations, sizes, and orientations of these joints are based on field observations of cooling joints in granite from the Sierra Nevada. We show that the constantly evolving geometry and local stress-field perturbations contribute significantly to fault zone evolution. The locations and orientations of linkage structures previously predicted by the simple simulations are consistent with the predicted geometries in the more complex fault zones; however, the exact location at which an individual structure forms is not easily predicted. Markedly different fault zone geometries are predicted when the pre-existing joints are rotated with respect to the maximum compressive stress. In particular, fault surfaces range from evolving smooth linear structures to producing complex 'stepped' fault zone geometries. These geometries have a significant effect on simulations of along- and across-fault flow.

  1. Model-Based Fault Diagnosis: Performing Root Cause and Impact Analyses in Real Time

    NASA Technical Reports Server (NTRS)

    Figueroa, Jorge F.; Walker, Mark G.; Kapadia, Ravi; Morris, Jonathan

    2012-01-01

Generic, object-oriented fault models, built according to causal-directed graph theory, have been integrated into an overall software architecture dedicated to monitoring and predicting the health of mission-critical systems. Processing over the generic fault models is triggered by event detection logic that is defined according to the specific functional requirements of the system and its components. Once triggered, the fault models provide an automated way for performing both upstream root cause analysis (RCA), and for predicting downstream effects or impact analysis. The methodology has been applied to integrated system health management (ISHM) implementations at NASA SSC's Rocket Engine Test Stands (RETS).
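The upstream/downstream duality over a causal directed graph can be illustrated with a minimal sketch. This is not the NASA implementation; the graph, node names, and traversal below are purely hypothetical:

```python
from collections import deque

# Causal directed graph: edges point from cause to effect.
# Root-cause analysis walks upstream (against the edges) from a
# triggered event; impact analysis walks downstream (with the edges).
CAUSES = {  # hypothetical example: node -> nodes it directly affects
    "valve_stuck": ["low_flow"],
    "pump_fail": ["low_flow"],
    "low_flow": ["overheat"],
    "overheat": ["engine_shutdown"],
}

def _walk(start, edges):
    """Breadth-first reachability from start (start itself excluded)."""
    seen, queue = set(), deque([start])
    while queue:
        node = queue.popleft()
        for nxt in edges.get(node, []):
            if nxt not in seen:
                seen.add(nxt)
                queue.append(nxt)
    return seen

def impact(event):
    """Downstream effects of an event (impact analysis)."""
    return _walk(event, CAUSES)

def root_causes(event):
    """Upstream candidate causes of an event (RCA), via the reversed graph."""
    reverse = {}
    for cause, effects in CAUSES.items():
        for e in effects:
            reverse.setdefault(e, []).append(cause)
    return _walk(event, reverse)
```

The same graph thus answers both questions: which failures could have produced the observed event, and which components the event may affect next.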

  2. Neotectonics of Asia: Thin-shell finite-element models with faults

    NASA Technical Reports Server (NTRS)

    Kong, Xianghong; Bird, Peter

    1994-01-01

As India pushed into and beneath the south margin of Asia in Cenozoic time, it added a great volume of crust, which may have been (1) emplaced locally beneath Tibet, (2) distributed as regional crustal thickening of Asia, (3) converted to mantle eclogite by high-pressure metamorphism, or (4) extruded eastward to increase the area of Asia. The amount of eastward extrusion is especially controversial: plane-stress computer models of finite strain in a continuum lithosphere show minimal escape, while laboratory and theoretical plane-strain models of finite strain in a faulted lithosphere show escape as the dominant mode. We suggest computing the present (or neo-) tectonics by use of the known fault network and available data on fault activity, geodesy, and stress to select the best model. We apply a new thin-shell method which can represent a faulted lithosphere of realistic rheology on a sphere, and which provides predictions of present velocities, fault slip rates, and stresses for various trial rheologies and boundary conditions. To minimize artificial boundaries, the models include all of Asia east of 40 deg E and span 100 deg on the globe. The primary unknowns are the friction coefficient of faults within Asia and the amounts of shear traction applied to Asia in the Himalayan and oceanic subduction zones at its margins. Data on Quaternary fault activity prove to be most useful in rating the models. Best results are obtained with a very low fault friction of 0.085. This major heterogeneity shows that unfaulted continuum models cannot be expected to give accurate simulations of the orogeny. But even with such weak faults, only a fraction of the internal deformation is expressed as fault slip; this means that rigid microplate models cannot represent the kinematics either. A universal feature of the better models is that eastern China and southeast Asia flow rapidly eastward with respect to Siberia.
The rate of escape is very sensitive to the level of shear traction in the Pacific subduction zones, which is below 6 MPa. Because this flow occurs across a wide range of latitudes, the net eastward escape is greater than the rate of crustal addition in the Himalaya. The crustal budget is balanced by extension and thinning, primarily within the Tibetan plateau and the Baikal rift. The low level of deviatoric stress in the best models suggests that topographic stress plays a major role in the orogeny; thus we should expect that different topography in the past may have been linked with fundamentally different modes of continental collision.

  3. Modeling River Incision Across Active Normal Faults Using the Channel-Hillslope Integrated Landscape Development Model (CHILD): the case of the Central Apennines (Italy)

    NASA Astrophysics Data System (ADS)

    Attal, M.; Tucker, G.; Whittaker, A.; Cowie, P.; Roberts, G.

    2005-12-01

River systems constitute some of the most efficient agents shaping terrestrial landscapes. Fluvial incision rates govern landscape evolution but, owing to the variety of processes involved and the difficulty of quantifying them in the field, there is no "universal theory" describing the way rivers incise into bedrock. The last decades have seen the birth of numerous fluvial incision laws associated with models that assign different roles to hydrodynamic variables and to sediments. In order to discriminate between models and constrain their parameters, the transient response of natural river systems to a disturbance (tectonic or climatic) can be used: the different models predict different kinds of transient response, whereas most models predict a similar power-law relationship between slope and drainage area at equilibrium. To this end, a coupled field-modeling study is in progress. The field area is the Central Apennines, which are subject to active faulting associated with a regional extensional regime. Fault initiation occurred 3 My ago, associated with throw rates of 0.3 +/- 0.2 mm/yr. Due to fault interaction and linkage, the throw rate on the faults located near the center of the fault system increased dramatically 0.7 My ago (up to 2 mm/yr), whereas slip rates on distal faults either decayed or remained approximately constant. The present study uses the landscape evolution model CHILD to examine the behavior of rivers draining across these active faults. Distal and central faults are considered in order to track the effects of the fault acceleration on the development of the fluvial network. River characteristics have been measured in the field (e.g. channel width, slope, sediment grain size) and extracted from a 20 m DEM (e.g. channel profile, drainage area). We use CHILD to test the ability of alternative incision laws to reproduce observed topography under known tectonic forcing.
For each of the fluvial incision models, a Monte Carlo simulation has been performed, allowing exploration of a wide range of values for the parameters describing tectonics, climate, sediment characteristics, and channel geometry. Observed profiles are consistent with a dominantly wave-like, as opposed to diffusive, transient response to accelerated fault motion. The ability of the different models to reproduce more or less accurately the catchment characteristics, in particular the specific profiles exhibited by the rivers, is discussed in light of our first results.

  4. Distributed Seismic Moment Fault Model, Spectral Characteristics and Radiation Patterns

    NASA Astrophysics Data System (ADS)

    Shani-Kadmiel, Shahar; Tsesarsky, Michael; Gvirtzman, Zohar

    2014-05-01

    We implement a Distributed Seismic Moment (DSM) fault model, a physics-based representation of an earthquake source based on a skewed-Gaussian slip distribution over an elliptical rupture patch, for the purpose of forward modeling of seismic-wave propagation in 3-D heterogeneous medium. The elliptical rupture patch is described by 13 parameters: location (3), dimensions of the patch (2), patch orientation (1), focal mechanism (3), nucleation point (2), peak slip (1), rupture velocity (1). A node based second order finite difference approach is used to solve the seismic-wave equations in displacement formulation (WPP, Nilsson et al., 2007). Results of our DSM fault model are compared with three commonly used fault models: Point Source Model (PSM), Haskell's fault Model (HM), and HM with Radial (HMR) rupture propagation. Spectral features of the waveforms and radiation patterns from these four models are investigated. The DSM fault model best incorporates the simplicity and symmetry of the PSM with the directivity effects of the HMR while satisfying the physical requirements, i.e., smooth transition from peak slip at the nucleation point to zero at the rupture patch border. The implementation of the DSM in seismic-wave propagation forward models comes at negligible computational cost. Reference: Nilsson, S., Petersson, N. A., Sjogreen, B., and Kreiss, H.-O. (2007). Stable Difference Approximations for the Elastic Wave Equation in Second Order Formulation. SIAM Journal on Numerical Analysis, 45(5), 1902-1936.
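The key physical requirement stated above, a smooth transition from peak slip at the nucleation point to zero at the rupture-patch border, can be sketched with a simple symmetric taper over an elliptical patch. This is an illustrative stand-in for the paper's skewed-Gaussian distribution, not its actual parameterization; the semi-axes and peak slip are assumed values:

```python
import math

# Smooth slip taper over an elliptical rupture patch: slip is maximal
# at the patch center and falls smoothly to exactly zero at the border.

def slip(x, y, a=10.0, b=5.0, peak=2.0):
    """Slip (m) at patch coordinates (x, y); a, b are semi-axes in km."""
    r2 = (x / a) ** 2 + (y / b) ** 2   # normalized elliptical radius squared
    if r2 >= 1.0:
        return 0.0                     # outside (or on) the rupture patch border
    return peak * math.exp(-r2 / (1.0 - r2))  # C-infinity bump, zero at the edge
```

The bump-function form guarantees the slip and all its derivatives vanish at the patch edge, avoiding the abrupt slip cut-off of a Haskell-type rectangular source.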

  5. Sliding Mode Fault Tolerant Control with Adaptive Diagnosis for Aircraft Engines

    NASA Astrophysics Data System (ADS)

    Xiao, Lingfei; Du, Yanbin; Hu, Jixiang; Jiang, Bin

    2018-03-01

In this paper, a novel sliding mode fault tolerant control method is presented for aircraft engine systems with uncertainties and disturbances, on the basis of an adaptive diagnostic observer. Taking both sensor faults and actuator faults into account, a general model of aircraft engine control systems subject to uncertainties and disturbances is considered. The corresponding augmented dynamic model is then established to facilitate fault diagnosis and fault-tolerant controller design. Next, a suitable detection observer is designed to detect the faults effectively. By creating an adaptive diagnostic observer and applying a sliding mode strategy, the sliding mode fault tolerant controller is constructed. Robust stabilization is discussed, and the closed-loop system is shown to be robustly stabilized. It is also proven that the adaptive diagnostic observer output errors and the fault estimates converge exponentially to a set, with a convergence rate greater than a value that can be adjusted by choosing the design parameters properly. Simulation on a twin-shaft aircraft engine verifies the applicability of the proposed fault tolerant control method.

  6. The role of fault surface geometry in the evolution of the fault deformation zone: comparing modeling with field example from the Vignanotica normal fault (Gargano, Southern Italy).

    NASA Astrophysics Data System (ADS)

    Maggi, Matteo; Cianfarra, Paola; Salvini, Francesco

    2013-04-01

Faults have a (brittle) deformation zone that can be described as two distinctive zones: an internal fault core (FC) and an external fault damage zone (FDZ). The FC is characterized by grinding processes that comminute the rock grains to a final grain-size distribution in which smaller grains prevail over larger ones, represented by high fractal dimensions (up to 3.4). The FDZ, on the other hand, is characterized by a network of fracture sets with characteristic attitudes (i.e. Riedel cleavages). This deformation pattern has important consequences for rock permeability: the FC often represents a hydraulic barrier, while the FDZ, with its fracture connectivity, represents a zone of higher permeability. Observation of faults reveals that the dimensions and characteristics of the FC and FDZ vary along them, both in intensity and in extent. One of the controlling factors in FC and FDZ development is the fault plane geometry. Through changes in its attitude, the fault plane geometry locally alters the stress component produced by the fault kinematics, and its combination with the bulk boundary conditions (regional stress field, fluid pressure, rock rheology) is responsible for the development of zones of higher and lower fracture intensity of variable extent along the fault plane. Furthermore, displacement along faults produces a cumulative deformation pattern that varies through time. Modeling the fault evolution through time (4D modeling) is therefore required to fully describe the fracturing and hence the permeability. In this presentation we show a methodology developed to predict the distribution of fracture intensity by integrating seismic data and numerical modeling. The fault geometry is carefully reconstructed by interpolating stick lines from interpreted seismic sections converted to depth. The modeling is based on a mixed numerical/analytical method: the fault surface is discretized into cells with their geometric and rheological characteristics.
For each cell, the acting stress and strength are computed by analytical laws (Coulomb failure). The total brittle deformation for each cell is then computed by accumulating the brittle failure values along the path of each cell belonging to one side as it moves over the facing one. The brittle failure value is provided by the DF function, the difference between the computed shear stress and the strength of the cell at each step along its path, computed using the in-house-developed Frap software. The widths of the FC and the FDZ are computed as a function of the DF distribution and the displacement around the fault. This methodology has been successfully applied to model the brittle deformation pattern of the Vignanotica normal fault (Gargano, Southern Italy), where fracture intensity is expressed by the dimensionless H/S ratio, the ratio between the dimension and the spacing of homologous fracture sets (i.e., groups of parallel fractures that can be ascribed to the same event/stage/stress field).
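The per-cell DF evaluation described above reduces to a Coulomb failure criterion. A minimal sketch (not the Frap implementation; the cohesion and friction values are assumed):

```python
# DF function for one cell under a simple Coulomb criterion:
# a cell fails at a given step when its resolved shear stress exceeds
# its Coulomb strength (cohesion + friction * normal stress).

def df(shear, sigma_n, cohesion=10.0, mu=0.6):
    """DF = computed shear stress minus Coulomb strength (all in MPa).
    Positive DF means the cell fails at this step of its path."""
    strength = cohesion + mu * sigma_n
    return shear - strength
```

Summing positive DF values along a cell's path over the facing surface gives the cumulative brittle deformation from which the FC and FDZ widths are estimated.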

  7. Application of an Integrated Assessment Model to the Kevin Dome site, Montana

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Nguyen, Minh; Zhang, Ye; Carey, James William

The objectives of the Integrated Assessment Model are to enable the Fault Swarm algorithm in the National Risk Assessment Partnership, ensure faults are working in the NRAP-IAM tool, calculate hypothetical fault leakage in NRAP-IAM, and compare leakage rates to Eclipse simulations.

  8. Constraints on the stress state of the San Andreas Fault with analysis based on core and cuttings from San Andreas Fault Observatory at Depth (SAFOD) drilling phases 1 and 2

    USGS Publications Warehouse

    Tembe, S.; Lockner, D.; Wong, T.-F.

    2009-01-01

Analysis of field data has led different investigators to conclude that the San Andreas Fault (SAF) has either anomalously low frictional sliding strength or strength consistent with standard laboratory friction values (μ ≈ 0.6). Arguments for the apparent weakness of the SAF generally hinge on conceptual models involving intrinsically weak gouge or elevated pore pressure within the fault zone. Some models assert that weak gouge and/or high pore pressure exist under static conditions, while others consider strength loss or fluid-pressure increase due to rapid coseismic fault slip. The present paper is composed of three parts. First, we develop generalized equations, based on and consistent with the Rice (1992) fault zone model, to relate stress orientation and magnitude to depth-dependent coefficient of friction and pore pressure. Second, we present temperature- and pressure-dependent friction measurements on wet illite-rich fault gouge extracted from San Andreas Fault Observatory at Depth (SAFOD) phase 1 core samples and on weak minerals associated with the San Andreas Fault. Third, we reevaluate the state of stress on the San Andreas Fault in light of new constraints imposed by SAFOD borehole data. Pure talc (μ ≈ 0.1) had the lowest strength considered and was sufficiently weak to satisfy weak-fault heat flow and stress orientation constraints with hydrostatic pore pressure. Other fault gouges showed a systematic increase in strength with increasing temperature and pressure. In this case, heat flow and stress orientation constraints would require elevated pore pressure and, in some cases, fault zone pore pressure in excess of the vertical stress. Copyright 2009 by the American Geophysical Union.
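The weak-gouge-versus-high-pore-pressure trade-off rests on the effective-stress form of Coulomb friction, τ = μ(σn − p). A minimal sketch (the stress and pressure values are illustrative assumptions, not SAFOD measurements):

```python
# Effective-stress shear strength of a fault: tau = mu * (sigma_n - p).
# A given low strength can be reached either with intrinsically weak
# gouge (small mu) or with strong gouge at elevated pore pressure p.

def shear_strength(mu, sigma_n_mpa, pore_pressure_mpa):
    """Frictional shear strength (MPa) under effective normal stress."""
    return mu * (sigma_n_mpa - pore_pressure_mpa)

# Hypothetical example: talc-like gouge at modest pore pressure and
# Byerlee-strength gouge at near-lithostatic pore pressure are equally weak.
weak_gouge = shear_strength(0.1, 100.0, 40.0)
strong_gouge = shear_strength(0.6, 100.0, 90.0)
```

Both routes yield the same low shear strength, which is why field stress and heat-flow data alone cannot separate the two hypotheses without the kind of direct gouge measurements reported here.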

  9. Effective regurgitant orifice area by the color Doppler flow convergence method for evaluating the severity of chronic aortic regurgitation. An animal study.

    PubMed

    Shiota, T; Jones, M; Yamada, I; Heinrich, R S; Ishii, M; Sinclair, B; Holcomb, S; Yoganathan, A P; Sahn, D J

    1996-02-01

    The aim of the present study was to evaluate dynamic changes in aortic regurgitant (AR) orifice area with the use of calibrated electromagnetic (EM) flowmeters and to validate a color Doppler flow convergence (FC) method for evaluating effective AR orifice area and regurgitant volume. In 6 sheep, 8 to 20 weeks after surgically induced AR, 22 hemodynamically different states were studied. Instantaneous regurgitant flow rates were obtained by aortic and pulmonary EM flowmeters balanced against each other. Instantaneous AR orifice areas were determined by dividing these actual AR flow rates by the corresponding continuous wave velocities (over 25 to 40 points during each diastole) matched for each steady state. Echo studies were performed to obtain maximal aliasing distances of the FC in a low range (0.20 to 0.32 m/s) and a high range (0.70 to 0.89 m/s) of aliasing velocities; the corresponding maximal AR flow rates were calculated using the hemispheric flow convergence assumption for the FC isovelocity surface. AR orifice areas were derived by dividing the maximal flow rates by the maximal continuous wave Doppler velocities. AR orifice sizes obtained with the use of EM flowmeters showed little change during diastole. Maximal and time-averaged AR orifice areas during diastole obtained by EM flowmeters ranged from 0.06 to 0.44 cm2 (mean, 0.24 +/- 0.11 cm2) and from 0.05 to 0.43 cm2 (mean, 0.21 +/- 0.06 cm2), respectively. Maximal AR orifice areas by FC using low aliasing velocities overestimated reference EM orifice areas; however, at high AV, FC predicted the reference areas more reliably (0.25 +/- 0.16 cm2, r = .82, difference = 0.04 +/- 0.07 cm2). The product of the maximal orifice area obtained by the FC method using high AV and the velocity time integral of the regurgitant orifice velocity showed good agreement with regurgitant volumes per beat (r = .81, difference = 0.9 +/- 7.9 mL/beat). 
This study, using strictly quantified AR volume, demonstrated little change in AR orifice size during diastole. When high aliasing velocities are chosen, the FC method can be useful for determining effective AR orifice size and regurgitant volume.
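The flow-convergence arithmetic used above follows the standard hemispheric PISA assumption: flow rate through an isovelocity shell of radius r at aliasing velocity Va is Q = 2πr²Va, and the effective orifice area is Q divided by the peak continuous-wave velocity. A minimal sketch (the input values in the test are hypothetical, not data from the sheep experiments):

```python
import math

# Hemispheric flow-convergence (PISA) method:
#   Q = 2 * pi * r^2 * Va        (flow rate through the hemispheric shell)
#   EROA = Q / V_cw              (effective regurgitant orifice area)

def regurgitant_flow_rate(r_cm, va_cm_s):
    """Flow rate (cm^3/s) through a hemispheric isovelocity shell of
    radius r_cm (the aliasing distance) at aliasing velocity va_cm_s."""
    return 2.0 * math.pi * r_cm ** 2 * va_cm_s

def effective_orifice_area(r_cm, va_cm_s, v_cw_cm_s):
    """Effective regurgitant orifice area (cm^2), dividing the maximal
    flow rate by the peak continuous-wave Doppler velocity."""
    return regurgitant_flow_rate(r_cm, va_cm_s) / v_cw_cm_s
```

The study's finding that high aliasing velocities work better corresponds to smaller shells, where the hemispheric shape assumption holds more closely near the orifice.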

  10. The seismogenic Gole Larghe Fault Zone (Italian Southern Alps): quantitative 3D characterization of the fault/fracture network, mapping of evidences of fluid-rock interaction, and modelling of the hydraulic structure through the seismic cycle

    NASA Astrophysics Data System (ADS)

    Bistacchi, A.; Mittempergher, S.; Di Toro, G.; Smith, S. A. F.; Garofalo, P. S.

    2016-12-01

    The Gole Larghe Fault Zone (GLFZ) was exhumed from 8 km depth, where it was characterized by seismic activity (pseudotachylytes) and hydrous fluid flow (alteration halos and precipitation of hydrothermal minerals in veins and cataclasites). Thanks to glacier-polished outcrops exposing the 400 m-thick fault zone over a continuous area > 1.5 km2, the fault zone architecture has been quantitatively described with an unprecedented detail, providing a rich dataset to generate 3D Discrete Fracture Network (DFN) models and simulate the fault zone hydraulic properties. The fault and fracture network has been characterized combining > 2 km of scanlines and semi-automatic mapping of faults and fractures on several photogrammetric 3D Digital Outcrop Models (3D DOMs). This allowed obtaining robust probability density functions for parameters of fault and fracture sets: orientation, fracture intensity and density, spacing, persistency, length, thickness/aperture, termination. The spatial distribution of fractures (random, clustered, anticlustered…) has been characterized with geostatistics. Evidences of fluid/rock interaction (alteration halos, hydrothermal veins, etc.) have been mapped on the same outcrops, revealing sectors of the fault zone strongly impacted, vs. completely unaffected, by fluid/rock interaction, separated by convolute infiltration fronts. Field and microstructural evidence revealed that higher permeability was obtained in the syn- to early post-seismic period, when fractures were (re)opened by off-fault deformation. We have developed a parametric hydraulic model of the GLFZ and calibrated it, varying the fraction of faults/fractures that were open in the post-seismic, with the goal of obtaining realistic fluid flow and permeability values, and a flow pattern consistent with the observed alteration/mineralization pattern. 
The fraction of open fractures is very close to the percolation threshold of the DFN, and the permeability tensor is strongly anisotropic, resulting in a marked channelling of fluid flow in the inner part of the fault zone. Amongst possible seismological applications of our study, we will discuss the possibility to evaluate the coseismic fracture intensity due to off-fault damage, a fundamental mechanical parameter in the energy balance of earthquakes.

  11. Effect of Fault Parameter Uncertainties on PSHA explored by Monte Carlo Simulations: A case study for southern Apennines, Italy

    NASA Astrophysics Data System (ADS)

    Akinci, A.; Pace, B.

    2017-12-01

In this study, we discuss the seismic hazard variability of peak ground acceleration (PGA) at the 475-year return period in the Southern Apennines of Italy. The uncertainty and parametric sensitivity are presented to quantify the impact of several fault parameters on ground motion predictions for the hazard with 10% probability of exceedance in 50 years. A time-independent PSHA model is constructed based on the long-term recurrence behavior of seismogenic faults, adopting the characteristic earthquake model for those sources capable of rupturing the entire fault segment with a single maximum magnitude. The fault-based source model uses the dimensions and slip rates of mapped faults to develop magnitude-frequency estimates for characteristic earthquakes. The variability of each selected fault parameter is represented by a truncated normal distribution, described by a standard deviation about a mean value. A Monte Carlo approach, based on random balanced sampling of a logic tree, is used to capture the uncertainty in the seismic hazard calculations. To generate both uncertainty and sensitivity maps, we perform 200 simulations for each of the fault parameters. The results are synthesized both as frequency-magnitude distributions of the modeled faults and as different maps: the overall uncertainty maps provide a confidence interval for the PGA values, and the parameter uncertainty maps determine the sensitivity of the hazard assessment to the variability of every logic tree branch. The branches of the logic tree, analyzed through the Monte Carlo approach, are maximum magnitude, fault length, fault width, fault dip, and slip rate. The overall variability of these parameters is determined by varying them simultaneously in the hazard calculations, while the sensitivity to each parameter is determined by varying that parameter while fixing the others.
However, in this study we do not investigate the sensitivity of the mean hazard results to the choice of different GMPEs. The distribution of possible seismic hazard results is illustrated by a 95% confidence factor map, which indicates the dispersion about the mean value, and by a coefficient of variation map, which shows the percent variability. The results of our study clearly illustrate the influence of active fault parameters on probabilistic seismic hazard maps.
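The sampling step described above, drawing each fault parameter from a truncated normal distribution and repeating the hazard calculation 200 times, can be sketched as follows. The means, standard deviations, and parameter names are hypothetical placeholders, not the values used for the Apennine faults:

```python
import random

# Draw each fault parameter from a normal distribution truncated at
# +/- n_sigma standard deviations about its mean (rejection sampling).

def truncated_normal(mean, sigma, n_sigma=2.0, rng=random):
    while True:
        x = rng.gauss(mean, sigma)
        if abs(x - mean) <= n_sigma * sigma:
            return x

def sample_fault(rng=random):
    """One Monte Carlo realization of a fault's parameters (assumed values)."""
    return {
        "max_magnitude": truncated_normal(6.7, 0.2, rng=rng),
        "length_km": truncated_normal(30.0, 3.0, rng=rng),
        "slip_rate_mm_yr": truncated_normal(0.8, 0.2, rng=rng),
    }

# 200 realizations, as in the study; each would feed one hazard calculation.
samples = [sample_fault() for _ in range(200)]
```

Varying all parameters at once gives the overall uncertainty; holding all but one fixed across the 200 runs isolates that parameter's sensitivity.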

  12. A general law of fault wear and its implication to gouge zone evolution

    NASA Astrophysics Data System (ADS)

    Boneh, Yuval; Reches, Ze'ev

    2017-04-01

Fault wear and gouge production are universal components of frictional sliding. Wear models commonly consider fault roughness, normal stress, and rock strength, but ignore the effects of gouge presence and slip velocity. In contrast, our experimental observations indicate that wear continues while the gouge layer is fully developed, and that wear-rates vary by orders of magnitude during slip along experimental faults made of carbonates, sandstones, and granites (Boneh et al., 2013, 2014). We derive here a new universal law for fault wear by incorporating the gouge layer and slip velocity. Slip between two rock blocks undergoes a transition from a 'two-body' mode, during which the blocks interact at surface roughness contacts, to a 'three-body' mode, during which a gouge layer separates the two blocks. Our wear model considers 'effective roughness' as the mechanism for failure at resisting, interacting sites that control the global wear. The effective roughness comprises time-dependent, dynamic asperities that differ in population and scale from the original surface asperities. The model assumes that the intensity of this failure is proportional to the mechanical impulse, which is the force integrated over the loading time at the interacting sites. We use this concept to calculate the wear-rate as a function of the impulse density, the ratio [shear-stress/slip-velocity], during fault slip. A compilation of experimental wear-rates over a large range of slip velocities (10 μm/s - 1 m/s) and normal stresses (0.2 - 200 MPa) reveals very good agreement with the model predictions. The model provides the first explanation of why fault slip at seismic velocity, e.g. 1 m/s, generates significantly less wear and gouge than fault slip at creeping velocity. Thus, the model provides a tool for using the gouge thickness of fault zones to estimate paleo-velocity. Boneh, Y., Sagy, A., Reches, Z., 2013.
Frictional strength and wear-rate of carbonate faults during high-velocity, steady-state sliding. Earth and Planetary Science Letters 381, 127-137. Boneh, Y., Chang, J.C., Lockner, D.A., Reches, Z., 2014. Evolution of Wear and Friction Along Experimental Faults. Pure and Applied Geophysics, 1-17.
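
    The wear law described above reduces to a proportionality between wear-rate and impulse density, the ratio of shear stress to slip velocity. The sketch below illustrates only that relation; the linear form and the constant k are assumptions for illustration, not fitted values from Boneh and Reches.

```python
# Sketch of the impulse-density wear relation described above. The linear
# form w = k * (tau / v) and the constant k are hypothetical illustrations,
# not the calibrated law from Boneh et al.

def wear_rate(shear_stress_mpa: float, slip_velocity_ms: float, k: float = 1e-6) -> float:
    """Wear-rate proportional to impulse density tau/v (units illustrative)."""
    if slip_velocity_ms <= 0:
        raise ValueError("slip velocity must be positive")
    return k * shear_stress_mpa / slip_velocity_ms

# Seismic slip (1 m/s) yields far less wear than creep (10 um/s) at equal stress:
creep = wear_rate(10.0, 1e-5)
seismic = wear_rate(10.0, 1.0)
assert creep > seismic
```

    This captures the qualitative prediction quoted in the abstract: at equal shear stress, fast seismic slip accumulates far less impulse per unit slip than slow creep, hence less wear.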

  13. Large-Scale Multiphase Flow Modeling of Hydrocarbon Migration and Fluid Sequestration in Faulted Cenozoic Sedimentary Basins, Southern California

    NASA Astrophysics Data System (ADS)

    Jung, B.; Garven, G.; Boles, J. R.

    2011-12-01

    Major fault systems play a first-order role in controlling fluid migration in the Earth's crust and in the genesis and preservation of hydrocarbon reservoirs in young, deforming sedimentary basins; understanding the geohydrology of faults is therefore essential for the successful exploration of energy resources. For actively deforming systems like the Santa Barbara Basin and Los Angeles Basin, we have found it useful to develop computational geohydrologic models to study the various coupled and nonlinear processes affecting multiphase fluid migration, including relative permeability, anisotropy, heterogeneity, capillarity, pore pressure, and phase saturation, which affect hydrocarbon mobility within fault systems, and to explore the hydrogeologic conditions that enable the natural sequestration of prolific hydrocarbon reservoirs in these young basins. Subsurface geology, reservoir data (fluid pressure-temperature-chemistry), structural reconstructions, and seismic profiles provide important constraints for model geometry and parameter testing, and provide critical insight on how large-scale faults and aquifer networks influence the distribution and hydrodynamics of liquid- and gas-phase hydrocarbon migration. For example, pore pressure changes at a methane seepage site on the seafloor have been carefully analyzed to estimate large-scale fault permeability, which helps to constrain basin-scale natural gas migration models for the Santa Barbara Basin. We have developed our own 2-D multiphase finite element/finite IMPES numerical model and successfully modeled hydrocarbon gas/liquid movement for intensely faulted and heterogeneous basin profiles of the Los Angeles Basin. Our simulations suggest that hydrocarbon reservoirs that are today aligned with the Newport-Inglewood Fault Zone were formed by massive hydrocarbon flows from deeply buried source beds in the central synclinal region during post-Miocene time. Fault permeability, capillary forces between the fault and juxtaposed aquifers/aquitards, source oil saturation, and generation rate control the efficiency of a petroleum trap and carbon sequestration. This research is focused on natural processes in real geologic systems, but our results will also contribute to an understanding of the subsurface behavior of injected anthropogenic greenhouse gases, especially when targeted storage sites may be influenced by regional faults, which are ubiquitous in the Earth's crust.
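
    The IMPES (implicit pressure, explicit saturation) strategy mentioned above can be illustrated in one dimension. The sketch below shows only the explicit upwind saturation update; in this incompressible 1-D toy the implicit pressure solve reduces to a uniform total velocity, so it is omitted. The Corey-type fractional-flow curve and all parameter values are assumptions, not taken from the study.

```python
# Minimal 1-D two-phase transport sketch in the IMPES spirit: saturation is
# updated explicitly with upwind fractional flow. Incompressible flow at a
# fixed injection rate makes the total velocity uniform, so the implicit
# pressure step is trivial and omitted here. All parameters are illustrative.

def fractional_flow(s):
    # Corey-type water fractional flow (unit mobility ratio), an assumption.
    return s * s / (s * s + (1.0 - s) ** 2)

def advance_saturation(s, dt=0.002, dx=0.01, u_total=1.0, phi=1.0, s_inj=1.0):
    """One explicit upwind step; water injected at the left boundary."""
    f = [fractional_flow(si) for si in s]
    f_in = fractional_flow(s_inj)
    new = s[:]
    for i in range(len(s)):
        upstream = f_in if i == 0 else f[i - 1]
        new[i] = s[i] - u_total * dt / (phi * dx) * (f[i] - upstream)
    return new

s = [0.0] * 100           # initially oil-filled column
for _ in range(100):      # CFL-stable time steps
    s = advance_saturation(s)
```

    The displacing front advances from the injection boundary while saturations remain bounded in [0, 1], the basic behaviour an IMPES transport step must reproduce.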

  14. Directivity models produced for the Next Generation Attenuation West 2 (NGA-West 2) project

    USGS Publications Warehouse

    Spudich, Paul A.; Watson-Lamprey, Jennie; Somerville, Paul G.; Bayless, Jeff; Shahi, Shrey; Baker, Jack W.; Rowshandel, Badie; Chiou, Brian

    2012-01-01

    Five new directivity models are being developed for the NGA-West 2 project. All are based on the NGA-West 2 database, which is considerably expanded from the original NGA-West database, containing about 3,000 more records from earthquakes having finite-fault rupture models. All of the new directivity models have parameters based on fault dimension in km, not normalized fault dimension. This feature removes a peculiarity of previous models that made them inappropriate for modeling large-magnitude events on long strike-slip faults. Two models are explicitly, and one implicitly, 'narrowband' models, in which the effect of directivity does not increase monotonically with spectral period but instead peaks at a specific period that is a function of earthquake magnitude. These narrowband functional forms are capable of simulating directivity over a wider range of earthquake magnitudes than previous models. The functional forms of the five models are presented.
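
    The 'narrowband' idea above can be sketched as a directivity amplification that peaks at a magnitude-dependent period instead of growing with period. The Gaussian shape in log-period and all coefficients below, including the peak-period scaling, are invented for illustration and are not the NGA-West 2 fitted forms.

```python
import math

# Illustrative 'narrowband' directivity term: amplification peaks at a
# magnitude-dependent period Tpeak(M). Shape and coefficients are
# hypothetical, not the NGA-West 2 model coefficients.

def narrowband_amplification(period_s, magnitude, width=0.6, amp=0.3):
    peak_period = 10 ** (-2.9 + 0.5 * magnitude)  # hypothetical Tpeak(M) scaling
    x = math.log10(period_s) - math.log10(peak_period)
    return amp * math.exp(-0.5 * (x / width) ** 2)
```

    For a given magnitude the amplification is largest near Tpeak and decays toward both shorter and longer periods, which is the defining contrast with monotonically period-increasing directivity models.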

  15. Evaluation of reliability modeling tools for advanced fault tolerant systems

    NASA Technical Reports Server (NTRS)

    Baker, Robert; Scheper, Charlotte

    1986-01-01

    The Computer Aided Reliability Estimation (CARE III) and Automated Reliability Interactive Estimation System (ARIES 82) reliability tools were evaluated for application to advanced fault-tolerant aerospace systems. To determine reliability modeling requirements, the evaluation focused on the Draper Laboratories' Advanced Information Processing System (AIPS) architecture as an example architecture for fault-tolerant aerospace systems. Advantages and limitations were identified for each reliability evaluation tool. The CARE III program was designed primarily for analyzing ultrareliable flight control systems; the ARIES 82 program's primary use was to support university research and teaching. Neither CARE III nor ARIES 82 was suited to determining the reliability of the complex nodal networks used to interconnect processing sites in the AIPS architecture. It was concluded that ARIES is not suitable for modeling advanced fault-tolerant systems. It was further concluded that, subject to some limitations (difficulty in modeling systems with unpowered spare modules, systems where equipment maintenance must be considered, systems where failure depends on the sequence in which faults occurred, and systems where multiple-fault conditions beyond double near-coincident faults must be considered), CARE III is best suited for evaluating the reliability of advanced fault-tolerant systems for air transport.

  16. Robust fault detection of wind energy conversion systems based on dynamic neural networks.

    PubMed

    Talebi, Nasser; Sadrnia, Mohammad Ali; Darabi, Ahmad

    2014-01-01

    Occurrence of faults in wind energy conversion systems (WECSs) is inevitable. In order to detect faults at the appropriate time, avoid heavy economic losses, ensure safe system operation, prevent damage to adjacent systems, and facilitate timely repair of failed components, a fault detection system (FDS) is required. Recurrent neural networks (RNNs) have gained a notable position in FDSs and have been widely used for modeling complex dynamical systems. One method for designing an FDS is to prepare a dynamic neural model that emulates the normal system behavior; by comparing the outputs of the real system and the neural model, the incidence of faults can be identified. In this paper, utilizing a comprehensive dynamic model that contains both the mechanical and electrical components of the WECS, an FDS based on dynamic RNNs is proposed. The presented FDS detects faults of the generator's angular velocity sensor, the pitch angle sensors, and the pitch actuators. Robustness of the FDS is achieved by employing an adaptive threshold. Simulation results show that the proposed scheme is capable of detecting faults promptly and has very low false-alarm and missed-alarm rates.
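
    The model-based detection logic described above — compare plant output against a model of normal behaviour and flag residuals that exceed an adaptive threshold — can be sketched compactly. Simple running statistics stand in for the dynamic RNN model, which is not reproduced here; the gain and threshold multiplier are assumptions.

```python
# Residual-based fault detection sketch: flag a fault when the residual
# between measured and modeled output exceeds an adaptive threshold built
# from exponentially weighted residual statistics. The EWMA statistics are a
# stand-in for the paper's RNN model and adaptive threshold design.

def detect_faults(measured, modeled, k=3.0, alpha=0.1):
    """Return sample indices where |residual| exceeds mean + k*std."""
    mean, var, alarms = 0.0, 1.0, []
    for i, (y, yhat) in enumerate(zip(measured, modeled)):
        r = abs(y - yhat)
        threshold = mean + k * var ** 0.5
        if r > threshold:
            alarms.append(i)
        else:  # update statistics only on fault-free samples
            mean = (1 - alpha) * mean + alpha * r
            var = (1 - alpha) * var + alpha * (r - mean) ** 2
    return alarms
```

    For example, injecting a constant sensor bias halfway through an otherwise clean record produces alarms from the bias onset onward, while the fault-free prefix stays silent because the threshold adapts to the small nominal residuals.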

  17. Robust Fault Detection of Wind Energy Conversion Systems Based on Dynamic Neural Networks

    PubMed Central

    Talebi, Nasser; Sadrnia, Mohammad Ali; Darabi, Ahmad

    2014-01-01

    Occurrence of faults in wind energy conversion systems (WECSs) is inevitable. In order to detect faults at the appropriate time, avoid heavy economic losses, ensure safe system operation, prevent damage to adjacent systems, and facilitate timely repair of failed components, a fault detection system (FDS) is required. Recurrent neural networks (RNNs) have gained a notable position in FDSs and have been widely used for modeling complex dynamical systems. One method for designing an FDS is to prepare a dynamic neural model that emulates the normal system behavior; by comparing the outputs of the real system and the neural model, the incidence of faults can be identified. In this paper, utilizing a comprehensive dynamic model that contains both the mechanical and electrical components of the WECS, an FDS based on dynamic RNNs is proposed. The presented FDS detects faults of the generator's angular velocity sensor, the pitch angle sensors, and the pitch actuators. Robustness of the FDS is achieved by employing an adaptive threshold. Simulation results show that the proposed scheme is capable of detecting faults promptly and has very low false-alarm and missed-alarm rates. PMID:24744774

  18. The effect of gradational velocities and anisotropy on fault-zone trapped waves

    NASA Astrophysics Data System (ADS)

    Gulley, A. K.; Eccles, J. D.; Kaipio, J. P.; Malin, P. E.

    2017-08-01

    Synthetic fault-zone trapped wave (FZTW) dispersion curves and amplitude responses for FL (Love) and FR (Rayleigh) type phases are analysed in transversely isotropic 1-D elastic models. We explore the effects of velocity gradients, anisotropy, source location, and mechanism. These experiments suggest: (i) a smooth, exponentially decaying velocity model produces a significantly different dispersion curve from that of a three-layer model, the main difference being that Airy phases are not produced; (ii) the FZTW dispersion and amplitude information of a waveguide with transverse isotropy depends mostly on the shear-wave velocities in the direction parallel to the fault, particularly if the fault-zone to country-rock velocity contrast is small; in this low-contrast situation, fully isotropic approximations to a transversely isotropic velocity model can be made; (iii) fault-aligned fractures and/or bedding in the fault zone that cause transverse isotropy enhance the amplitude and wave-train length of the FR type FZTW; (iv) moving the source and/or receiver away from the fault zone removes the higher frequencies first, similar to attenuation; (v) in most physically realistic cases, the radial component of the FR type FZTW is significantly smaller in amplitude than the transverse.

  19. A teleseismic study of the 2002 Denali fault, Alaska, earthquake and implications for rapid strong-motion estimation

    USGS Publications Warehouse

    Ji, C.; Helmberger, D.V.; Wald, D.J.

    2004-01-01

    Slip histories for the 2002 M7.9 Denali fault, Alaska, earthquake are derived rapidly from global teleseismic waveform data. In three phases, successively refined models improve the match to the waveform data and the recovery of rupture details. In the first model (Phase I), analogous to an automated solution, a simple fault plane is fixed based on the preliminary Harvard Centroid Moment Tensor mechanism and the epicenter provided by the Preliminary Determination of Epicenters. This model is then updated (Phase II) by implementing a more realistic fault geometry inferred from Digital Elevation Model topography, and further (Phase III) by using calibrated P-wave and SH-wave arrival times derived from modeling of the nearby 2002 M6.7 Nenana Mountain earthquake. These models are used to predict the peak ground velocity and the shaking intensity field in the fault vicinity. The procedure to estimate local strong motion could be automated and used for global real-time earthquake shaking and damage assessment. © 2004, Earthquake Engineering Research Institute.

  20. Enhanced data validation strategy of air quality monitoring network.

    PubMed

    Harkat, Mohamed-Faouzi; Mansouri, Majdi; Nounou, Mohamed; Nounou, Hazem

    2018-01-01

    Quick validation and detection of faults in measured air quality data is a crucial step towards achieving the objectives of air quality networks. The objectives of this paper are therefore threefold: (i) to develop a modeling technique that can be used to predict the normal behavior of air quality variables and provide an accurate reference for monitoring purposes; (ii) to develop a fault detection method that can effectively and quickly detect any anomalies in measured air quality data. For this purpose, a new fault detection method is developed that combines the generalized likelihood ratio test (GLRT) with the exponentially weighted moving average (EWMA). GLRT is a well-known statistical fault detection method that relies on maximizing the detection probability for a given false alarm rate; the proposed GLRT-based EWMA method is able to detect changes in the values of certain air quality variables; (iii) to develop a fault isolation and identification method that allows defining the fault source(s) so that appropriate corrective actions can be applied. For this, a reconstruction approach based on a Midpoint-Radii Principal Component Analysis (MRPCA) model is developed to handle the types of data and models associated with air quality monitoring networks. All air quality modeling, fault detection, fault isolation, and reconstruction methods developed in this paper are validated using real air quality data (such as particulate matter, ozone, and nitrogen and carbon oxide measurements). Copyright © 2017 Elsevier Inc. All rights reserved.
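
    The GLRT-EWMA combination described above can be sketched for the simplest case of a mean shift in a Gaussian residual with known variance, where the windowed GLR statistic is T = n·x̄²/(2σ²) and an EWMA of T smooths the decision statistic. Window size, smoothing weight, and threshold below are illustrative choices, not the paper's tuned values.

```python
# Hedged sketch of a GLRT-EWMA detector for a mean shift in a Gaussian
# residual (sigma known). T = n * xbar^2 / (2 sigma^2) is the GLR statistic
# for H1: mean != 0 over a sliding window; smoothing T with an EWMA trades
# detection speed against false alarms. Parameters are illustrative.

def glrt_ewma(data, sigma=1.0, window=10, lam=0.3, h=5.0):
    alarms, z = [], 0.0
    for i in range(window, len(data) + 1):
        seg = data[i - window:i]
        xbar = sum(seg) / window
        t = window * xbar * xbar / (2 * sigma * sigma)  # GLR statistic
        z = (1 - lam) * z + lam * t                     # EWMA smoothing
        if z > h:
            alarms.append(i - 1)
    return alarms
```

    On a residual sequence that is zero-mean and then shifts by three standard deviations, the statistic stays at zero before the shift and crosses the threshold a few samples after it.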

  1. Distributed Fault Detection Based on Credibility and Cooperation for WSNs in Smart Grids.

    PubMed

    Shao, Sujie; Guo, Shaoyong; Qiu, Xuesong

    2017-04-28

    Due to the increasingly important role in monitoring and data collection that sensors play, accurate and timely fault detection is a key issue for wireless sensor networks (WSNs) in smart grids. This paper presents a novel distributed fault detection mechanism for WSNs based on credibility and cooperation. Firstly, a reasonable credibility model of a sensor is established to identify any suspicious status of the sensor according to its own temporal data correlation. Based on the credibility model, the suspicious sensor is then chosen to launch fault diagnosis requests. Secondly, the sending time of fault diagnosis request is discussed to avoid the transmission overhead brought about by unnecessary diagnosis requests and improve the efficiency of fault detection based on neighbor cooperation. The diagnosis reply of a neighbor sensor is analyzed according to its own status. Finally, to further improve the accuracy of fault detection, the diagnosis results of neighbors are divided into several classifications to judge the fault status of the sensors which launch the fault diagnosis requests. Simulation results show that this novel mechanism can achieve high fault detection ratio with a small number of fault diagnoses and low data congestion probability.
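
    The two stages described above — a self-assessed credibility update from a sensor's own temporal correlation, then a cooperative majority judgment by neighbours — can be sketched as follows. The credibility update rule, tolerances, and gains are assumptions for illustration, not the paper's model.

```python
# Illustrative sketch of the credibility-and-cooperation idea: a sensor's
# credibility drops when a reading breaks its own temporal correlation, and
# a suspicious sensor is judged faulty only if a majority of neighbour
# replies agree. The update rule and thresholds are hypothetical.

def update_credibility(cred, reading, history, tol=2.0, gain=0.2):
    """Raise/lower credibility based on deviation from the recent mean."""
    mean = sum(history) / len(history)
    if abs(reading - mean) <= tol:
        return min(1.0, cred + gain)
    return max(0.0, cred - gain)

def diagnose(suspicious_reading, neighbor_readings, tol=2.0):
    """Neighbour cooperation: majority vote on whether the reading is faulty."""
    votes = [abs(suspicious_reading - r) > tol for r in neighbor_readings]
    return sum(votes) > len(votes) / 2
```

    A sensor whose credibility falls below a chosen threshold would launch the diagnosis request; deferring that request until credibility is low is what limits the diagnosis traffic the abstract mentions.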

  2. Distributed Fault Detection Based on Credibility and Cooperation for WSNs in Smart Grids

    PubMed Central

    Shao, Sujie; Guo, Shaoyong; Qiu, Xuesong

    2017-01-01

    Due to the increasingly important role in monitoring and data collection that sensors play, accurate and timely fault detection is a key issue for wireless sensor networks (WSNs) in smart grids. This paper presents a novel distributed fault detection mechanism for WSNs based on credibility and cooperation. Firstly, a reasonable credibility model of a sensor is established to identify any suspicious status of the sensor according to its own temporal data correlation. Based on the credibility model, the suspicious sensor is then chosen to launch fault diagnosis requests. Secondly, the sending time of fault diagnosis request is discussed to avoid the transmission overhead brought about by unnecessary diagnosis requests and improve the efficiency of fault detection based on neighbor cooperation. The diagnosis reply of a neighbor sensor is analyzed according to its own status. Finally, to further improve the accuracy of fault detection, the diagnosis results of neighbors are divided into several classifications to judge the fault status of the sensors which launch the fault diagnosis requests. Simulation results show that this novel mechanism can achieve high fault detection ratio with a small number of fault diagnoses and low data congestion probability. PMID:28452925

  3. Geodetic estimates of fault slip rates in the San Francisco Bay area

    USGS Publications Warehouse

    Savage, J.C.; Svarc, J.L.; Prescott, W.H.

    1999-01-01

    Bourne et al. [1998] have suggested that the interseismic velocity profile at the surface across a transform plate boundary is a replica of the secular velocity profile at depth in the plastosphere. In the viscoelastic coupling model, on the other hand, the shape of the interseismic surface velocity profile is a consequence of plastosphere relaxation following the previous rupture of the faults that make up the plate boundary and is not directly related to the secular flow in the plastosphere. The two models appear to be incompatible. If the plate boundary is composed of several subparallel faults and the interseismic surface velocity profile across the boundary is known, each model predicts the secular slip rates on the faults that make up the boundary. As suggested by Bourne et al., the models can then be tested by comparing the predicted secular slip rates to those estimated from long-term offsets inferred from geology. Here we apply that test to the secular slip rates predicted for the principal faults (the San Andreas, San Gregorio, Hayward, Calaveras, Rodgers Creek, Green Valley, and Greenville faults) of the San Andreas fault system in the San Francisco Bay area. The estimates from the two models generally agree with one another and, to a lesser extent, with the geologic estimates. Because the viscoelastic coupling model has been equally successful in estimating secular slip rates on the various fault strands at a diffuse plate boundary, the success of the model of Bourne et al. [1998] in doing the same should not be taken as proof that the interseismic velocity profile across the plate boundary at the surface is a replica of the velocity profile at depth in the plastosphere.

  4. Pseudo-dynamic source characterization accounting for rough-fault effects

    NASA Astrophysics Data System (ADS)

    Galis, Martin; Thingbaijam, Kiran K. S.; Mai, P. Martin

    2016-04-01

    Broadband ground-motion simulations, ideally for frequencies up to ~10 Hz or higher, are important for earthquake engineering, for example in seismic hazard analysis for critical facilities. An issue with such simulations is the realistic generation of the radiated wavefield in the desired frequency range. Numerical simulations of dynamic ruptures propagating on rough faults suggest that fault roughness is necessary for realistic high-frequency radiation. However, simulations of dynamic ruptures are too expensive for routine applications. Therefore, simplified synthetic kinematic models are often used. They are usually based on rigorous statistical analysis of rupture models inferred by inversions of seismic and/or geodetic data. However, due to the limited resolution of the inversions, these models are valid only in the low-frequency range. In addition to slip, parameters such as rupture-onset time, rise time, and source time functions are needed for a complete spatiotemporal characterization of the earthquake rupture, but these parameters are poorly resolved in source inversions. To obtain a physically consistent quantification of these parameters, we simulate and analyze spontaneous dynamic ruptures on rough faults. First, by analyzing the impact of fault roughness on the rupture and seismic radiation, we develop equivalent planar-fault kinematic analogues of the dynamic ruptures. Next, we investigate the spatial interdependencies between the source parameters to allow consistent modeling that emulates the observed behavior of dynamic ruptures, capturing the rough-fault effects. Based on these analyses, we formulate a framework for a pseudo-dynamic source model that is physically consistent with dynamic ruptures on rough faults.

  5. Salton Trough Post-seismic Afterslip, Viscoelastic Response, and Contribution to Regional Hazard

    NASA Astrophysics Data System (ADS)

    Parker, J. W.; Donnellan, A.; Lyzenga, G. A.

    2012-12-01

    The M7.2 El Mayor-Cucapah earthquake of April 4, 2010, in Baja California may have affected the accumulated hazard to Southern California cities by loading regional faults, including the Elsinore, San Jacinto, and southern San Andreas, faults which already carry over a century of tectonic loading. We examine changes observed via multiple seismic and geodetic techniques, including microseismicity and proposed seismicity-based indicators of hazard, high-quality fault models, the Plate Boundary Observatory GNSS array (with 174 stations showing post-seismic transients with amplitudes greater than 1 mm), and interferometric radar maps from UAVSAR (aircraft) flights, which show a network of aseismic fault slip events at distances up to 60 km from the end of the surface rupture. Finite element modeling is used to compute the expected coseismic motions at GPS stations, with general agreement, including coseismic uplift at sites ~200 km north of the rupture. Postseismic response is also compared with GNSS and with the CIG software "RELAX." An initial examination of hazard is made by comparing microseismicity-based metrics, fault models, and changes to Coulomb stress on nearby faults using the finite element model. Comparison of seismicity with interferograms and historic earthquakes shows that aseismic slip occurs on fault segments that have had earthquakes in the last 70 years, while other segments show no slip at the surface but do show high triggered seismicity. UAVSAR-based estimates of fault slip can be incorporated into the finite element model to correct the Coulomb stress change.
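
    The Coulomb stress change invoked above follows the standard failure criterion ΔCFS = Δτ + μ′Δσₙ (shear stress change resolved in the slip direction, plus effective friction times the normal stress change, tension positive). The one-liner below states that relation; the stress values and friction coefficient are illustrative.

```python
# Standard Coulomb failure stress change: delta_CFS = delta_tau +
# mu_eff * delta_sigma_n, with normal stress change taken tension-positive
# so that unclamping promotes failure. Input values are illustrative.

def coulomb_stress_change(d_shear_mpa, d_normal_mpa, mu_eff=0.4):
    return d_shear_mpa + mu_eff * d_normal_mpa

# Unclamping (d_normal > 0) brings a fault closer to failure than clamping:
assert coulomb_stress_change(0.1, 0.2) > coulomb_stress_change(0.1, -0.2)
```

    A positive ΔCFS on a receiver fault is the sense of "loading" the abstract attributes to the mainshock's effect on the Elsinore, San Jacinto, and southern San Andreas faults.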

  6. Initiation, evolution and extinction of pull-apart basins: Implications for opening of the Gulf of California

    NASA Astrophysics Data System (ADS)

    van Wijk, J.; Axen, G.; Abera, R.

    2017-11-01

    We present a model for the origin, crustal architecture, and evolution of pull-apart basins. The model is based on results of three-dimensional upper crustal elastic models of deformation, field observations, and fault theory, and is generally applicable to basin-scale features, but predicts some intra-basin structural features. Geometric differences between pull-apart basins are inherited from the initial geometry of the strike-slip fault step-over, which results from the forming phase of the strike-slip fault system. As strike-slip motion accumulates, pull-apart basins remain stationary with respect to the underlying basement, and the fault tips propagate beyond the rift basin, increasing the distance between the fault tips and the pull-apart basin center. Because uplift is concentrated near the fault tips, the sediment source areas may rejuvenate and migrate over time. Rift flank uplift results from compression along the flank of the basin. With increasing strike-slip movement the basins deepen and lengthen. Field studies predict that pull-apart basins become extinct when an active basin-crossing fault forms; this is the most likely fate of pull-apart basins, because basin-bounding strike-slip systems tend to straighten and connect as they evolve. The models show that step-overs with larger length-to-width ratios and overlapping faults are least likely to form basin-crossing faults, and pull-apart basins with this geometry are thus most likely to progress to continental rupture. In the Gulf of California, larger length-to-width ratios are found in the southern Gulf, which is the region where continental breakup occurred rapidly.
The initial geometry in the northern Gulf of California and Salton Trough at 6 Ma may have been one of widely-spaced master strike-slip faults (lower length-to-width ratios), which our models suggest inhibits continental breakup and favors straightening of the strike-slip system by formation of basin-crossing faults within the step-over, as began 1.2 Ma when the San Jacinto and Elsinore - Cerro Prieto fault systems formed.

  7. Mixed linear-nonlinear fault slip inversion: Bayesian inference of model, weighting, and smoothing parameters

    NASA Astrophysics Data System (ADS)

    Fukuda, J.; Johnson, K. M.

    2009-12-01

    Studies utilizing inversions of geodetic data for the spatial distribution of coseismic slip on faults typically present the result as a single fault plane and slip distribution. Commonly the geometry of the fault plane is assumed to be known a priori and the data are inverted for slip. However, sometimes there is no strong a priori information on the geometry of the fault that produced the earthquake, and the data are not always sufficient to completely resolve the fault geometry. We develop a method to solve for the full posterior probability distribution of fault slip and fault geometry parameters in a Bayesian framework using Monte Carlo methods. The slip inversion problem is particularly challenging because it often involves multiple data sets with unknown relative weights (e.g. InSAR, GPS), model parameters that are related linearly (slip) and nonlinearly (fault geometry) through the theoretical model to surface observations, prior information on model parameters, and a regularization prior to stabilize the inversion. We present the theoretical framework and solution method for a Bayesian inversion that can handle all of these aspects of the problem. The method handles the mixed linear/nonlinear nature of the problem through a combination of analytical least-squares solutions and Monte Carlo methods. We first illustrate and validate the inversion scheme using synthetic data sets. We then apply the method to inversion of geodetic data from the 2003 M6.6 San Simeon, California earthquake. We show that the uncertainty in strike and dip of the fault plane is over 20 degrees. We characterize the uncertainty in the slip estimate with a volume around the mean fault solution in which the slip most likely occurred. Slip likely occurred somewhere in a volume that extends 5-10 km in either direction normal to the fault plane.
We implement slip inversions with both traditional, kinematic smoothing constraints on slip and a simple physical condition of uniform stress drop.
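
    The mixed linear/nonlinear strategy described in this record can be illustrated with a toy problem: the nonlinear geometry parameter (here a single angle) is sampled with a Metropolis random walk, while the linear slip amplitude is solved analytically by least squares for each proposed geometry. The forward kernel, noise level, and all tuning values are invented stand-ins, not the Green's functions or settings of the actual inversion.

```python
import math, random

# Toy mixed linear/nonlinear Bayesian inversion sketch: Metropolis sampling
# over a nonlinear "geometry" parameter theta, with the linear slip s solved
# by least squares at each proposal. The exponential kernel is hypothetical.

def greens(theta, xs):
    return [math.exp(-theta * x * x) for x in xs]  # made-up forward kernel

def sample_posterior(d, xs, n_iter=2000, step=0.1, sigma=0.05, seed=1):
    random.seed(seed)
    theta, samples = 0.5, []
    def misfit(th):
        g = greens(th, xs)
        s = sum(gi * di for gi, di in zip(g, d)) / sum(gi * gi for gi in g)  # LSQ slip
        return sum((di - s * gi) ** 2 for di, gi in zip(d, g)), s
    m_cur, s_cur = misfit(theta)
    for _ in range(n_iter):
        prop = theta + random.gauss(0, step)
        m_new, s_new = misfit(prop)
        accept_p = math.exp(min(0.0, (m_cur - m_new) / (2 * sigma ** 2)))
        if random.random() < accept_p:
            theta, m_cur, s_cur = prop, m_new, s_new
        samples.append((theta, s_cur))
    return samples
```

    Run on noise-free synthetic data generated with a known angle and slip, the chain concentrates near the true values, mimicking how the posterior jointly constrains geometry and slip in the full method.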

  8. Fault detection and diagnosis of photovoltaic systems

    NASA Astrophysics Data System (ADS)

    Wu, Xing

    The rapid growth of the solar industry over the past several years has expanded the significance of photovoltaic (PV) systems. One of the primary aims of research in building-integrated PV systems is to improve the system's efficiency, availability, and reliability. Although much work has been done on technological design to increase a photovoltaic module's efficiency, there is little research so far on fault diagnosis for PV systems. Faults in a PV system, if not detected, may not only reduce power generation, but also threaten the availability and reliability, effectively the "security," of the whole system. In this paper, first, a circuit-based simulation baseline model of a PV system with maximum power point tracking (MPPT) is developed using MATLAB software. MATLAB is one of the most popular tools for integrating computation, visualization, and programming in an easy-to-use modeling environment. Second, data collection of a PV system at variable surface temperatures and insolation levels under normal operation is acquired. The developed simulation model of the PV system is then calibrated and improved by comparing modeled I-V and P-V characteristics with measured I-V and P-V characteristics, to make sure the simulated curves are close to the values measured in the experiments. Finally, based on the circuit-based simulation model, PV models of various types of faults are developed by changing conditions or inputs in the MATLAB model, and the I-V and P-V characteristic curves and the time-dependent voltage and current characteristics of the fault modalities are characterized for each type of fault. These are developed as benchmark I-V or P-V curves, or prototype transient curves. If a fault occurs in a PV system, polling and comparing actual measured I-V and P-V characteristic curves with both normal operational curves and these baseline fault curves will aid in fault diagnosis.
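
    The diagnosis step described above amounts to nearest-curve matching: compare a measured I-V curve against stored benchmark curves (normal plus each simulated fault mode) and report the closest by RMS error. The fault labels and all current values below are invented placeholders, not measurements from the study.

```python
import math

# Sketch of the final diagnosis step: nearest-benchmark classification of a
# measured I-V curve by RMS error. Benchmark labels and values are invented
# placeholders sampled at a common set of voltages.

def classify_iv_curve(measured, benchmarks):
    """benchmarks: dict mapping label -> current samples at the same voltages."""
    def rmse(a, b):
        return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)) / len(a))
    return min(benchmarks, key=lambda label: rmse(measured, benchmarks[label]))

benchmarks = {
    "normal":        [5.0, 4.9, 4.6, 3.8, 0.0],
    "shaded module": [3.1, 3.0, 2.8, 2.2, 0.0],
    "open string":   [2.5, 2.45, 2.3, 1.9, 0.0],
}
assert classify_iv_curve([3.0, 3.0, 2.7, 2.1, 0.0], benchmarks) == "shaded module"
```

    In practice the benchmark library would hold the calibrated simulation curves for each fault modality, and the same matching could be applied to the transient voltage/current signatures.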

  9. Compilation of Surface Creep on California Faults and Comparison of WGCEP 2007 Deformation Model to Pacific-North American Plate Motion

    USGS Publications Warehouse

    Wisely, Beth A.; Schmidt, David A.; Weldon, Ray J.

    2008-01-01

    This Appendix contains 3 sections that 1) documents published observations of surface creep on California faults, 2) constructs line integrals across the WG-07 deformation model to compare to the Pacific ? North America plate motion, and 3) constructs strain tensors of volumes across the WG-07 deformation model to compare to the Pacific ? North America plate motion. Observation of creep on faults is a critical part of our earthquake rupture model because if a fault is observed to creep the moment released as earthquakes is reduced from what would be inferred directly from the fault?s slip rate. There is considerable debate about how representative creep measured at the surface during a short time period is of the whole fault surface through the entire seismic cycle (e.g. Hudnut and Clark, 1989). Observationally, it is clear that the amount of creep varies spatially and temporally on a fault. However, from a practical point of view a single creep rate is associated with a fault section and the reduction in seismic moment generated by the fault is accommodated in seismic hazard models by reducing the surface area that generates earthquakes or by reducing the slip rate that is converted into seismic energy. WG-07 decided to follow the practice of past Working Groups and the National Seismic Hazard Map and used creep rate (where it was judged to be interseismic, see Table P1) to reduce the area of the fault surface that generates seismic events. In addition to following past practice, this decision allowed the Working Group to use a reduction of slip rate as a separate factor to accommodate aftershocks, post seismic slip, possible aseismic permanent deformation along fault zones and other processes that are inferred to affect the entire surface area of a fault, and thus are better modeled as a reduction in slip rate. 
C-zones are also handled by a reduction in slip rate, because they are inferred to include regions of widely distributed shear that is not completely expressed as earthquakes large enough to model. Because the ratio of the rate of creep relative to the total slip rate is often used to infer the average depth of creep, the ?depth? of creep can be calculated and used to reduce the surface area of a fault that generates earthquakes in our model. This reduction of surface area of rupture is described by an ?aseismicity factor,? assigned to each creeping fault in Appendix A. An aseismicity factor of less than 1 is only assigned to faults that are inferred to creep during the entire interseismic period. A single aseismicity factor was chosen for each section of the fault that creeps by expert opinion from the observations documented here. Uncertainties were not determined for the aseismicity factor, and thus it represents an unmodeled (and difficult to model) source of error. This Appendix simply provides the documentation of known creep, the type and precision of its measurement, and attempts to characterize the creep as interseismic, afterslip, transient or triggered. Parts 2 and 3 of this Appendix compare the WG-07 deformation model and the seismic source model it generates to the strain generated by the Pacific - North American plate motion. The concept is that plate motion generates essentially all of the elastic strain in the vicinity of the plate boundary that can be released as earthquakes. Adding up the slip rates on faults and all others sources of deformation (such as C-zones and distributed ?background? seismicity) should approximately yield the plate motion. 
This addition is usually accomplished by one of four approaches: 1) line integrals that sum deformation along discrete paths through the deforming zone between the two plates, 2) seismic moment tensors that add up seismic moment of a representative set of earthquakes generated by a crustal volume spanning the plate boundary, 3) strain tensors generated by adding up the strain associated with all of the faults in a crustal volume spanning the plate
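The moment-rate bookkeeping described above, in which creep shrinks the seismogenic area through an aseismicity factor, can be sketched in a few lines of Python. This is an illustrative toy with assumed values (shear modulus, fault dimensions, rates), not WG-07's actual calculation:

```python
# Illustrative sketch (not WG-07 code): handle creep by shrinking the
# seismogenic area before computing a fault's moment rate.
MU = 3.0e10  # shear modulus, Pa (a typical crustal value, assumed here)

def moment_rate(length_m, depth_m, slip_rate_m_per_yr, aseismicity=0.0):
    """Seismic moment rate (N*m/yr); creep enters as an area reduction."""
    seismogenic_area = length_m * depth_m * (1.0 - aseismicity)
    return MU * seismogenic_area * slip_rate_m_per_yr

# A 100 km x 12 km fault slipping 20 mm/yr, with 40% of its area creeping:
full_rate = moment_rate(100e3, 12e3, 0.020)
reduced_rate = moment_rate(100e3, 12e3, 0.020, aseismicity=0.4)
```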

  10. Stressing of the New Madrid seismic zone by a lower crust detachment fault

    USGS Publications Warehouse

    Stuart, W.D.; Hildenbrand, T.G.; Simpson, R.W.

    1997-01-01

    A new mechanical model for the cause of the New Madrid seismic zone in the central United States is analyzed. The model contains a subhorizontal detachment fault which is assumed to be near the domed top surface of locally thickened anomalous lower crust ("rift pillow"). Regional horizontal compression induces slip on the fault, and the slip creates a stress concentration in the upper crust above the rift pillow dome. In the coseismic stage of the model earthquake cycle, where the three largest magnitude 7-8 earthquakes in 1811-1812 are represented by a single model mainshock on a vertical northeast trending fault, the model mainshock has a moment equivalent to a magnitude 8 event. During the interseismic stage, corresponding to the present time, slip on the detachment fault exerts a right-lateral shear stress on the locked vertical fault whose failure produces the model mainshock. The sense of shear is generally consistent with the overall sense of slip of 1811-1812 and later earthquakes. Predicted rates of horizontal strain at the ground surface are about 10^-7 yr^-1 and are comparable to some observed rates. The model implies that rift pillow geometry is a significant influence on the maximum possible earthquake magnitude.
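The "moment equivalent to a magnitude 8 event" above rests on the standard Hanks-Kanamori moment-magnitude relation, which can be checked with a short Python sketch (the relation and its constants are standard; the magnitude value is just the abstract's example):

```python
import math

def moment_from_mw(mw):
    """Seismic moment M0 (N*m) from moment magnitude (Hanks & Kanamori, 1979)."""
    return 10.0 ** (1.5 * mw + 9.05)

def mw_from_moment(m0):
    """Inverse relation: Mw = (2/3) * (log10(M0) - 9.05)."""
    return (2.0 / 3.0) * (math.log10(m0) - 9.05)

m0_mag8 = moment_from_mw(8.0)  # roughly 1.1e21 N*m
```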

  11. Building a risk-targeted regional seismic hazard model for South-East Asia

    NASA Astrophysics Data System (ADS)

    Woessner, J.; Nyst, M.; Seyhan, E.

    2015-12-01

    The last decade has tragically shown the social and economic vulnerability of countries in South-East Asia to earthquake hazard and risk. While many disaster mitigation programs and initiatives to improve societal earthquake resilience are under way with the focus on saving lives and livelihoods, the risk management sector is challenged to develop appropriate models to cope with the economic consequences and impact on the insurance business. We present the source model and ground motion model components suitable for a South-East Asia earthquake risk model covering Indonesia, Malaysia, the Philippines and the countries of Indochina. The source model builds upon refined modelling approaches to characterize 1) seismic activity on crustal faults from geologic and geodetic data, 2) seismicity along the interface of subduction zones and within the slabs, and 3) earthquakes not occurring on mapped fault structures. We elaborate on building a self-consistent rate model for the hazardous crustal fault systems (e.g. Sumatra fault zone, Philippine fault zone) as well as the subduction zones, and showcase some characteristics and sensitivities due to existing uncertainties in the rate and hazard space using a well-selected suite of ground motion prediction equations. Finally, we analyze the source model by quantifying the contribution by source type (e.g., subduction zone, crustal fault) to typical risk metrics (e.g., return period losses, average annual loss) and reviewing their relative impact on various lines of business.

  12. Fault Diagnosis for Rotating Machinery Using Vibration Measurement Deep Statistical Feature Learning.

    PubMed

    Li, Chuan; Sánchez, René-Vinicio; Zurita, Grover; Cerrada, Mariela; Cabrera, Diego

    2016-06-17

    Fault diagnosis is important for the maintenance of rotating machinery. The detection of faults and fault patterns is a challenging part of machinery fault diagnosis. To tackle this problem, a model for deep statistical feature learning from vibration measurements of rotating machinery is presented in this paper. Vibration sensor signals collected from rotating mechanical systems are represented in the time, frequency, and time-frequency domains, each of which is then used to produce a statistical feature set. For learning statistical features, real-value Gaussian-Bernoulli restricted Boltzmann machines (GRBMs) are stacked to develop a Gaussian-Bernoulli deep Boltzmann machine (GDBM). The suggested approach is applied as a deep statistical feature learning tool for both gearbox and bearing systems. The fault classification performances in experiments using this approach are 95.17% for the gearbox, and 91.75% for the bearing system. The proposed approach is compared to such standard methods as a support vector machine, GRBM and a combination model. In experiments, the best fault classification rate was detected using the proposed model. The results show that deep learning with statistical feature extraction has an essential improvement potential for diagnosing rotating machinery faults.
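As a rough illustration of the kind of statistical feature sets computed from vibration signals before deep feature learning, a few common time-domain features can be sketched in Python (the paper's exact feature set is not reproduced here, and the sample signal is made up):

```python
import math

def time_domain_features(x):
    """A few statistical features commonly extracted from a vibration
    record (illustrative only; not the paper's feature set)."""
    n = len(x)
    mean = sum(x) / n
    var = sum((v - mean) ** 2 for v in x) / n
    rms = math.sqrt(sum(v * v for v in x) / n)
    peak = max(abs(v) for v in x)
    return {
        "mean": mean,
        "std": math.sqrt(var),
        "rms": rms,
        "crest_factor": peak / rms if rms > 0 else 0.0,  # peak-to-RMS ratio
        "kurtosis": (sum((v - mean) ** 4 for v in x) / n) / var ** 2 if var > 0 else 0.0,
    }

feats = time_domain_features([0.1, -0.2, 0.4, -0.1, 0.3, -0.5])
```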

  13. Fault Diagnosis for Rotating Machinery Using Vibration Measurement Deep Statistical Feature Learning

    PubMed Central

    Li, Chuan; Sánchez, René-Vinicio; Zurita, Grover; Cerrada, Mariela; Cabrera, Diego

    2016-01-01

    Fault diagnosis is important for the maintenance of rotating machinery. The detection of faults and fault patterns is a challenging part of machinery fault diagnosis. To tackle this problem, a model for deep statistical feature learning from vibration measurements of rotating machinery is presented in this paper. Vibration sensor signals collected from rotating mechanical systems are represented in the time, frequency, and time-frequency domains, each of which is then used to produce a statistical feature set. For learning statistical features, real-value Gaussian-Bernoulli restricted Boltzmann machines (GRBMs) are stacked to develop a Gaussian-Bernoulli deep Boltzmann machine (GDBM). The suggested approach is applied as a deep statistical feature learning tool for both gearbox and bearing systems. The fault classification performances in experiments using this approach are 95.17% for the gearbox, and 91.75% for the bearing system. The proposed approach is compared to such standard methods as a support vector machine, GRBM and a combination model. In experiments, the best fault classification rate was detected using the proposed model. The results show that deep learning with statistical feature extraction has an essential improvement potential for diagnosing rotating machinery faults. PMID:27322273

  14. Seismic constraints on the architecture of the Newport-Inglewood/Rose Canyon fault: Implications for the length and magnitude of future earthquake ruptures

    NASA Astrophysics Data System (ADS)

    Sahakian, Valerie; Bormann, Jayne; Driscoll, Neal; Harding, Alistair; Kent, Graham; Wesnousky, Steve

    2017-03-01

    The Newport-Inglewood/Rose Canyon (NIRC) fault zone is an active strike-slip fault system within the Pacific-North American plate boundary in Southern California, located in close proximity to populated regions of San Diego, Orange, and Los Angeles counties. Prior to this study, the NIRC fault zone's continuity and geometry were not well constrained. Nested marine seismic reflection data with different vertical resolutions are employed to characterize the offshore fault architecture. Four main fault strands are identified offshore, separated by three main stepovers along strike, all of which are 2 km or less in width. Empirical studies of historical ruptures worldwide show that earthquakes have ruptured through stepovers with this offset. Models of Coulomb stress change along the fault zone are presented to examine the potential extent of future earthquake ruptures on the fault zone, which appear to be dependent on the location of rupture initiation and fault geometry at the stepovers. These modeling results show that the southernmost stepover between the La Jolla and Torrey Pines fault strands may act as an inhibitor to throughgoing rupture due to the stepover width and change in fault geometry across the stepover; however, these results still suggest that rupture along the entire fault zone is possible.
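The Coulomb stress change underlying such models is commonly written ΔCFS = Δτ + μ′Δσn. A minimal Python sketch (the effective friction value and stress numbers below are assumed for illustration, not taken from the study):

```python
def coulomb_stress_change(d_shear, d_normal, mu_eff=0.4):
    """Change in Coulomb failure stress (Pa): dCFS = d_shear + mu_eff * d_normal.
    d_shear is the shear stress change resolved in the slip direction
    (positive promotes slip); d_normal is positive for unclamping.
    mu_eff = 0.4 is an assumed effective friction coefficient."""
    return d_shear + mu_eff * d_normal

# A 0.1 MPa shear increase with 0.05 MPa of clamping still promotes failure:
dcfs = coulomb_stress_change(0.1e6, -0.05e6)
```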

  15. Spatial Patterns of Geomorphic Surface Features and Fault Morphology Based on Diffusion Equation Modeling of the Kumroch Fault Kamchatka Peninsula, Russia

    NASA Astrophysics Data System (ADS)

    Heinlein, S. N.

    2013-12-01

    Remote sensing data sets are widely used for evaluation of surface manifestations of active tectonics. This study utilizes ASTER GDEM and Landsat ETM+ data sets with Google Earth images draped over terrain models. This study evaluates 1) the surface geomorphology surrounding the study area with these data sets and 2) the morphology of the Kumroch Fault, using diffusion modeling to estimate a constant diffusivity (κ) and to estimate slip rates from ground measurements made across fault scarps by Kozhurin et al. (2006). Models of the evolution of fault scarp morphology provide the time elapsed since slip initiated on a fault's surface and may therefore provide more accurate estimates of slip rate than the rate calculated by dividing scarp offset by the age of the ruptured surface. Profiles of scarps collected by Kozhurin et al. (2006), formed by several events distributed through time, were evaluated using a constant slip rate (CSR) solution, which yields the value A/κ (1/2 slip rate/diffusivity). The time elapsed since slip initiated on the fault is determined by establishing a value for κ and measuring total scarp offset. CSR nonlinear modeling estimates of κ range from 8 m²/ka to 14 m²/ka on the Kumroch Fault, which indicates slip rates of 0.6-1.0 mm/yr since 3.4-3.7 ka. This method provides a quick and inexpensive way to gather data for a regional tectonic study and establish estimated rates of tectonic activity. Analyses of the remote sensing data are providing new insight into the role of active tectonics within the region. Calibrated-age fault scarp diffusion rates were estimated from the fault scarp diffusion models of Mattson and Bruhn (2001) and DuRoss and Bruhn (2004), together with the Kumroch Fault trench profiles of Kozhurin et al. (2006), Kozhurin (2007), Kozhurin et al. (2008), and Pinegina et al. (2012).
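The CSR bookkeeping described in the abstract can be sketched in Python. The κ, A/κ, and offset values below are assumed examples chosen to fall within the quoted ranges; they are not the study's data:

```python
import math

def scarp_profile(x_m, total_offset_m, kappa, t_ka):
    """Diffusion-degraded profile of a single-event scarp of vertical offset
    2a = total_offset_m: u(x, t) = a * erf(x / (2*sqrt(kappa*t)))."""
    return 0.5 * total_offset_m * math.erf(x_m / (2.0 * math.sqrt(kappa * t_ka)))

def slip_rate_from_csr(a_over_kappa, kappa):
    """The CSR solution returns A/kappa = (slip rate / 2) / diffusivity,
    so slip rate = 2 * (A/kappa) * kappa (here in m/ka, i.e. mm/yr)."""
    return 2.0 * a_over_kappa * kappa

# Assumed example values in the abstract's range (kappa = 10 m^2/ka):
rate = slip_rate_from_csr(0.03, 10.0)  # 0.6 m/ka = 0.6 mm/yr
t_elapsed = 2.2 / rate                 # total offset / slip rate, in ka
```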

  16. Do mesoscale faults in a young fold belt indicate regional or local stress?

    NASA Astrophysics Data System (ADS)

    Kokado, Akihiro; Yamaji, Atsushi; Sato, Katsushi

    2017-04-01

    The result of paleostress analyses of mesoscale faults is usually thought of as evidence of a regional stress. On the other hand, recent advances in trishear modeling have enabled us to predict the deformation field around fault-propagation folds without the difficulty of assuming the paleo-mechanical properties of rocks and sediments. We combined the analysis of observed mesoscale faults with trishear modeling to understand the significance of regional and local stresses for the formation of mesoscale faults. To this end, we conducted 2D trishear inverse modeling with a curved thrust fault to predict the subsurface structure and strain field of an anticline, which has a more or less horizontal axis and shows a map-scale plane strain perpendicular to the axis, in the active fold belt of Niigata region, central Japan. The anticline is thought to have been formed by fault-propagation folding under WNW-ESE regional compression. Based on the attitudes of strata and the positions of key tephra beds in Lower Pleistocene soft sediments cropping out at the surface, we obtained (1) a fault-propagation fold with the fault tip at a depth of ca. 4 km as the optimal subsurface structure, and (2) the temporal variation of the deformation field during the folding. We assumed that mesoscale faults were activated along the direction of maximum shear strain on the faults to test whether the fault-slip data collected at the surface were consistent with the deformation in some stage(s) of folding. The Wallace-Bott hypothesis was used to estimate the consistency of faults with the regional stress. As a result, the folding and the regional stress explained 27 and 33 of the 45 observed faults, respectively, with 11 faults being consistent with both. Both the folding and the regional stress were inconsistent with the remaining 17 faults, which could be explained by transfer faulting and/or the gravitational spreading of the growing anticline. 
The lesson we learnt from this work was that we should pay attention not only to regional but also to local stresses to interpret the results of paleostress analysis in the shallow levels of young orogenic belts.
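The Wallace-Bott step used above (slip assumed parallel to the resolved shear traction on the fault plane) can be sketched in Python; the stress tensor and fault plane below are illustrative, not the Niigata data:

```python
def resolved_slip_direction(sigma, n):
    """Wallace-Bott hypothesis: slip parallels the resolved shear traction.
    sigma is a 3x3 stress tensor (nested lists); n is the unit plane normal.
    A minimal sketch of the hypothesis, not the authors' inversion code."""
    t = [sum(sigma[i][j] * n[j] for j in range(3)) for i in range(3)]  # traction
    tn = sum(t[i] * n[i] for i in range(3))                            # normal part
    s = [t[i] - tn * n[i] for i in range(3)]                           # shear part
    mag = sum(v * v for v in s) ** 0.5
    return [v / mag for v in s]

# Strike-slip stress state (most compressive along x, least along y),
# resolved on a vertical plane striking 45 degrees to x:
sigma = [[-10.0, 0.0, 0.0], [0.0, -2.0, 0.0], [0.0, 0.0, -5.0]]
n45 = [2 ** -0.5, 2 ** -0.5, 0.0]
slip_dir = resolved_slip_direction(sigma, n45)
```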

  17. Newport-Inglewood-Carlsbad-Coronado Bank Fault System Nearshore Southern California: Testing models for Quaternary deformation

    NASA Astrophysics Data System (ADS)

    Bennett, J. T.; Sorlien, C. C.; Cormier, M.; Bauer, R. L.

    2011-12-01

    The San Andreas fault system is distributed across hundreds of kilometers in southern California. This transform system includes offshore faults along the shelf, slope, and basin, comprising part of the Inner California Continental Borderland. Previously, offshore faults have been interpreted as being discontinuous and striking parallel to the coast between Long Beach and San Diego. Our recent work, based on several thousand kilometers of deep-penetration industry multi-channel seismic reflection data (MCS) as well as high resolution U.S. Geological Survey MCS, indicates that many of the offshore faults are more geometrically continuous than previously reported. Stratigraphic interpretations of MCS profiles included the ca. 1.8 Ma Top Lower Pico, which was correlated from wells located offshore Long Beach (Sorlien et al., 2010). Based on this age constraint, four younger (Late) Quaternary unconformities are interpreted through the slope and basin. The right-lateral Newport-Inglewood fault continues offshore near Newport Beach. We map a single fault for 25 kilometers that continues to the southeast along the base of the slope. There, the Newport-Inglewood fault splits into the San Mateo-Carlsbad fault, which is mapped for 55 kilometers along the base of the slope to a sharp bend. This bend is the northern end of a right step-over of 10 kilometers to the Descanso fault and about 17 km to the Coronado Bank fault. We map these faults for 50 kilometers as they continue over the Mexican border. The San Mateo-Carlsbad and Newport-Inglewood faults, like the Coronado Bank and Descanso faults, are paired faults that form flower structures (positive and negative, respectively) in cross section. Preliminary kinematic models indicate ~1 km of right-lateral slip since ~1.8 Ma at the north end of the step-over. We are modeling the slip on the southern segment to test our hypothesis for a kinematically continuous right-lateral fault system. 
We are correlating four younger Quaternary unconformities across portions of these faults to test whether the post- ~1.8 Ma deformation continues into late Quaternary. This will provide critical information for a meaningful assessment of the seismic hazards facing Newport beach through metropolitan San Diego.

  18. ISHM-oriented adaptive fault diagnostics for avionics based on a distributed intelligent agent system

    NASA Astrophysics Data System (ADS)

    Xu, Jiuping; Zhong, Zhengqiang; Xu, Lei

    2015-10-01

    In this paper, an integrated system health management (ISHM)-oriented adaptive fault diagnostic model for avionics is proposed. With avionics becoming increasingly complicated, precise and comprehensive avionics fault diagnostics has become an extremely complicated task. The proposed fault diagnostic system uses specific approaches, such as the artificial immune system, the intelligent agent system and the Dempster-Shafer evidence theory, to conduct deep avionics fault diagnostics. Through this proposed fault diagnostic system, efficient and accurate diagnostics can be achieved. A numerical example is conducted to apply the proposed hybrid diagnostics to a set of radar transmitters on an avionics system and to illustrate that the proposed system and model have the ability to achieve efficient and accurate fault diagnostics. By analyzing the diagnostic system's feasibility and practicality, the advantages of this system are demonstrated.
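Of the techniques named above, Dempster-Shafer evidence combination is the most self-contained to illustrate. Here is a minimal Python sketch of Dempster's rule for fusing mass functions from two diagnostic agents (the fault labels and mass values are invented; the paper's framework is considerably richer):

```python
def dempster_combine(m1, m2):
    """Dempster's rule of combination for two mass functions whose focal
    elements are frozensets of hypotheses. Conflicting mass (empty
    intersections) is discarded and the rest renormalized."""
    combined, conflict = {}, 0.0
    for a, w1 in m1.items():
        for b, w2 in m2.items():
            inter = a & b
            if inter:
                combined[inter] = combined.get(inter, 0.0) + w1 * w2
            else:
                conflict += w1 * w2
    k = 1.0 - conflict  # normalization constant
    return {s: w / k for s, w in combined.items()}

F1, F2 = frozenset({"fault1"}), frozenset({"fault2"})
BOTH = F1 | F2
m1 = {F1: 0.6, BOTH: 0.4}            # evidence from one diagnostic agent
m2 = {F1: 0.5, F2: 0.3, BOTH: 0.2}   # evidence from a second agent
fused = dempster_combine(m1, m2)
```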

  19. Reliability model derivation of a fault-tolerant, dual, spare-switching, digital computer system

    NASA Technical Reports Server (NTRS)

    1974-01-01

    A computer based reliability projection aid, tailored specifically for application in the design of fault-tolerant computer systems, is described. Its more pronounced characteristics include the facility for modeling systems with two distinct operational modes, measuring the effect of both permanent and transient faults, and calculating conditional system coverage factors. The underlying conceptual principles, mathematical models, and computer program implementation are presented.
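As a hedged illustration of how coverage factors enter such reliability projections, here is a closed-form Python sketch for a primary-plus-cold-spare pair with imperfect coverage. This is a much simpler model than the report's two-mode, transient-aware system, and all rates and times are assumed:

```python
import math

def dual_standby_reliability(lam, t, coverage):
    """Primary + cold spare with imperfect fault coverage: the system survives
    if the primary never fails, or if its failure is covered (detected and
    switched, probability `coverage`) and the spare survives the remainder.
    Constant failure rate lam gives R(t) = exp(-lam*t) * (1 + c*lam*t)."""
    return math.exp(-lam * t) * (1.0 + coverage * lam * t)

r_perfect = dual_standby_reliability(1e-4, 1000.0, 1.0)   # ideal switching
r_partial = dual_standby_reliability(1e-4, 1000.0, 0.95)  # imperfect coverage
```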

  20. The rotation and fracture history of Europa from modeling of tidal-tectonic processes

    NASA Astrophysics Data System (ADS)

    Rhoden, Alyssa Rose

    Europa's surface displays a complex history of tectonic activity, much of which has been linked to tidal stress caused by Europa's eccentric orbit and possibly non-synchronous rotation of the ice shell. Cycloids are arcuate features thought to have formed in response to tidal normal stress while strike-slip motion along preexisting faults has been attributed to tidal shear stress. Tectonic features thus provide constraints on the rotational parameters that govern tidal stress, and can help us develop an understanding of the tidal-tectonic processes operating on ice covered ocean moons. In the first part of this work (Chapter 3), I test tidal models that include obliquity, fast precession, stress due to non-synchronous rotation (NSR), and physical libration by comparing how well each model reproduces observed cycloids. To do this, I have designed and implemented an automated parameter-searching algorithm that relies on a quantitative measure of fit quality to identify the best fits to observed cycloids. I apply statistical techniques to determine the tidal model best supported by the data and constrain the values of Europa's rotational parameters. Cycloids indicate a time-varying obliquity of about 1° and a physical libration in phase with the eccentricity libration, with amplitude >1°. To obtain good fits, cycloids must be translated in longitude, which implies non-synchronous rotation of the icy shell. However, stress from NSR is not well-supported, indicating that the rotation rate is slow enough that these stresses relax. I build upon the results of cycloid modeling in the second section by applying calculations of tidal stress that include obliquity to the formation of strike-slip faults. I predict the slip directions of faults with the standard formation model---tidal walking (Chapter 5)---and with a new mechanical model I have developed, called shell tectonics (Chapter 6). 
The shell tectonics model incorporates linear elasticity to determine slip and stress release on faults and uses a Coulomb failure criterion. Both of these models can be used to predict the direction of net displacement along faults. Until now, the tidal walking model has been the only model that reproduces the observed global pattern of strike-slip displacement; the shell tectonics model incorporates a more physical treatment of fault mechanics and reproduces this global pattern. Both models fit the regional patterns of observed strike-slip faults better when a small obliquity is incorporated into calculations of tidal stresses. Shell tectonics is also distinct from tidal walking in that it calculates the relative growth rates of displacements in addition to net slip direction. Examining these growth rates, I find that certain azimuths and locations develop offsets more quickly than others. Because faults with larger offsets are easier to identify, this may explain why observed faults cluster in azimuth in many regions. The growth rates also allow for a more sophisticated statistical comparison between the predictions and observations. Although the slip directions of >75% of faults are correctly predicted using shell tectonics and 1° of obliquity, a portion of these faults could be fit equally well with a random model. Examining these faults in more detail has revealed a region of Europa in which strike-slip faults likely formed through local extensional and compressional deformation rather than as a result of tidal shear stress. This approach enables a better understanding of the tectonic record, which has implications on Europa's rotation history.

  1. A frictional population model of seismicity rate change

    USGS Publications Warehouse

    Gomberg, J.; Reasenberg, P.; Cocco, M.; Belardinelli, M.E.

    2005-01-01

    We study models of seismicity rate changes caused by the application of a static stress perturbation to a population of faults and discuss our results with respect to the model proposed by Dieterich (1994). These models assume a distribution of nucleation sites (e.g., faults) obeying rate-state frictional relations that fails at a constant rate under tectonic loading alone, and predict that a positive static stress step at time t0 will cause an immediately increased seismicity rate that decays according to Omori's law. We show one way in which the Dieterich model may be constructed from simple general ideas, illustrated using numerically computed synthetic seismicity and mathematical formulation. We show that the seismicity rate changes predicted by these models (1) depend on the particular relationship between the clock-advanced failure and fault maturity, (2) are largest for the faults closest to failure at t0, (3) depend strongly on which state evolution law the faults obey, and (4) are insensitive to some types of population heterogeneity. We also find that if individual faults fail repeatedly and populations are finite, at timescales much longer than typical aftershock durations, quiescence follows a seismicity rate increase regardless of the specific frictional relations. For the examined models the quiescence duration is comparable to the ratio of stress change to stressing rate, Δτ/τ̇, which occurs after a time comparable to the average recurrence interval of the individual faults in the population and repeats in the absence of any new load perturbations; this simple model may partly explain observations of repeated clustering of earthquakes. Copyright 2005 by the American Geophysical Union.
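The Dieterich (1994) rate change discussed above has a well-known closed form; a short Python sketch (the parameter values below are assumed for illustration):

```python
import math

def dieterich_rate(t, dtau, a_sigma, t_a, r=1.0):
    """Seismicity rate after a positive static stress step dtau at t = 0
    (Dieterich, 1994): R(t) = r / (1 + (exp(-dtau/a_sigma) - 1) * exp(-t/t_a)),
    where t_a = a_sigma / (background stressing rate) sets the Omori-like
    decay time and r is the background rate."""
    return r / (1.0 + (math.exp(-dtau / a_sigma) - 1.0) * math.exp(-t / t_a))

# A stress step of 3 x (A*sigma): immediate rate jump of exp(3), then decay
# back toward the background rate over a few t_a:
r0 = dieterich_rate(0.0, 0.3e6, 0.1e6, 10.0)
r_late = dieterich_rate(100.0, 0.3e6, 0.1e6, 10.0)
```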

  2. Torsional vibration signal analysis as a diagnostic tool for planetary gear fault detection

    NASA Astrophysics Data System (ADS)

    Xue, Song; Howard, Ian

    2018-02-01

    This paper aims to investigate the effectiveness of using the torsional vibration signal as a diagnostic tool for planetary gearbox fault detection. The traditional approach for condition monitoring of the planetary gear uses a stationary transducer mounted on the ring gear casing to measure all the vibration data when the planet gears pass by with the rotation of the carrier arm. However, the time variant vibration transfer paths between the stationary transducer and the rotating planet gear modulate the resultant vibration spectra and make them complex. Torsional vibration signals are theoretically free from this modulation effect and therefore, it is expected to be much easier and more effective to diagnose planetary gear faults using the fault diagnostic information extracted from the torsional vibration. In this paper, a 20 degree of freedom planetary gear lumped-parameter model was developed to obtain the gear dynamic response. In the model, the gear mesh stiffness variations are the main internal vibration generation mechanism, and finite element models were developed for calculation of the sun-planet and ring-planet gear mesh stiffnesses. Gear faults on different components were created in the finite element models to calculate the resultant gear mesh stiffnesses, which were then incorporated into the planetary gear model to obtain the faulted vibration signal. Some advanced signal processing techniques were utilized to analyse the fault diagnostic results from the torsional vibration. It was found that the planetary gear torsional vibration not only successfully detected the gear fault, but also had the potential to indicate the location of the gear fault. As a result, the planetary gear torsional vibration can be considered an effective alternative approach for planetary gear condition monitoring.
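A toy stand-in for the core idea, that a gear fault perturbs mesh stiffness and hence the torsional response, is a two-inertia sketch in Python (the inertias and stiffnesses are invented, and the paper's lumped-parameter model has 20 degrees of freedom, not two):

```python
import math

def torsional_natural_freq(k_mesh, j1, j2):
    """Natural frequency (rad/s) of two inertias j1, j2 (kg*m^2) coupled by
    a mesh stiffness k_mesh (N*m/rad): w_n = sqrt(k * (1/j1 + 1/j2))."""
    return math.sqrt(k_mesh * (1.0 / j1 + 1.0 / j2))

# A local drop in mesh stiffness, as a tooth fault produces, lowers the
# torsional natural frequency of the pair:
w_healthy = torsional_natural_freq(2.0e8, 0.05, 0.02)
w_faulty = torsional_natural_freq(1.6e8, 0.05, 0.02)
```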

  3. Fuzzy model-based fault detection and diagnosis for a pilot heat exchanger

    NASA Astrophysics Data System (ADS)

    Habbi, Hacene; Kidouche, Madjid; Kinnaert, Michel; Zelmat, Mimoun

    2011-04-01

    This article addresses the design and real-time implementation of a fuzzy model-based fault detection and diagnosis (FDD) system for a pilot co-current heat exchanger. The design method is based on a three-step procedure which involves the identification of data-driven fuzzy rule-based models, the design of a fuzzy residual generator and the evaluation of the residuals for fault diagnosis using statistical tests. The fuzzy FDD mechanism has been implemented and validated on the real co-current heat exchanger, and has been proven to be efficient in detecting and isolating process, sensor and actuator faults.

  4. A fault-based model for crustal deformation, fault slip-rates and off-fault strain rate in California

    USGS Publications Warehouse

    Zeng, Yuehua; Shen, Zheng-Kang

    2016-01-01

    We invert Global Positioning System (GPS) velocity data to estimate fault slip rates in California using a fault-based crustal deformation model with geologic constraints. The model assumes buried elastic dislocations across the region using Uniform California Earthquake Rupture Forecast Version 3 (UCERF3) fault geometries. New GPS velocity and geologic slip-rate data were compiled by the UCERF3 deformation working group. The result of least-squares inversion shows that the San Andreas fault slips at 19-22 mm/yr along Santa Cruz to the North Coast, 25-28 mm/yr along the central California creeping segment to the Carrizo Plain, 20-22 mm/yr along the Mojave, and 20-24 mm/yr along the Coachella to the Imperial Valley. Modeled slip rates are 7-16 mm/yr lower than the preferred geologic rates from the central California creeping section to the San Bernardino North section. For the Bartlett Springs section, fault slip rates of 7-9 mm/yr fall within the geologic bounds but are twice the preferred geologic rates. For the central and eastern Garlock, inverted slip rates of 7.5 and 4.9 mm/yr, respectively, match closely with the geologic rates. For the western Garlock, however, our result suggests a low slip rate of 1.7 mm/yr. Along the eastern California shear zone and southern Walker Lane, our model shows a cumulative slip rate of 6.2-6.9 mm/yr across its east-west transects, which is ~1 mm/yr more than the geologic estimates. For the off-coast faults of central California, from Hosgri to San Gregorio, fault slips are modeled at 1-5 mm/yr, similar to the lower geologic bounds. For the off-fault deformation, the total moment rate amounts to 0.88×10^19 N·m/yr, with fast straining regions found around the Mendocino triple junction, Transverse Ranges and Garlock fault zones, Landers and Brawley seismic zones, and farther south. The overall California moment rate is 2.76×10^19 N·m/yr, which is a 16% increase compared with the UCERF2 model.
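The inversion idea can be illustrated with a one-fault screw-dislocation toy in Python. The real study inverts buried dislocations over UCERF3 geometries with geologic constraints; the locking depth, slip rate, and station positions below are assumed:

```python
import math

def predicted_velocity(x_km, slip_rate, locking_depth_km):
    """Interseismic velocity profile across a vertical strike-slip fault
    (screw-dislocation model): v(x) = (s/pi) * arctan(x/D)."""
    return (slip_rate / math.pi) * math.atan(x_km / locking_depth_km)

def invert_slip_rate(xs, vs, locking_depth_km):
    """One-parameter linear least squares for slip rate s in v = s * g(x)."""
    g = [math.atan(x / locking_depth_km) / math.pi for x in xs]
    return sum(gi * vi for gi, vi in zip(g, vs)) / sum(gi * gi for gi in g)

# Synthetic "GPS" velocities (mm/yr) from an assumed 22 mm/yr fault locked
# to 15 km depth, then recovered by the least-squares inversion:
xs = [-100.0, -30.0, -5.0, 5.0, 30.0, 100.0]
vs = [predicted_velocity(x, 22.0, 15.0) for x in xs]
recovered = invert_slip_rate(xs, vs, 15.0)
```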

  5. Impact of pre- and/or syn-tectonic salt layers in the hangingwall geometry of a kinked-planar extensional fault: insights from analogue modelling and comparison with the Parentis basin (bay of Biscay)

    NASA Astrophysics Data System (ADS)

    Ferrer, O.; Vendeville, B. C.; Roca, E.

    2012-04-01

    Using sandbox analogue modelling, we determine the role played by a pre-kinematic or a syn-kinematic viscous salt layer during rollover folding of the hangingwall of a normal fault with a variable kinked-planar geometry, and seek to understand the origin and the mechanisms that control the formation, kinematic evolution and geometry of salt structures developed in the hangingwall of this fault. The experiments we conducted consisted of nine models made of dry quartz-sand (35μm average grain size) simulating brittle rocks and a viscous silicone polymer (SMG 36 from Dow Corning) simulating salt in nature. The models were constructed between two end walls, one of which was fixed, whereas the other was moved by a motor-driven worm screw. The fixed wall was part of the rigid footwall of the model's master border fault. This fault was simulated using three different wood block configurations, which were overlain by a flexible (but not stretchable) sheet that was attached to the mobile endwall of the model. We applied three different infill hangingwall configurations to each fault geometry: (1) without silicone (sand only), (2) sand overlain by a pre-kinematic silicone layer deposited above the entire hangingwall, and (3) sand partly overlain by a syn-kinematic silicone layer that covered only parts of the hangingwall. All models were subjected to 14 cm of basement extension in a direction orthogonal to that of the border fault. Results show that the presence of a viscous layer (silicone) clearly controls the deformation pattern of the hangingwall. Thus, regardless of the silicone layer's geometry (either pre- or syn-extensional) or the geometry of the extensional fault, the silicone layer acts as a very efficient detachment level separating two different structural styles in each unit. In particular, the silicone layer acts as an extensional ductile shear zone inhibiting upward propagation of normal faults and/or shear bands from the sub-silicone layers. 
Whereas the basement is affected by antithetic normal faults that are more or less complex depending on the geometry of the master fault, the lateral flow of the silicone produces salt-cored anticlines, walls and diapirs in the overburden of the hangingwall. The mechanical behavior of the silicone layer as an extensional shear zone, combined with the lateral changes in pressure gradients due to overburden thickness changes, triggered the silicone migration from the half-graben depocenter towards the rollover shoulder. As a result, the accumulation of silicone produces gentle silicone-cored anticlines and local diapirs with minor extensional faults. Upwards fault propagation from the sub-silicone "basement" to the supra-silicone unit only occurs either when the supra- and sub-silicone materials are welded, or when the amount of slip along the master fault is large enough so that the tip of the silicone reaches the junction between the upper and lower panels of the master faults. Comparison between the results of these models with data from the western offshore Parentis Basin (Eastern Bay of Biscay) validates the structural interpretation of this region.

  6. Modeling right-lateral offset of a Late Pleistocene terrace riser along the Polaris fault using ground based LiDAR imagery

    NASA Astrophysics Data System (ADS)

    Howle, J. F.; Bawden, G. W.; Hunter, L. E.; Rose, R. S.

    2009-12-01

    High resolution (centimeter level) three-dimensional point-cloud imagery of offset glacial outwash deposits were collected by using ground based tripod LiDAR (T-LiDAR) to characterize the cumulative fault slip across the recently identified Polaris fault (Hunter et al., 2009) near Truckee, California. The type-section site for the Polaris fault is located 6.5 km east of Truckee where progressive right-lateral displacement of middle to late Pleistocene deposits is evident. Glacial outwash deposits, aggraded during the Tioga glaciation, form a flat lying ‘fill’ terrace on both the north and south sides of the modern Truckee River. During the Tioga deglaciation melt water incised into the terrace producing fluvial scarps or terrace risers (Birkeland, 1964). Subsequently, the terrace risers on both banks have been right-laterally offset by the Polaris fault. By using T-LiDAR on an elevated tripod (4.25 m high), we collected 3D high-resolution (thousands of points per square meter; ± 4 mm) point-cloud imagery of the offset terrace risers. Vegetation was removed from the data using commercial software, and large protruding boulders were manually deleted to generate a bare-earth point-cloud dataset with an average data density of over 240 points per square meter. From the bare-earth point cloud we mathematically reconstructed a pristine terrace/scarp morphology on both sides of the fault, defined coupled sets of piercing points, and extracted a corresponding displacement vector. First, the Polaris fault was approximated as a vertical plane that bisects the offset terrace risers, as well as bisecting linear swales and tectonic depressions in the outwash terrace. Then, piercing points to the vertical fault plane were constructed from the geometry of the geomorphic elements on either side of the fault. 
On each side of the fault, the best-fit modeled outwash plane is projected laterally and the best-fit modeled terrace riser is projected upward to a virtual intersection in space, creating a vector. These constructed vectors were then projected to intersect the fault plane, defining statistically significant piercing points. The distance between the coupled set of piercing points, within the plane of the fault, is the cumulative displacement vector. To assess the variability of the modeled geomorphic surfaces, including surface roughness and nonlinearity, we generated a suite of displacement models by systematically incorporating larger areas of the model domain symmetrically about the fault. Preliminary results of 10 models yield an average cumulative displacement of 5.6 m (1 std. dev. = 0.31 m). As described above, Tioga deglaciation meltwater incised into the outwash terrace, leaving terrace risers that were subsequently offset by the Polaris fault. Therefore, the age of the Tioga outwash terrace represents a maximum limiting age for the tectonic displacement. Using regional age constraints of 15 to 13 kya for the Tioga outwash terrace (Benson et al., 1990; Clark and Gillespie, 1997; James et al., 2002) and the above model results, we estimate a preliminary minimum fault slip rate of 0.40 ± 0.05 mm/yr for the Polaris type-section site.
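The slip-rate arithmetic in the abstract can be checked directly: dividing the modeled offset by the two limiting terrace ages brackets the reported 0.40 ± 0.05 mm/yr. A minimal sketch (offset and ages are the abstract's values; the symmetric treatment of the age bounds is an assumption):

```python
# Hedged sketch: minimum slip-rate estimate from the reported offset and
# limiting terrace ages. Values are from the abstract; the simple bounding
# treatment of uncertainty is an assumption for illustration.
offset_m = 5.6                  # mean cumulative right-lateral displacement (m)
offset_sd = 0.31                # 1 standard deviation across the 10 models
age_range_kyr = (13.0, 15.0)    # limiting ages for the Tioga outwash terrace

# Metres per thousand years is numerically equal to mm/yr.
rate_young = offset_m / age_range_kyr[0]   # faster bound (younger terrace age)
rate_old = offset_m / age_range_kyr[1]     # slower bound (older terrace age)
rate_mid = 0.5 * (rate_young + rate_old)
print(round(rate_mid, 2))   # ~0.40 mm/yr, with bounds near 0.37 and 0.43
```

The bounds recover the quoted ~±0.05 mm/yr spread.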

  7. An Integrated Architecture for Aircraft Engine Performance Monitoring and Fault Diagnostics: Engine Test Results

    NASA Technical Reports Server (NTRS)

    Rinehart, Aidan W.; Simon, Donald L.

    2015-01-01

This paper presents a model-based architecture for performance trend monitoring and gas path fault diagnostics designed for analyzing streaming transient aircraft engine measurement data. The technique analyzes residuals between sensed engine outputs and model-predicted outputs for fault detection and isolation purposes. Diagnostic results from the application of the approach to test data acquired from an aircraft turbofan engine are presented. The approach is found to avoid false alarms when presented with nominal fault-free data. Additionally, the approach is found to successfully detect and isolate gas path seeded faults under steady-state operating scenarios, although some fault misclassifications are noted during engine transients. Recommendations for follow-on maturation and evaluation of the technique are also presented.
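The residual-analysis step described above can be sketched in a few lines. This is an illustrative thresholding scheme, not the paper's architecture; the gain `k`, the nominal scatter `sigma_nominal`, and the sensor values are assumptions:

```python
import numpy as np

# Hedged sketch of residual-based fault detection: flag a fault when the
# residual between sensed and model-predicted outputs exceeds a fixed
# multiple of the nominal residual scatter. Illustrative only.
def detect_fault(sensed, predicted, sigma_nominal, k=3.0):
    residual = np.asarray(sensed, dtype=float) - np.asarray(predicted, dtype=float)
    return np.abs(residual) > k * sigma_nominal   # boolean fault flags

# Nominal data: small residuals, no alarm. Seeded fault: large residual.
nominal = detect_fault([500.2, 499.8], [500.0, 500.0], sigma_nominal=0.5)
faulty = detect_fault([510.0], [500.0], sigma_nominal=0.5)
print(nominal.any(), faulty.any())   # → False True
```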

  8. An Integrated Architecture for Aircraft Engine Performance Monitoring and Fault Diagnostics: Engine Test Results

    NASA Technical Reports Server (NTRS)

    Rinehart, Aidan W.; Simon, Donald L.

    2014-01-01

This paper presents a model-based architecture for performance trend monitoring and gas path fault diagnostics designed for analyzing streaming transient aircraft engine measurement data. The technique analyzes residuals between sensed engine outputs and model-predicted outputs for fault detection and isolation purposes. Diagnostic results from the application of the approach to test data acquired from an aircraft turbofan engine are presented. The approach is found to avoid false alarms when presented with nominal fault-free data. Additionally, the approach is found to successfully detect and isolate gas path seeded faults under steady-state operating scenarios, although some fault misclassifications are noted during engine transients. Recommendations for follow-on maturation and evaluation of the technique are also presented.

  9. Tsunami simulation using submarine displacement calculated from simulation of ground motion due to seismic source model

    NASA Astrophysics Data System (ADS)

    Akiyama, S.; Kawaji, K.; Fujihara, S.

    2013-12-01

Since fault rupture in an earthquake can simultaneously cause ground motion and a tsunami, it is appropriate to evaluate both with a single fault model. However, separate source models are typically used for ground motion simulation and tsunami simulation, because of the difficulty of evaluating both phenomena simultaneously. Many source models for the 2011 off the Pacific coast of Tohoku Earthquake have been proposed from inversion analyses of seismic observations or of tsunami observations. Most of these models show similar features, in which a large amount of slip is located at the shallower part of the fault area near the Japan Trench. This indicates that the ground motion and the tsunami can be evaluated by a single source model. Therefore, we examine the possibility of tsunami prediction using a fault model estimated from seismic observation records. In this study, we carry out a tsunami simulation using the displacement field of oceanic crustal movement calculated from a ground motion simulation of the 2011 off the Pacific coast of Tohoku Earthquake. We use two fault models by Yoshida et al. (2011), based on teleseismic body waves and on strong ground motion records, respectively. Although the fault models share common features, the amount of slip near the Japan Trench is larger in the model from the strong ground motion records than in that from the teleseismic body waves. First, large-scale ground motion simulations applying those fault models are performed for the whole of eastern Japan using a voxel-type finite element method. The synthetic waveforms computed from the simulations are generally consistent with the observation records of the K-NET (Kinoshita (1998)) and KiK-net stations (Aoi et al. (2000)) deployed by the National Research Institute for Earth Science and Disaster Prevention (NIED).
Next, the tsunami simulations are performed by finite difference calculation based on shallow water theory. The initial wave height for tsunami generation is estimated from the vertical displacement of the ocean bottom due to the crustal movements obtained from the ground motion simulation described above. The results of the tsunami simulations are compared with observations from GPS wave gauges to evaluate the validity of tsunami prediction using a fault model based on seismic observation records.
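The two ingredients named here can be sketched minimally: an initial sea-surface height taken equal to the vertical seafloor displacement, then an explicit finite-difference update of the linear shallow water equations. This assumes a 1-D uniform-depth grid with reflective walls; all grid and uplift values are illustrative:

```python
import numpy as np

g, depth = 9.81, 4000.0                    # gravity (m/s^2), water depth (m)
dx = 2000.0                                # grid spacing (m)
dt = 0.5 * dx / np.sqrt(g * depth)         # CFL-limited time step (s)

# Initial condition: sea-surface height equals coseismic vertical uplift.
x = np.arange(200)
eta = np.exp(-((x - 100.0) ** 2) / 50.0)   # idealized uplift pattern (m)
u = np.zeros(201)                          # staggered velocities; walls at ends
vol0 = eta.sum()                           # initial volume (for a sanity check)

for _ in range(100):                       # forward-backward staggered update
    u[1:-1] -= dt * g * (eta[1:] - eta[:-1]) / dx   # momentum equation
    eta -= dt * depth * (u[1:] - u[:-1]) / dx       # continuity equation
```

With closed boundaries the update conserves total volume, a quick check that the discretization is wired correctly.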

  10. Coseismic and postseismic slip distribution of the 2003 Mw = 6.5 Chengkung earthquake in eastern Taiwan: Elastic modeling from inversion of GPS data

    NASA Astrophysics Data System (ADS)

    Cheng, Li-Wei; Lee, Jian-Cheng; Hu, Jyr-Ching; Chen, Horng-Yue

    2009-03-01

The Chengkung earthquake with ML = 6.6 occurred in eastern Taiwan at 12:38 local time on December 10th, 2003. Based on the mainshock relocation and aftershock distribution, the Chengkung earthquake occurred along the previously recognized N20°E-trending Chihshang fault. This event did not cause loss of life, but significant cracks developed at the ground surface and damaged some buildings. After the 1951 Taitung earthquake, no earthquake larger than ML 6 occurred in this region until the Chengkung earthquake. The Chengkung earthquake therefore provides a good opportunity to study the seismogenic structure of the Chihshang fault. The coseismic displacements recorded by GPS show a fan-shaped distribution with a maximum displacement of about 30 cm near the epicenter. The aftershocks of the Chengkung earthquake reveal an apparent linear distribution, which helps us construct a clear geometry of the Chihshang fault. In this study, we employ a half-space angular elastic dislocation model with GPS observations to determine the slip distribution and seismological behavior of the Chengkung earthquake on the Chihshang fault. The elastic half-space dislocation model reveals that the Chengkung earthquake was a thrust event with a minor left-lateral strike-slip component. The maximum coseismic slip, up to 1.1 m, is located at a depth of around 20 km. Slip gradually decreases to less than 10 cm near the surface part of the Chihshang fault. The seismogenic fault plane, constructed by delineating the aftershocks, demonstrates that the Chihshang fault is a high-angle fault. However, the fault plane flattens at a depth of 20 km. In addition, a significant part of the measured deformation across the surface fault zone for this earthquake can be attributed to postseismic creep. The postseismic elastic dislocation model shows that most afterslip is distributed in the upper levels of the Chihshang fault.
Most afterslip consists of both dip-slip and left-lateral components. The model results show that the Chihshang fault may be partially locked or damped near the surface during coseismic slip. After the mainshock, the strain that accumulated near the surface was released by postseismic creep, resulting in significant postseismic deformation.
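The inversion step behind such slip models is linear once the Green's functions are fixed: observed displacements d relate to patch slips m through a matrix G, and slip is recovered by damped least squares. A hedged sketch with a synthetic G and d standing in for the elastic half-space Green's functions (which are outside the scope of this snippet):

```python
import numpy as np

# Hedged sketch of the linear step in a geodetic slip inversion. In practice
# G is built from elastic dislocation solutions; here it is a random
# placeholder so the recovery can be demonstrated end to end.
rng = np.random.default_rng(0)
n_obs, n_patch = 30, 8
G = rng.normal(size=(n_obs, n_patch))        # Green's function matrix (synthetic)
true_slip = np.linspace(1.1, 0.1, n_patch)   # metres, tapering toward the surface
d = G @ true_slip                             # synthetic GPS displacement data

beta = 0.1                                    # damping/smoothing weight
A = np.vstack([G, beta * np.eye(n_patch)])   # augment system with damping rows
b = np.concatenate([d, np.zeros(n_patch)])
slip, *_ = np.linalg.lstsq(A, b, rcond=None) # damped least-squares slip estimate
print(np.allclose(slip, true_slip, atol=0.05))
```

The damping weight trades data fit against slip roughness; real inversions tune it (e.g. by an L-curve) rather than fixing it a priori.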

  11. Extensional fault geometry and its flexural isostatic response during the formation of the Iberia - Newfoundland conjugate rifted margins

    NASA Astrophysics Data System (ADS)

    Gómez-Romeu, Júlia; Kusznir, Nick; Manatschal, Gianreto; Roberts, Alan

    2017-04-01

Despite magma-poor rifted margins having been extensively studied for the last 20 years, the evolution of extensional fault geometry and the flexural isostatic response to faulting remain debated topics. We investigate how the flexural isostatic response to faulting controls the structural development of the distal part of rifted margins in the hyper-extended domain and the resulting sedimentary record. In particular, we address an important question concerning the geometry and evolution of extensional faults within distal hyper-extended continental crust: are the seismically observed extensional fault blocks in this region allochthons from the upper plate, or are they autochthons of the lower plate? To achieve our aim we focus on the west Iberian rifted continental margin along the TGS and LG12 seismic profiles. Our strategy is to use a kinematic forward model (RIFTER) to model the tectonic and stratigraphic development of the west Iberia margin along TGS-LG12 and quantitatively test and calibrate the model against breakup paleo-bathymetry, crustal basement thickness, and well data. RIFTER incorporates the flexural isostatic response to extensional faulting, crustal thinning, lithosphere thermal loads, sedimentation, and erosion. The model predicts the structural and stratigraphic consequences of recursive sequential faulting and sedimentation. The target data used to constrain model predictions consist of two components: (i) gravity anomaly inversion is used to determine Moho depth, crustal basement thickness, and continental lithosphere thinning, and (ii) reverse post-rift subsidence modelling, consisting of flexural backstripping, decompaction, and reverse post-rift thermal subsidence modelling, is used to give paleo-bathymetry at breakup time.
We show that successful modelling of the structural and stratigraphic development of the TGS-LG12 Iberian margin transect also requires simultaneous modelling of the Newfoundland conjugate margin, which we constrain using target data from the SCREECH 2 seismic profile. We also show that successful modelling and quantitative validation of the lithosphere hyper-extension stage requires first having a well-calibrated model of the necking phase. Not surprisingly, the evolution of a rifted continental margin cannot be modelled without modelling and calibration of its conjugate margin.

  12. Slip distribution, strain accumulation and aseismic slip on the Chaman Fault system

    NASA Astrophysics Data System (ADS)

Amelung, F.

    2015-12-01

The Chaman fault system is a transcurrent fault system developed due to the oblique convergence of the India and Eurasia plates at the western boundary of the India plate. To evaluate the contemporary rates of strain accumulation along and across the Chaman fault system, we use 2003-2011 Envisat SAR imagery and InSAR time-series methods to obtain a ground velocity field in the radar line-of-sight (LOS) direction. We correct the InSAR data for different sources of systematic bias, including phase unwrapping errors, local oscillator drift, topographic residuals, and stratified tropospheric delay, and evaluate the uncertainty due to the residual delay using time series of MODIS observations of precipitable water vapor. The InSAR velocity field and modeling demonstrate the distribution of deformation across the Chaman fault system. In the central Chaman fault system, the InSAR velocity shows clear strain localization on the Chaman and Ghazaband faults, and modeling suggests a total slip rate of ~24 mm/yr distributed on the two faults at rates of 8 and 16 mm/yr, respectively. This corresponds to ~80% of the total ~3 cm/yr plate motion between India and Eurasia at these latitudes and is consistent with kinematic models that have predicted a slip rate of ~17-24 mm/yr for the Chaman fault. In the northern Chaman fault system (north of 30.5°N), ~6 mm/yr of the relative plate motion is accommodated across the Chaman fault. North of 30.5°N, where the topographic expression of the Ghazaband fault vanishes, its slip does not transfer to the Chaman fault but rather distributes among different faults in the Kirthar Range and Sulaiman lobe. Observed surface creep on the southern Chaman fault between Nushki and north of the city of Chaman indicates that the fault is partially locked, consistent with the recorded M < 7 earthquakes on this segment in the last century.
The Chaman fault between north of the city of Chaman and north of Kabul does not show an increase in the rate of strain accumulation; however, the lack of seismicity on this segment presents a significant hazard to Kabul. The high rate of strain accumulation on the Ghazaband fault and the lack of evidence for rupture of the fault during the 1935 Quetta earthquake present a growing earthquake hazard to Balochistan and populated areas such as the city of Quetta.

  13. Active backstop faults in the Mentawai region of Sumatra, Indonesia, revealed by teleseismic broadband waveform modeling

    NASA Astrophysics Data System (ADS)

    Wang, Xin; Bradley, Kyle Edward; Wei, Shengji; Wu, Wenbo

    2018-02-01

Two earthquake sequences that affected the Mentawai islands offshore of central Sumatra in 2005 (Mw 6.9) and 2009 (Mw 6.7) have been highlighted as evidence for active backthrusting of the Sumatran accretionary wedge. However, the geometry of the activated fault planes is not well resolved due to large uncertainties in the locations of the mainshocks and aftershocks. We refine the locations and focal mechanisms of medium-sized events (Mw > 4.5) of these two earthquake sequences through broadband waveform modeling. In addition to modeling the depth phases for accurate centroid depths, we use teleseismic surface wave cross-correlation to precisely relocate the relative horizontal locations of the earthquakes. The refined catalog shows that the 2005 and 2009 "backthrust" sequences in the Mentawai region actually occurred on steeply (∼60 degrees) landward-dipping faults (Masilo Fault Zone) that intersect the Sunda megathrust beneath the deepest part of the forearc basin, contradicting previous studies that inferred slip on a shallowly seaward-dipping backthrust. Static slip inversion on the newly proposed fault fits the coseismic GPS offsets for the 2009 mainshock as well as previous studies do, but with a slip distribution more consistent with the mainshock centroid depth (∼20 km) constrained from teleseismic waveform inversion. Rupture of such steeply dipping reverse faults within the forearc crust is rare along the Sumatra-Java margin. We interpret these earthquakes as 'unsticking' of the Sumatran accretionary wedge along a backstop fault separating imbricated material from the stronger Sunda lithosphere. Alternatively, the reverse faults may have originated as pre-Miocene normal faults of the extended continental crust of the western Sunda margin. Our waveform modeling approach can be used to further refine global earthquake catalogs in order to clarify the geometries of active faults.

  14. States of stress and slip partitioning in a continental scale strike-slip duplex: Tectonic and magmatic implications by means of finite element modeling

    NASA Astrophysics Data System (ADS)

    Iturrieta, Pablo Cristián; Hurtado, Daniel E.; Cembrano, José; Stanton-Yonge, Ashley

    2017-09-01

Orogenic belts at oblique convergent subduction margins accommodate deformation in several trench-parallel domains, one of which is the magmatic arc, commonly regarded as taking up the margin-parallel, strike-slip component. However, the stress state and kinematics of volcanic arcs are more complex than usually recognized, involving first- and second-order faults with distinctive slip senses and mutual interaction. These are usually organized into regional-scale strike-slip duplexes associated with both long-term and short-term heterogeneous deformation and magmatic activity. This is the case of the 1100 km-long Liquiñe-Ofqui Fault System in the Southern Andes, made up of two overlapping margin-parallel master faults joined by several NE-striking second-order faults. We present a finite element model addressing the nature and spatial distribution of stress across and along the volcanic arc in the Southern Andes to understand slip partitioning and the connection between tectonics and magmatism, particularly during the interseismic phase of the subduction earthquake cycle. We correlate the dynamics of the strike-slip duplex with geological, seismic, and magma transport evidence documented by previous work, showing consistency between the model and the inferred fault system behavior. Our results show that maximum principal stress orientations are heterogeneously distributed within the continental margin, ranging from 15° to 25° counter-clockwise (with respect to the convergence vector) in the master faults and 10-19° clockwise in the forearc and backarc domains. We calculate the stress tensor ellipticity, indicating simple shearing in the eastern master fault and a transpressional stress state in the western master fault. Subsidiary faults undergo transtensional-to-extensional stress states. The eastern master fault displays slip rates of 5 to 10 mm/yr, whereas the western and subsidiary faults show slip rates of 1 to 5 mm/yr.
Our results endorse that favorably oriented subsidiary faults serve as magma pathways, particularly where they are close to the intersection with a master fault. Also, the slip of a fault segment is enhanced when an adjacent fault kinematics is superimposed on the regional tectonic loading. Hence, finite element models help to understand coupled tectonics and volcanic processes, demonstrating that geological and geophysical observations can be accounted for by a small number of key first order boundary conditions.

  15. Locating Anomalies in Complex Data Sets Using Visualization and Simulation

    NASA Technical Reports Server (NTRS)

    Panetta, Karen

    2001-01-01

The research goals are to create a simulation framework that can accept any combination of models written at the gate or behavioral level. The framework provides the ability to fault-simulate and create scenarios of experiments using concurrent simulation. In order to meet these goals we have had to fulfill the following requirements: the ability to accept models written in VHDL, Verilog, or C; the ability to propagate faults through any model type; the ability to create experiment scenarios efficiently without generating every possible combination of variables; and the ability to accept a diversity of fault models beyond the single stuck-at model. Major development effort has gone into a parser that can accept models written in various languages. This work has generated considerable attention from other universities and industry for its flexibility and usefulness. The parser uses Lex and YACC to parse Verilog and C. We have also utilized our industrial partnership with Alternative Systems Inc. to import VHDL into our simulator. For multilevel simulation, we needed to modify the simulator architecture to accept models that contain multiple outputs, which enabled us to accept behavioral components. The next major accomplishment was the addition of "functional fault models". Functional fault models change the behavior of a gate or model; for example, a bridging fault can make an OR gate behave like an AND gate. This has applications beyond fault simulation: the modeling flexibility makes the simulator more useful for verification and model comparison. For instance, two or more versions of an ALU can be comparatively simulated in a single execution, and the results will show where and how the models differed so that the performance and correctness of the models may be evaluated. A considerable amount of time has been dedicated to validating the simulator performance on larger models provided by industry and other universities.
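The "functional fault model" idea, replacing a gate's behavior rather than sticking a line at 0 or 1, can be sketched as a higher-order function. This is an illustrative toy in the spirit of the OR-to-AND example above, not the simulator's implementation:

```python
# Hedged sketch of a functional fault model: a fault object replaces a
# gate's behavior wholesale. Here a bridging fault makes OR act like AND,
# the example given in the abstract.
def or_gate(a, b):
    return a | b

def bridging_fault(gate):
    """Illustrative fault: the bridged node forces AND-like behavior."""
    def faulty(a, b):
        return a & b
    return faulty

good, bad = or_gate, bridging_fault(or_gate)
# Concurrent fault simulation would evaluate both and diff the outputs.
print([good(1, 0), bad(1, 0)])   # → [1, 0]  (fault flips the 1,0 case)
```

A single-stuck-at model could only pin an input or output to a constant; the functional model changes the truth table itself, which is why it also serves for model-versus-model comparison.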

  16. Models of recurrent strike-slip earthquake cycles and the state of crustal stress

    NASA Technical Reports Server (NTRS)

    Lyzenga, Gregory A.; Raefsky, Arthur; Mulligan, Stephanie G.

    1991-01-01

    Numerical models of the strike-slip earthquake cycle, assuming a viscoelastic asthenosphere coupling model, are examined. The time-dependent simulations incorporate a stress-driven fault, which leads to tectonic stress fields and earthquake recurrence histories that are mutually consistent. Single-fault simulations with constant far-field plate motion lead to a nearly periodic earthquake cycle and a distinctive spatial distribution of crustal shear stress. The predicted stress distribution includes a local minimum in stress at depths less than typical seismogenic depths. The width of this stress 'trough' depends on the magnitude of crustal stress relative to asthenospheric drag stresses. The models further predict a local near-fault stress maximum at greater depths, sustained by the cyclic transfer of strain from the elastic crust to the ductile asthenosphere. Models incorporating both low-stress and high-stress fault strength assumptions are examined, under Newtonian and non-Newtonian rheology assumptions. Model results suggest a preference for low-stress (a shear stress level of about 10 MPa) fault models, in agreement with previous estimates based on heat flow measurements and other stress indicators.
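The stress-driven fault in these simulations fails when tectonic loading brings it to a strength threshold, then drops stress and reloads, which is what makes recurrence nearly periodic under constant far-field motion. A toy version of that cycle (all values illustrative, including the ~10 MPa low-stress level quoted above; the full models add viscoelastic coupling this sketch omits):

```python
# Hedged sketch: constant tectonic loading, instantaneous failure at a
# strength threshold, fixed coseismic stress drop. Purely illustrative.
load_rate = 0.01      # stress accumulation rate, MPa/yr (assumed)
strength = 10.0       # failure stress, MPa (the 'low-stress' level cited)
drop = 3.0            # coseismic stress drop, MPa (assumed)

stress, t, events = 7.0, 0.0, []
while t < 2000.0:
    t += (strength - stress) / load_rate   # time to reach failure
    events.append(t)                       # earthquake occurs
    stress = strength - drop               # stress drops, reloading resumes

recurrence = events[1] - events[0]
print(recurrence)   # drop / load_rate = 300 yr: a periodic cycle
```

With constant loading the recurrence interval is simply drop / load_rate; the viscoelastic asthenosphere in the actual models perturbs the loading rate through the cycle rather than changing this basic balance.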

  17. Control model design to limit DC-link voltage during grid fault in a dfig variable speed wind turbine

    NASA Astrophysics Data System (ADS)

    Nwosu, Cajethan M.; Ogbuka, Cosmas U.; Oti, Stephen E.

    2017-08-01

This paper presents a control model design capable of inhibiting the rise in DC-link voltage during grid-fault conditions in a variable speed wind turbine. In contrast to power circuit protection strategies, which have inherent limitations in fault ride-through capability, a control circuit algorithm is proposed that limits the DC-link voltage rise, whose dynamics in turn directly influence the characteristics of the rotor voltage during grid faults. The model results compare favorably with simulation results obtained in a MATLAB/SIMULINK environment. The generated model may therefore be used to predict, with near accuracy, the nature of DC-link voltage variations during a fault, given factors that include the speed and speed mode of operation and the value of the damping resistor relative to half the product of the inner-loop current control bandwidth and the filter inductance.
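The last factor mentioned implies a reference value R = α_c · L_f / 2 against which the damping resistor is compared, where α_c is the inner-loop current control bandwidth and L_f the filter inductance. A sketch with illustrative numbers (the specific parameter values are assumptions, not the paper's):

```python
import math

# Hedged sketch: the comparison value 'half the product of the inner-loop
# current control bandwidth and the filter inductance'. Values are assumed.
alpha_c = 2.0 * math.pi * 250.0   # current-loop bandwidth, rad/s (assumed 250 Hz)
L_f = 2e-3                        # filter inductance, H (assumed)

R_ref = 0.5 * alpha_c * L_f       # ohms; the damping resistor is judged
print(round(R_ref, 3))            # relative to this value
```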

  18. Interplay of plate convergence and arc migration in the central Mediterranean (Sicily and Calabria)

    NASA Astrophysics Data System (ADS)

    Nijholt, Nicolai; Govers, Rob; Wortel, Rinus

    2016-04-01

Key components in the current geodynamic setting of the central Mediterranean are continuous, slow Africa-Eurasia plate convergence (~5 mm/yr) and arc migration. This combination encompasses roll-back, tearing, and detachment of slabs, and leads to back-arc opening and orogeny. Since ~30 Ma the Apennines-Calabrian and Gibraltar subduction zones have shaped the western-central Mediterranean region. Lithospheric tearing near slab edges and the accompanying surface expressions (STEP faults) are key to explaining surface dynamics as observed in geologic, geophysical, and geodetic data. In the central Mediterranean, both the narrow Calabrian subduction zone and the Sicily-Tyrrhenian offshore thrust front show convergence, with a transfer (shear) zone connecting the distinct SW edge of the former with the less distinct eastern limit of the latter (similar, albeit on a smaller scale, to the situation in New Zealand, with oppositely verging subduction zones and the Alpine fault as the transfer shear zone). The ~NNW-SSE oriented transfer zone (Aeolian-Sisifo-Tindari(-Ionian) fault system) shows transtensive to strike-slip motion. Recent seismicity, geological data, and GPS vectors in the central Mediterranean indicate that the region can be subdivided into several distinct domains, both on- and offshore, delineated by deformation zones and faults. However, there is discussion about the (relative) importance of some of these faults at the lithospheric scale. We focus on finding the best-fitting assembly of faults for the transfer zone connecting subduction beneath Calabria and convergence north of Sicily at the Sicily-Tyrrhenian offshore thrust front. This includes determining whether the Alfeo-Etna fault, the Malta Escarpment, and/or the Ionian fault, which have all been suggested to represent the STEP fault of the Calabrian subduction zone, are key in describing the observed deformation patterns. We first focus on the present day.
We use geodynamic models to reproduce observed GPS velocities in the Sicily-Calabria region. In these models, we combine far-field velocity boundary conditions, GPE-related body forces, and slab pull/trench suction at the subduction contacts. The location and nature of model faults are based on geological and seismicity observations, and as these faults do not fully enclose blocks our models require both fault slip and distributed strain. We vary fault friction in the models. Extrapolating the (short term) model results to geological time scales, we are able to make a first-order assessment of the regional strain and block rotations resulting from the interplay of arc migration and plate convergence during the evolution of this complex region.

  19. Earthquake Clustering on Normal Faults: Insight from Rate-and-State Friction Models

    NASA Astrophysics Data System (ADS)

    Biemiller, J.; Lavier, L. L.; Wallace, L.

    2016-12-01

    Temporal variations in slip rate on normal faults have been recognized in Hawaii and the Basin and Range. The recurrence intervals of these slip transients range from 2 years on the flanks of Kilauea, Hawaii to 10 kyr timescale earthquake clustering on the Wasatch Fault in the eastern Basin and Range. In addition to these longer recurrence transients in the Basin and Range, recent GPS results there also suggest elevated deformation rate events with recurrence intervals of 2-4 years. These observations suggest that some active normal fault systems are dominated by slip behaviors that fall between the end-members of steady aseismic creep and periodic, purely elastic, seismic-cycle deformation. Recent studies propose that 200 year to 50 kyr timescale supercycles may control the magnitude, timing, and frequency of seismic-cycle earthquakes in subduction zones, where aseismic slip transients are known to play an important role in total deformation. Seismic cycle deformation of normal faults may be similarly influenced by its timing within long-period supercycles. We present numerical models (based on rate-and-state friction) of normal faults such as the Wasatch Fault showing that realistic rate-and-state parameter distributions along an extensional fault zone can give rise to earthquake clusters separated by 500 yr - 5 kyr periods of aseismic slip transients on some portions of the fault. The recurrence intervals of events within each earthquake cluster range from 200 to 400 years. Our results support the importance of stress and strain history as controls on a normal fault's present and future slip behavior and on the characteristics of its current seismic cycle. These models suggest that long- to medium-term fault slip history may influence the temporal distribution, recurrence interval, and earthquake magnitudes for a given normal fault segment.
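The rate-and-state framework referred to here combines a logarithmic friction law with a state-evolution equation; steady-state friction decreases with slip rate when a < b (velocity weakening), which is what permits stick-slip clustering. A minimal sketch using the standard Dieterich form and the aging law (parameter values are illustrative, not the paper's):

```python
import math

# Hedged sketch of Dieterich rate-and-state friction with the aging law.
# All parameter values are illustrative.
def friction(v, theta, mu0=0.6, a=0.010, b=0.015, v0=1e-6, dc=0.01):
    """Friction coefficient at slip rate v (m/s) and state theta (s)."""
    return mu0 + a * math.log(v / v0) + b * math.log(v0 * theta / dc)

def dtheta_dt(v, theta, dc=0.01):
    """Aging law: state (contact age) grows at rest, decays with slip."""
    return 1.0 - v * theta / dc

# At steady state (dtheta/dt = 0), theta = dc / v, so
# mu_ss = mu0 + (a - b) * ln(v / v0): velocity weakening when a < b.
v_slow, v_fast = 1e-8, 1e-4
mu_slow = friction(v_slow, 0.01 / v_slow)
mu_fast = friction(v_fast, 0.01 / v_fast)
print(mu_fast < mu_slow)   # → True: friction drops as slip rate rises
```

Whether a fault patch creeps steadily or produces clustered events in such models depends on the sign and spatial distribution of (a − b) along the fault, which is the "realistic rate-and-state parameter distribution" the abstract refers to.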

  20. Constraints on the stress state of the San Andreas fault with analysis based on core and cuttings from SAFOD drilling phases I and II

    USGS Publications Warehouse

    Lockner, David A.; Tembe, Cheryl; Wong, Teng-fong

    2009-01-01

Analysis of field data has led different investigators to conclude that the San Andreas Fault (SAF) has either anomalously low frictional sliding strength (μ < 0.2) or strength consistent with standard laboratory tests (μ > 0.6). Arguments for the apparent weakness of the SAF generally hinge on conceptual models involving intrinsically weak gouge or elevated pore pressure within the fault zone. Some models assert that weak gouge and/or high pore pressure exist under static conditions, while others consider strength loss or fluid pressure increase due to rapid coseismic fault slip. The present paper is composed of three parts. First, we develop generalized equations, based on and consistent with the Rice (1992) fault zone model, to relate stress orientation and magnitude to depth-dependent coefficient of friction and pore pressure. Second, we present temperature- and pressure-dependent friction measurements from wet illite-rich fault gouge extracted from San Andreas Fault Observatory at Depth (SAFOD) phase 1 core samples and from weak minerals associated with the San Andreas Fault. Third, we reevaluate the state of stress on the San Andreas Fault in light of new constraints imposed by SAFOD borehole data. Pure talc (μ ≈ 0.1) had the lowest strength considered and was sufficiently weak to satisfy weak-fault heat flow and stress orientation constraints with hydrostatic pore pressure. Other fault gouges showed a systematic increase in strength with increasing temperature and pressure. In this case, heat flow and stress orientation constraints would require elevated pore pressure and, in some cases, fault zone pore pressure in excess of the vertical stress.

  1. ASCS online fault detection and isolation based on an improved MPCA

    NASA Astrophysics Data System (ADS)

    Peng, Jianxin; Liu, Haiou; Hu, Yuhui; Xi, Junqiang; Chen, Huiyan

    2014-09-01

Multi-way principal component analysis (MPCA) has received considerable attention and been widely used in process monitoring. A traditional MPCA algorithm unfolds multiple batches of historical data into a two-dimensional matrix and cuts the matrix along the time axis to form subspaces. However, low subspace efficiency and difficult fault isolation are common disadvantages of the principal component model. This paper presents a new subspace construction method based on a kernel density estimation function that can effectively reduce the storage required for the subspace information. The MPCA model and the knowledge base are built on the new subspace. Fault detection and isolation with the squared prediction error (SPE) statistic and the Hotelling T² statistic are then realized in process monitoring. When a fault occurs, fault isolation based on the SPE statistic is achieved by residual contribution analysis of the different variables. For fault isolation of a subspace based on the T² statistic, the relationship between the statistical indicator and the state variables is constructed, and constraint conditions are presented to check the validity of the fault isolation. Then, to improve the robustness of fault isolation to unexpected disturbances, a statistical method is adopted to relate single subspaces to multiple subspaces and increase the rate of correct fault isolation. Finally, fault detection and isolation based on the improved MPCA is used to monitor the automatic shift control system (ASCS) to demonstrate the correctness and effectiveness of the algorithm. The research proposes a new subspace construction method that reduces the required storage capacity and improves the robustness of the principal component model, and establishes the relationship between the state variables and the fault detection indicators for fault isolation.
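The two monitoring statistics named here can be illustrated with plain PCA on synthetic data. This is a generic sketch of SPE and Hotelling T², not the paper's improved MPCA or its subspace construction:

```python
import numpy as np

# Hedged sketch of PCA-based process monitoring with SPE and Hotelling T^2.
# Training data: 200 nominal samples of 6 variables with one strong correlation.
rng = np.random.default_rng(1)
X = rng.normal(size=(200, 6))
X[:, 1] = 0.9 * X[:, 0] + 0.1 * X[:, 1]   # variable 1 tracks variable 0

mean, std = X.mean(0), X.std(0)
Xs = (X - mean) / std
U, s, Vt = np.linalg.svd(Xs, full_matrices=False)
k = 3                                      # retained principal components
P = Vt[:k].T                               # loading matrix (6 x k)
lam = (s[:k] ** 2) / (len(X) - 1)          # variances of retained PCs

def monitor(x):
    xs = (x - mean) / std
    t = P.T @ xs                           # scores in the PC subspace
    t2 = float(np.sum(t ** 2 / lam))       # Hotelling T^2 (within-model)
    spe = float(np.sum((xs - P @ t) ** 2)) # squared prediction error (residual)
    return t2, spe

t2_ok, spe_ok = monitor(X[0])              # nominal sample
# A fault that breaks the variable 0/1 correlation inflates SPE:
t2_bad, spe_bad = monitor(X[0] + np.array([4.0, -4.0, 0, 0, 0, 0]))
print(spe_bad > spe_ok)
```

In monitoring practice, control limits for both statistics are set from the training distribution; isolation then proceeds by per-variable contribution analysis of whichever statistic alarmed, as the abstract describes.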

  2. SAR-revealed slip partitioning on a bending fault plane for the 2014 Northern Nagano earthquake at the northern Itoigawa-Shizuoka tectonic line

    NASA Astrophysics Data System (ADS)

    Kobayashi, Tomokazu; Morishita, Yu; Yarai, Hiroshi

    2018-05-01

    By applying conventional cross-track synthetic aperture radar interferometry (InSAR) and multiple aperture InSAR techniques to ALOS-2 data acquired before and after the 2014 Northern Nagano, central Japan, earthquake, a three-dimensional ground displacement field has been successfully mapped. Crustal deformation is concentrated in and around the northern part of the Kamishiro Fault, which is the northernmost section of the Itoigawa-Shizuoka tectonic line. The full picture of the displacement field shows contraction in the northwest-southeast direction, but northeastward movement along the fault strike direction is prevalent in the northeast portion of the fault, which suggests that a strike-slip component is a significant part of the activity of this fault, in addition to reverse faulting. Clear displacement discontinuities are recognized in the southern part of the source region, which falls just on the previously known Kamishiro Fault trace. We inverted the SAR and GNSS data to construct a slip distribution model; the preferred model of distributed slip on a two-plane fault surface shows a combination of reverse and left-lateral fault motions on a bending east-dipping fault surface with a dip of 30° in the shallow part and 50° in the deeper part. The hypocenter falls just on the estimated deeper fault plane where a left-lateral slip is inferred, whereas in the shallow part a reverse slip is predominant, which causes surface ruptures on the ground. The slip partitioning may be accounted for by shear stress resulting from reverse slip with a left-lateral component at depth, such that left-lateral slip is suppressed in the shallow part where reverse slip is inferred. The slip distribution model with a bending fault surface, instead of a single fault plane, produces a moment tensor solution with a non-double-couple component, which is consistent with the seismically estimated mechanism.

  3. Seismotectonics and fault structure of the California Central Coast

    USGS Publications Warehouse

    Hardebeck, Jeanne L.

    2010-01-01

    I present and interpret new earthquake relocations and focal mechanisms for the California Central Coast. The relocations improve upon catalog locations by using 3D seismic velocity models to account for lateral variations in structure and by using relative arrival times from waveform cross-correlation and double-difference methods to image seismicity features more sharply. Focal mechanisms are computed using ray tracing in the 3D velocity models. Seismicity alignments on the Hosgri fault confirm that it is vertical down to at least 12 km depth, and the focal mechanisms are consistent with right-lateral strike-slip motion on a vertical fault. A prominent, newly observed feature is an ~25 km long linear trend of seismicity running just offshore and parallel to the coastline in the region of Point Buchon, informally named the Shoreline fault. This seismicity trend is accompanied by a linear magnetic anomaly, and both the seismicity and the magnetic anomaly end where they obliquely meet the Hosgri fault. Focal mechanisms indicate that the Shoreline fault is a vertical strike-slip fault. Several seismicity lineations with vertical strike-slip mechanisms are observed in Estero Bay. Events deeper than about 10 km in Estero Bay, however, exhibit reverse-faulting mechanisms, perhaps reflecting slip at the top of the remnant subducted slab. Strike-slip mechanisms are observed offshore along the Hosgri–San Simeon fault system and onshore along the West Huasna and Rinconada faults, while reverse mechanisms are generally confined to the region between these two systems. This suggests a model in which the reverse faulting is primarily due to restraining left-transfer of right-lateral slip.

  4. Fault-related fold styles and progressions in fold-thrust belts: Insights from sandbox modeling

    NASA Astrophysics Data System (ADS)

    Yan, Dan-Ping; Xu, Yan-Bo; Dong, Zhou-Bin; Qiu, Liang; Zhang, Sen; Wells, Michael

    2016-03-01

    Fault-related folds of variable structural styles and assemblages commonly coexist in orogenic belts with competent-incompetent interlayered sequences. Despite their commonality, the kinematic evolution of these structural styles and assemblages is often loosely constrained because multiple solutions exist in their structural progression during tectonic restoration. We use a sandbox modeling instrument with a particle image velocimetry monitor to test four sandbox models with multilayer competent-incompetent materials. Test results reveal that decollement folds initiate along selected incompetent layers, with a decreasing velocity difference and a constant vorticity difference between the hanging wall and footwall of the initial fault tips. The decollement folds are progressively converted to fault-propagation folds and fault-bend folds through the development of fault ramps that break across competent layers and then propagate into fault flats within an upper incompetent layer. A thick-skinned thrust is produced by initiating a decollement fault within the metamorphic basement. Progressive thrusting and uplift of the thick-skinned thrust trigger initiation of the uppermost incompetent decollement, with formation of a decollement fold that subsequently converts to fault-propagation and fault-bend folds; these combine to form an imbricate thrust. Breakouts at the base of the early-formed fault ramps along the lowest incompetent layers, which may correspond to basement-cover contacts, dome the uppermost decollement and imbricate thrusts to form passive roof duplexes and constitute the thin-skinned thrust belt. Structural styles and assemblages in each tectonic stage are similar to those in representative orogenic belts in South China, the Southern Appalachians, and the Alps.

  5. Topographically driven groundwater flow and the San Andreas heat flow paradox revisited

    USGS Publications Warehouse

    Saffer, D.M.; Bekins, B.A.; Hickman, S.

    2003-01-01

    Evidence for a weak San Andreas Fault includes (1) borehole heat flow measurements that show no evidence for a frictionally generated heat flow anomaly and (2) the inferred orientation of σ1 nearly perpendicular to the fault trace. Interpretations of the stress orientation data remain controversial, at least in close proximity to the fault, leading some researchers to hypothesize that the San Andreas Fault is, in fact, strong and that its thermal signature may be removed or redistributed by topographically driven groundwater flow in areas of rugged topography, such as typify the San Andreas Fault system. To evaluate this scenario, we use a steady state, two-dimensional model of coupled heat and fluid flow within cross sections oriented perpendicular to the fault and to the primary regional topography. Our results show that existing heat flow data near Parkfield, California, do not readily discriminate between the expected thermal signature of a strong fault and that of a weak fault. In contrast, for a wide range of groundwater flow scenarios in the Mojave Desert, models that include frictional heat generation along a strong fault are inconsistent with existing heat flow data, suggesting that the San Andreas Fault at this location is indeed weak. In both areas, comparison of modeling results and heat flow data suggests that advective redistribution of heat is minimal. The robust results for the Mojave region demonstrate that topographically driven groundwater flow, at least in two dimensions, is inadequate to obscure the frictionally generated heat flow anomaly from a strong fault. However, our results do not preclude the possibility of transient advective heat transport associated with earthquakes.

  6. Near-surface structural model for deformation associated with the February 7, 1812, New Madrid, Missouri, earthquake

    USGS Publications Warehouse

    Odum, J.K.; Stephenson, W.J.; Shedlock, K.M.; Pratt, T.L.

    1998-01-01

    The February 7, 1812, New Madrid, Missouri, earthquake (M [moment magnitude] 8) was the third and final large-magnitude event to rock the northern Mississippi Embayment during the winter of 1811-1812. Although ground shaking was so strong that it rang church bells, stopped clocks, buckled pavement, and rocked buildings up and down the eastern seaboard, little coseismic surface deformation exists today in the New Madrid area. The fault(s) that ruptured during this event have remained enigmatic. We have integrated geomorphic data documenting differential surficial deformation (supplemented by historical accounts of surficial deformation and earthquake-induced Mississippi River waterfalls and rapids) with the interpretation of existing and recently acquired seismic reflection data to develop a tectonic model of the near-surface structures in the New Madrid, Missouri, area. This model consists of two primary components: a north-northwest-trending thrust fault and a series of northeast-trending, strike-slip tear faults. We conclude that the Reelfoot fault is a thrust fault that is at least 30 km long. We also infer that tear faults in the near surface partitioned the hanging wall into subparallel blocks that have undergone differential displacement during episodes of faulting. The northeast-trending tear faults bound an area documented to have been uplifted at least 0.5 m during the February 7, 1812, earthquake. These faults also appear to bound changes in the surface density of epicenters within the modern seismicity, which occurs in the stepover zone of the left-stepping, right-lateral strike-slip fault system of the modern New Madrid seismic zone.

  7. Development and Testing of Protection Scheme for Renewable-Rich Distribution System

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Brahma, Sukumar; Ranade, Satish; Elkhatib, Mohamed E.

    As the penetration of renewables increases in distribution systems, and microgrids are conceived with high penetration of such generation connecting through inverters, fault location and protection of microgrids need consideration. This report proposes averaged models that help simulate fault scenarios in renewable-rich microgrids, models for locating faults in such microgrids, and comments on protection models that may be considered for microgrids. Simulation studies are reported to justify the models.

  8. Fault simulations for three-dimensional reservoir-geomechanical models with the extended finite element method

    NASA Astrophysics Data System (ADS)

    Prévost, Jean H.; Sukumar, N.

    2016-01-01

    Faults are geological entities with thicknesses several orders of magnitude smaller than the grid blocks typically used to discretize reservoir and/or over-under-burden geological formations. Introducing faults in a complex reservoir and/or geomechanical mesh therefore poses significant meshing difficulties. In this paper, we consider the strong-coupling of solid displacement and fluid pressure in a three-dimensional poro-mechanical (reservoir-geomechanical) model. We introduce faults in the mesh without meshing them explicitly, by using the extended finite element method (X-FEM) in which the nodes whose basis function support intersects the fault are enriched within the framework of partition of unity. For the geomechanics, the fault is treated as an internal displacement discontinuity that allows slipping to occur using a Mohr-Coulomb type criterion. For the reservoir, the fault is either an internal fluid flow conduit that allows fluid flow in the fault as well as to enter/leave the fault or is a barrier to flow (sealing fault). For internal fluid flow conduits, the continuous fluid pressure approximation admits a discontinuity in its normal derivative across the fault, whereas for an impermeable fault, the pressure approximation is discontinuous across the fault. Equal-order displacement and pressure approximations are used. Two- and three-dimensional benchmark computations are presented to verify the accuracy of the approach, and simulations are presented that reveal the influence of the rate of loading on the activation of faults.

  9. Fault classification method for the driving safety of electrified vehicles

    NASA Astrophysics Data System (ADS)

    Wanner, Daniel; Drugge, Lars; Stensson Trigell, Annika

    2014-05-01

    A fault classification method is proposed which has been applied to an electric vehicle. Potential faults in the different subsystems that can affect the vehicle directional stability were collected in a failure mode and effect analysis. Similar driveline faults were grouped together if they resembled each other with respect to their influence on the vehicle dynamic behaviour. The faults were physically modelled in a simulation environment before they were induced in a detailed vehicle model under normal driving conditions. A special focus was placed on faults in the driveline of electric vehicles employing in-wheel motors of the permanent magnet type. Several failures caused by mechanical and other faults were analysed as well. The fault classification method consists of a controllability ranking developed according to the functional safety standard ISO 26262. The controllability of a fault was determined with three parameters covering the influence of the longitudinal, lateral and yaw motion of the vehicle. The simulation results were analysed and the faults were classified according to their controllability using the proposed method. It was shown that the controllability decreased specifically with increasing lateral acceleration and increasing speed. The results for the electric driveline faults show that this trend cannot be generalised for all the faults, as the controllability deteriorated for some faults during manoeuvres with low lateral acceleration and low speed. The proposed method is generic and can be applied to various other types of road vehicles and faults.

  10. Fault interaction and stresses along broad oceanic transform zone: Tjörnes Fracture Zone, north Iceland

    NASA Astrophysics Data System (ADS)

    Homberg, C.; Bergerat, F.; Angelier, J.; Garcia, S.

    2010-02-01

    Transform motion along oceanic transforms generally occurs along narrow fault zones. Another class of oceanic transforms exists where the plate boundary is quite wide (˜100 km) and includes several subparallel faults. Using 2-D numerical modeling, we simulate the slip distribution and the crustal stress field geometry within such broad oceanic transforms (BOTs). We examine the possible configurations and evolution of such BOTs, where the plate boundary includes one, two, or three faults. Our experiments show that at any time during the development of the plate boundary, the plate motion is not distributed among all of the plate boundary faults but mainly occurs along a single master fault. The finite width of a BOT results from slip transfer through time, with locking of early faults, not from a permanent distribution of deformation over a wide area. Because of fault interaction, the stress field geometry within BOTs is more complex than that along classical oceanic transforms and includes stress deflections close to, but also away from, the major faults. Application of this modeling to the 100 km wide Tjörnes Fracture Zone (TFZ) in North Iceland, a major BOT of the Mid-Atlantic Ridge that includes three main faults, suggests that the Dalvik Fault and the Husavik-Flatey Fault developed first, the Grímsey Fault being the latest active structure. Since initiation of the TFZ, the Husavik-Flatey Fault has accommodated most of the plate motion and probably persists to the present as the main plate-boundary structure.

  11. Dynamic ruptures on faults of complex geometry: insights from numerical simulations, from large-scale curvature to small-scale fractal roughness

    NASA Astrophysics Data System (ADS)

    Ulrich, T.; Gabriel, A. A.

    2016-12-01

    The geometry of faults is subject to a large degree of uncertainty. Because buried structures are not directly observable, their complex shapes may only be inferred from surface traces, if available, or through geophysical methods such as reflection seismology. As a consequence, most studies aiming at assessing the potential hazard of faults rely on idealized fault models based on observable large-scale features. Yet real faults are known to be wavy at all scales, their geometric features presenting similar statistical properties from the micro to the regional scale. The influence of roughness on the earthquake rupture process is currently a driving topic in the computational seismology community. From the numerical point of view, rough-fault problems are challenging, requiring optimized codes that run efficiently on high-performance computing infrastructure while handling complex geometries. Physically, simulated ruptures hosted by rough faults appear much closer in complexity to source models inverted from observations. Incorporating fault geometry on all scales may thus be crucial to model realistic earthquake source processes and to estimate seismic hazard more accurately. In this study, we use the software package SeisSol, based on an ADER-Discontinuous Galerkin scheme, to run our numerical simulations. SeisSol solves the spontaneous dynamic earthquake rupture problem and the wave propagation problem with high-order accuracy in space and time, efficiently on large-scale machines. The influence of fault roughness on dynamic rupture style (e.g., onset of supershear transition, rupture front coherence, propagation of self-healing pulses) at different length scales is investigated by analyzing ruptures on faults of varying roughness spectral content. In particular, we investigate the existence of a minimum roughness length scale, in terms of rupture-inherent length scales, below which the rupture ceases to be sensitive to the roughness. Finally, the effect of fault geometry on near-field ground motions is considered. Our simulations feature classical linear slip-weakening friction on the fault and a viscoplastic constitutive model off the fault. The benefits of using a more elaborate fast-velocity-weakening friction law will also be considered.
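
    Self-similar fault roughness of the kind discussed above is commonly generated by spectral synthesis. A hedged Python sketch of a 1-D self-affine profile; the Hurst exponent, grid, cutoff wavelength, and rms amplitude are illustrative assumptions, not values from this study:

```python
import numpy as np

# Illustrative sketch (not the study's code): build a self-affine fault
# profile by imposing a power-law roughness spectrum P(k) ~ k^-(2H+1)
# with random phases, then removing wavelengths below a chosen minimum
# roughness length scale.

rng = np.random.default_rng(42)

n = 4096          # samples along strike
dx = 10.0         # spacing [m]
H = 0.8           # assumed Hurst exponent
lam_min = 200.0   # minimum roughness wavelength retained [m]

k = np.fft.rfftfreq(n, d=dx)            # spatial frequencies [1/m]
amp = np.zeros_like(k)
amp[1:] = k[1:] ** (-(2 * H + 1) / 2)   # amplitude spectrum ~ sqrt(P(k))
amp[k > 1.0 / lam_min] = 0.0            # cut roughness below the length scale
phase = rng.uniform(0, 2 * np.pi, size=k.size)

profile = np.fft.irfft(amp * np.exp(1j * phase), n=n)
profile /= profile.std()                # normalize...
profile *= 5.0                          # ...then scale to 5 m rms deviation

print("rms roughness: %.2f m over %.1f km" % (profile.std(), n * dx / 1000))
```

    Re-running with different Hurst exponents or cutoff wavelengths changes the spectral content of the fault surface, which is exactly the knob varied in the rupture experiments described above.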

  12. Deformation rates across the San Andreas Fault system, central California determined by geology and geodesy

    NASA Astrophysics Data System (ADS)

    Titus, Sarah J.

    The San Andreas fault system is a transpressional plate boundary characterized by sub-parallel dextral strike-slip faults separating internally deformed crustal blocks in central California. Both geodetic and geologic tools were used to understand the short- and long-term partitioning of deformation in both the crust and the lithospheric mantle across the plate boundary system. GPS data indicate that the short-term discrete deformation rate is ˜28 mm/yr for the central creeping segment of the San Andreas fault and increases to 33 mm/yr at +/-35 km from the fault. This gradient in deformation rates is interpreted to reflect elastic locking of the creeping segment at depth, distributed off-fault deformation, or some combination of these two mechanisms. These short-term fault-parallel deformation rates are slower than the expected geologic slip rate and the relative plate motion rate. Structural analysis of folds and transpressional kinematic modeling were used to quantify long-term distributed deformation adjacent to the Rinconada fault. Folding accommodates approximately 5 km of wrench deformation, which translates to a deformation rate of ˜1 mm/yr since the start of the Pliocene. Integration with discrete offset on the Rinconada fault indicates that this portion of the San Andreas fault system is approximately 80% strike-slip partitioned. This kinematic fold model can be applied to the entire San Andreas fault system and may explain some of the across-fault gradient in deformation rates recorded by the geodetic data. Petrologic examination of mantle xenoliths from the Coyote Lake basalt near the Calaveras fault was used to link crustal plate boundary deformation at the surface with models for the accommodation of deformation in the lithospheric mantle. 
Seismic anisotropy calculations based on xenolith petrofabrics suggest that an anisotropic mantle layer thickness of 35-85 km is required to explain the observed shear wave splitting delay times in central California. The available data are most consistent with models for a broad zone of distributed deformation in the lithospheric mantle.

  13. Synthetic Earthquake Statistics From Physical Fault Models for the Lower Rhine Embayment

    NASA Astrophysics Data System (ADS)

    Brietzke, G. B.; Hainzl, S.; Zöller, G.

    2012-04-01

    As of today, seismic risk and hazard estimates mostly use purely empirical, stochastic models of earthquake fault systems, tuned specifically to the vulnerable areas of interest. Although such models allow for reasonable risk estimates, they fail to provide a link between the observed seismicity and the underlying physical processes. Solving a state-of-the-art, fully dynamic description of all relevant physical processes in an earthquake fault system is likely not useful, since it comes with a large number of degrees of freedom, poorly constrained model parameters, and a huge computational effort. Quasi-static and quasi-dynamic physical fault simulators provide a compromise between physical completeness and computational affordability and aim at providing a link between basic physical concepts and the statistics of seismicity. Within the framework of quasi-static and quasi-dynamic earthquake simulators we investigate a model of the Lower Rhine Embayment (LRE) that is based upon seismological and geological data. We present and discuss statistics of the spatio-temporal behavior of the generated synthetic earthquake catalogs with respect to simplification (e.g., simple two-fault cases) as well as complication (e.g., hidden faults, geometric complexity, heterogeneities of constitutive parameters).
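
    To make the quasi-static simulator idea concrete (this is a generic cellular spring-block model in the Olami-Feder-Christensen spirit, not the LRE model of the abstract), a minimal Python sketch; grid size, dissipation, and event count are arbitrary assumptions:

```python
import numpy as np

# Minimal quasi-static fault simulator sketch: stress loads uniformly
# until the most-stressed cell fails; failing cells shed a fraction of
# their stress to neighbors, and cascades of failures form "events"
# whose sizes make up a synthetic catalog.

rng = np.random.default_rng(7)
n = 64
stress = rng.uniform(0, 1, size=(n, n))
alpha = 0.2        # fraction of stress passed to each of 4 neighbors
threshold = 1.0
catalog = []

for _ in range(2000):
    # quasi-static loading: advance until the most-stressed cell fails
    stress += threshold - stress.max()
    failing = stress >= threshold
    size = 0
    while failing.any():
        for i, j in np.argwhere(failing):
            s = stress[i, j]
            stress[i, j] = 0.0     # cell fails and drops its stress
            for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                a, b = i + di, j + dj
                if 0 <= a < n and 0 <= b < n:
                    stress[a, b] += alpha * s
            size += 1
        failing = stress >= threshold
    catalog.append(size)

print("events: %d, largest event size: %d" % (len(catalog), max(catalog)))
```

    Even this toy version produces a broad distribution of event sizes from uniform loading and a fixed failure rule, which is the sense in which such simulators link simple physics to catalog statistics.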

  14. Paleoearthquakes of the past ~2500 years at the Dead Mouse site, west-central Denali fault at the Nenana River, Alaska

    NASA Astrophysics Data System (ADS)

    Carlson, K.; Bemis, S. P.; Toke, N. A.; Bishop, B.; Taylor, P.

    2015-12-01

    Understanding the record of earthquakes along the Denali Fault (DF) is important for resource and infrastructure development and presents the potential to test earthquake rupture models in a tectonic environment with a larger ratio of event recurrence to geochronological uncertainty than well-studied plate boundary faults such as the San Andreas. However, the fault system is over 1200 km in length, and it has proven challenging to identify paleoseismic sites that preserve more than 2-3 paleoearthquakes (PEQs). In 2012 and 2015 we developed the 'Dead Mouse' site, providing the first long PEQ record west of the 2002 rupture extent. This site is located on the west-central segment of the DF near the southernmost intersection of the George Parks Hwy and the Nenana River (63.45285, -148.80249). We hand-excavated three fault-perpendicular trenches, including a fault-parallel extension that we excavated and recorded in a progressive sequence. We used Structure from Motion software to build mm-scale 3D models of the exposures. These models allowed us to produce orthorectified photomosaics for hand logging at 1:5 scale. We document evidence for 4-5 surface-rupturing earthquakes that have deformed the upper 2.5 m of stratigraphy. Age control from our preliminary 2012 investigation indicates these events occurred within the past ~2,500 years. Evidence for these events includes offset units, filled fissures, upward fault terminations, angular unconformities, and minor scarp-derived colluvial deposits. Multiple lines of evidence from the primary fault zones and fault splays are apparent for each event. We are testing these correlations by constructing a georeferenced 3D site model and running an additional 20 geochronology samples, including woody macrofossils, detrital and in-situ charcoal, and samples for post-IR IRSL, from positions that should closely constrain stratigraphic evidence for earthquakes.
We expect this long PEQ history to provide a critical test for future modeling of recurrence and fault segmentation on the DF.

  15. Product quality management based on CNC machine fault prognostics and diagnosis

    NASA Astrophysics Data System (ADS)

    Kozlov, A. M.; Al-jonid, Kh M.; Kozlov, A. A.; Antar, Sh D.

    2018-03-01

    This paper presents a new fault classification model and an integrated approach to fault diagnosis which combines ideas from Neuro-fuzzy Networks (NF), Dynamic Bayesian Networks (DBN) and the Particle Filtering (PF) algorithm on a single platform. In the new model, faults are categorized in two aspects, namely first- and second-degree faults. First-degree faults are instantaneous in nature, while second-degree faults are evolutional and appear as a developing phenomenon which starts from the initial stage, goes through the development stage and finally ends at the mature stage. These categories of faults have a lifetime which is inversely proportional to a machine tool's life according to a modified version of Taylor’s equation. For fault diagnosis, the framework consists of two phases: the first focuses on fault prognosis, which is done on-line, and the second is concerned with fault diagnosis, which depends on both off-line and on-line modules. In the first phase, a neuro-fuzzy predictor is used to decide whether to initiate condition-based maintenance (CBM) or fault diagnosis, based on the severity of a fault. The second phase only comes into action when an evolving fault goes beyond a critical threshold, called the CBM limit, and a command is issued for fault diagnosis. During this phase, DBN and PF techniques are used as an intelligent fault diagnosis system to determine the severity, time and location of the fault. The feasibility of this approach was tested in a simulation environment using a CNC machine as a case study, and the results were studied and analyzed.

  16. Distributed bearing fault diagnosis based on vibration analysis

    NASA Astrophysics Data System (ADS)

    Dolenc, Boštjan; Boškoski, Pavle; Juričić, Đani

    2016-01-01

    Distributed bearing faults appear under various circumstances, for example due to electroerosion or the progression of localized faults. Bearings with distributed faults tend to generate more complex vibration patterns than those with localized faults. Despite the frequent occurrence of such faults, their diagnosis has attracted limited attention. This paper examines a method for the diagnosis of distributed bearing faults employing vibration analysis. The vibration patterns generated are modeled by incorporating the geometrical imperfections of the bearing components. Comparing envelope spectra of vibration signals shows that one can distinguish between localized and distributed faults. Furthermore, a diagnostic procedure for the detection of distributed faults is proposed. This is evaluated on several bearings with naturally occurring distributed faults, which are compared with fault-free bearings and bearings with localized faults. It is shown experimentally that features extracted from vibrations in fault-free, localized-fault and distributed-fault conditions form clearly separable clusters, thus enabling diagnosis.
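
    The envelope spectrum used above is straightforward to compute. A hedged Python sketch on a synthetic signal; the sampling rate, fault frequency, and resonance are assumptions for illustration, not parameters from the paper:

```python
import numpy as np

# Illustrative envelope-spectrum computation (not the paper's code):
# bearing impacts at an assumed fault frequency amplitude-modulate a
# structural resonance; demodulating with the analytic-signal envelope
# recovers the fault frequency as a spectral line.

fs = 20000.0                  # sampling rate [Hz]
t = np.arange(0, 1.0, 1 / fs)
f_fault = 87.0                # assumed outer-race fault frequency [Hz]
f_res = 3000.0                # structural resonance excited by impacts [Hz]

signal = (1 + 0.8 * np.cos(2 * np.pi * f_fault * t)) * np.sin(2 * np.pi * f_res * t)
signal += 0.2 * np.random.default_rng(1).normal(size=t.size)

# Analytic signal via the FFT-based Hilbert transform, then its envelope
spec = np.fft.fft(signal)
h = np.zeros(t.size)
h[0] = 1.0
h[1:t.size // 2] = 2.0        # double positive frequencies
h[t.size // 2] = 1.0          # Nyquist bin unchanged (even length)
envelope = np.abs(np.fft.ifft(spec * h))

# Envelope spectrum: the modulation frequency appears as a clear line
env = envelope - envelope.mean()
env_spec = np.abs(np.fft.rfft(env)) / t.size
freqs = np.fft.rfftfreq(t.size, 1 / fs)
peak = freqs[np.argmax(env_spec[freqs < 500])]
print("dominant envelope frequency: %.1f Hz" % peak)
```

    For a localized fault the envelope spectrum shows sharp lines at the characteristic fault frequency and its harmonics; distributed faults smear this pattern, which is the contrast the diagnostic procedure above exploits.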

  17. Displacement-length relationship of normal faults in Acheron Fossae, Mars: new observations with HRSC.

    NASA Astrophysics Data System (ADS)

    Charalambakis, E.; Hauber, E.; Knapmeyer, M.; Grott, M.; Gwinner, K.

    2007-08-01

    For Earth, data sets and models have shown that for a fault loaded by a constant remote stress, the maximum displacement on the fault is linearly related to its length by d = gamma · l [1]. The scaling and structure are self-similar through time [1]. The displacement-length relationship can provide useful information about the tectonic regime. We intend to use it to estimate the seismic moment released during the formation of Martian fault systems and to improve the seismicity model [2]. Only a few data sets have been measured for extraterrestrial faults; one reason is the limited number of reliable topographic data sets. We used high-resolution Digital Elevation Models (DEMs) [3] derived from HRSC image data taken from Mars Express orbit 1437. This orbit covers an area in the Acheron Fossae region, a rift-like graben system north of Olympus Mons with a "banana"-shaped topography [4]. It has a fault trend which runs approximately WNW-ESE. With an interactive IDL-based software tool [5] we measured the fault length and the vertical offset for 34 faults. We evaluated the height profiles by plotting the fault lengths l vs. their observed maximum displacement (dmax model). Additionally, we computed the maximum displacement of an elliptical fault scarp whose plane has the same area as in the observed case (elliptical model). The integration over the entire fault length necessary for the computation of the area suppresses the "noise" introduced by local topographic effects such as erosion or cratering. Fault planes dipping 60 degrees are usually assumed for Mars [e.g., 6], and even shallower dips have been found for normal fault planes [7]. The dip angle is used to compute displacement from vertical offset via d = h/sin(α), where h is the observed topographic step height and α is the fault dip angle. If a fault dip of 30 degrees is assumed instead, the computed displacement differs by about 40% from that for a 60-degree dip.
    Depending on the data quality, especially the lighting conditions in the region, different errors can arise in determining the various values. Based on our experience, we estimate that the error in measuring the length of a fault is smaller than 10% and that the measurement error of the offset is smaller than 5%. Furthermore, the horizontal resolution of the HRSC images is 12.5 m/pixel or 25 m/pixel, and that of the DEMs derived from HRSC images is 50 m/pixel because of re-sampling. This means that image resolution does not introduce a significant error for fault lengths in the kilometer range. For Mars it is known that linkage is an essential process in the growth of fault populations [8]. We obtained the d/l values from selected examples of faults that were connected via a relay ramp. The error of ignoring an existing fault linkage is 20% to 50% if the elliptical fault model is used and 30% to 50% if only the dmax value is used to determine d/l. This shows an advantage of the elliptical model. The error increases if more faults are linked, because the underestimation of the relevant length gets worse the longer the linked system is. We obtained a value of gamma = d/l of about 2 × 10^-2 for the elliptical model and a value of approximately 2.7 × 10^-2 for the dmax model. The data show a relatively large scatter, but they can be compared to data from terrestrial faults (d/l = ~1 × 10^-2 to 5 × 10^-2; [9] and references therein). In a first inspection of the Acheron Fossae 2 region in orbit 1437 we could confirm our first observations [10]. If we consider fault linkage, the d/l values shift towards lower d/l ratios, since linkage means that d remains essentially constant while l increases significantly. We will continue to measure other faults and obtain values for linked faults and relay ramps. References: [1] Cowie, P. A. and Scholz, C. H. (1992) JSG, 14, 1133-1148. [2] Knapmeyer, M. et al. (2006) JGR, 111, E11006. [3] Neukum, G. et al. (2004) ESA SP-1240, 17-35.
[4] Kronberg, P. et al. (2007) J. Geophys. Res., 112, E04005, doi:10.1029/2006JE002780. [5] Hauber, E. et al. (2007) LPSC, XXXVIII, abstract 1338. [6] Wilkins, S. J. et al. (2002) GRL, 29, 1884, doi: 10.1029/2002GL015391. [7] Fueten, F. et al. (2007) LPSC, XXXVIII, abstract 1388. [8] Schultz, R. A. (2000) Tectonophysics, 316, 169-193. [9] Schultz, R. A. et al. (2006) JSG, 28, 2182-2193. [10] Hauber, E. et al. (2007) 7th Mars Conference, submitted.
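The offset-to-displacement conversion and the displacement-length ratio described in the record above reduce to two one-line formulas. A minimal sketch (the function names and the 500 m step are ours, purely for illustration):

```python
import math

def displacement_from_offset(h, dip_deg):
    """Dip-parallel displacement d recovered from a vertical offset h
    on a fault plane dipping dip_deg: d = h / sin(dip)."""
    return h / math.sin(math.radians(dip_deg))

def displacement_length_ratio(d_max, length):
    """gamma = d_max / l, the displacement-length scaling ratio."""
    return d_max / length

# A hypothetical 500 m topographic step interpreted at two assumed dips:
d_60 = displacement_from_offset(500.0, 60.0)   # ~577 m
d_30 = displacement_from_offset(500.0, 30.0)   # ~1000 m
```

Note that the shallower the assumed dip, the larger the inferred displacement, which is why the assumed 60° dip matters for the resulting gamma values.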

  18. Final Project Report. Scalable fault tolerance runtime technology for petascale computers

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Krishnamoorthy, Sriram; Sadayappan, P

    With the massive number of components comprising the forthcoming petascale computer systems, hardware failures will be routinely encountered during execution of large-scale applications. Due to the multidisciplinary, multiresolution, and multiscale nature of scientific problems that drive the demand for high-end systems, applications place increasingly differing demands on the system resources: disk, network, memory, and CPU. In addition to MPI, future applications are expected to use advanced programming models such as those developed under the DARPA HPCS program as well as existing global address space programming models such as Global Arrays, UPC, and Co-Array Fortran. While there has been a considerable amount of work on fault-tolerant MPI, with a number of strategies and extensions for fault tolerance proposed, virtually none of the advanced models proposed for emerging petascale systems is currently fault aware. To achieve fault tolerance, underlying runtime and OS technologies able to scale to the petascale level must be developed. This project evaluated a range of runtime techniques for fault tolerance for advanced programming models.

  19. Comparative study of superconducting fault current limiter both for LCC-HVDC and VSC-HVDC systems

    NASA Astrophysics Data System (ADS)

    Lee, Jong-Geon; Khan, Umer Amir; Lim, Sung-Woo; Shin, Woo-ju; Seo, In-Jin; Lee, Bang-Wook

    2015-11-01

    High Voltage Direct Current (HVDC) systems have been evaluated as the optimum solution for renewable energy transmission and long-distance power grid connections. In spite of their various advantages, HVDC systems are still regarded as less reliable than AC systems due to their vulnerability to power system faults. Furthermore, unlike in AC systems, optimum protection and switching devices have not yet been fully developed. Therefore, in order to enhance the reliability of HVDC systems, power system faults must be mitigated and reliable fault current limiting and switching devices developed. In this paper, in order to mitigate HVDC faults in both Line Commutated Converter HVDC (LCC-HVDC) and Voltage Source Converter HVDC (VSC-HVDC) systems, the application of a resistive superconducting fault current limiter (SFCL), known as an optimum solution to cope with power system faults, was considered. First, point-to-point simulation models of both LCC-HVDC and VSC-HVDC systems were developed. From the designed models, fault current characteristics under fault conditions were analyzed. Second, the application of an SFCL to each type of HVDC system was studied and the resulting fault current characteristics were compared. Consequently, it was deduced that applying an AC-SFCL to a point-to-point LCC-HVDC system is a desirable solution to mitigate fault current stresses and to prevent commutation failure in HVDC electric power systems interconnected with an AC grid.
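The limiting effect of a resistive SFCL can be illustrated with a toy series DC equivalent circuit: once the superconductor quenches, its resistance is inserted in series with the system impedance and divides down the prospective fault current. This is a minimal sketch with made-up numbers, not the paper's converter-level simulation model:

```python
def fault_current(v_dc, z_system, r_sfcl=0.0):
    """Quasi-steady fault current in a simple series DC equivalent:
    source voltage divided by the system impedance plus the quenched
    SFCL resistance (r_sfcl = 0 models the unprotected case)."""
    return v_dc / (z_system + r_sfcl)

# Hypothetical 500 kV link with 5 ohm equivalent impedance:
i_prospective = fault_current(500e3, 5.0)             # unlimited: 100 kA
i_limited = fault_current(500e3, 5.0, r_sfcl=20.0)    # quenched SFCL: 20 kA
```

The real transient behavior (quench dynamics, converter control, commutation) is what the paper's simulation models capture; this only shows the steady-state limiting ratio.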

  20. Implications of the earthquake cycle for inferring fault locking on the Cascadia megathrust

    USGS Publications Warehouse

    Pollitz, Fred; Evans, Eileen

    2017-01-01

    GPS velocity fields in the Western US have been interpreted with various physical models of the lithosphere-asthenosphere system: (1) time-independent block models; (2) time-dependent viscoelastic-cycle models, where deformation is driven by viscoelastic relaxation of the lower crust and upper mantle from past faulting events; (3) viscoelastic block models, a time-dependent variation of the block model. All three models are generally driven by a combination of loading on locked faults and (aseismic) fault creep. Here we construct viscoelastic block models and viscoelastic-cycle models for the Western US, focusing on the Pacific Northwest and the earthquake cycle on the Cascadia megathrust. In the viscoelastic block model, the western US is divided into blocks selected from an initial set of 137 microplates using the method of Total Variation Regularization, allowing potential trade-offs between faulting and megathrust coupling to be determined algorithmically from GPS observations. Fault geometry, slip rate, and locking rates (i.e. the locking fraction times the long-term slip rate) are estimated simultaneously within the TVR block model. For a range of mantle asthenosphere viscosity (4.4 × 10^18 to 3.6 × 10^20 Pa s) we find that fault locking on the megathrust is concentrated in the uppermost 20 km in depth, and a locking rate contour line of 30 mm yr^-1 extends deepest beneath the Olympic Peninsula, characteristics similar to previous time-independent block model results. These results are corroborated by viscoelastic-cycle modelling. The average locking rate required to fit the GPS velocity field depends on mantle viscosity, being higher the lower the viscosity. Moreover, for viscosity ≲ 10^20 Pa s, the amount of inferred locking is higher than that obtained using a time-independent block model. 
This suggests that time-dependent models for a range of admissible viscosity structures could refine our knowledge of the locking distribution and its epistemic uncertainty.

  1. Evaluation of Subgrid-Scale Models for Large Eddy Simulation of Compressible Flows

    NASA Technical Reports Server (NTRS)

    Blaisdell, Gregory A.

    1996-01-01

    The objective of this project was to evaluate and develop subgrid-scale (SGS) turbulence models for large eddy simulations (LES) of compressible flows. During the first phase of the project, results from LES using the dynamic SGS model were compared to those of direct numerical simulations (DNS) of compressible homogeneous turbulence. The second phase of the project involved implementing the dynamic SGS model in a NASA code for simulating supersonic flow over a flat plate. The model has been successfully coded and a series of simulations has been completed. One of the major findings of the work is that numerical errors associated with the finite differencing scheme used in the code can overwhelm the SGS model and adversely affect the LES results. Attached to this overview are three submitted papers: 'Evaluation of the Dynamic Model for Simulations of Compressible Decaying Isotropic Turbulence'; 'The effect of the formulation of nonlinear terms on aliasing errors in spectral methods'; and 'Large-Eddy Simulation of a Spatially Evolving Compressible Boundary Layer Flow'.
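The aliasing errors from nonlinear terms mentioned in the second attached paper can be demonstrated in a few lines: squaring a single Fourier mode k produces the mode 2k, which, if it exceeds the grid's Nyquist mode, folds back onto a wrong wavenumber. A generic spectral-method illustration (not code from the NASA study), including 3/2-rule dealiasing by zero-padding:

```python
import numpy as np

N = 32
x = np.arange(N) * 2 * np.pi / N
k = 10                       # 2k = 20 exceeds the Nyquist mode N/2 = 16
u = np.cos(k * x)

# Squaring u creates mode 2k = 20; on the N-point grid it aliases to N - 2k = 12.
spectrum = np.abs(np.fft.rfft(u * u)) / N

# 3/2-rule dealiasing: zero-pad to M = 3N/2 points, form the product on the
# fine grid, then keep only the first N/2 + 1 modes of the result.
M = 3 * N // 2
U_pad = np.zeros(M // 2 + 1, dtype=complex)
U_pad[: N // 2 + 1] = np.fft.rfft(u)
u_fine = np.fft.irfft(U_pad, n=M) * (M / N)
spec_dealiased = np.abs(np.fft.rfft(u_fine * u_fine))[: N // 2 + 1] / M
```

Here `spectrum[12]` carries spurious aliased energy, while `spec_dealiased[12]` is essentially zero: the padded grid resolves mode 20, and truncation then discards it cleanly instead of folding it back.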

  2. Effects of Strike-Slip Fault Segmentation on Earthquake Energy and Seismic Hazard

    NASA Astrophysics Data System (ADS)

    Madden, E. H.; Cooke, M. L.; Savage, H. M.; McBeck, J.

    2014-12-01

    Many major strike-slip faults are segmented along strike, including those along plate boundaries in California and Turkey. Failure of distinct fault segments at depth may be the source of multiple pulses of seismic radiation observed for single earthquakes. However, how and when segmentation affects fault behavior and energy release is the basis of many outstanding questions related to the physics of faulting and seismic hazard. These include the probability for a single earthquake to rupture multiple fault segments and the effects of segmentation on earthquake magnitude, radiated seismic energy, and ground motions. Using numerical models, we quantify components of the earthquake energy budget, including the tectonic work acting externally on the system, the energy of internal rock strain, the energy required to overcome fault strength and initiate slip, the energy required to overcome frictional resistance during slip, and the radiated seismic energy. We compare the energy budgets of systems of two en echelon fault segments with various spacing that include both releasing and restraining steps. First, we allow the fault segments to fail simultaneously and capture the effects of segmentation geometry on the earthquake energy budget and on the efficiency with which applied displacement is accommodated. Assuming that higher efficiency correlates with higher probability for a single, larger earthquake, this approach has utility for assessing the seismic hazard of segmented faults. Second, we nucleate slip along a weak portion of one fault segment and let the quasi-static rupture propagate across the system. Allowing fractures to form near faults in these models shows that damage develops within releasing steps and promotes slip along the second fault, while damage develops outside of restraining steps and can prohibit slip along the second fault. 
Work is consumed in both the propagation of and frictional slip along these new fractures, impacting the energy available for further slip and for subsequent earthquakes. This suite of models reveals that efficiency may be a useful tool for determining the relative seismic hazard of different segmented fault systems, while accounting for coseismic damage zone production is critical in assessing fault interactions and the associated energy budgets of specific systems.
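The energy bookkeeping described above can be written as a simple closed balance: radiated energy is whatever external tectonic work remains after the internal and dissipative sinks are paid. A schematic sketch (term names and the notion of "efficiency" as the radiated fraction are our simplification of the abstract's budget, with illustrative joule values):

```python
def radiated_energy(w_tectonic, d_internal_strain, w_strength, w_friction,
                    w_fracture=0.0):
    """Radiated seismic energy as the residual of a closed energy budget:
    external tectonic work minus the change in internal strain energy and
    the work spent overcoming fault strength, frictional resistance, and
    (optionally) off-fault fracture propagation. All terms in joules."""
    return w_tectonic - d_internal_strain - w_strength - w_friction - w_fracture

def seismic_efficiency(e_radiated, w_tectonic):
    """Fraction of the external work that leaves the system as radiation."""
    return e_radiated / w_tectonic

# Illustrative numbers only: fracture work in a damage zone reduces radiation.
e_r = radiated_energy(100.0, 40.0, 10.0, 45.0)                  # 5.0 J
e_r_damaged = radiated_energy(100.0, 40.0, 10.0, 45.0, w_fracture=3.0)
```

This mirrors the abstract's point: work consumed in creating and slipping new fractures directly reduces the energy available for radiation and subsequent slip.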

  3. Towards a Fault-based SHA in the Southern Upper Rhine Graben

    NASA Astrophysics Data System (ADS)

    Baize, Stéphane; Reicherter, Klaus; Thomas, Jessica; Chartier, Thomas; Cushing, Edward Marc

    2016-04-01

    A brief look at a seismic map of the Upper Rhine Graben area (say between Strasbourg and Basel) reveals that the region is seismically active. The area has been hit recently by shallow and moderate quakes but, historically, strong quakes damaged and devastated populated zones. Several authors previously suggested, through preliminary geomorphological and geophysical studies, that active faults could be traced along the eastern margin of the graben. Thus, fault-based PSHA (probabilistic seismic hazard assessment) studies should be developed. Nevertheless, most of the input data in fault-based PSHA models are highly uncertain, based upon sparse or hypothetical data. Geophysical and geological data document the presence of post-Tertiary westward-dipping faults in the area. However, our first investigations suggest that the available surface fault maps do not provide a reliable record of Quaternary fault traces. Slip rate values that can currently be used in fault-PSHA models are based on regional stratigraphic data, but these include neither detailed dating nor clear base surface contours. Several hints of fault activity do exist, and we now have relevant tools and techniques to figure out the activity of the faults of concern. Our preliminary analyses suggest that LiDAR topography can adequately image the fault segments and, thanks to detailed geomorphological analysis, these data allow tracking cumulative fault offsets. Because the fault models must therefore be considered highly uncertain, our project for the next 3 years is to acquire and analyze these accurate topographical data, to trace the active faults, and to determine slip rates through dating of relevant features. Eventually, we plan to find a key site for a paleoseismological trench, because this approach has proved its worth in the Graben, both to the north (Worms and Strasbourg) and to the south (Basel). This would be done in order to prove definitively whether the faults ruptured the ground surface during the Quaternary, and to determine key fault parameters such as the magnitude and age of large events.

  4. The balance of frictional heat production, thermal pressurization, and slip resistance on exhumed mid-crustal faults (Adamello batholith, Southern Italian Alps)

    NASA Astrophysics Data System (ADS)

    Griffith, W. A.; di Toro, G.; Pollard, D. D.

    2005-12-01

    Exhumed faults cutting the Adamello batholith (Italian Alps) were active ca. 30 Ma at seismogenic depths of 9-11 km. The faults exploited preexisting joints and can be classified into three groups containing: (A) only cataclasite (a fault rock with no evidence of melting), (B) cataclasite and pseudotachylyte (solidified friction-induced melts produced during earthquakes), and (C) only pseudotachylyte. The majority of pseudotachylyte-bearing faults in this outcrop overprint pre-existing cataclasites (Type B), suggesting a transition between slip styles; however, some faults exhibiting pseudotachylyte and no cataclasite (Type C) display evidence of only one episode of slip. Faults of Type A never transitioned to frictional melting. We attempt to compare faults of type A, B, and C in terms of a simple one-dimensional thermo-mechanical model introduced by Lachenbruch (1980) describing the interaction between frictional heating, pore fluid pressure, and shear resistance during slip. The interaction of these three parameters influences how much elastic strain is relieved during an earthquake. For a conceptualized fault zone of finite thickness, the interplay between the shear resistance, heat production, and pore fluid pressure can be expressed as a non-linear partial differential equation relating these processes to the strain rate acting within a fault zone during a slip event. The behavior of fault zones in terms of these coupled processes during an earthquake depends on a number of parameters, such as thickness of the principal slipping zone, net coseismic slip, fault rock permeability and thermal diffusivity. Ideally, the governing equations should be testable on real fault zones if the requisite parameters can be measured or reasonably estimated. The model can be further simplified if the peak temperature reached during slip and the coseismic slip rate can be constrained. 
The contrasting nature of slip on the three Adamello fault types highlights (1) important differences between slip processes on cataclastic and melt-producing faults at depth and (2) some limitations of applicability of such models to real faults.

  5. Modeling and characterization of partially inserted electrical connector faults

    NASA Astrophysics Data System (ADS)

    Tokgöz, Çağatay; Dardona, Sameh; Soldner, Nicholas C.; Wheeler, Kevin R.

    2016-03-01

    Faults within electrical connectors are prominent in avionics systems due to improper installation, corrosion, aging, and strained harnesses. These faults usually start off as undetectable with existing inspection techniques and increase in magnitude during the component lifetime. Detection and modeling of these faults are significantly more challenging than hard failures such as open and short circuits. Hence, enabling the capability to locate and characterize the precursors of these faults is critical for timely preventive maintenance and mitigation well before hard failures occur. In this paper, an electrical connector model based on a two-level nonlinear least squares approach is proposed. The connector is first characterized as a transmission line, broken into key components such as the pin, socket, and connector halves. Then, the fact that the resonance frequencies of the connector shift as insertion depth changes from a fully inserted to a barely touching contact is exploited. The model precisely captures these shifts by varying only two length parameters. It is demonstrated that the model accurately characterizes a partially inserted connector.

  6. Model uncertainties of the 2002 update of California seismic hazard maps

    USGS Publications Warehouse

    Cao, T.; Petersen, M.D.; Frankel, A.D.

    2005-01-01

    In this article we present and explore the source and ground-motion model uncertainty and parametric sensitivity for the 2002 update of the California probabilistic seismic hazard maps. Our approach is to implement a Monte Carlo simulation that allows for independent sampling from fault to fault in each simulation. The source-distance dependent characteristics of the uncertainty maps of seismic hazard are explained by the fundamental uncertainty patterns from four basic test cases, in which the uncertainties from one-fault and two-fault systems are studied in detail. The California coefficient of variation (COV, ratio of the standard deviation to the mean) map for peak ground acceleration (10% probability of exceedance in 50 years) shows lower values (0.1-0.15) along the San Andreas fault system and other class A faults than along class B faults (0.2-0.3). High COV values (0.4-0.6) are found around the Garlock, Anacapa-Dume, and Palos Verdes faults in southern California and around the Maacama fault and Cascadia subduction zone in northern California.
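The COV statistic at the heart of the mapping above is simply the standard deviation over the mean of the hazard values produced by the Monte Carlo draws. A minimal sketch (the lognormal toy sampler and its parameters are ours, not the study's hazard model):

```python
import numpy as np

def cov(values):
    """Coefficient of variation: standard deviation over mean."""
    v = np.asarray(values, dtype=float)
    return v.std() / v.mean()

# Toy Monte Carlo: draw a peak-ground-acceleration value per simulation, as if
# source parameters were resampled independently each run, then summarize the
# spread at this map point as a COV.
rng = np.random.default_rng(0)
pga_samples = rng.lognormal(mean=np.log(0.4), sigma=0.25, size=5000)
cov_pga = cov(pga_samples)
```

Repeating this at every grid point, with fault parameters sampled independently fault by fault, yields a COV map like the one described.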

  7. Fault Rupture Model of the 2016 Gyeongju, South Korea, Earthquake and Its Implication for the Underground Fault System

    NASA Astrophysics Data System (ADS)

    Uchide, Takahiko; Song, Seok Goo

    2018-03-01

    The 2016 Gyeongju earthquake (ML 5.8) was the largest instrumentally recorded inland event in South Korea. It occurred in the southeast of the Korean Peninsula and was preceded by a large ML 5.1 foreshock. The aftershock seismicity data indicate that these earthquakes occurred on two closely collocated parallel faults that are oblique to the surface trace of the Yangsan fault. We investigate the rupture properties of these earthquakes using finite-fault slip inversion analyses. The obtained models indicate that the ruptures propagated NNE-ward and SSW-ward for the main shock and the large foreshock, respectively. This indicates that these earthquakes occurred on right-step faults and were initiated around a fault jog. The stress drops were up to 62 and 43 MPa for the main shock and the largest foreshock, respectively. These high stress drops imply high strength excess, which may be overcome by the stress concentration around the fault jog.
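The stress drops quoted above are typically derived from the seismic moment and an estimated source dimension. A sketch of the standard circular-crack relation (the 1.5 km radius is an illustrative number we chose, not the paper's inversion result):

```python
def moment_from_mw(mw):
    """Seismic moment M0 (N·m) from moment magnitude (Hanks & Kanamori):
    M0 = 10^(1.5*Mw + 9.1)."""
    return 10.0 ** (1.5 * mw + 9.1)

def stress_drop_circular(m0, radius):
    """Static stress drop (Pa) for a circular crack of the given radius (m),
    Eshelby's result: delta_sigma = (7/16) * M0 / r^3."""
    return 7.0 * m0 / (16.0 * radius ** 3)

# A Mw 5.8 event confined to a small source radius implies a high stress drop:
m0 = moment_from_mw(5.8)
dsigma_mpa = stress_drop_circular(m0, radius=1500.0) / 1e6
```

The cubic dependence on radius is why compact ruptures near a fault jog, as inferred here, can yield stress drops of tens of MPa.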

  8. Phase response curves for models of earthquake fault dynamics

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Franović, Igor, E-mail: franovic@ipb.ac.rs; Kostić, Srdjan; Perc, Matjaž

    We systematically study effects of external perturbations on models describing earthquake fault dynamics. The latter are based on the framework of the Burridge-Knopoff spring-block system, including the cases of a simple mono-block fault, as well as the paradigmatic complex faults made up of two identical or distinct blocks. The blocks exhibit relaxation oscillations, which are representative of the stick-slip behavior typical for earthquake dynamics. Our analysis is carried out by determining the phase response curves of first and second order. For a mono-block fault, we consider the impact of a single and of two successive pulse perturbations, further demonstrating how the profile of phase response curves depends on the fault parameters. For a homogeneous two-block fault, our focus is on the scenario where each of the blocks is influenced by a single pulse, whereas for heterogeneous faults, we analyze how the response of the system depends on whether the stimulus is applied to the block having a shorter or a longer oscillation period.
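The first-order phase response curve measures the relative change in oscillation period caused by a pulse delivered at a given phase. For intuition, it can be computed in closed form for a drastically simplified stick-slip caricature (not the Burridge-Knopoff system of the paper): stress loads linearly and resets to zero when it reaches a failure threshold, and a pulse adds stress instantaneously:

```python
def prc_first_order(phase, pulse, threshold=1.0, rate=1.0):
    """First-order phase response of a toy stick-slip relaxation oscillator.
    Stress s(t) = rate * t resets at s = threshold (unperturbed period
    T0 = threshold / rate). A stress pulse applied at phase in [0, 1)
    advances the next slip event; returns (T0 - T_perturbed) / T0."""
    t0 = threshold / rate
    t_pulse = phase * t0
    stress = rate * t_pulse + pulse
    if stress >= threshold:
        t_perturbed = t_pulse          # pulse triggers slip immediately
    else:
        t_perturbed = t_pulse + (threshold - stress) / rate
    return (t0 - t_perturbed) / t0
```

In this caricature the PRC is flat (equal to pulse/threshold) until the pulse becomes suprathreshold, where it rises as 1 - phase; the paper's interest is precisely in how richer friction laws and block coupling deform this profile.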

  9. Long-term strain accommodation in the eastern margin of the Tibetan Plateau: Insights from 3D thermo-kinematic modelling

    NASA Astrophysics Data System (ADS)

    Tian, Y.; Vermeesch, P.; Carter, A.; Zhang, P.

    2017-12-01

    The Cenozoic deformation of the Tibetan Plateau has been dominated by the north-south collision between the Indian and Eurasian continents since Early Cenozoic time. Numerous lines of evidence suggest that the plateau has expanded outward since the collision, forming a diverging stress regime from the collisional belt to the plateau margins. When and how this expanding strain propagated to the current plateau margins has been hotly debated. This work presents results of an on-going project for understanding the long-term strain history along the Longmen Shan, the eastern margin of the Tibetan Plateau, where deformation is controlled by three parallel NW-dipping faults. From foreland (southeast) to hinterland (northwest), the main faults are the Guanxian-Anxian fault, the Yingxiu-Beichuan fault, and the Wenchuan-Maowen fault. The exhumation pattern constrained by one-dimensional modelling of a compilation of published and unpublished thermochronometry data shows a strong structural control, with the highest amounts of exhumation in the hinterland region, a pattern that is characteristic of out-of-sequence reverse faulting (Tian et al., 2013, Tectonics, doi:10.1002/tect.20043; Tian et al., 2015, Geophys. Res. Lett., doi:10.1002/2014GL062383). Three-dimensional thermo-kinematic modelling of these data suggests that the Longmen Shan faults are listric in geometry, merging into a detachment at a depth of 20-30 km. The models require a marked decrease in slip rate along the frontal Yingxiu-Beichuan fault in the late Miocene, whereas the slip rate along the hinterland Wenchuan-Maowen fault has remained relatively constant since early Miocene time. The long-term pattern of strain accommodation revealed by the three-dimensional thermo-kinematic modelling has important implications for distinguishing among the geodynamic models proposed to explain the eastward growth of the Tibetan Plateau.

  10. Fault kinematics and active tectonics of the Sabah margin: Insights from the 2015, Mw 6.0, Mt. Kinabalu earthquake

    NASA Astrophysics Data System (ADS)

    Wang, Y.; Wei, S.; Tapponnier, P.; Wang, X.; Lindsey, E.; Sieh, K.

    2016-12-01

    A gravity-driven "Mega-Landslide" model has been invoked to explain the shortening seen offshore Sabah and Brunei in oil-company seismic data. Although this model is considered to account simultaneously for recent folding at the edge of the submarine NW Sabah trough and normal faulting on the Sabah shelf, such a gravity-driven model is not consistent with geodetic data or critical examination of extant structural restorations. The rupture that produced the 2015 Mw6.0 Mt. Kinabalu earthquake is also inconsistent with the gravity-driven model. Our teleseismic analysis shows that the centroid depth of that earthquake's mainshock was 13 to 14 km, and its favored fault-plane solution is a 60° NW-dipping normal fault. Our finite-rupture model exhibits major fault slip between 5 and 15 km depth, in keeping with our InSAR analysis, which shows no appreciable surface deformation. Both the hypocentral depth and the depth of principal slip are far too deep to be explained by gravity-driven failure, as such a model would predict a listric normal fault connecting at a much shallower depth with a very gentle detachment. Our regional mapping of tectonic landforms also suggests the recent rupture is part of a 200-km-long system of narrowly distributed active extension in northern Sabah. Taken together, the nature of the 2015 rupture, the belt of active normal faults, and structural considerations indicate that active tectonic shortening plays the leading role in controlling the overall deformation of northern Sabah and that deep-seated, onland normal faulting likely results from an abrupt change in the dip-angle of the collision interface beneath the Sabah accretionary prism.

  11. Seismological analyses of the 2010 March 11, Pichilemu, Chile Mw 7.0 and Mw 6.9 coastal intraplate earthquakes

    USGS Publications Warehouse

    Ruiz, Javier A.; Hayes, Gavin P.; Carrizo, Daniel; Kanamori, Hiroo; Socquet, Anne; Comte, Diana

    2014-01-01

    On 2010 March 11, a sequence of large, shallow continental crust earthquakes shook central Chile. Two normal faulting events with magnitudes around Mw 7.0 and Mw 6.9 occurred just 15 min apart, located near the town of Pichilemu. These kinds of large intraplate, inland crustal earthquakes are rare above the Chilean subduction zone, and it is important to better understand their relationship with the 2010 February 27, Mw 8.8, Maule earthquake, which ruptured the adjacent megathrust plate boundary. We present a broad seismological analysis of these earthquakes by using both teleseismic and regional data. We compute seismic moment tensors for both events via a W-phase inversion, and test sensitivities to various inversion parameters in order to assess the stability of the solutions. The first event, at 14 hr 39 min GMT, is well constrained, displaying a fault plane with strike of N145°E, and a preferred dip angle of 55°SW, consistent with the trend of aftershock locations and other published results. Teleseismic finite-fault inversions for this event show a large slip zone along the southern part of the fault, correlating well with the reported spatial density of aftershocks. The second earthquake (14 hr 55 min GMT) appears to have ruptured a fault branching southward from the previous ruptured fault, within the hanging wall of the first event. Modelling seismograms at regional to teleseismic distances (Δ > 10°) is quite challenging because the observed seismic wave fields of both events overlap, increasing apparent complexity for the second earthquake. We perform both point- and extended-source inversions at regional and teleseismic distances, assessing model sensitivities resulting from variations in fault orientation, dimension, and hypocentre location. Results show that the focal mechanism for the second event features a steeper dip angle and a strike rotated slightly clockwise with respect to the previous event. 
This kind of geological fault configuration, with secondary rupture in the hanging wall of a large normal fault, is commonly observed in extensional geological regimes. We propose that both earthquakes form part of a typical normal fault diverging splay, where the secondary fault connects to the main fault at depth. To obtain more information on the spatial and temporal details of slip for both events, we gathered near-fault seismological and geodetic data. Through forward modelling of near-fault synthetic seismograms we build a kinematic k^-2 earthquake source model with spatially distributed slip on the fault that, to first order, explains both the coseismic static displacement GPS vectors and short-period seismometer observations at the closest sites. As expected, the results for the first event agree with the focal mechanism derived from teleseismic modelling, with a magnitude Mw 6.97. Similarly, near-fault modelling for the second event suggests rupture along a normal fault, Mw 6.90, characterized by a steeper dip angle (dip = 74°) and a strike rotated clockwise (strike = 155°) with respect to the previous event.
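Comparing the two Pichilemu fault-plane solutions amounts to comparing the orientations of two planes given their strike and dip. A small sketch of the standard conversion to a unit normal vector (Aki & Richards convention, north-east-down coordinates; the helper names are ours):

```python
import math

def fault_normal(strike_deg, dip_deg):
    """Unit normal of a fault plane in (north, east, down) coordinates,
    with dip measured down to the right of the strike direction:
    n = (-sin(dip) sin(strike), sin(dip) cos(strike), -cos(dip))."""
    s, d = math.radians(strike_deg), math.radians(dip_deg)
    return (-math.sin(d) * math.sin(s),
            math.sin(d) * math.cos(s),
            -math.cos(d))

def plane_angle_deg(n1, n2):
    """Angle between two fault planes, via their unit normals."""
    dot = abs(sum(a * b for a, b in zip(n1, n2)))
    return math.degrees(math.acos(min(1.0, dot)))

# The two events' preferred planes (strike 145, dip 55) vs (strike 155, dip 74):
angle = plane_angle_deg(fault_normal(145.0, 55.0), fault_normal(155.0, 74.0))
```

Such a geometric comparison makes the "steeper, slightly clockwise-rotated" relationship between the two ruptures quantitative.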

  12. PV Systems Reliability Final Technical Report: Ground Fault Detection

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lavrova, Olga; Flicker, Jack David; Johnson, Jay

    We have examined ground faults in photovoltaic (PV) arrays and the efficacy of fuses, residual current detectors (RCD), current sense monitoring/relays (CSM), isolation/insulation (Riso) monitoring, and Ground Fault Detection and Isolation (GFID), using simulations based on a SPICE (Simulation Program with Integrated Circuit Emphasis) ground fault circuit model, experimental ground faults installed on real arrays, and theoretical equations.
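Of the detection schemes listed, residual current detection is the simplest to state: current leaving on the supply conductor should return on the return conductor, and any imbalance implies a leakage path to ground. A toy sketch of the trip logic (threshold value and function names are ours, not from the Sandia report):

```python
def residual_current(i_supply, i_return):
    """Residual (ground) current: the imbalance between the array's supply
    and return conductors; a nonzero residual implies a ground fault path."""
    return i_supply - i_return

def rcd_trips(i_supply, i_return, threshold=0.3):
    """A residual current device trips when the magnitude of the residual
    exceeds its threshold (amperes); 0.3 A is an illustrative setting."""
    return abs(residual_current(i_supply, i_return)) > threshold
```

The report's harder question is the undetectable regime: faults whose residual stays below any practical threshold, which is where the Riso and GFID approaches come in.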

  13. 3D Reservoir Modeling of Semutang Gas Field: A lonely Gas field in Chittagong-Tripura Fold Belt, with Integrated Well Log, 2D Seismic Reflectivity and Attributes.

    NASA Astrophysics Data System (ADS)

    Salehin, Z.; Woobaidullah, A. S. M.; Snigdha, S. S.

    2015-12-01

    The Bengal Basin, with its prolific gas-rich province, provides much-needed energy to Bangladesh, and the present energy situation demands more hydrocarbon exploration. Only 'Semutang' has been discovered among the high-amplitude structures; the rest of the fields lie in the gentle to moderate structures of the western part of the Chittagong-Tripura Fold Belt. The field, however, has some major thrust faults which have strongly breached the reservoir zone. The major objectives of this research are the interpretation of gas horizons and faults, followed by velocity modeling and structural and property modeling to obtain reservoir properties. The faults and reservoir heterogeneities need to be properly identified. 3D modeling is widely used to reveal the subsurface structure in faulted zones where planning and development drilling are a major challenge. Thirteen 2D seismic lines and six well logs have been used to identify six gas-bearing horizons and a network of faults, and to map the structure at reservoir level. Variance attributes were used to identify faults. A velocity model was built for domain conversion. Synthetics were prepared from the two wells where sonic and density logs are available. The well-to-seismic tie at the reservoir zone shows a good match with the Direct Hydrocarbon Indicator on the seismic section. Vsh, porosity, water saturation, and permeability have been calculated, and various cross-plots among the porosity logs are shown. Structural modeling was used to define zones and layering in accordance with minimum sand thickness. The fault model shows the possible fault network responsible for several dry wells. The facies model was constrained with the Sequential Indicator Simulation method to show the facies distribution along the depth surfaces. Petrophysical models were prepared with Sequential Gaussian Simulation to estimate petrophysical parameters away from the existing wells in other parts of the field and to observe heterogeneities in the reservoir. An average porosity map for each gas zone was constructed. The outcomes of the research are an improved subsurface image of the seismic data (model), a porosity prediction for the reservoir, a reservoir quality map, and a fault map. The result is a complex geologic model which may contribute to the economic potential of the field. For better understanding, a 3D seismic survey and uncertainty and attribute analyses are necessary.

  14. The influence of normal fault geometry on porous sandstone deformation: Insights from mechanical models into conditions leading to Coulomb failure and shear-enhanced compaction

    NASA Astrophysics Data System (ADS)

    Allison, K.; Reinen, L. A.

    2011-12-01

    Slip on non-planar faults produces stress perturbations in the surrounding host rock that can yield secondary faults at a scale too small to be resolved on seismic surveys. Porosity changes during failure may affect the ability of the rock to transmit fluids through dilatant cracking or, in porous rocks, shear-enhanced compaction (i.e., cataclastic flow). Modeling the mechanical behavior of the host rock in response to slip on non-planar faults can yield insights into the role of fault geometry on regions of enhanced or inhibited fluid flow. To evaluate the effect of normal fault geometry on deformation in porous sandstones, we model the system as a linear elastic, homogeneous, whole or half space using the boundary-element modeling program Poly3D. We consider conditions leading to secondary deformation using the maximum Coulomb shear stress (MCSS) as an index of brittle deformation and proximity to an elliptical yield envelope (Y), determined experimentally for porous sandstone (Baud et al., JGR, 2006), for cataclastic flow. We model rectangular faults consisting of two segments: an upper leg with a constant dip of 60° and a lower leg with dips ranging 15-85°. We explore far-field stress models of constant and gradient uniaxial strain. We investigate the potential damage in the host rock in two ways: [1] the size of the damage zone, and [2] regions of enhanced deformation indicated by elevated MCSS or Y. Preliminary results indicate that, along a vertical transect passing through the fault kink, [1] the size of the damage zone increases in the footwall with increasing lower leg dip and remains constant in the hanging wall. [2] In the footwall, the amount of deformation does not change as a function of lower leg dip in constant stress models; in gradient stress models, both MCSS and Y increase with dip. In the hanging wall, Y decreases with increasing lower leg dip for both constant and gradient stress models. 
In contrast, MCSS increases with lower leg dip for constant stress models, and with the difference between lower leg dip and 60° for gradient stress models. These preliminary results indicate that the dip of the lower fault segment significantly affects the amount and style of deformation in the host rock.
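The MCSS index used above can be evaluated directly from the extreme principal stresses: for a friction coefficient μ, the Coulomb shear stress τ − μσₙ maximized over all plane orientations equals τ_max·sqrt(1 + μ²) − μ·(σ₁ + σ₃)/2. A minimal sketch, not the Poly3D workflow itself (compression positive; μ = 0.6 is an illustrative choice, not the study's value):

```python
import numpy as np

def max_coulomb_shear_stress(stress, mu=0.6):
    """Coulomb shear stress tau - mu*sigma_n, maximized over all plane
    orientations, for a symmetric stress tensor (compression positive)."""
    s = np.linalg.eigvalsh(np.asarray(stress, dtype=float))  # ascending order
    s1, s3 = s[-1], s[0]           # most / least compressive principal stress
    tau_max = 0.5 * (s1 - s3)      # maximum shear stress
    sn_mean = 0.5 * (s1 + s3)      # mean of the extreme principal stresses
    return tau_max * np.sqrt(1.0 + mu**2) - mu * sn_mean

mcss = max_coulomb_shear_stress(np.diag([100.0, 60.0, 40.0]))  # stresses in MPa
```

A negative value indicates that no plane orientation reaches Coulomb failure at zero cohesion under this stress state.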

  15. Analysis of a Complex Faulted CO2 Reservoir Using a Three-dimensional Hydro-geochemical-Mechanical Approach

    DOE PAGES

    Nguyen, Ba Nghiep; Hou, Zhangshuan; Bacon, Diana H.; ...

    2017-08-18

    This work applies a three-dimensional (3D) multiscale approach recently developed to analyze a complex CO2 faulted reservoir that includes some key geological features of the San Andreas and nearby faults. The approach couples the STOMP-CO2-R code for flow and reactive transport modeling to the ABAQUS® finite element package for geomechanical analysis. The objective is to examine the coupled hydro-geochemical-mechanical impact on the risk of hydraulic fracture and fault slip in a complex and representative CO2 reservoir that contains two nearly parallel faults. STOMP-CO2-R/ABAQUS® coupled analyses of this reservoir are performed assuming extensional and compressional stress regimes to predict evolutions of fluid pressure, stress and strain distributions as well as potential fault failure and leakage of CO2 along the fault damage zones. The tendency for the faults to slip and pressure margin to fracture are examined in terms of stress regime, mineral composition, crack distributions in the fault damage zones and geomechanical properties. Here, this model in combination with a detailed description of the faults helps assess the coupled hydro-geochemical-mechanical effect.

  16. Inferring Fault Frictional and Reservoir Hydraulic Properties From Injection-Induced Seismicity

    NASA Astrophysics Data System (ADS)

    Jagalur-Mohan, Jayanth; Jha, Birendra; Wang, Zheng; Juanes, Ruben; Marzouk, Youssef

    2018-02-01

    Characterizing the rheological properties of faults and the evolution of fault friction during seismic slip are fundamental problems in geology and seismology. Recent increases in the frequency of induced earthquakes have intensified the need for robust methods to estimate fault properties. Here we present a novel approach for estimation of aquifer and fault properties, which combines coupled multiphysics simulation of injection-induced seismicity with adaptive surrogate-based Bayesian inversion. In a synthetic 2-D model, we use aquifer pressure, ground displacements, and fault slip measurements during fluid injection to estimate the dynamic fault friction, the critical slip distance, and the aquifer permeability. Our forward model allows us to observe nonmonotonic evolutions of shear traction and slip on the fault resulting from the interplay of several physical mechanisms, including injection-induced aquifer expansion, stress transfer along the fault, and slip-induced stress relaxation. This interplay provides the basis for a successful joint inversion of induced seismicity, yielding well-informed Bayesian posterior distributions of dynamic friction and critical slip. We uncover an inverse relationship between dynamic friction and critical slip distance, which is in agreement with the small dynamic friction and large critical slip reported during seismicity on mature faults.
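The inversion described above pairs an expensive coupled simulator with adaptive surrogates and yields Bayesian posteriors on friction parameters. A minimal sketch of the underlying sampling idea, using a plain random-walk Metropolis chain and a toy one-parameter forward model (the linear map `pred = 1 - mu_d` is a hypothetical stand-in for the coupled flow-geomechanics simulator, and the "observed" data are synthetic):

```python
import numpy as np

rng = np.random.default_rng(0)
data = np.array([0.42, 0.38, 0.40])   # synthetic "observed" slip, hypothetical

def log_post(mu_d, sigma=0.05):
    """Unnormalized log-posterior: uniform prior on (0, 1), Gaussian
    likelihood around a toy forward model pred = 1 - mu_d."""
    if not 0.0 < mu_d < 1.0:
        return -np.inf
    pred = 1.0 - mu_d
    return -0.5 * np.sum((data - pred) ** 2) / sigma ** 2

mu, samples = 0.5, []
for _ in range(5000):                 # random-walk Metropolis
    prop = mu + 0.05 * rng.standard_normal()
    if np.log(rng.random()) < log_post(prop) - log_post(mu):
        mu = prop
    samples.append(mu)
post_mean = float(np.mean(samples[1000:]))   # posterior mean after burn-in
```

With data clustered near 0.40, the chain concentrates near mu_d ≈ 0.6; the surrogate-based scheme in the paper serves the same purpose while avoiding one simulator run per proposal.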

  17. Analysis of a Complex Faulted CO2 Reservoir Using a Three-dimensional Hydro-geochemical-Mechanical Approach

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Nguyen, Ba Nghiep; Hou, Zhangshuan; Bacon, Diana H.

    This work applies a three-dimensional (3D) multiscale approach recently developed to analyze a complex CO2 faulted reservoir that includes some key geological features of the San Andreas and nearby faults. The approach couples the STOMP-CO2-R code for flow and reactive transport modeling to the ABAQUS® finite element package for geomechanical analysis. The objective is to examine the coupled hydro-geochemical-mechanical impact on the risk of hydraulic fracture and fault slip in a complex and representative CO2 reservoir that contains two nearly parallel faults. STOMP-CO2-R/ABAQUS® coupled analyses of this reservoir are performed assuming extensional and compressional stress regimes to predict evolutions of fluid pressure, stress and strain distributions as well as potential fault failure and leakage of CO2 along the fault damage zones. The tendency for the faults to slip and pressure margin to fracture are examined in terms of stress regime, mineral composition, crack distributions in the fault damage zones and geomechanical properties. Here, this model in combination with a detailed description of the faults helps assess the coupled hydro-geochemical-mechanical effect.

  18. Fault connectivity, distributed shortening, and impacts on geologic-geodetic slip rate discrepancies in the central Mojave Desert, California

    NASA Astrophysics Data System (ADS)

    Selander, J.; Oskin, M. E.; Cooke, M. L.; Grette, K.

    2015-12-01

    Understanding off-fault deformation and the distribution of displacement rates associated with disconnected strike-slip faults requires a three-dimensional view of fault geometries. We address problems associated with distributed faulting by studying the Mojave segment of the East California Shear Zone (ECSZ), a region dominated by northwest-directed dextral shear along disconnected northwest-southeast striking faults. We use a combination of cross-sectional interpretations, 3D Boundary Element Method (BEM) models, and slip-rate measurements to test new hypothesized fault connections. We find that reverse faulting acts as an important means of slip transfer between strike-slip faults, and show that the impacts of these structural connections on shortening, uplift, strike-slip rates, and off-fault deformation help to reconcile the overall strain budget across this portion of the ECSZ. In detail, we focus on the Calico and Blackwater faults, which are hypothesized to together represent the longest linked fault system in the Mojave ECSZ, connected by a restraining step at 35°N. Across this restraining step the system displays a pronounced displacement gradient, where dextral offset decreases from ~11.5 to <2 km from south to north. Cross-section interpretations show that ~40% of this displacement is transferred from the Calico fault to the Harper Lake and Blackwater faults via a set of north-dipping thrust ramps. Late Quaternary dextral slip rates follow a similar pattern, where 1.4 +0.8/-0.4 mm/yr of slip along the Calico fault south of 35°N is distributed to the Harper Lake, Blackwater, and Tin Can Alley faults. BEM model results using revised fault geometries for the Mojave ECSZ show areas of uplift consistent with contractional structures, and fault slip rates that more closely match geologic data. Overall, revised fault connections and the addition of off-fault deformation greatly reduce the discrepancy between geodetic and geologic slip rates.
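A slip-rate budget of the kind described above (a feeder fault's rate distributed across several receiving faults) can be sanity-checked with simple interval arithmetic. In the sketch below, only the Calico rate of 1.4 +0.8/-0.4 mm/yr comes from the abstract; the three receiver rates are hypothetical placeholders, not the study's measured values:

```python
# Does the rate on a feeder fault overlap the summed rates on the faults
# it feeds, within stated asymmetric uncertainties? Each entry is
# (rate_mm_per_yr, plus_err, minus_err); receiver values are illustrative.
feeder = (1.4, 0.8, 0.4)                              # Calico fault (abstract)
receivers = [(0.5, 0.3, 0.2),                         # hypothetical Harper Lake
             (0.5, 0.3, 0.2),                         # hypothetical Blackwater
             (0.3, 0.2, 0.1)]                         # hypothetical Tin Can Alley

total = sum(r for r, _, _ in receivers)               # summed receiver rate
plus = sum(p for _, p, _ in receivers)                # linear (conservative)
minus = sum(m for _, _, m in receivers)               # error propagation

rate, fp, fm = feeder
# Intervals [rate-fm, rate+fp] and [total-minus, total+plus] must overlap.
consistent = (rate - fm) <= (total + plus) and (total - minus) <= (rate + fp)
```

Linear error summation overstates the combined uncertainty; quadrature summation would be tighter if the individual errors were independent.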

  19. Earthquake nucleation in a stochastic fault model of globally coupled units with interaction delays

    NASA Astrophysics Data System (ADS)

    Vasović, Nebojša; Kostić, Srđan; Franović, Igor; Todorović, Kristina

    2016-09-01

    In the present paper we analyze the dynamics of fault motion by considering the delayed interaction of 100 all-to-all coupled blocks with a rate-dependent friction law in the presence of random seismic noise. Such a model describes real fault motion sufficiently well, as its predominantly stochastic nature is implied by surrogate data analysis of available GPS measurements of active fault movement. The interaction of blocks in the analyzed model is studied as a function of time delay, which is relevant both for the dynamics of individual faults and for phenomenological models. The model is treated as a system of all-to-all coupled blocks, following the common assumption that compound faults behave as complexes of globally coupled segments. We apply numerical methods to show that there are local bifurcations from an equilibrium state to periodic oscillations, with irregular aperiodic behavior emerging when initial conditions are set away from the equilibrium point. Such behavior indicates the possible existence of a bi-stable dynamical regime, due either to the effect of the introduced seismic noise or to the existence of a global attractor. The latter interpretation is further supported by analysis of the corresponding mean-field approximated model. In this bi-stable regime, the distribution of event magnitudes follows the Gutenberg-Richter power law with satisfactory statistical accuracy, with a b-value within the observed real-world range.
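Checking that simulated event magnitudes follow a Gutenberg-Richter law with a realistic b-value is commonly done with the Aki/Utsu maximum-likelihood estimator, b = log10(e) / (⟨M⟩ − Mc + Δm/2). A short sketch on a synthetic catalog (not the paper's data; the true b = 1.0 is built in by construction):

```python
import numpy as np

def b_value(mags, m_min, dm=0.0):
    """Aki/Utsu maximum-likelihood b-value for magnitudes >= m_min.
    dm is the catalog magnitude-bin width (0 for continuous magnitudes)."""
    m = np.asarray(mags, dtype=float)
    m = m[m >= m_min]
    return np.log10(np.e) / (m.mean() - (m_min - 0.5 * dm))

# Synthetic Gutenberg-Richter catalog with true b = 1.0: magnitudes above
# the cutoff are exponentially distributed with scale log10(e) / b.
rng = np.random.default_rng(1)
mags = 2.0 + rng.exponential(scale=np.log10(np.e), size=20000)
b_hat = b_value(mags, 2.0)
```

For binned real catalogs, passing the bin width (e.g. dm=0.1) applies the standard half-bin correction.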

  20. An earthquake instability model based on faults containing high fluid-pressure compartments

    USGS Publications Warehouse

    Lockner, D.A.; Byerlee, J.D.

    1995-01-01

    It has been proposed that large strike-slip faults such as the San Andreas contain water in seal-bounded compartments. Arguments based on heat flow and stress orientation suggest that in most of the compartments, the water pressure is so high that the average shear strength of the fault is less than 20 MPa. We propose a variation of this basic model in which most of the shear stress on the fault is supported by a small number of compartments where the pore pressure is relatively low. As a result, the fault gouge in these compartments is compacted and lithified and has a high undisturbed strength. When one of these locked regions fails, the system made up of the neighboring high and low pressure compartments can become unstable. Material in the high fluid pressure compartments is initially underconsolidated since the low effective confining pressure has retarded compaction. As these compartments are deformed, fluid pressure remains nearly unchanged so that they offer little resistance to shear. The low pore pressure compartments, however, are overconsolidated and dilate as they are sheared. Decompression of the pore fluid in these compartments lowers fluid pressure, increasing effective normal stress and shear strength. While this effect tends to stabilize the fault, it can be shown that this dilatancy hardening can be more than offset by displacement weakening of the fault (i.e., the drop from peak to residual strength). If the surrounding rock mass is sufficiently compliant to produce an instability, slip will propagate along the fault until the shear fracture runs into a low-stress region. Frictional heating and the accompanying increase in fluid pressure that are suggested to occur during shearing of the fault zone will act as additional destabilizers. However, significant heating occurs only after a finite amount of slip and therefore is more likely to contribute to the energetics of rupture propagation than to the initiation of the instability. 
We present results of a one-dimensional dynamic Burridge-Knopoff-type model to demonstrate various aspects of the fluid-assisted fault instability described above. In the numerical model, the fault is represented by a series of blocks and springs, with fault rheology expressed by static and dynamic friction. In addition, the fault surface of each block has associated with it pore pressure, porosity and permeability. All of these variables are allowed to evolve with time, resulting in a wide range of phenomena related to fluid diffusion, dilatancy, compaction and heating. These phenomena include creep events, diffusion-controlled precursors, triggered earthquakes, foreshocks, aftershocks, and multiple earthquakes. While the simulations have limitations inherent to 1-D fault models, they demonstrate that the fluid compartment model can, in principle, provide the rich assortment of phenomena that have been associated with earthquakes. © 1995 Birkhäuser Verlag.
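The block-and-spring construction can be illustrated with a stripped-down quasi-static stick-slip automaton: blocks are loaded uniformly until the weakest reaches its static threshold, a failing block drops to its dynamic friction level, and part of the released force is handed to its neighbors, possibly triggering a cascade. This sketches the Burridge-Knopoff idea only; the paper's model additionally evolves pore pressure, porosity, and permeability, none of which are represented here, and all parameter values are illustrative:

```python
import numpy as np

def simulate(n_blocks=64, f_static=1.0, f_dynamic=0.2, alpha=0.4,
             steps=2000, seed=0):
    """Quasi-static stick-slip cascades: load to the strongest-loaded
    block's threshold, then relax. alpha < 0.5 makes the redistribution
    slightly dissipative, which guarantees every cascade is finite."""
    rng = np.random.default_rng(seed)
    f = rng.uniform(0.0, f_static, n_blocks)   # random initial block forces
    sizes = []
    for _ in range(steps):
        i_max = int(np.argmax(f))
        f += f_static - f[i_max]               # uniform loading to failure
        f[i_max] = f_static                    # guard against round-off
        slipped = 0
        while True:                            # relax until all blocks stick
            unstable = np.flatnonzero(f >= f_static)
            if unstable.size == 0:
                break
            for i in unstable:
                drop = f[i] - f_dynamic        # force released by this slip
                f[i] = f_dynamic
                if i > 0:                      # share with neighbors; edge
                    f[i - 1] += alpha * drop   # blocks leak force out of
                if i < n_blocks - 1:           # the system
                    f[i + 1] += alpha * drop
                slipped += 1
        sizes.append(slipped)                  # event size = blocks slipped
    return sizes

event_sizes = simulate()
```

Even this minimal version produces a broad distribution of event sizes; the fluid-compartment physics in the paper enters by letting the friction thresholds themselves evolve with pore pressure.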
