Faults Discovery By Using Mined Data
NASA Technical Reports Server (NTRS)
Lee, Charles
2005-01-01
Fault discovery in complex systems draws on model-based reasoning, fault tree analysis, rule-based inference, and other approaches. Model-based reasoning builds models of the system either from mathematical formulations or from experimental data. Fault tree analysis shows the possible causes of a system malfunction by enumerating the suspect components and their respective failure modes that may have induced the problem. Rule-based inference builds the model from expert knowledge. These models and methods have one thing in common: they presume certain prior conditions. Complex systems often use fault trees to analyze faults. When an error occurs, fault diagnosis is performed by engineers and analysts through extensive examination of all data gathered during the mission. The International Space Station (ISS) control center operates on data fed back from the system, and decisions are made against threshold values using fault trees. Since these decision-making tasks are safety critical and must be done promptly, the engineers who manually analyze the data face a time challenge. To automate this process, this paper presents an approach that uses decision trees to discover faults from data in real time, capturing the contents of fault trees as the initial state of the trees.
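A minimal sketch of the kind of decision-tree fault classifier the paper describes, using scikit-learn on synthetic telemetry; the channel names, threshold, and data are illustrative assumptions, not ISS values:

```python
# Minimal sketch: learning a fault classifier from telemetry with a decision
# tree, in the spirit of the paper's approach. Feature names, thresholds, and
# data are illustrative assumptions, not ISS values.
import numpy as np
from sklearn.tree import DecisionTreeClassifier, export_text

rng = np.random.default_rng(0)
n = 500
# Synthetic telemetry: two channels; a fault pushes pressure past a threshold.
pressure = rng.normal(100.0, 5.0, n)
temperature = rng.normal(20.0, 2.0, n)
fault = (pressure > 108.0).astype(int)   # ground truth used for training

X = np.column_stack([pressure, temperature])
clf = DecisionTreeClassifier(max_depth=3).fit(X, fault)

# The learned splits play the same role as the threshold tests in a fault tree.
print(export_text(clf, feature_names=["pressure", "temperature"]))
print("prediction for [110, 21]:", clf.predict([[110.0, 21.0]]))
```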
Model-Based Diagnostics for Propellant Loading Systems
NASA Technical Reports Server (NTRS)
Daigle, Matthew John; Foygel, Michael; Smelyanskiy, Vadim N.
2011-01-01
The loading of spacecraft propellants is a complex, risky operation. Therefore, diagnostic solutions are necessary to quickly identify when a fault occurs, so that recovery actions can be taken or an abort procedure can be initiated. Model-based diagnosis solutions, established using an in-depth analysis and understanding of the underlying physical processes, offer the advanced capability to quickly detect and isolate faults, identify their severity, and predict their effects on system performance. We develop a physics-based model of a cryogenic propellant loading system, which describes the complex dynamics of liquid hydrogen filling from a storage tank to an external vehicle tank, as well as the influence of different faults on this process. The model takes into account the main physical processes such as highly nonequilibrium condensation and evaporation of the hydrogen vapor, pressurization, and also the dynamics of liquid hydrogen and vapor flows inside the system in the presence of helium gas. Since the model incorporates multiple faults in the system, it provides a suitable framework for model-based diagnostics and prognostics algorithms. Using this model, we analyze the effects of faults on the system, derive symbolic fault signatures for the purposes of fault isolation, and perform fault identification using a particle filter approach. We demonstrate the detection, isolation, and identification of a number of faults using simulation-based experiments.
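The paper identifies fault parameters with a particle filter. Below is a minimal sketch of that idea on a deliberately simple one-tank surrogate model; the dynamics, noise levels, and leak coefficient are illustrative assumptions, far simpler than the cryogenic loading physics:

```python
# Minimal particle-filter sketch for fault identification: estimating an
# unknown leak coefficient in a toy one-tank model from noisy level readings.
import numpy as np

rng = np.random.default_rng(1)
dt, steps, true_leak = 0.1, 100, 0.3

def step(level, leak):
    """One Euler step of dh/dt = inflow - leak * h."""
    return level + dt * (1.0 - leak * level)

# Generate "measured" data with the true (faulty) leak coefficient.
h, measurements = 2.0, []
for _ in range(steps):
    h = step(h, true_leak)
    measurements.append(h + rng.normal(0, 0.02))

# Particles carry (level, leak) hypotheses; weights track measurement fit.
n_p = 1000
levels = np.full(n_p, 2.0)
leaks = rng.uniform(0.0, 1.0, n_p)
for z in measurements:
    levels = step(levels, leaks) + rng.normal(0, 0.01, n_p)  # process noise
    w = np.exp(-0.5 * ((z - levels) / 0.02) ** 2)            # likelihood weights
    w /= w.sum()
    idx = rng.choice(n_p, n_p, p=w)                          # resampling
    levels, leaks = levels[idx], leaks[idx] + rng.normal(0, 0.005, n_p)

print(f"estimated leak coefficient ~= {leaks.mean():.3f} (true {true_leak})")
```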
Fault Diagnostics for Turbo-Shaft Engine Sensors Based on a Simplified On-Board Model
Lu, Feng; Huang, Jinquan; Xing, Yaodong
2012-01-01
Combining a simplified on-board turbo-shaft model with sensor fault diagnostic logic, a model-based sensor fault diagnosis method is proposed. The existing fault diagnosis method for key turbo-shaft engine sensors relies mainly on double hardware redundancy, which can be insufficient when the two channels disagree and there is no basis for judgment, while additional hardware redundancy would increase structural complexity and weight. The simplified on-board model instead provides an analytical third channel against which the dual-channel measurements are compared. The simplified turbo-shaft model contains the gas generator model and the power turbine model with loads, and is built up via the dynamic parameters method. Sensor fault detection and diagnosis (FDD) logic is designed, and two types of sensor failures, step faults and drift faults, are simulated. When the discrepancy among the triplex channels exceeds a tolerance level, the fault diagnosis logic determines the cause of the difference. Through this approach, the sensor fault diagnosis system achieves the objectives of anomaly detection, sensor fault diagnosis, and redundancy recovery. Finally, experiments on this method are carried out on a turbo-shaft engine, and two types of faults under different channel combinations are presented. The experimental results show that the proposed method for sensor fault diagnostics is efficient. PMID:23112645
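A sketch of the triplex-channel voting logic described above: two hardware channels plus an analytical third channel from the on-board model, with a discrepancy tolerance deciding the cause. The tolerance and readings are illustrative assumptions:

```python
# Sketch of the triplex-channel idea: two hardware sensor channels plus an
# analytical third channel from an on-board model. Tolerance and readings
# are illustrative assumptions.
def diagnose(ch_a: float, ch_b: float, model: float, tol: float = 5.0) -> str:
    """Majority vote among two measurements and one model prediction."""
    ab = abs(ch_a - ch_b) <= tol
    am = abs(ch_a - model) <= tol
    bm = abs(ch_b - model) <= tol
    if ab and am and bm:
        return "all channels healthy"
    if am and not ab:
        return "channel B faulty; recover with channel A"
    if bm and not ab:
        return "channel A faulty; recover with channel B"
    if ab and not am:
        return "model drift suspected; trust hardware channels"
    return "multiple discrepancies; flag for further diagnosis"

print(diagnose(100.0, 112.0, 101.0))   # -> channel B faulty; recover with channel A
```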
Qualitative Event-Based Diagnosis: Case Study on the Second International Diagnostic Competition
NASA Technical Reports Server (NTRS)
Daigle, Matthew; Roychoudhury, Indranil
2010-01-01
We describe a diagnosis algorithm entered into the Second International Diagnostic Competition. We focus on the first diagnostic problem of the industrial track of the competition, in which a diagnosis algorithm must detect, isolate, and identify faults in an electrical power distribution testbed and provide corresponding recovery recommendations. The diagnosis algorithm embodies a model-based approach, centered around qualitative event-based fault isolation. Faults produce deviations in measured values from model-predicted values. The sequence of these deviations is matched to those predicted by the model in order to isolate faults. We augment this approach with model-based fault identification, which determines fault parameters and helps to further isolate faults. We describe the diagnosis approach, provide diagnosis results from running the algorithm on provided example scenarios, and discuss the issues faced and lessons learned from implementing the approach.
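A toy version of qualitative event-based fault isolation: each fault hypothesis predicts a sequence of signed measurement deviations, and observed deviations prune the candidate set. The fault names and signatures are illustrative assumptions, not the competition testbed's:

```python
# Sketch of qualitative event-based isolation: each fault predicts a sequence
# of signed measurement deviations; observed deviations prune the candidates.
SIGNATURES = {
    "relay_stuck_open": [("voltage", "-"), ("current", "-")],
    "sensor_bias":      [("voltage", "+")],
    "load_short":       [("voltage", "-"), ("current", "+")],
}

def isolate(observed):
    """Keep faults whose predicted deviation sequence starts with what we saw."""
    return [f for f, sig in SIGNATURES.items()
            if sig[:len(observed)] == list(observed)]

print(isolate([("voltage", "-")]))                     # two candidates remain
print(isolate([("voltage", "-"), ("current", "-")]))   # isolated: relay_stuck_open
```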
Li, Qiuying; Pham, Hoang
2017-01-01
In this paper, we propose a software reliability model that considers not only error generation but also fault removal efficiency combined with testing coverage information, based on a nonhomogeneous Poisson process (NHPP). During the past four decades, many software reliability growth models (SRGMs) based on NHPP have been proposed to estimate software reliability measures, most of which share the following assumptions: 1) the fault detection rate commonly changes during the testing phase; 2) as a result of imperfect debugging, fault removal is accompanied by a fault re-introduction rate. However, few SRGMs in the literature differentiate between fault detection and fault removal, i.e. they seldom consider imperfect fault removal efficiency. In practical software development, fault removal efficiency cannot always be perfect: detected failures might not be removed completely, the original faults might remain, and new faults might be introduced meanwhile, which is referred to as the imperfect debugging phenomenon. In this study, a model incorporating the fault introduction rate, fault removal efficiency, and testing coverage into software reliability evaluation is developed, using testing coverage to express the fault detection rate and fault removal efficiency to account for fault repair. We compare the performance of the proposed model with several existing NHPP SRGMs on three sets of real failure data using five criteria. The results show that the model gives better fitting and predictive performance. PMID:28750091
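A numerical sketch of an NHPP mean-value function with error generation and imperfect fault-removal efficiency; this generic form and its parameters are illustrative assumptions, not the paper's exact model:

```python
# Numerical sketch of an NHPP mean-value function with imperfect debugging:
# m'(t) = b * (a(t) - p * m(t)), where p is fault-removal efficiency and
# a(t) = a0 + alpha * m(t) models error generation.
import numpy as np

a0, alpha, b, p = 100.0, 0.05, 0.1, 0.9   # assumed parameters
dt, T = 0.01, 100.0
ts = np.arange(0.0, T, dt)

m = np.zeros_like(ts)
for i in range(1, len(ts)):
    a_t = a0 + alpha * m[i - 1]   # fault content grows as new faults are introduced
    dm = b * (a_t - p * m[i - 1]) # detection proportional to remaining faults
    m[i] = m[i - 1] + dt * dm

print(f"expected faults detected by t=50: {m[len(ts)//2]:.1f}")
print(f"expected faults detected by t={T:.0f}: {m[-1]:.1f}")
```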
NASA Technical Reports Server (NTRS)
Joshi, Suresh M.
2012-01-01
This paper explores a class of multiple-model-based fault detection and identification (FDI) methods for bias-type faults in actuators and sensors. These methods employ banks of Kalman-Bucy filters to detect the faults, determine the fault pattern, and estimate the fault values, wherein each Kalman-Bucy filter is tuned to a different failure pattern. Necessary and sufficient conditions are presented for identifiability of actuator faults, sensor faults, and simultaneous actuator and sensor faults. It is shown that FDI of simultaneous actuator and sensor faults is not possible using these methods when all sensors have biases.
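A scalar sketch of the multiple-model idea: a bank of Kalman filters, each tuned to a different sensor-bias hypothesis, scored by innovation likelihood. The dynamics and bias patterns are illustrative assumptions (the paper works with continuous-time Kalman-Bucy filters):

```python
# Sketch of multiple-model FDI with a bank of (discrete) Kalman filters, each
# tuned to a different bias hypothesis; lowest innovation cost wins.
import numpy as np

rng = np.random.default_rng(2)
true_bias, q, r = 1.5, 1e-4, 0.04
hypotheses = [0.0, 1.5, -1.5]              # assumed bias fault patterns

x_true, ys = 0.0, []
for _ in range(200):
    x_true += rng.normal(0, q ** 0.5)
    ys.append(x_true + true_bias + rng.normal(0, r ** 0.5))

cost = np.zeros(len(hypotheses))
xs = np.zeros(len(hypotheses))
Ps = np.ones(len(hypotheses))
for y in ys:
    for i, bias in enumerate(hypotheses):
        P = Ps[i] + q                       # predict
        innov = y - (xs[i] + bias)          # innovation under this hypothesis
        S = P + r
        K = P / S                           # Kalman gain
        xs[i] += K * innov
        Ps[i] = (1 - K) * P
        cost[i] += innov ** 2 / S + np.log(S)   # negative log-likelihood terms

best = int(np.argmin(cost))
print(f"identified bias pattern: {hypotheses[best]} (true {true_bias})")
```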
Li, Qiuying; Pham, Hoang
2017-01-01
In this paper, we propose a software reliability model that considers not only error generation but also fault removal efficiency combined with testing coverage information based on a nonhomogeneous Poisson process (NHPP). During the past four decades, many software reliability growth models (SRGMs) based on NHPP have been proposed to estimate the software reliability measures, most of which have the same following agreements: 1) it is a common phenomenon that during the testing phase, the fault detection rate always changes; 2) as a result of imperfect debugging, fault removal has been related to a fault re-introduction rate. But there are few SRGMs in the literature that differentiate between fault detection and fault removal, i.e. they seldom consider the imperfect fault removal efficiency. But in practical software developing process, fault removal efficiency cannot always be perfect, i.e. the failures detected might not be removed completely and the original faults might still exist and new faults might be introduced meanwhile, which is referred to as imperfect debugging phenomenon. In this study, a model aiming to incorporate fault introduction rate, fault removal efficiency and testing coverage into software reliability evaluation is developed, using testing coverage to express the fault detection rate and using fault removal efficiency to consider the fault repair. We compare the performance of the proposed model with several existing NHPP SRGMs using three sets of real failure data based on five criteria. The results exhibit that the model can give a better fitting and predictive performance. PMID:28750091
Modeling and Fault Simulation of Propellant Filling System
NASA Astrophysics Data System (ADS)
Jiang, Yunchun; Liu, Weidong; Hou, Xiaobo
2012-05-01
The propellant filling system is one of the key ground plants at the launching site of rockets that use liquid propellant. There is an urgent demand for ensuring and improving its reliability and safety, and Failure Mode and Effects Analysis (FMEA) is a good approach to meet it. Driven by the need for more fault information for FMEA, and because of the high expense of propellant filling, this paper studies the working process of the propellant filling system under fault conditions by simulation in AMESim. Firstly, based on an analysis of its structure and function, the filling system was decomposed into modules and the mathematical models of every module were given, based on which the whole filling system was modeled in AMESim. Secondly, a general method of injecting faults into a dynamic system was proposed, and as an example, two typical faults, leakage and blockage, were injected into the model of the filling system, yielding two fault models in AMESim. After that, fault simulations were run and the dynamic characteristics of several key parameters were analyzed under fault conditions. The results show that the model can effectively simulate the two faults and can be used to provide guidance for filling system maintenance and improvement.
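A minimal sketch of the fault-injection idea on a lumped one-tank surrogate: a leak adds an outflow term and a blockage scales the line conductance, so nominal and faulty trajectories can be compared. The model and coefficients are illustrative assumptions, not the AMESim model:

```python
# Sketch of fault injection into a lumped-parameter filling model: a leak adds
# an outflow term, a blockage scales down the line conductance.
import numpy as np

def simulate(leak=0.0, blockage=0.0, dt=0.1, T=60.0):
    """Tank level under dh/dt = c*(1-blockage)*sqrt(dp) - leak*h."""
    c, dp = 0.5, 4.0
    h, out = 0.0, []
    for _ in np.arange(0, T, dt):
        h += dt * (c * (1.0 - blockage) * np.sqrt(dp) - leak * h)
        out.append(h)
    return np.array(out)

nominal = simulate()
leaky = simulate(leak=0.05)
blocked = simulate(blockage=0.5)
print(f"final level  nominal={nominal[-1]:.2f}  "
      f"leak={leaky[-1]:.2f}  blockage={blocked[-1]:.2f}")
```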
NASA Astrophysics Data System (ADS)
Inoue, N.; Kitada, N.; Kusumoto, S.; Itoh, Y.; Takemura, K.
2011-12-01
The Osaka basin, surrounded by the Rokko and Ikoma Ranges, is one of the typical Quaternary sedimentary basins in Japan. It has been filled by the Pleistocene Osaka Group and later sediments. Several large cities and metropolitan areas, such as Osaka and Kobe, are located in the Osaka basin. The basin is bounded by E-W trending strike-slip faults and N-S trending reverse faults. The N-S trending, 42-km-long Uemachi faults traverse the central part of Osaka city. The Uemachi faults have been investigated for countermeasures against earthquake disaster. It is important to reveal detailed fault parameters, such as length, dip, and recurrence interval, for strong ground motion simulation and disaster prevention. For strong ground motion simulation, the fault model of the Uemachi faults consists of two parts, north and south, because there is no basement displacement in the central part of the faults. The Ministry of Education, Culture, Sports, Science and Technology started a project to survey the Uemachi faults, and the Disaster Prevention Research Institute of Kyoto University carried out various surveys from 2009 to 2012, over 3 years. The result of the last year revealed higher activity on the branch fault than on the main faults in the central part (see the poster "Subsurface Flexure of Uemachi Fault, Japan" by Kitada et al., this meeting). Kusumoto et al. (2001) reported that, based on a dislocation model, the surrounding faults can form a similar basement relief without the Uemachi faults. We performed various parameter studies of dislocation and gravity changes based on a simplified fault model designed from the distribution of the real faults. The model consisted of 7 faults including the Uemachi faults. The dislocation and gravity change were calculated following Okada et al. (1985) and Okubo et al. (1993), respectively. The results show a basement displacement pattern similar to Kusumoto et al. (2001) and no characteristic gravity change pattern. Quantitative estimation remains for future work.
Model-Based Fault Tolerant Control
NASA Technical Reports Server (NTRS)
Kumar, Aditya; Viassolo, Daniel
2008-01-01
The Model-Based Fault Tolerant Control (MBFTC) task was conducted under the NASA Aviation Safety and Security Program. The goal of MBFTC is to develop and demonstrate real-time strategies to diagnose and accommodate anomalous aircraft engine events such as sensor faults, actuator faults, or turbine gas-path component damage that can lead to in-flight shutdowns, aborted takeoffs, asymmetric thrust/loss of thrust control, or engine surge/stall events. A suite of model-based fault detection algorithms was developed and evaluated. Based on the performance and maturity of the developed algorithms, two approaches were selected for further analysis: (i) multiple-hypothesis testing, and (ii) neural networks; both used residuals from an Extended Kalman Filter to detect the occurrence of the selected faults. A simple fusion algorithm was implemented to combine the results from each algorithm to obtain an overall estimate of the identified fault type and magnitude. The identification of the fault type and magnitude enabled the use of an online fault accommodation strategy to correct for the adverse impact of these faults on engine operability, thereby enabling continued engine operation in their presence. The performance of the fault detection and accommodation algorithm was extensively tested in a simulation environment.
Fault Detection for Automotive Shock Absorber
NASA Astrophysics Data System (ADS)
Hernandez-Alcantara, Diana; Morales-Menendez, Ruben; Amezquita-Brooks, Luis
2015-11-01
Fault detection for automotive semi-active shock absorbers is a challenge due to the nonlinear dynamics and the strong influence of disturbances such as the road profile. The first obstacle for this task is the modeling of the fault, which has been shown to be of a multiplicative nature, whereas many of the most widespread fault detection schemes consider additive faults. Two model-based fault detection algorithms for semi-active shock absorbers are compared: an observer-based approach and a parameter identification approach. The performance of these schemes is validated and compared using a commercial vehicle model that was experimentally validated. Early results show that the parameter identification approach is more accurate, whereas the observer-based approach is less sensitive to parametric uncertainty.
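A sketch of the parameter-identification route: recursive least squares tracking a damping coefficient from force/velocity samples, where a multiplicative fault appears as a drop in the estimate. All values are illustrative assumptions:

```python
# Recursive least squares (RLS) tracking a damping coefficient; a
# multiplicative damper fault shows up as a drop in the estimate.
import numpy as np

rng = np.random.default_rng(3)
c_true = 1200.0                        # nominal damping [N*s/m]
theta, P, lam = 0.0, 1e6, 0.98         # RLS: estimate, covariance, forgetting

for k in range(400):
    if k == 200:
        c_true *= 0.5                  # multiplicative fault: 50% damping loss
    v = rng.uniform(-1.0, 1.0)         # piston velocity (regressor)
    f = c_true * v + rng.normal(0, 5)  # measured damper force
    K = P * v / (lam + v * P * v)      # RLS gain
    theta += K * (f - v * theta)
    P = (P - K * v * P) / lam
    if k in (199, 399):
        print(f"step {k}: estimated c = {theta:.0f} N*s/m")
```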
NASA Astrophysics Data System (ADS)
Kong, Changduk; Lim, Semyeong
2011-12-01
Recently, health monitoring systems for the major gas-path components of gas turbines have mostly used model-based methods such as Gas Path Analysis (GPA). This method finds quantitative changes in component performance characteristic parameters, such as isentropic efficiency and mass flow parameter, by comparing measured engine performance parameters (temperatures, pressures, rotational speeds, fuel consumption, etc.) with the clean-engine performance parameters, free of any faults, calculated by a baseline engine performance model. Expert engine diagnostic systems using artificial intelligence methods such as Neural Networks (NNs), Fuzzy Logic, and Genetic Algorithms (GAs) have been studied to improve on the model-based method. Among them, NNs are most often used in engine fault diagnostic systems because of their good learning performance, but they suffer from low accuracy and long learning times when the learning database is large, and they require a very complex structure to effectively find single-type or multiple-type faults of gas-path components. This work inversely builds a baseline performance model of a turboprop engine for a high-altitude UAV from measured performance data, and proposes a fault diagnostic system using the baseline engine performance model together with artificial intelligence methods, namely Fuzzy Logic and Neural Networks. The proposed diagnostic system first isolates the faulted components using Fuzzy Logic, then quantifies the faults of the identified components using an NN trained on a fault learning database obtained from the developed baseline performance model. The Feed-Forward Back-Propagation (FFBP) method is used to train the NN. Finally, several test examples verify that component faults implanted arbitrarily in the engine are well isolated and quantified by the proposed diagnostic system.
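A toy version of the fuzzy isolation stage: GPA-style deviations of component efficiencies are fuzzified and simple rules point to the faulted component. Membership shapes and rules are illustrative assumptions:

```python
# Sketch of fuzzy fault isolation: deviations of component performance
# parameters (from the GPA baseline) are fuzzified and rules map them to a
# faulted component.
def membership_large_drop(delta):
    """Triangular membership for 'efficiency dropped a lot' (delta in %)."""
    return max(0.0, min(1.0, (-delta - 1.0) / 2.0))

def isolate(d_eff_compressor, d_eff_turbine):
    rules = {
        "compressor fault": membership_large_drop(d_eff_compressor),
        "turbine fault": membership_large_drop(d_eff_turbine),
    }
    best = max(rules, key=rules.get)
    return best if rules[best] > 0.5 else "no clear fault"

print(isolate(-2.8, -0.2))   # -> compressor fault
```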
NASA Astrophysics Data System (ADS)
Aydin, Orhun; Caers, Jef Karel
2017-08-01
Faults are one of the building blocks of subsurface modeling studies. Incomplete observations of subsurface fault networks lead to uncertainty pertaining to the location, geometry, and existence of faults. In practice, gaps in incomplete fault network observations are filled based on tectonic knowledge and the interpreter's intuition about fault relationships. Modeling fault network uncertainty with realistic models that represent tectonic knowledge is still a challenge. Although methods exist that address specific sources of fault network uncertainty and complexities of fault modeling, a unifying framework is still lacking. In this paper, we propose a rigorous approach to quantify fault network uncertainty. Fault pattern and intensity information are expressed by means of a marked point process, the marked Strauss point process. Fault network information is constrained to fault surface observations (complete or partial) within a Bayesian framework. A structural prior model is defined to quantitatively express fault patterns, geometries, and relationships within the Bayesian framework. Structural relationships between faults, in particular fault abutting relations, are represented with a level-set-based approach. A Markov chain Monte Carlo sampler is used to sample posterior fault network realizations that reflect tectonic knowledge and honor fault observations. We apply the methodology to a field study from the Nankai Trough and Kumano Basin. The target for uncertainty quantification is a deep site with attenuated seismic data, with only partially visible faults and many faults missing from the survey or interpretation. A structural prior model is built from shallow analog sites that are believed to have undergone tectonics similar to the site of study. Fault network uncertainty for the field is quantified with fault network realizations that are conditioned to structural rules, tectonic information, and partially observed fault surfaces. We show the proposed methodology generates realistic fault network models conditioned to data and a conceptual model of the underlying tectonics.
A dynamic integrated fault diagnosis method for power transformers.
Gao, Wensheng; Bai, Cuifen; Liu, Tong
2015-01-01
In order to diagnose transformer faults efficiently and accurately, a dynamic integrated fault diagnosis method based on a Bayesian network is proposed in this paper. First, an integrated fault diagnosis model is established based on the causal relationships among abnormal working conditions, failure modes, and failure symptoms of transformers, aimed at obtaining the most probable failure mode. Then, considering that the evidence input into the diagnosis model is acquired gradually and that the fault diagnosis process in reality is multistep, a dynamic fault diagnosis mechanism is proposed based on the integrated fault diagnosis model. Different from the existing one-step diagnosis mechanism, it includes a multistep evidence-selection process, which gives the most effective diagnostic test to be performed in the next step. It can therefore reduce unnecessary diagnostic tests and improve the accuracy and efficiency of diagnosis. Finally, the dynamic integrated fault diagnosis method is applied to actual cases, and the validity of this method is verified. PMID:25685841
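A small sketch of the two ingredients described above: Bayesian updating of failure-mode probabilities from test outcomes, and selection of the next diagnostic test by expected entropy reduction. The tiny transformer model (modes, tests, likelihoods) is an illustrative assumption, not a real knowledge base:

```python
# Bayesian diagnosis with multistep test selection by expected entropy.
import math

modes = {"winding_fault": 0.2, "oil_degradation": 0.5, "tap_changer": 0.3}
# P(test positive | mode), one row per available diagnostic test.
likelihood = {
    "dga":      {"winding_fault": 0.9, "oil_degradation": 0.8, "tap_changer": 0.2},
    "acoustic": {"winding_fault": 0.7, "oil_degradation": 0.1, "tap_changer": 0.8},
}

def update(prior, test, positive):
    post = {m: p * (likelihood[test][m] if positive else 1 - likelihood[test][m])
            for m, p in prior.items()}
    z = sum(post.values())
    return {m: p / z for m, p in post.items()}

def entropy(dist):
    return -sum(p * math.log(p) for p in dist.values() if p > 0)

def best_next_test(prior):
    """Pick the test with the lowest expected posterior entropy."""
    scores = {}
    for t in likelihood:
        p_pos = sum(prior[m] * likelihood[t][m] for m in prior)
        scores[t] = (p_pos * entropy(update(prior, t, True))
                     + (1 - p_pos) * entropy(update(prior, t, False)))
    return min(scores, key=scores.get)

t = best_next_test(modes)
print("most informative next test:", t)
print("posterior if it comes back positive:", update(modes, t, True))
```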
NASA Astrophysics Data System (ADS)
Kong, Changduk; Lim, Semyeong; Kim, Keunwoo
2013-03-01
Neural Networks are widely used in engine fault diagnostic systems because of their good learning performance, but they suffer from low accuracy and the long learning time needed to build the learning database. This work inversely builds a baseline performance model of a turboprop engine for a high-altitude UAV from measured performance data, and proposes a fault diagnostic system using the baseline performance model together with artificial intelligence methods such as Fuzzy Logic and Neural Networks. Each real engine's performance model, named the baseline performance model because it can simulate new-engine performance, is built inversely from its performance test data; the condition monitoring of each engine can therefore be carried out more precisely through comparison with measured performance data. The proposed diagnostic system first identifies the faulted components using Fuzzy Logic, and then quantifies the faults of the identified components using Neural Networks trained on a fault learning database obtained from the developed baseline performance model. The Feed-Forward Back-Propagation (FFBP) method is used for training on the measured performance data of the faulted components. For ease of use, the proposed diagnostic program is implemented with a MATLAB GUI.
Fuzzy model-based fault detection and diagnosis for a pilot heat exchanger
NASA Astrophysics Data System (ADS)
Habbi, Hacene; Kidouche, Madjid; Kinnaert, Michel; Zelmat, Mimoun
2011-04-01
This article addresses the design and real-time implementation of a fuzzy model-based fault detection and diagnosis (FDD) system for a pilot co-current heat exchanger. The design method is based on a three-step procedure which involves the identification of data-driven fuzzy rule-based models, the design of a fuzzy residual generator and the evaluation of the residuals for fault diagnosis using statistical tests. The fuzzy FDD mechanism has been implemented and validated on the real co-current heat exchanger, and has been proven to be efficient in detecting and isolating process, sensor and actuator faults.
Analysis of typical fault-tolerant architectures using HARP
NASA Technical Reports Server (NTRS)
Bavuso, Salvatore J.; Bechta Dugan, Joanne; Trivedi, Kishor S.; Rothmann, Elizabeth M.; Smith, W. Earl
1987-01-01
Difficulties encountered in the modeling of fault-tolerant systems are discussed. The Hybrid Automated Reliability Predictor (HARP) approach to modeling fault-tolerant systems is described. HARP is written in FORTRAN, consists of nearly 30,000 lines of code and comments, and is based on behavioral decomposition. Using behavioral decomposition, the dependability model is divided into fault-occurrence/repair and fault/error-handling models; the characteristics and combining of these two models are examined. Examples in which HARP is applied to the modeling of some typical fault-tolerant systems, including a local-area network, two fault-tolerant computer systems, and a flight control system, are presented.
A technique for evaluating the application of the pin-level stuck-at fault model to VLSI circuits
NASA Technical Reports Server (NTRS)
Palumbo, Daniel L.; Finelli, George B.
1987-01-01
Accurate fault models are required to conduct the experiments defined in validation methodologies for highly reliable fault-tolerant computers (e.g., computers with a probability of failure of 10 to the -9 for a 10-hour mission). Described is a technique by which a researcher can evaluate the capability of the pin-level stuck-at fault model to simulate true error behavior symptoms in very large scale integrated (VLSI) digital circuits. The technique is based on a statistical comparison of the error behavior resulting from faults applied at the pin level of, and internal to, a VLSI circuit. As an example of an application of the technique, the error behavior of a microprocessor simulation subjected to internal stuck-at faults is compared with the error behavior which results from pin-level stuck-at faults. The error behavior is characterized by the time between errors and the duration of errors. Based on this example data, the pin-level stuck-at fault model is found to deliver less than ideal performance. However, with respect to the class of faults which cause a system crash, the pin-level stuck-at fault model is found to provide a good modeling capability.
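A sketch of the statistical comparison: testing whether time-between-error samples from pin-level and internal faults come from the same distribution. The two-sample Kolmogorov-Smirnov test used here is a reasonable choice but an assumption; the data are synthetic stand-ins:

```python
# Do time-between-error samples from pin-level faults and from internal
# faults come from the same distribution? Two-sample KS test on toy data.
import numpy as np
from scipy.stats import ks_2samp

rng = np.random.default_rng(4)
tbe_internal = rng.exponential(scale=1.0, size=300)   # internal stuck-at faults
tbe_pin = rng.exponential(scale=1.3, size=300)        # pin-level stuck-at faults

stat, p = ks_2samp(tbe_internal, tbe_pin)
print(f"KS statistic = {stat:.3f}, p-value = {p:.4f}")
if p < 0.05:
    print("pin-level model does NOT reproduce internal error behavior here")
else:
    print("no significant difference detected at the 5% level")
```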
Zeng, Yuehua; Shen, Zheng-Kang
2017-01-01
We develop a crustal deformation model to determine fault-slip rates for the western United States (WUS) using the Zeng and Shen (2014) method, which is based on a combined inversion of Global Positioning System (GPS) velocities and geological slip-rate constraints. The model consists of six blocks with boundaries aligned along major faults in California and the Cascadia subduction zone, which are represented as buried dislocations in the Earth. Faults distributed within blocks have their geometrical structure and locking depths specified by the Uniform California Earthquake Rupture Forecast, version 3 (UCERF3) and the 2008 U.S. Geological Survey National Seismic Hazard Map Project model. Faults slip beneath a predefined locking depth, except for a few segments where shallow creep is allowed. The slip rates are estimated using a least-squares inversion. The model resolution analysis shows that the resulting model is influenced heavily by geologic input, which fits the UCERF3 geologic bounds on California B faults and ± one-half of the geologic slip rates for most other WUS faults. The modeled slip rates for the WUS faults are consistent with the observed GPS velocity field. Our fit to these velocities is measured in terms of a normalized chi-square, which is 6.5. This updated model fits the data better than most other geodetic-based inversion models. Major discrepancies between well-resolved GPS inversion rates and geologic-consensus rates occur along some of the northern California A faults, the Mojave to San Bernardino segments of the San Andreas fault, the western Garlock fault, the southern segment of the Wasatch fault, and other faults. Off-fault strain-rate distributions are consistent with regional tectonics, with total off-fault moment rates of 7.2×10^18 N·m/year for California and 8.5×10^18 N·m/year for the WUS outside California.
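A toy version of the combined inversion: bounded linear least squares solving for two fault slip rates from "GPS" velocities under geologic bounds. The design matrix, consensus rates, and bounds are illustrative assumptions, not the Zeng-Shen model:

```python
# Bounded least-squares slip-rate inversion on a two-fault toy problem.
import numpy as np
from scipy.optimize import lsq_linear

# Each row maps the two slip rates to one observed velocity component.
G = np.array([[0.8, 0.1],
              [0.5, 0.4],
              [0.1, 0.9],
              [0.3, 0.6]])
true_rates = np.array([30.0, 5.0])   # mm/yr
d = G @ true_rates + np.random.default_rng(5).normal(0, 0.5, 4)

# Geologic constraints: each rate stays within +/- 50% of a consensus value.
consensus = np.array([28.0, 6.0])
res = lsq_linear(G, d, bounds=(0.5 * consensus, 1.5 * consensus))
print("estimated slip rates (mm/yr):", np.round(res.x, 1))
```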
A 3D modeling approach to complex faults with multi-source data
NASA Astrophysics Data System (ADS)
Wu, Qiang; Xu, Hua; Zou, Xukai; Lei, Hongzhuan
2015-04-01
Fault modeling is a very important step in making an accurate and reliable 3D geological model. Typical existing methods demand enough fault data to construct complex fault models; however, it is well known that the available fault data are generally sparse and undersampled. In this paper, we propose a fault modeling workflow that can integrate multi-source data to construct fault models. For faults that cannot be modeled with these data, especially small-scale faults or faults approximately parallel to the sections, we propose a fault deduction method to infer the hanging wall and footwall lines after displacement calculation. Moreover, the fault cutting algorithm can supplement the available fault points at locations where faults cut each other. Increasing fault points in poorly sampled areas not only supports efficient construction of fault models, but also reduces manual intervention. By using fault-based interpolation and remeshing the horizons, an accurate 3D geological model can be constructed. The method can naturally simulate geological structures whether or not the available geological data are sufficient. A concrete example of using the method in Tangshan, China, shows that it can be applied to broad and complex geological areas.
NASA Technical Reports Server (NTRS)
Wilson, Edward (Inventor)
2008-01-01
The present invention is a method for detecting and isolating fault modes in a system having a model describing its behavior and regularly sampled measurements. The models are used to calculate past and present deviations from measurements that would result with no faults present, as well as with one or more potential fault modes present. Algorithms that calculate and store these deviations, along with memory of when said faults, if present, would have an effect on the said actual measurements, are used to detect when a fault is present. Related algorithms are used to exonerate false fault modes and finally to isolate the true fault mode. This invention is presented with application to detection and isolation of thruster faults for a thruster-controlled spacecraft. As a supporting aspect of the invention, a novel, effective, and efficient filtering method for estimating the derivative of a noisy signal is presented.
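The patent's derivative-estimation filter is not reproduced here; as a stand-in, a Savitzky-Golay differentiator is one standard way to estimate the derivative of a noisy signal:

```python
# Savitzky-Golay differentiation of a noisy signal (a standard alternative;
# the patent's specific filter is not given in this abstract).
import numpy as np
from scipy.signal import savgol_filter

rng = np.random.default_rng(6)
t = np.linspace(0, 10, 500)
dt = t[1] - t[0]
signal = np.sin(t) + rng.normal(0, 0.05, t.size)   # noisy measurement

# window_length and polyorder trade smoothing against lag.
deriv = savgol_filter(signal, window_length=31, polyorder=3, deriv=1, delta=dt)

err = np.abs(deriv - np.cos(t))[50:-50].max()      # compare to true derivative
print(f"max interior error vs cos(t): {err:.3f}")
```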
The Fault Block Model: A novel approach for faulted gas reservoirs
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ursin, J.R.; Moerkeseth, P.O.
1994-12-31
The Fault Block Model was designed for the development of gas production from Sleipner Vest. The reservoir consists of marginal marine sandstone of the Hugin Formation. Modeling of highly faulted and compartmentalized reservoirs is severely impeded by the nature and extent of known and undetected faults and, in particular, their effectiveness as flow barriers. The model presented is efficient and superior to other models for highly faulted reservoirs, i.e. grid-based simulators, because it minimizes the effect of major undetected faults and geological uncertainties. In this article the authors present the Fault Block Model as a new tool to better understand the implications of geological uncertainty in faulted gas reservoirs with good productivity, with respect to uncertainty in well coverage and optimum gas recovery.
Graph-based real-time fault diagnostics
NASA Technical Reports Server (NTRS)
Padalkar, S.; Karsai, G.; Sztipanovits, J.
1988-01-01
A real-time fault detection and diagnosis capability is absolutely crucial in the design of large-scale space systems. Some of the existing AI-based fault diagnostic techniques like expert systems and qualitative modelling are frequently ill-suited for this purpose. Expert systems are often inadequately structured, difficult to validate, and suffer from knowledge acquisition bottlenecks. Qualitative modelling techniques sometimes generate a large number of failure source alternatives, thus hampering speedy diagnosis. In this paper we present a graph-based technique which is well suited for real-time fault diagnosis, structured knowledge representation and acquisition, and testing and validation. A Hierarchical Fault Model of the system to be diagnosed is developed. At each level of the hierarchy, there exist fault propagation digraphs denoting causal relations between failure modes of subsystems. The edges of such a digraph are weighted with fault propagation time intervals. Efficient and restartable graph algorithms are used for speedy on-line identification of failure source components.
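A sketch of source isolation on such a digraph: edges carry propagation-time intervals, and a candidate source must reach every observed alarm within accumulated bounds. The graph and timings are illustrative assumptions:

```python
# Source isolation on a fault-propagation digraph with (lo, hi) edge delays.
FAULT_GRAPH = {
    "pump": {"pressure_low": (1, 3)},
    "valve": {"pressure_low": (2, 5), "flow_low": (1, 2)},
    "pressure_low": {"alarm_P": (0, 1)},
    "flow_low": {"alarm_F": (0, 1)},
}

def reach(src):
    """All nodes reachable from src with accumulated (lo, hi) time bounds."""
    out, stack = {src: (0, 0)}, [src]
    while stack:
        node = stack.pop()
        lo0, hi0 = out[node]
        for nxt, (lo, hi) in FAULT_GRAPH.get(node, {}).items():
            if nxt not in out:
                out[nxt] = (lo0 + lo, hi0 + hi)
                stack.append(nxt)
    return out

def candidate_sources(alarms):
    """Components whose reachable set covers all alarms within bounds."""
    result = []
    for src in ("pump", "valve"):
        r = reach(src)
        if all(a in r and r[a][0] <= t <= r[a][1] for a, t in alarms.items()):
            result.append(src)
    return result

# alarm_P fired 2 s and alarm_F 1.5 s after the (unknown) failure onset:
print(candidate_sources({"alarm_P": 2.0, "alarm_F": 1.5}))   # -> ['valve']
```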
Comprehensive Fault Tolerance and Science-Optimal Attitude Planning for Spacecraft Applications
NASA Astrophysics Data System (ADS)
Nasir, Ali
Spacecraft operate in a harsh environment, are costly to launch, and experience unavoidable communication delay and bandwidth constraints. These factors motivate the need for effective onboard mission and fault management. This dissertation presents an integrated framework to optimize science goal achievement while identifying and managing encountered faults. Goal-related tasks are defined by pointing the spacecraft instrumentation toward distant targets of scientific interest. The relative value of science data collection is traded with the risk of failures to determine an optimal policy for mission execution. Our major innovation in fault detection and reconfiguration is to incorporate fault information obtained from two types of spacecraft models: one based on the dynamics of the spacecraft and the second based on the internal composition of the spacecraft. For fault reconfiguration, we consider possible changes in both the dynamics-based control law configuration and the composition-based switching configuration. We formulate our problem as a stochastic sequential decision problem or Markov Decision Process (MDP). To avoid the computational complexity involved in a fully integrated MDP, we decompose our problem into multiple MDPs. These MDPs include planning MDPs for different fault scenarios; a fault detection MDP based on a logic-based model of spacecraft component and system functionality; an MDP for resolving conflicts between fault information from the logic-based model and the dynamics-based spacecraft models; and the reconfiguration MDP that generates a policy optimized over the relative importance of the mission objectives versus spacecraft safety. Approximate Dynamic Programming (ADP) methods for the decomposition of the planning and fault detection MDPs are applied. To show the performance of the MDP-based frameworks and ADP methods, a suite of spacecraft attitude planning case studies is described. These case studies are used to analyze the content and behavior of computed policies in response to changes in design parameters. A primary case study is built from the Far Ultraviolet Spectroscopic Explorer (FUSE) mission, for which component models and their probabilities of failure are based on realistic mission data. A comparison of our approach with an alternative framework for spacecraft task planning and fault management is presented in the context of the FUSE mission.
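A tiny value-iteration sketch of the science-versus-safety trade-off the planning MDPs encode; the states, actions, rewards, and transition probabilities are illustrative assumptions, far smaller than the dissertation's decomposed MDPs:

```python
# Value iteration on a toy "observe target vs. stay safe" MDP.
import numpy as np

states = ["healthy", "degraded", "failed"]
actions = ["observe_target", "safe_mode"]
# P[a][s][s'] transition probabilities; observing risks degradation.
P = {
    "observe_target": np.array([[0.90, 0.09, 0.01],
                                [0.00, 0.85, 0.15],
                                [0.00, 0.00, 1.00]]),
    "safe_mode":      np.array([[0.99, 0.01, 0.00],
                                [0.05, 0.94, 0.01],
                                [0.00, 0.00, 1.00]]),
}
# Science reward for observing, traded against the risk of failure.
R = {"observe_target": np.array([10.0, 4.0, 0.0]),
     "safe_mode":      np.array([0.0, 0.5, 0.0])}

gamma, V = 0.95, np.zeros(3)
for _ in range(500):   # value iteration to (near) convergence
    V = np.max([R[a] + gamma * P[a] @ V for a in actions], axis=0)
policy = [actions[int(np.argmax([R[a][s] + gamma * P[a][s] @ V
                                 for a in actions]))] for s in range(3)]
print(dict(zip(states, policy)))
```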
Robust Fault Detection for Aircraft Using Mixed Structured Singular Value Theory and Fuzzy Logic
NASA Technical Reports Server (NTRS)
Collins, Emmanuel G.
2000-01-01
The purpose of fault detection is to identify when a fault or failure has occurred in a system such as an aircraft or expendable launch vehicle. The faults may occur in sensors, actuators, structural components, etc. One of the primary approaches to model-based fault detection relies on analytical redundancy: the output of a computer-based model (actually a state estimator) is compared with the sensor measurements of the actual system to determine when a fault has occurred. Unfortunately, the state estimator is based on an idealized mathematical description of the underlying plant that is never totally accurate. As a result of these modeling errors, false alarms can occur. This research uses mixed structured singular value theory, a relatively recent and powerful robustness analysis tool, to develop robust estimators and demonstrates the use of these estimators in fault detection. To allow qualitative human experience to be effectively incorporated into the detection process, fuzzy logic is used to predict the seriousness of the fault that has occurred.
Directivity models produced for the Next Generation Attenuation West 2 (NGA-West 2) project
Spudich, Paul A.; Watson-Lamprey, Jennie; Somerville, Paul G.; Bayless, Jeff; Shahi, Shrey; Baker, Jack W.; Rowshandel, Badie; Chiou, Brian
2012-01-01
Five new directivity models are being developed for the NGA-West 2 project. All are based on the NGA-West 2 database, which is considerably expanded from the original NGA-West database, containing about 3,000 more records from earthquakes having finite-fault rupture models. All of the new directivity models have parameters based on fault dimension in km, not normalized fault dimension. This feature removes a peculiarity of previous models which made them inappropriate for modeling large-magnitude events on long strike-slip faults. Two models are explicitly, and one is implicitly, 'narrowband' models, in which the effect of directivity does not monotonically increase with spectral period but instead peaks at a specific period that is a function of earthquake magnitude. These narrowband models' functional forms are capable of simulating directivity over a wider range of earthquake magnitude than previous models. The functional forms of the five models are presented.
Petersen, Mark D.; Zeng, Yuehua; Haller, Kathleen M.; McCaffrey, Robert; Hammond, William C.; Bird, Peter; Moschetti, Morgan; Shen, Zhengkang; Bormann, Jayne; Thatcher, Wayne
2014-01-01
The 2014 National Seismic Hazard Maps for the conterminous United States incorporate additional uncertainty in the fault slip-rate parameters that control earthquake-activity rates, compared with previous versions of the hazard maps. This additional uncertainty is accounted for by new geodesy- and geology-based slip-rate models for the Western United States. Models that were considered include an updated geologic model based on expert opinion and four combined inversion models informed by both geologic and geodetic input. The two block models considered indicate significantly higher slip rates than the expert-opinion model and the two fault-based combined inversion models. For the hazard maps, we apply 20 percent weight, with equal weighting, for the two fault-based models. Off-fault geodetic-based models were not considered in this version of the maps. Resulting changes to the hazard maps are generally less than 0.05 g (acceleration of gravity). Future research will improve the maps and interpret differences between the new models.
NASA Technical Reports Server (NTRS)
Duyar, A.; Guo, T.-H.; Merrill, W.; Musgrave, J.
1992-01-01
In a previous study (Guo, Merrill and Duyar, 1990), a conceptual development of a fault detection and diagnosis system for actuation faults of the space shuttle main engine was reported. This study, a continuation of the previous work, implements the developed fault detection and diagnosis scheme for real-time actuation fault diagnosis of the space shuttle main engine. The scheme will be used as an integral part of an intelligent control system demonstration experiment at NASA Lewis. The diagnosis system utilizes a model-based method with real-time identification and hypothesis testing for actuation, sensor, and performance degradation faults.
Diagnosing a Strong-Fault Model by Conflict and Consistency
Zhou, Gan; Feng, Wenquan
2018-01-01
The diagnosis method for a weak-fault model, with only normal behaviors of each component, has evolved over decades. However, many systems now demand strong-fault models, whose fault modes have specific behaviors as well. It is difficult to diagnose a strong-fault model due to its non-monotonicity. Currently, diagnosis methods usually employ conflicts to isolate possible faults, and the process can be expedited when some observed output is consistent with the model's prediction, where the consistency indicates probably-normal components. This paper solves the problem of efficiently diagnosing a strong-fault model by proposing a novel Logic-based Truth Maintenance System (LTMS) with two search approaches based on conflict and consistency. At the beginning, the original strong-fault model is encoded with Boolean variables and converted into Conjunctive Normal Form (CNF). Then the proposed LTMS is employed to reason over the CNF and find multiple minimal conflicts and maximal consistencies when a fault exists. The search approaches offer the best candidate efficiency based on the reasoning result until the diagnosis results are obtained. The completeness, coverage, correctness, and complexity of the proposals are analyzed theoretically to show their strengths and weaknesses. Finally, the proposed approaches are demonstrated by applying them to a real-world domain, the heat control unit of a spacecraft, where the proposed methods perform significantly better than best-first and conflict-directed A* search methods. PMID:29596302
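A toy illustration of strong-fault, consistency-based reasoning: health assumptions for a two-inverter chain with a defined stuck-at fault mode are enumerated against an observation. A real LTMS avoids this brute force; the circuit and fault mode are illustrative assumptions:

```python
# Brute-force consistency check over health variables of a two-inverter chain
# whose fault mode has defined behavior (strong-fault model: output stuck at 1).
from itertools import product

def inverter(ok: bool, inp: bool, out_if_broken: bool) -> bool:
    """Healthy inverter negates; the assumed fault mode sticks the output."""
    return (not inp) if ok else out_if_broken

def consistent(h1, h2, inp, observed):
    mid = inverter(h1, inp, out_if_broken=True)
    out = inverter(h2, mid, out_if_broken=True)
    return out == observed

# Observation: input True, output True. Enumerate health assignments
# consistent with it; the rest form conflicts.
candidates = [hs for hs in product([True, False], repeat=2)
              if consistent(*hs, inp=True, observed=True)]
print("consistent health assignments (inv1_ok, inv2_ok):", candidates)
```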
ASCS online fault detection and isolation based on an improved MPCA
NASA Astrophysics Data System (ADS)
Peng, Jianxin; Liu, Haiou; Hu, Yuhui; Xi, Junqiang; Chen, Huiyan
2014-09-01
Multi-way principal component analysis (MPCA) has received considerable attention and been widely used in process monitoring. A traditional MPCA algorithm unfolds multiple batches of historical data into a two-dimensional matrix and cuts the matrix along the time axis to form subspaces. However, low efficiency of the subspaces and difficult fault isolation are common disadvantages of the principal component model. This paper presents a new subspace construction method based on a kernel density estimation function that can effectively reduce the storage required for the subspace information. The MPCA model and the knowledge base are built on the new subspace. Then, fault detection and isolation with the squared prediction error (SPE) statistic and the Hotelling T2 statistic are realized in process monitoring. When a fault occurs, fault isolation based on the SPE statistic is achieved by residual contribution analysis of different variables. For fault isolation of subspaces based on the T2 statistic, the relationship between the statistic indicator and the state variables is constructed, and constraint conditions are presented to check the validity of fault isolation. Then, to improve the robustness of fault isolation to unexpected disturbances, a statistical method is adopted to relate single subspaces to multiple subspaces to increase the correct rate of fault isolation. Finally, fault detection and isolation based on the improved MPCA is used to monitor the automatic shift control system (ASCS) to prove the correctness and effectiveness of the algorithm. The research proposes a new subspace construction method to reduce the required storage capacity and improve the robustness of the principal component model, and establishes the relationship between the state variables and fault detection indicators for fault isolation.
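A sketch of the two monitoring statistics named above, computed from a plain PCA model on synthetic data (standing in for batch-unfolded MPCA): Hotelling's T2 in the retained subspace and the SPE in the residual subspace:

```python
# PCA-based process monitoring: Hotelling's T^2 and SPE statistics.
import numpy as np

rng = np.random.default_rng(7)
# Training data: 2 latent factors observed through 5 correlated variables.
latent = rng.normal(size=(200, 2))
mix = rng.normal(size=(2, 5))
X = latent @ mix + 0.1 * rng.normal(size=(200, 5))

mu = X.mean(axis=0)
Xc = X - mu
U, s, Vt = np.linalg.svd(Xc, full_matrices=False)
k = 2
P = Vt[:k].T                          # loadings of retained components
var = (s[:k] ** 2) / (len(X) - 1)     # component variances

def monitor(x):
    xc = x - mu
    t = P.T @ xc                      # scores in the retained subspace
    t2 = float(np.sum(t ** 2 / var))  # Hotelling's T^2
    resid = xc - P @ t
    spe = float(resid @ resid)        # squared prediction error
    return t2, spe

normal_sample = X[0]
faulty_sample = X[1] + np.array([0, 0, 2.0, 0, 0])   # step fault on variable 3
print("normal  T2, SPE:", [round(v, 2) for v in monitor(normal_sample)])
print("faulty  T2, SPE:", [round(v, 2) for v in monitor(faulty_sample)])
```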
Liu, Fang; Shen, Changqing; He, Qingbo; Zhang, Ao; Liu, Yongbin; Kong, Fanrang
2014-01-01
A fault diagnosis strategy based on the wayside acoustic monitoring technique is investigated for locomotive bearing fault diagnosis. Inspired by the transient modeling analysis method based on correlation filtering analysis, a so-called Parametric-Mother-Doppler-Wavelet (PMDW) is constructed with six parameters, including a center characteristic frequency and five kinematic model parameters. A Doppler effect eliminator containing a PMDW generator, a correlation filtering analysis module, and a signal resampler is invented to eliminate the Doppler effect embedded in the acoustic signal of the recorded bearing. Through the Doppler effect eliminator, the five kinematic model parameters can be identified based on the signal itself. Then, the signal resampler is applied to eliminate the Doppler effect using the identified parameters. With the ability to detect early bearing faults, the transient model analysis method is employed to detect localized bearing faults after the embedded Doppler effect is eliminated. The effectiveness of the proposed fault diagnosis strategy is verified via simulation studies and applications to diagnose locomotive roller bearing defects. PMID:24803197
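A sketch of Doppler removal by resampling: with the kinematic parameters treated as already identified, each reception time is mapped to an emission time and the signal is interpolated onto a uniform emission-time grid. Geometry, speeds, and the test tone are illustrative assumptions:

```python
# Doppler-effect removal by resampling onto a uniform emission-time grid.
import numpy as np

c, v, d = 340.0, 30.0, 2.0     # sound speed, source speed, closest distance
fs, T = 8000, 2.0
t_rx = np.arange(-T / 2, T / 2, 1 / fs)   # reception times, pass-by at t=0

# Emission time solves t_rx = t_em + range(t_em)/c; fixed-point iteration
# converges quickly for v << c.
t_em = t_rx.copy()
for _ in range(20):
    rng_ = np.sqrt((v * t_em) ** 2 + d ** 2)
    t_em = t_rx - rng_ / c

# A 500 Hz tone emitted by the source arrives Doppler-shifted...
received = np.sin(2 * np.pi * 500.0 * t_em)
# ...and resampling onto a uniform emission-time grid restores the tone.
uniform = np.linspace(t_em[0], t_em[-1], t_rx.size)
restored = np.interp(uniform, t_em, received)

crossings = np.sum(np.diff(np.sign(restored)) != 0) / 2
print(f"restored tone ~= {crossings / (uniform[-1] - uniform[0]):.0f} Hz")
```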
NASA Technical Reports Server (NTRS)
Throop, David R.
1992-01-01
The paper examines the requirements for the reuse of computational models employed in model-based reasoning (MBR) to support automated inference about mechanisms. Areas in which the theory of MBR is not yet completely adequate for using the information that simulations can yield are identified, and recent work in these areas is reviewed. It is argued that using MBR along with simulations forces the use of specific fault models. Fault models are used so that a particular fault can be instantiated into the model and run. This in turn implies that the component specification language needs to be capable of encoding any fault that might need to be sensed or diagnosed. It also means that the simulation code must anticipate all these faults at the component level.
Quasi-dynamic earthquake fault systems with rheological heterogeneity
NASA Astrophysics Data System (ADS)
Brietzke, G. B.; Hainzl, S.; Zoeller, G.; Holschneider, M.
2009-12-01
Seismic risk and hazard estimates mostly use purely empirical, stochastic models of earthquake fault systems, tuned specifically to the vulnerable areas of interest. Although such models allow for reasonable risk estimates, they cannot support physical statements about the described seismicity. In contrast, physics-based earthquake fault system models allow for physical reasoning about, and interpretation of, the produced seismicity and system dynamics. Recently, different fault-system earthquake simulators based on frictional stick-slip behavior have been used to study the effects of stress heterogeneity, rheological heterogeneity, and geometrical complexity on earthquake occurrence, spatial and temporal clustering of earthquakes, and system dynamics. Here we present a comparison of the characteristics of synthetic earthquake catalogs produced by two different formulations of quasi-dynamic fault-system earthquake simulators. Both models are based on discretized frictional faults embedded in an elastic half-space. One (1) is governed by rate- and state-dependent friction allowing three evolutionary stages of independent fault patches; the other (2) is governed by instantaneous frictional weakening with scheduled (and therefore causal) stress transfer. We analyze spatial and temporal clustering of events and characteristics of system dynamics by means of the physical parameters of the two approaches.
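A minimal quasi-dynamic stick-slip sketch: one spring-loaded patch with static and dynamic friction levels, the simplest analogue of the frictional fault patches in such simulators. All parameters are illustrative assumptions:

```python
# Single-patch stick-slip: load stress until the static threshold, then drop
# to the dynamic level and record the event (time, slip).
k, v_load, mu_s, mu_d, normal = 1.0, 1.0, 0.6, 0.4, 10.0
dt, stress, slips = 0.01, 0.0, []

for step in range(20000):
    stress += k * v_load * dt               # tectonic loading of the patch
    if stress >= mu_s * normal:             # static threshold: slip event
        drop = stress - mu_d * normal       # stress drop to the dynamic level
        slips.append((step * dt, drop / k)) # event time and slip amount
        stress -= drop

print(f"{len(slips)} events; first three: "
      f"{[(round(t, 1), round(s, 2)) for t, s in slips[:3]]}")
```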
QuakeSim: a Web Service Environment for Productive Investigations with Earth Surface Sensor Data
NASA Astrophysics Data System (ADS)
Parker, J. W.; Donnellan, A.; Granat, R. A.; Lyzenga, G. A.; Glasscoe, M. T.; McLeod, D.; Al-Ghanmi, R.; Pierce, M.; Fox, G.; Grant Ludwig, L.; Rundle, J. B.
2011-12-01
The QuakeSim science gateway environment includes a visually rich portal interface, web service access to data and data processing operations, and the QuakeTables ontology-based database of fault models and sensor data. The integrated tools and services are designed to assist investigators by covering the entire earthquake cycle of strain accumulation and release. The web interface now includes Drupal-based access to diverse and changing content, with a new ability to access data and data processing directly from the public page, as well as the traditional project management areas that require password access. The system is designed to make initial browsing of fault models and deformation data particularly engaging for new users. Popular data and data processing include GPS time series with data mining techniques to find anomalies in time and space, experimental forecasting methods based on catalogue seismicity, faulted deformation models (both half-space and finite element), and model-based inversion of sensor data. The fault models include the CGS and UCERF 2.0 faults of California and are easily augmented with self-consistent fault models from other regions. The QuakeTables deformation data include the comprehensive set of UAVSAR interferograms as well as a growing collection of satellite InSAR data. Fault interaction simulations are also being incorporated in the web environment based on Virtual California. A sample usage scenario is presented which follows an investigation of UAVSAR data from viewing as an overlay in Google Maps, to selection of an area of interest via a polygon tool, to fast extraction of the relevant correlation and phase information from large data files, to a model inversion of fault slip followed by calculation and display of a synthetic model interferogram.
Fault management for data systems
NASA Technical Reports Server (NTRS)
Boyd, Mark A.; Iverson, David L.; Patterson-Hine, F. Ann
1993-01-01
Issues related to automating the process of fault management (fault diagnosis and response) for data management systems are considered. Substantial benefits are to be gained by successful automation of this process, particularly for large, complex systems. The use of graph-based models to develop a computer assisted fault management system is advocated. The general problem is described and the motivation behind choosing graph-based models over other approaches for developing fault diagnosis computer programs is outlined. Some existing work in the area of graph-based fault diagnosis is reviewed, and a new fault management method which was developed from existing methods is offered. Our method is applied to an automatic telescope system intended as a prototype for future lunar telescope programs. Finally, an application of our method to general data management systems is described.
Intelligent classifier for dynamic fault patterns based on hidden Markov model
NASA Astrophysics Data System (ADS)
Xu, Bo; Feng, Yuguang; Yu, Jinsong
2006-11-01
It is difficult to build precise mathematical models for complex engineering systems because of the complexity of their structure and dynamics. Intelligent fault diagnosis introduces artificial intelligence and works without building an analytical mathematical model of the diagnostic object, so it is a practical approach to the diagnostic problems of complex systems. This paper presents an intelligent fault diagnosis method: an integrated fault-pattern classifier based on the Hidden Markov Model (HMM). The classifier consists of a dynamic time warping (DTW) algorithm, a self-organizing feature mapping (SOFM) network, and a Hidden Markov Model. First, after the dynamic observation vector in the measurement space is processed by DTW, an error vector containing the fault features of the system under test is obtained. Then a SOFM network is used as a feature extractor and vector quantization processor. Finally, fault diagnosis is realized by classifying fault patterns with the Hidden Markov Model classifier. Introducing dynamic time warping solves the problem of extracting features from the dynamic process vectors of complex systems such as aero-engines, and makes it possible to diagnose complex systems using dynamic process information. Simulation experiments show that the diagnosis model is easy to extend, and that the fault-pattern classifier is efficient and convenient for detecting and diagnosing new faults.
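A sketch of the classifier's first stage: the textbook O(nm) dynamic time warping recursion aligning a measured process vector with a template despite timing differences; the sequences are toy data:

```python
# Textbook DTW distance between two sequences of possibly different lengths.
import numpy as np

def dtw(a, b):
    n, m = len(a), len(b)
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = abs(a[i - 1] - b[j - 1])
            D[i, j] = cost + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
    return D[n, m]

template = np.sin(np.linspace(0, 2 * np.pi, 50))
measured = np.sin(np.linspace(0, 2 * np.pi, 65)) + 0.02   # stretched + offset
noise = np.random.default_rng(8).normal(size=65)
print(f"DTW distance to template: {dtw(template, measured):.2f}")
print(f"DTW distance to noise:    {dtw(template, noise):.2f}")
```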
Distributed Fault Detection Based on Credibility and Cooperation for WSNs in Smart Grids.
Shao, Sujie; Guo, Shaoyong; Qiu, Xuesong
2017-04-28
Due to the increasingly important role in monitoring and data collection that sensors play, accurate and timely fault detection is a key issue for wireless sensor networks (WSNs) in smart grids. This paper presents a novel distributed fault detection mechanism for WSNs based on credibility and cooperation. Firstly, a reasonable credibility model of a sensor is established to identify any suspicious status of the sensor according to its own temporal data correlation. Based on the credibility model, the suspicious sensor is then chosen to launch fault diagnosis requests. Secondly, the sending time of fault diagnosis request is discussed to avoid the transmission overhead brought about by unnecessary diagnosis requests and improve the efficiency of fault detection based on neighbor cooperation. The diagnosis reply of a neighbor sensor is analyzed according to its own status. Finally, to further improve the accuracy of fault detection, the diagnosis results of neighbors are divided into several classifications to judge the fault status of the sensors which launch the fault diagnosis requests. Simulation results show that this novel mechanism can achieve high fault detection ratio with a small number of fault diagnoses and low data congestion probability. PMID:28452925
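A sketch of the two-stage mechanism: a sensor's credibility drops when a reading breaks its own temporal correlation, and only then are neighbors polled, with their majority reply settling the fault status. The predictor, thresholds, and readings are illustrative assumptions:

```python
# Credibility from temporal self-consistency, then neighbor cooperation.
import numpy as np

def credibility(history, reading, sigma=1.0):
    """Credibility in (0,1]: high when the reading fits the recent trend."""
    predicted = np.mean(history[-3:])            # simple temporal predictor
    return float(np.exp(-0.5 * ((reading - predicted) / sigma) ** 2))

def diagnose(own_history, own_reading, neighbor_readings, cred_thresh=0.4):
    c = credibility(own_history, own_reading)
    if c >= cred_thresh:
        return "normal (no diagnosis request sent)"
    # Suspicious: launch a diagnosis request and compare with neighbors.
    agree = sum(abs(own_reading - r) < 2.0 for r in neighbor_readings)
    if agree >= len(neighbor_readings) / 2:
        return "event in environment (neighbors agree)"
    return "sensor fault (neighbors disagree)"

hist = [20.1, 20.3, 20.2, 20.4]
print(diagnose(hist, 20.5, [20.4, 20.6, 20.3]))   # normal
print(diagnose(hist, 35.0, [20.4, 20.6, 20.3]))   # sensor fault
```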
NASA Astrophysics Data System (ADS)
Utegulov, B. B.; Utegulov, A. B.; Meiramova, S.
2018-02-01
The paper proposes the development of a self-learning machine for creating models of microprocessor-based single-phase ground fault protection devices in networks with an isolated neutral at voltages above 1000 V. Such a self-learning machine makes it possible to effectively implement mathematical models that automatically change the settings of single-phase earth fault protection devices.
Fault tolerant control of multivariable processes using auto-tuning PID controller.
Yu, Ding-Li; Chang, T K; Yu, Ding-Wen
2005-02-01
Fault tolerant control of dynamic processes is investigated in this paper using an auto-tuning PID controller. The proposed fault tolerant control scheme comprises an auto-tuning PID controller based on an adaptive neural network model. The model is trained online using the extended Kalman filter (EKF) algorithm to learn the system's post-fault dynamics. Based on this model, the PID controller adjusts its parameters to compensate for the effects of the faults, so that the control performance recovers from the degradation. The auto-tuning algorithm for the PID controller is derived using the Lyapunov method, so the model-predicted tracking error is guaranteed to converge asymptotically. The method is applied to a simulated two-input two-output continuous stirred tank reactor (CSTR) with various faults, which demonstrates the applicability of the developed scheme to industrial processes.
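The auto-tuning loop can be sketched as follows. This is a minimal illustration, not the paper's Lyapunov-derived law: the gain update below is a naive gradient-style step on the squared tracking error, and it assumes the plant responds positively to the control signal:

```python
class AutoTuningPID:
    """Discrete PID whose gains are nudged online to reduce tracking error."""

    def __init__(self, kp=1.0, ki=0.1, kd=0.0, lr=1e-3, dt=0.1):
        self.k = [kp, ki, kd]
        self.lr, self.dt = lr, dt
        self.integ, self.prev_e = 0.0, 0.0

    def control(self, e):
        self.integ += e * self.dt
        deriv = (e - self.prev_e) / self.dt
        terms = [e, self.integ, deriv]
        u = sum(k * t for k, t in zip(self.k, terms))
        # Gradient-style adaptation: assumes the plant gain is locally positive
        # (sketch only; the paper derives the update from a Lyapunov function).
        self.k = [max(0.0, k + self.lr * e * t) for k, t in zip(self.k, terms)]
        self.prev_e = e
        return u

# At each sample: u = pid.control(setpoint - measurement)
```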
Study on the evaluation method for fault displacement based on characterized source model
NASA Astrophysics Data System (ADS)
Tonagi, M.; Takahama, T.; Matsumoto, Y.; Inoue, N.; Irikura, K.; Dalguer, L. A.
2016-12-01
IAEA Specific Safety Guide (SSG) 9 states that probabilistic methods for evaluating fault displacement should be used if no sufficient basis is available to conclude by the deterministic methodology that the fault is not capable. In addition, the International Seismic Safety Centre has compiled an ANNEX on seismic hazard for nuclear facilities described in SSG-9, showing the utility of deterministic and probabilistic evaluation methods for fault displacement. In Japan, important nuclear facilities are required to be established on ground where fault displacement will not arise when earthquakes occur in the future. Under these conditions, and based on these requirements, we need to develop evaluation methods for fault displacement to enhance the safety of nuclear facilities. We are studying deterministic and probabilistic methods through tentative analyses using observed records, such as surface fault displacements and near-fault strong ground motions, of inland crustal earthquakes in which fault displacement occurred. In this study, we introduce the concept of the evaluation methods for fault displacement and then show part of the tentative analysis results for the deterministic method, as follows: (1) For the 1999 Chi-Chi earthquake, referring to the slip distribution estimated by waveform inversion, we construct a characterized source model (Miyake et al., 2003, BSSA) that can explain the observed near-fault broadband strong ground motions. (2) Referring to the characterized source model constructed in (1), we study an evaluation method for surface fault displacement using a hybrid method that combines the particle method and the distinct element method. Finally, we suggest a deterministic method to evaluate fault displacement based on a characterized source model. This research was part of the 2015 research project 'Development of evaluating method for fault displacement' by the Secretariat of the Nuclear Regulation Authority (S/NRA), Japan.
Testability analysis on a hydraulic system in a certain equipment based on simulation model
NASA Astrophysics Data System (ADS)
Zhang, Rui; Cong, Hua; Liu, Yuanhong; Feng, Fuzhou
2018-03-01
Aiming at the complicated structure of hydraulic systems and the shortage of fault statistics, a multi-valued testability analysis method based on a simulation model is proposed. Based on an AMESim simulation model, the method injects simulated faults and records the variation of test parameters, such as pressure and flow rate, at each test point relative to normal conditions. A multi-valued fault-test dependency matrix is thus established, from which the fault detection rate (FDR) and fault isolation rate (FIR) are calculated. The testability and fault diagnosis capability of the system are then analyzed and evaluated, reaching only 54% (FDR) and 23% (FIR). To improve the testability of the system, the number and position of the test points are optimized. Results show the proposed test placement scheme can address the difficulty, inefficiency and high cost of system maintenance.
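The FDR/FIR computation from a dependency matrix can be illustrated directly. In this minimal sketch (with a made-up matrix), rows are faults, columns are test points, and a nonzero entry means the fault perturbs that test; FIR is taken here as the fraction of detected faults with a unique test signature:

```python
import numpy as np

def fdr_fir(D):
    """Fault detection/isolation rates from a fault-test dependency matrix.
    D[i, j] != 0 means fault i produces a detectable change at test j."""
    detected = (D != 0).any(axis=1)               # fault triggers some test
    fdr = detected.mean()
    # A detected fault is isolable if its test signature row is unique.
    sigs = [tuple(row) for row in (D != 0).astype(int)]
    unique = np.array([sigs.count(s) == 1 for s in sigs])
    fir = (detected & unique).sum() / max(detected.sum(), 1)
    return fdr, fir

D = np.array([[1, 0, 0],     # hypothetical dependency matrix
              [1, 0, 0],     # same signature as fault 0 -> not isolable
              [0, 1, 1],
              [0, 0, 0]])    # never detected
print(fdr_fir(D))            # (0.75, 0.333...)
```

Test-point placement optimization then amounts to searching for the column subset that maximizes these two rates at acceptable cost.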
Improving Multiple Fault Diagnosability using Possible Conflicts
NASA Technical Reports Server (NTRS)
Daigle, Matthew J.; Bregon, Anibal; Biswas, Gautam; Koutsoukos, Xenofon; Pulido, Belarmino
2012-01-01
Multiple fault diagnosis is a difficult problem for dynamic systems. Due to fault masking, compensation, and relative time of fault occurrence, multiple faults can manifest in many different ways as observable fault signature sequences. This decreases diagnosability of multiple faults, and therefore leads to a loss in effectiveness of the fault isolation step. We develop a qualitative, event-based, multiple fault isolation framework, and derive several notions of multiple fault diagnosability. We show that using Possible Conflicts, a model decomposition technique that decouples faults from residuals, we can significantly improve the diagnosability of multiple faults compared to an approach using a single global model. We demonstrate these concepts and provide results using a multi-tank system as a case study.
Towards a Fault-based SHA in the Southern Upper Rhine Graben
NASA Astrophysics Data System (ADS)
Baize, Stéphane; Reicherter, Klaus; Thomas, Jessica; Chartier, Thomas; Cushing, Edward Marc
2016-04-01
A brief look at a seismic map of the Upper Rhine Graben area (say, between Strasbourg and Basel) reveals that the region is seismically active. The area has been hit recently by shallow and moderate quakes but, historically, strong quakes have damaged and devastated populated zones. Several authors previously suggested, through preliminary geomorphological and geophysical studies, that active faults could be traced along the eastern margin of the graben. Thus, fault-based PSHA (probabilistic seismic hazard assessment) studies should be developed. Nevertheless, most of the input data in fault-based PSHA models are highly uncertain, being based upon sparse or hypothetical data. Geophysical and geological data document the presence of post-Tertiary westward-dipping faults in the area. However, our first investigations suggest that the available surface fault maps do not reliably document Quaternary fault traces. The slip rate values currently usable in fault-based PSHA models are based on regional stratigraphic data, but these include neither detailed dating nor clear basal surface contours. Several hints of fault activity do exist, and we now have the relevant tools and techniques to assess the activity of the faults of concern. Our preliminary analyses suggest that LiDAR topography can adequately image the fault segments and, through detailed geomorphological analysis, these data allow tracking of cumulative fault offsets. Because the fault models must currently be considered highly uncertain, our project for the next three years is to acquire and analyze these accurate topographic data, to trace the active faults, and to determine slip rates through the dating of relevant features. Eventually, we plan to find a key site for a paleoseismological trench, because this approach has proved worthwhile in the Graben, both to the north (Worms and Strasbourg) and to the south (Basel). This would be done to establish definitively whether the faults ruptured the ground surface during the Quaternary, and to determine key fault parameters such as the magnitude and age of large events.
A Power Transformers Fault Diagnosis Model Based on Three DGA Ratios and PSO Optimization SVM
NASA Astrophysics Data System (ADS)
Ma, Hongzhe; Zhang, Wei; Wu, Rongrong; Yang, Chunyan
2018-03-01
In order to make up for the shortcomings of existing transformer fault diagnosis methods in dissolved gas-in-oil analysis (DGA) feature selection and parameter optimization, a transformer fault diagnosis model based on the three DGA ratios and a support vector machine (SVM) optimized by particle swarm optimization (PSO) is proposed. The support vector machine is extended to a nonlinear, multi-class SVM, PSO is applied to optimize the multi-class SVM model, and transformer fault diagnosis is conducted in combination with the cross-validation principle. The fault diagnosis results show that the average accuracy of the proposed method is better than that of the standard support vector machine and the genetic-algorithm-optimized support vector machine, demonstrating that the proposed method can effectively improve the accuracy of transformer fault diagnosis.
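A compact sketch of PSO-tuned SVM hyperparameters follows (scikit-learn is assumed for the SVM and cross-validation; the DGA ratio features `X` and fault labels `y` are placeholders, and the swarm settings are generic rather than the paper's):

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score

def fitness(params, X, y):
    """Cross-validated accuracy of an RBF multi-class SVM for given (C, gamma)."""
    C, gamma = np.exp(params)                        # search in log space
    return cross_val_score(SVC(C=C, gamma=gamma), X, y, cv=5).mean()

def pso_svm(X, y, n_particles=10, iters=20, w=0.7, c1=1.5, c2=1.5, seed=0):
    rng = np.random.default_rng(seed)
    pos = rng.uniform(-3, 3, (n_particles, 2))       # [log C, log gamma]
    vel = np.zeros_like(pos)
    pbest = pos.copy()
    pbest_f = np.array([fitness(p, X, y) for p in pos])
    gbest = pbest[pbest_f.argmax()].copy()
    for _ in range(iters):
        r1, r2 = rng.random(pos.shape), rng.random(pos.shape)
        vel = w * vel + c1 * r1 * (pbest - pos) + c2 * r2 * (gbest - pos)
        pos += vel
        f = np.array([fitness(p, X, y) for p in pos])
        better = f > pbest_f
        pbest[better], pbest_f[better] = pos[better], f[better]
        gbest = pbest[pbest_f.argmax()].copy()
    return np.exp(gbest)                             # best (C, gamma)
```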
Probabilistic seismic hazard study based on active fault and finite element geodynamic models
NASA Astrophysics Data System (ADS)
Kastelic, Vanja; Carafa, Michele M. C.; Visini, Francesco
2016-04-01
We present a probabilistic seismic hazard analysis (PSHA) that is based exclusively on active faults and geodynamic finite element input models; seismic catalogues were used only for a posterior comparison. We applied the developed model to the External Dinarides, a slowly deforming thrust-and-fold belt at the contact between Adria and Eurasia. Our method consists of establishing two earthquake rupture forecast models: (i) a geological active fault input (GEO) model and (ii) a finite element (FEM) model. The GEO model is based on an active fault database that provides information on fault location and its geometric and kinematic parameters, together with estimates of its slip rate. By default, all deformation in this model is released along the active faults. The FEM model is based on a numerical geodynamic model developed for the study region; here the deformation is released not only along the active faults but also in the volumetric continuum elements. From both models we calculated the corresponding activity rates, earthquake rates, and the final expected peak ground accelerations. We investigated both source model and earthquake model uncertainties by varying the main active fault and earthquake rate calculation parameters through corresponding branches of the seismic hazard logic tree. Hazard maps and UHS curves have been produced for horizontal ground motion on bedrock conditions (VS30 ≥ 800 m/s), thereby not considering local site amplification effects. The hazard was computed over a 0.2° spaced grid considering 648 branches of the logic tree and the mean value of the 10% probability of exceedance in 50 years hazard level, while the 5th and 95th percentiles were also computed to investigate the model limits. We conducted a sensitivity analysis to determine which input parameters influence the final hazard results, and to what extent. The results show that the deformation model, with its internal variability, and the choice of the ground motion prediction equations (GMPEs) are the most influential parameters; both have a significant effect on the hazard results. Good knowledge of the existence of active faults and of their geometric and activity characteristics is therefore of key importance. We also show that PSHA models based exclusively on active faults and geodynamic inputs, which are thus not dependent on past earthquake occurrences, provide a valid method for seismic hazard calculation.
Predeployment validation of fault-tolerant systems through software-implemented fault insertion
NASA Technical Reports Server (NTRS)
Czeck, Edward W.; Siewiorek, Daniel P.; Segall, Zary Z.
1989-01-01
The fault injection-based automated testing (FIAT) environment, which can be used to experimentally characterize and evaluate distributed real-time systems under fault-free and faulted conditions, is described. A survey of validation methodologies is presented, and the need for fault insertion within them is demonstrated. The origins and models of faults, and the motivation for the FIAT concept, are reviewed. FIAT employs a validation methodology which builds confidence in the system by first providing a baseline of fault-free performance data and then characterizing the behavior of the system with faults present. Fault insertion is accomplished through software and allows faults, or the manifestations of faults, to be inserted either by seeding faults into memory or by triggering error detection mechanisms. FIAT is capable of emulating a variety of fault-tolerant strategies and architectures, can monitor system activity, and can automatically orchestrate experiments involving the insertion of faults. A common system interface provides ease of use and decreases experiment development and run time. Fault models chosen for experiments on FIAT have generated system responses which parallel those observed in real systems under faulty conditions. These capabilities are shown by two example experiments, each using a different fault-tolerance strategy.
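In the spirit of FIAT's memory fault seeding, here is a toy sketch (not the FIAT implementation; the data, checksum detector, and names are all illustrative): a bit is flipped in a data word, and a checksum taken against the fault-free baseline reports the error:

```python
import random

def checksum(words):
    """Simple additive checksum over 32-bit words (toy error detector)."""
    return sum(words) & 0xFFFFFFFF

def inject_bit_flip(words, rng):
    """Seed a fault by flipping one random bit of one random word."""
    i = rng.randrange(len(words))
    words[i] ^= 1 << rng.randrange(32)
    return i

rng = random.Random(42)
data = [rng.randrange(2**32) for _ in range(64)]
baseline = checksum(data)        # fault-free baseline, as in FIAT's first phase
faulty_index = inject_bit_flip(data, rng)
detected = checksum(data) != baseline
print(f"fault at word {faulty_index}, detected: {detected}")
```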
Fault detection of Tennessee Eastman process based on topological features and SVM
NASA Astrophysics Data System (ADS)
Zhao, Huiyang; Hu, Yanzhu; Ai, Xinbo; Hu, Yu; Meng, Zhen
2018-03-01
Fault detection in industrial processes is a popular research topic. Although the distributed control system (DCS) has been introduced to monitor the state of industrial processes, it still cannot satisfy all the fault detection requirements of all industrial systems. In this paper, we propose a novel method based on topological features and a support vector machine (SVM) for fault detection in industrial processes. The proposed method takes global information about the measured variables into account through a complex network model and uses an SVM to predict whether the system has developed a fault. The method can be divided into four steps: network construction, network analysis, model training and model testing. Finally, we apply the model to the Tennessee Eastman process (TEP). The results show that this method works well and can be a useful supplement for fault detection in industrial processes.
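A rough sketch of the network-construction and feature steps is below. The assumptions are mine, not the paper's: variables are linked when their absolute correlation exceeds a threshold, and node degrees serve as the topological feature vector fed to the SVM:

```python
import numpy as np
from sklearn.svm import SVC

def topo_features(X, thresh=0.6):
    """Build a correlation network over measured variables and return node
    degrees as a topological feature vector for one data window."""
    corr = np.corrcoef(X, rowvar=False)        # variables in columns
    adj = (np.abs(corr) > thresh).astype(int)
    np.fill_diagonal(adj, 0)                   # no self-loops
    return adj.sum(axis=0)                     # degree of each variable node

def train_detector(windows, labels):
    """windows: list of (n_samples, n_vars) arrays; labels: 0 normal / 1 faulty."""
    feats = np.array([topo_features(w) for w in windows])
    return SVC(kernel="rbf").fit(feats, labels)
```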
NASA Astrophysics Data System (ADS)
Schlechtingen, Meik; Ferreira Santos, Ilmar
2011-07-01
This paper presents the results of a comparison of three different model-based approaches for wind turbine fault detection in online SCADA data, applying the developed models to five real measured faults and anomalies. The regression-based model, the simplest approach to building a normal behavior model, is compared to two artificial neural network based approaches: a full signal reconstruction model and an autoregressive normal behavior model. Based on a real time series containing two generator bearing damages, the capability of identifying the incipient fault prior to the actual failure is investigated. The period after the first bearing damage is used to develop the three normal behavior models, and the developed or trained models are used to investigate how the second damage manifests in the prediction error. Furthermore, the full signal reconstruction and the autoregressive approaches are applied to further real time series containing gearbox bearing damages and stator temperature anomalies. The comparison revealed that all three models are capable of detecting incipient faults. However, they differ in the effort required for model development and in the remaining operational time after the first indication of damage. The general nonlinear neural network approaches outperform the regression model: the remaining seasonality in the regression model's prediction error makes it difficult to detect abnormality, and leads to increased alarm levels and thus a shorter remaining operational period. For the bearing damages and stator anomalies under investigation, the full signal reconstruction neural network gave the best fault visibility and thus led to the highest confidence level.
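The autoregressive normal-behavior idea reduces to: fit on healthy data, then monitor the prediction error. A minimal linear-AR sketch follows (the paper uses neural networks; the lag order and any alarm threshold here are arbitrary choices):

```python
import numpy as np

def fit_ar(signal, p=5):
    """Least-squares AR(p) model fitted on a healthy training period."""
    X = np.column_stack([signal[i:len(signal) - p + i] for i in range(p)])
    y = signal[p:]
    coef, *_ = np.linalg.lstsq(X, y, rcond=None)
    return coef

def prediction_error(signal, coef):
    """Residual between the signal and its one-step AR prediction."""
    p = len(coef)
    X = np.column_stack([signal[i:len(signal) - p + i] for i in range(p)])
    return signal[p:] - X @ coef

# Alarm policy (illustrative): flag when a smoothed residual exceeds
# k * sigma of the residual measured on the healthy training period.
```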
Gas Path On-line Fault Diagnostics Using a Nonlinear Integrated Model for Gas Turbine Engines
NASA Astrophysics Data System (ADS)
Lu, Feng; Huang, Jin-quan; Ji, Chun-sheng; Zhang, Dong-dong; Jiao, Hua-bin
2014-08-01
Gas path fault diagnosis is a key technology that assists operators in managing gas turbine engine units. However, gradual performance degradation is inevitable with usage, and it results in model mismatch and consequently misdiagnosis by the popular model-based approaches. In this paper, an on-line integrated architecture based on nonlinear models is developed for gas turbine engine anomaly detection and fault diagnosis over the course of the engine's life. The two engine models have different performance parameter update rates. One is a nonlinear real-time adaptive performance model with a spherical square-root unscented Kalman filter (SSR-UKF) producing performance estimates; the other is a nonlinear baseline model for the measurement estimates. The fault detection and diagnosis logic is designed to discriminate between sensor faults and component faults. This integrated architecture is not only aware of long-term engine health degradation but is also effective in detecting gas path performance anomaly shifts while the engine continues to degrade. The benefits of the proposed approach over the existing architecture are investigated through experiment and analysis.
NASA Astrophysics Data System (ADS)
Polverino, Pierpaolo; Frisk, Erik; Jung, Daniel; Krysander, Mattias; Pianese, Cesare
2017-07-01
The present paper proposes an advanced approach to fault detection and isolation for Polymer Electrolyte Membrane Fuel Cell (PEMFC) systems through a model-based diagnostic algorithm. The algorithm is developed upon a lumped parameter model simulating a whole PEMFC system oriented towards automotive applications. This model is inspired by other models available in the literature, with further attention to stack thermal dynamics and water management. The developed model is analysed by means of Structural Analysis to identify the correlations among the involved physical variables, the defined equations and a set of faults which may occur in the system (related to both auxiliary component malfunctions and stack degradation phenomena). Residual generators are designed by means of Causal Computation analysis, and the maximum theoretical fault isolability achievable with a minimal number of installed sensors is investigated. The achieved results prove the capability of the algorithm to theoretically detect and isolate almost all faults using only stack voltage and temperature sensors, with significant advantages from an industrial point of view. The effective fault isolability is proved through fault simulations at a specific fault magnitude with an advanced residual evaluation technique that considers quantitative residual deviations from normal conditions and achieves unambiguous fault isolation.
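Residual-based isolation of the kind produced by Structural Analysis can be sketched as a signature-matching step. The signature table below is hypothetical (the paper's faults and residuals differ): rows are residual generators, columns are faults, and an entry marks that the residual is sensitive to the fault:

```python
import numpy as np

# Hypothetical fault signature matrix: S[i, j] = 1 if residual i reacts to fault j.
FAULTS = ["compressor", "humidifier", "stack_degradation"]
S = np.array([[1, 0, 1],
              [0, 1, 1],
              [1, 1, 0]])

def isolate(triggered):
    """Return faults whose signature column matches the triggered residuals."""
    triggered = np.asarray(triggered)
    matches = [f for j, f in enumerate(FAULTS)
               if np.array_equal(S[:, j], triggered)]
    return matches or ["no unique match"]

print(isolate([1, 0, 1]))   # -> ['compressor']
```

Unambiguous isolation requires every fault's column to be distinct, which is exactly the property the sensor-placement analysis checks.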
Wali, Behram; Khattak, Asad J; Xu, Jingjing
2018-01-01
The main objective of this study is to simultaneously investigate the degree of injury severity sustained by drivers involved in head-on collisions with respect to fault status designation. This is complicated to answer owing to many issues, one of which is the potential correlation between the injury outcomes of drivers involved in the same head-on collision. To address this concern, we present seemingly unrelated bivariate ordered response models that analyze the joint injury severity probability distribution of at-fault and not-at-fault drivers. Moreover, the assumption of bivariate normality of residuals and the linear form of stochastic dependence implied by such models may be unduly restrictive. To test this, Archimedean copula structures and normal mixture marginals are integrated into the joint estimation framework, which can characterize complex forms of stochastic dependence and non-normality in the residual terms. The models are estimated using 2013 Virginia police-reported two-vehicle head-on collision data in which exactly one driver is at fault. The results suggest that both at-fault and not-at-fault drivers sustained serious/fatal injuries in 8% of crashes, whereas in 4% of the cases the not-at-fault driver sustained a serious/fatal injury with no injury to the at-fault driver at all. Furthermore, if the at-fault driver is fatigued, apparently asleep, or has been drinking, the not-at-fault driver is more likely to sustain a severe/fatal injury, controlling for other factors and for potential correlations between the injury outcomes. While not-at-fault vehicle speed affects the injury severity of the at-fault driver, the effect is smaller than the effect of at-fault vehicle speed on the at-fault injury outcome. By contrast, and importantly, the effect of at-fault vehicle speed on the injury severity of the not-at-fault driver is almost equal to the effect of not-at-fault vehicle speed on the injury outcome of the not-at-fault driver. Compared to traditional ordered probability models, the study provides evidence that copula-based bivariate models can provide more reliable estimates and richer insights. Practical implications of the results are discussed.
A multiple fault rupture model of the November 13 2016, M 7.8 Kaikoura earthquake, New Zealand
NASA Astrophysics Data System (ADS)
Benites, R. A.; Francois-Holden, C.; Langridge, R. M.; Kaneko, Y.; Fry, B.; Kaiser, A. E.; Caldwell, T. G.
2017-12-01
The rupture history of the November 13 2016 MW 7.8 Kaikoura earthquake, recorded by near- and intermediate-field strong-motion seismometers and two high-rate GPS stations, reveals a complex cascade of multiple crustal fault ruptures. In spite of such complexity, we show that the rupture history of each fault is well approximated by a simple kinematic model with uniform slip and rupture velocity. Using 9 faults embedded in a 19 km thick crustal layer, each with a prescribed slip vector and rupture velocity, this model accurately reproduces the displacement waveforms recorded at the near-field strong-motion and GPS stations. The model includes the `Papatea Fault', with a mixed thrust and strike-slip mechanism based on in-situ geological observations, where up to 8 m of uplift was observed. Although the kinematic model fits the ground motion at the nearest strong-motion station, it does not reproduce the one-sided nature of the static deformation field observed geodetically. This suggests that a dislocation-based approach does not completely capture the mechanical response of the Papatea Fault. The fault system as a whole extends for approximately 150 km along the eastern side of the Marlborough fault system in the South Island of New Zealand, and the total duration of the rupture was 74 seconds. The timing and location of each fault's rupture suggest fault interaction and triggering, resulting in a northward cascade of crustal ruptures. Our model does not require rupture of the underlying subduction interface to explain the data.
NASA Astrophysics Data System (ADS)
Hsieh, Cheng-En; Huang, Wen-Jeng; Chang, Ping-Yu; Lo, Wei
2016-04-01
An unmanned aerial vehicle (UAV) with a digital camera is an efficient tool for geologists investigating structural patterns in the field. By setting ground control points (GCPs), UAV-based photogrammetry provides high-quality, quantitative results such as a digital surface model (DSM) and orthomosaic and elevational images. We combine an elevational outcrop 3D model and a digital surface model to analyze the structural characteristics of the active Sanyi fault in the Houli-Fengyuan area, western Taiwan. Furthermore, we collect resistivity survey profiles and drilling core data in the Fengyuan District in order to construct the subsurface fault geometry. The ground sample distance (GSD) of the elevational outcrop 3D model in this study is 3.64 cm/pixel. Our preliminary results show that five fault branches are distributed across a 500-meter-wide zone on the elevational outcrop, and the width of the Sanyi fault zone is likely much greater than this value. Together with our field observations, we propose a structural evolution model to demonstrate how the five fault branches developed. The resistivity survey profiles show that Holocene gravel was disturbed by the Sanyi fault in the Fengyuan area.
Distributed Seismic Moment Fault Model, Spectral Characteristics and Radiation Patterns
NASA Astrophysics Data System (ADS)
Shani-Kadmiel, Shahar; Tsesarsky, Michael; Gvirtzman, Zohar
2014-05-01
We implement a Distributed Seismic Moment (DSM) fault model, a physics-based representation of an earthquake source based on a skewed-Gaussian slip distribution over an elliptical rupture patch, for the purpose of forward modeling of seismic-wave propagation in a 3-D heterogeneous medium. The elliptical rupture patch is described by 13 parameters: location (3), dimensions of the patch (2), patch orientation (1), focal mechanism (3), nucleation point (2), peak slip (1), and rupture velocity (1). A node-based, second-order finite difference approach is used to solve the seismic-wave equations in displacement formulation (WPP, Nilsson et al., 2007). Results of our DSM fault model are compared with three commonly used fault models: the Point Source Model (PSM), Haskell's fault model (HM), and HM with Radial (HMR) rupture propagation. Spectral features of the waveforms and radiation patterns from these four models are investigated. The DSM fault model best combines the simplicity and symmetry of the PSM with the directivity effects of the HMR, while satisfying the physical requirement of a smooth transition from peak slip at the nucleation point to zero at the rupture patch border. The implementation of the DSM in seismic-wave propagation forward models comes at negligible computational cost. Reference: Nilsson, S., Petersson, N. A., Sjogreen, B., and Kreiss, H.-O. (2007). Stable Difference Approximations for the Elastic Wave Equation in Second Order Formulation. SIAM Journal on Numerical Analysis, 45(5), 1902-1936.
A distributed fault-detection and diagnosis system using on-line parameter estimation
NASA Technical Reports Server (NTRS)
Guo, T.-H.; Merrill, W.; Duyar, A.
1991-01-01
The development of a model-based fault-detection and diagnosis system (FDD) is reviewed. The system can be used as an integral part of an intelligent control system. It determines system faults by comparing measurements of the system with a priori information represented by the system's model. The method of modeling a complex system is described, and a description of diagnosis models which include process faults is presented. Three distinct classes of fault modes are covered by the system performance model equation: actuator faults, sensor faults, and performance degradation. A system equation for a complete model that describes all three classes of faults is given. The strategy for detecting a fault and estimating the fault parameters using a distributed on-line parameter identification scheme is presented as a two-step approach. The first step is composed of a group of hypothesis testing modules (HTM) working in parallel to test each class of faults. The second step is the fault diagnosis module, which checks all the information obtained from the HTM level, isolates the fault, and determines its magnitude. The proposed FDD system was demonstrated by applying it to detect actuator and sensor faults added to a simulation of the Space Shuttle Main Engine. The simulation results show that the proposed FDD system can adequately detect the faults and estimate their magnitudes.
Dynamic modeling of gearbox faults: A review
NASA Astrophysics Data System (ADS)
Liang, Xihui; Zuo, Ming J.; Feng, Zhipeng
2018-01-01
Gearboxes are widely used in industrial and military applications. Due to high service loads, harsh operating conditions or inevitable fatigue, faults may develop in gears. If gear faults cannot be detected early, the health of the gearbox will continue to degrade, perhaps causing heavy economic loss or even catastrophe. Early fault detection and diagnosis allows properly scheduled shutdowns to prevent catastrophic failure, resulting in safer operation and greater cost reduction. Recently, many studies have developed gearbox dynamic models with faults, aiming to understand the gear fault generation mechanism and then develop effective fault detection and diagnosis methods. This paper focuses on dynamics-based gearbox fault modeling, detection and diagnosis. The state of the art and the open challenges are reviewed and discussed. This detailed literature review concentrates on the following fundamental yet key aspects: gear mesh stiffness evaluation, gearbox damage modeling and fault diagnosis techniques, gearbox transmission path modeling and method validation. In the end, a summary and some research prospects are presented.
A Simplified Model for Multiphase Leakage through Faults with Applications for CO2 Storage
NASA Astrophysics Data System (ADS)
Watson, F. E.; Doster, F.
2017-12-01
In the context of geological CO2 storage, faults in the subsurface could affect storage security by acting as high-permeability pathways which allow CO2 to flow upwards and away from the storage formation. To assess the likelihood of leakage through faults and the impacts faults might have on storage security, numerical models are required. However, faults are complex geological features, usually consisting of a fault core surrounded by a highly fractured damage zone. A direct representation of these in a numerical model would require very fine grid resolution and would be computationally expensive. Here, we present the development of a reduced-complexity model for fault flow using the vertically integrated formulation. This model captures the main features of the flow but does not require us to resolve the vertical dimension, nor the fault in the horizontal dimension, explicitly. It is thus less computationally expensive than fully resolved models. Consequently, we can quickly model many realisations for parameter uncertainty studies of CO2 injection into faulted reservoirs. We develop the model based on explicitly simulating local 3D representations of faults for characteristic scenarios using the Matlab Reservoir Simulation Toolbox (MRST). We have assessed the impact of variables such as fault geometry, porosity and permeability on multiphase leakage rates.
Enhanced data validation strategy of air quality monitoring network.
Harkat, Mohamed-Faouzi; Mansouri, Majdi; Nounou, Mohamed; Nounou, Hazem
2018-01-01
Quick validation and detection of faults in measured air quality data is a crucial step towards achieving the objectives of air quality networks. Therefore, the objectives of this paper are threefold: (i) to develop a modeling technique that can be used to predict the normal behavior of air quality variables and help provide an accurate reference for monitoring purposes; (ii) to develop a fault detection method that can effectively and quickly detect any anomalies in measured air quality data. For this purpose, a new fault detection method based on the combination of the generalized likelihood ratio test (GLRT) and the exponentially weighted moving average (EWMA) is developed. GLRT is a well-known statistical fault detection method that relies on maximizing the detection probability for a given false alarm rate. In this paper, we propose a GLRT-based EWMA fault detection method that is able to detect changes in the values of certain air quality variables; (iii) to develop a fault isolation and identification method that allows the fault source(s) to be defined so that appropriate corrective actions can be applied. For this purpose, a reconstruction approach based on a Midpoint-Radii Principal Component Analysis (MRPCA) model is developed to handle the types of data and models associated with air quality monitoring networks. All air quality modeling, fault detection, fault isolation and reconstruction methods developed in this paper are validated using real air quality data (such as particulate matter, ozone, and nitrogen and carbon oxide measurements).
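The EWMA side of the detector can be sketched compactly. This is a standard EWMA control chart on model residuals; the smoothing factor and control limit below are conventional textbook choices, not the paper's tuned values:

```python
import numpy as np

def ewma_alarms(residuals, lam=0.2, L=3.0):
    """EWMA chart on residuals: flag samples whose smoothed value leaves the
    +/- L * sigma_z control band (sigma estimated from the residuals)."""
    sigma = np.std(residuals)
    limit = L * sigma * np.sqrt(lam / (2.0 - lam))   # asymptotic EWMA std
    z, alarms = 0.0, []
    for t, r in enumerate(residuals):
        z = lam * r + (1.0 - lam) * z
        if abs(z) > limit:
            alarms.append(t)
    return alarms

# residuals = measured values - normal-behavior model predictions
```

The GLRT stage would then decide, over a window of such residuals, between the no-fault and fault hypotheses by maximizing the likelihood ratio.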
Ontology-Based Method for Fault Diagnosis of Loaders.
Xu, Feixiang; Liu, Xinhui; Chen, Wei; Zhou, Chen; Cao, Bingwei
2018-02-28
This paper proposes an ontology-based fault diagnosis method which overcomes the difficulty of understanding the complex fault diagnosis knowledge of loaders and offers a universal approach for fault diagnosis of all loaders. The method contains the following components: (1) an ontology-based fault diagnosis model is proposed to achieve the integration, sharing and reuse of fault diagnosis knowledge for loaders; (2) combined with the ontology, CBR (case-based reasoning) is introduced to realize effective and accurate fault diagnoses following four steps (feature selection, case retrieval, case matching and case updating); and (3) to cover the shortcomings of the CBR method when relevant cases are lacking, ontology-based RBR (rule-based reasoning) is put forward by building SWRL (Semantic Web Rule Language) rules. An application program is also developed to implement the above methods to assist in finding the fault causes, fault locations and maintenance measures of loaders. In addition, the program is validated through a case study. PMID:29495646
Salton Trough Post-seismic Afterslip, Viscoelastic Response, and Contribution to Regional Hazard
NASA Astrophysics Data System (ADS)
Parker, J. W.; Donnellan, A.; Lyzenga, G. A.
2012-12-01
The El Mayor-Cucapah M7.2 April 4 2010 earthquake in Baja California may have affected the accumulated hazard to Southern California cities by loading regional faults, including the Elsinore, San Jacinto and southern San Andreas, faults which already carry over a century of tectonic loading. We examine changes observed via multiple seismic and geodetic techniques, including microseismicity and proposed seismicity-based indicators of hazard, high-quality fault models, the Plate Boundary Observatory GNSS array (with 174 stations showing post-seismic transients of greater than 1 mm amplitude), and interferometric radar maps from UAVSAR (aircraft) flights showing a network of aseismic fault slip events at distances up to 60 km from the end of the surface rupture. Finite element modeling is used to compute the expected coseismic motions at GPS stations, with general agreement, including coseismic uplift at sites ~200 km north of the rupture. The postseismic response is also compared with GNSS data and with the CIG software "RELAX." An initial examination of hazard is made by comparing microseismicity-based metrics, fault models, and changes to Coulomb stress on nearby faults using the finite element model. Comparison of seismicity with interferograms and historic earthquakes shows that aseismic slip occurs on fault segments that have had earthquakes in the last 70 years, while other segments show no slip at the surface but do show high triggered seismicity. UAVSAR-based estimates of fault slip can be incorporated into the finite element model to correct the Coulomb stress change.
NASA Astrophysics Data System (ADS)
Saltogianni, Vasso; Moschas, Fanis; Stiros, Stathis
2017-04-01
Finite fault models (FFM) are presented for the two main shocks of the 2014 Cephalonia (Ionian Sea, Greece) seismic sequence (M ~6.0), which produced extreme peak ground accelerations (~0.7 g) at the west edge of the Aegean Arc, an area in which the poor coverage by seismological and GPS/InSAR data makes FFM a real challenge. Modeling was based on co-seismic GPS data and on the recently introduced TOPological INVersion algorithm. The latter is a novel uniform grid-search technique in n-dimensional spaces which treats the unknowns as stochastic variables and can identify multiple unconstrained ("free") solutions in a specified search space. The derived FFMs for the 2014 earthquakes correspond to an essentially strike-slip fault and to part of a shallow thrust, the surface projections of which run roughly along the west coast of Cephalonia. Both faults correlate with pre-existing faults. The 2014 faults, in combination with the faults of the 2003 and 2015 Leucas earthquakes farther NE, form a string of oblique-slip, partly overlapping fault segments with variable geometric and kinematic characteristics along the NW edge of the Aegean Arc. This composite fault, usually regarded as the Cephalonia Transform Fault, accommodates shear along this part of the Arc. Because of the highly fragmented crust, dominated by major thrusts in this area, fault activity is associated with ~20 km long segments and magnitude 6.0-6.5 earthquakes recurring at intervals of a few to 10 years.
NASA Astrophysics Data System (ADS)
Bennett, J. T.; Sorlien, C. C.; Cormier, M.; Bauer, R. L.
2011-12-01
The San Andreas fault system is distributed across hundreds of kilometers in southern California. This transform system includes offshore faults along the shelf, slope and basin, comprising part of the Inner California Continental Borderland. Previously, offshore faults have been interpreted as discontinuous and striking parallel to the coast between Long Beach and San Diego. Our recent work, based on several thousand kilometers of deep-penetration industry multi-channel seismic reflection data (MCS) as well as high-resolution U.S. Geological Survey MCS, indicates that many of the offshore faults are more geometrically continuous than previously reported. Stratigraphic interpretations of MCS profiles include the ca. 1.8 Ma Top Lower Pico, which was correlated from wells located offshore Long Beach (Sorlien et al., 2010). Based on this age constraint, four younger (Late) Quaternary unconformities are interpreted through the slope and basin. The right-lateral Newport-Inglewood fault continues offshore near Newport Beach. We map a single fault for 25 kilometers that continues to the southeast along the base of the slope. There, the Newport-Inglewood fault splits into the San Mateo-Carlsbad fault, which is mapped for 55 kilometers along the base of the slope to a sharp bend. This bend is the northern end of a right step-over of 10 kilometers to the Descanso fault and about 17 km to the Coronado Bank fault. We map these faults for 50 kilometers as they continue across the Mexican border. Both the San Mateo-Carlsbad/Newport-Inglewood pair and the Coronado Bank/Descanso pair form flower structures (positive and negative, respectively) in cross section. Preliminary kinematic models indicate ~1 km of right-lateral slip since ~1.8 Ma at the north end of the step-over. We are modeling the slip on the southern segment to test our hypothesis of a kinematically continuous right-lateral fault system, and we are correlating four younger Quaternary unconformities across portions of these faults to test whether the post-~1.8 Ma deformation continues into the late Quaternary. This will provide critical information for a meaningful assessment of the seismic hazards facing Newport Beach through metropolitan San Diego.
Dynamic Evolution Of Off-Fault Medium During An Earthquake: A Micromechanics Based Model
NASA Astrophysics Data System (ADS)
Thomas, Marion Y.; Bhat, Harsha S.
2018-05-01
Geophysical observations show a dramatic drop of seismic wave speeds in the shallow off-fault medium following earthquake ruptures. Seismic ruptures generate, or reactivate, damage around faults that alters the constitutive response of the surrounding medium, which in turn modifies the earthquake itself, the seismic radiation, and the near-fault ground motion. We present a micromechanics-based constitutive model that accounts for the dynamic evolution of elastic moduli at high strain rates. We consider 2D in-plane models with a 1D right-lateral fault governed by a slip-weakening friction law. The two scenarios studied here assume uniform initial off-fault damage and an observationally motivated exponential decay of initial damage with fault-normal distance. Both scenarios produce dynamic damage that is consistent with geological observations, and a small difference in initial damage markedly affects the final damage pattern. The second numerical experiment, in particular, highlights the complex feedback that exists between the evolving medium and the seismic event. We show that there is a unique off-fault damage pattern associated with the supershear transition of an earthquake rupture that could potentially be seen as a geological signature of this transition. The scenarios presented here underline the importance of incorporating the complex structure of fault zone systems in dynamic models of earthquakes.
NASA Astrophysics Data System (ADS)
Rainaud, Jean-François; Clochard, Vincent; Delépine, Nicolas; Crabié, Thomas; Poudret, Mathieu; Perrin, Michel; Klein, Emmanuel
2018-07-01
Accurate reservoir characterization is needed throughout the development of an oil and gas field study. It helps build 3D numerical reservoir simulation models for estimating the original oil and gas volumes in place and for simulating fluid flow behavior. At a later stage of field development, reservoir characterization can also help decide which recovery techniques to use for fluid extraction. In complex media, such as faulted reservoirs, predicting flow behavior within volumes close to faults can be very challenging. During the development plan, it is necessary to determine which types of communication exist between faults and which potential barriers exist for fluid flow; solving these issues rests on accurate fault characterization. In most cases, however, faults are not preserved through reservoir characterization workflows: the memory of the faults interpreted from seismic data is not kept during seismic inversion and the further interpretation of its result. The goal of our study is, first, to integrate a 3D fault network as a priori information into a model-based stratigraphic inversion procedure and, second, to apply our methodology to a well-known oil and gas case study over a typical North Sea field (UK Northern North Sea) in order to demonstrate its added value for determining reservoir properties. More precisely, the a priori model is composed of several geological units populated with physical attributes extrapolated from well-log data following the deposition mode; however, a priori model building methods usually respect neither the 3D fault geometry nor the stratification dips on the fault sides. We address this difficulty by applying an efficient flattening method to each stratigraphic unit in our workflow. Even before seismic inversion, the obtained stratigraphic model has been used directly to model synthetic seismic data in our case study; the synthetic seismic data obtained from our 3D fault network model give much lower residuals than a "basic" stratigraphic model. Finally, we apply our model-based inversion considering both faulted and non-faulted a priori models. Comparing the rock impedance results obtained in the two cases shows a better delineation of the Brent reservoir compartments when using the 3D faulted a priori model built with our method.
NASA Astrophysics Data System (ADS)
Stevens, Victoria
2017-04-01
The 2015 Gorkha-Nepal M7.8 earthquake (hereafter the Gorkha earthquake) highlights the seismic risk in Nepal, allows better characterization of the geometry of the Main Himalayan Thrust (MHT), and enables comparison of recorded ground motions with predicted ground motions. These new data, together with recent paleoseismic studies and geodetic-based coupling models, allow good parameterization of the fault characteristics. Other faults in Nepal remain less well studied. Unlike previous PSHA studies in Nepal, which are exclusively area-based, we use a mix of faults and areas to describe six seismic sources in Nepal. For each source, the Gutenberg-Richter a and b values are found and the maximum magnitude earthquake is estimated, using a combination of earthquake catalogs, moment conservation principles and similarities to other tectonic regions. The MHT and the Karakoram fault are described as fault sources, whereas four other sources - normal faulting in the N-S trending grabens of northern Nepal, strike-slip faulting in both eastern and western Nepal, and background seismicity - are described as area sources. We use OpenQuake (http://openquake.org/) to carry out the analysis, and peak ground acceleration (PGA) at 2% and 10% probability of exceedance in 50 years is found for Nepal, along with hazard curves at various locations. We compare this PSHA model with previous area-based models of Nepal. The Main Himalayan Thrust is the principal seismic hazard in Nepal, so we study the effects of changing several parameters associated with this fault. We compare the ground shaking predicted by the various fault geometries suggested by the Gorkha earthquake with each other, and with a simple model of a flat fault. We also show the results of incorporating a coupling model based on geodetic data and microseismicity, which limits the down-dip extent of rupture. No ground-motion prediction equations (GMPEs) have been developed specifically for Nepal, so we compare the results of standard GMPEs, applied to an earthquake scenario representing the Gorkha earthquake, with actual data from the Gorkha earthquake itself. The Gorkha earthquake also highlighted the importance of basin, topographic and directivity effects, and of the location of high-frequency sources, in influencing ground motion. Future study aims at incorporating the above, together with consideration of the fault-rupture history and its influence on the location and timing of future earthquakes.
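The Gutenberg-Richter bookkeeping behind such source models is simple to illustrate. In this generic sketch (made-up a and b values, not the study's fitted parameters), the cumulative annual rate of events at or above magnitude M follows log10 N = a - b M, so rates for any magnitude bin come from differencing:

```python
def gr_cumulative_rate(M, a, b):
    """Annual rate of earthquakes with magnitude >= M (Gutenberg-Richter)."""
    return 10.0 ** (a - b * M)

def bin_rate(m_lo, m_hi, a, b):
    """Annual rate of events within a magnitude bin [m_lo, m_hi)."""
    return gr_cumulative_rate(m_lo, a, b) - gr_cumulative_rate(m_hi, a, b)

# Hypothetical source: a = 4.0, b = 1.0, truncated at Mmax = 8.5
a, b = 4.0, 1.0
for m in (5.0, 6.0, 7.0, 8.0):
    rate = bin_rate(m, min(m + 1.0, 8.5), a, b)
    print(f"M {m:.1f}-{min(m + 1.0, 8.5):.1f}: {rate:.4f} events/yr")
```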
A Dynamic Finite Element Method for Simulating the Physics of Faults Systems
NASA Astrophysics Data System (ADS)
Saez, E.; Mora, P.; Gross, L.; Weatherley, D.
2004-12-01
We introduce a dynamic finite element method using a novel high-level scripting language to describe the physical equations, boundary conditions and time integration scheme. The library we use is the parallel Finley library, a finite element kernel library designed for solving large-scale problems. It is incorporated as a differential equation solver into a more general library called escript, based on the scripting language Python. This library has been developed to facilitate the rapid development of 3D parallel codes, and is optimised for the Australian Computational Earth Systems Simulator Major National Research Facility (ACcESS MNRF) supercomputer, a 208-processor SGI Altix with a peak performance of 1.1 TFlops. Using the scripting approach we obtain a parallel FE code able to take advantage of the computational efficiency of the Altix 3700. We consider faults as material discontinuities (the displacement, velocity, and acceleration fields are discontinuous at the fault) with elastic behavior. Stress continuity at the fault is achieved naturally through the expression of the fault interactions in the weak formulation. The elasticity problem is solved explicitly in time using a Verlet scheme. Finally, we specify a suitable frictional constitutive relation and numerical scheme to simulate fault behaviour. Our model is based on previous work on modelling fault friction and multi-fault systems using lattice solid-like models: we adapt to the finite element method the 2D model for simulating the dynamics of parallel fault systems described in that work. The approach uses a frictional relation along faults that is slip and slip-rate dependent, and the numerical integration approach introduced by Mora and Place in the lattice solid model. To illustrate the new finite element model, single- and multi-fault simulation examples are presented.
A Novel Online Data-Driven Algorithm for Detecting UAV Navigation Sensor Faults.
Sun, Rui; Cheng, Qi; Wang, Guanyu; Ochieng, Washington Yotto
2017-09-29
The use of Unmanned Aerial Vehicles (UAVs) has increased significantly in recent years. On-board integrated navigation sensors are a key component of UAVs' flight control systems and are essential for flight safety. In order to ensure flight safety, timely and effective navigation sensor fault detection capability is required. In this paper, a novel data-driven Adaptive Neuro-Fuzzy Inference System (ANFIS)-based approach is presented for the detection of on-board navigation sensor faults in UAVs. Unlike classic UAV sensor fault detection algorithms, which are based on predefined or modelled faults, the proposed algorithm combines an online data training mechanism with an ANFIS-based decision system. The main advantage of this algorithm is that it combines real-time, model-free residual analysis of Kalman Filter (KF) estimates with the ANFIS to build a reliable fault detection system. In addition, it allows fast and accurate detection of faults, which makes it suitable for real-time applications. Experimental results have demonstrated the effectiveness of the proposed fault detection method in terms of accuracy and misdetection rate.
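The residual (innovation) side of such a scheme can be illustrated with a scalar Kalman filter. This is a generic sketch, not the paper's ANFIS decision stage; the noise levels and the fault threshold mentioned in the comment are illustrative:

```python
import numpy as np

def kf_innovations(z, q=1e-4, r=0.01):
    """Scalar random-walk Kalman filter over measurements z; returns the
    normalized innovations, i.e. the residuals a fault detector analyzes."""
    x, P = z[0], 1.0
    out = []
    for zk in z[1:]:
        P += q                       # predict (state modelled as random walk)
        S = P + r                    # innovation variance
        nu = zk - x                  # innovation (residual)
        out.append(nu / np.sqrt(S))
        K = P / S                    # Kalman gain and update
        x += K * nu
        P *= (1.0 - K)
    # A |normalized innovation| persistently above ~3 suggests a sensor fault;
    # in the paper, the decision logic is learned by the ANFIS instead.
    return np.array(out)
```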
Centrifugal compressor fault diagnosis based on qualitative simulation and thermal parameters
NASA Astrophysics Data System (ADS)
Lu, Yunsong; Wang, Fuli; Jia, Mingxing; Qi, Yuanchen
2016-12-01
This paper concerns fault diagnosis of centrifugal compressors based on thermal parameters. An improved qualitative simulation (QSIM) based fault diagnosis method is proposed to diagnose the faults of a centrifugal compressor in a gas-steam combined-cycle power plant (CCPP). The qualitative models under normal and two faulty conditions have been built through analysis of the operating principles of the centrifugal compressor. To solve the problem of qualitatively describing the observations of the system variables, a qualitative trend extraction algorithm is applied to extract the trends of the observations. For qualitative state matching, a sliding-window-based matching strategy is proposed that combines constraints on variable operating ranges with qualitative constraints. The matching results are used to determine which QSIM model is more consistent with the running state of the system. The correct diagnosis of two typical faults in the centrifugal compressor, seal leakage and a stuck valve, has validated the targeted performance of the proposed method and shows the advantage of thermal parameters in capturing fault root causes.
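Qualitative trend extraction can be approximated very simply. In this toy sketch (not the paper's algorithm: the window size and dead-band are arbitrary choices), each window of a signal is mapped to a qualitative symbol by the sign of its least-squares slope:

```python
import numpy as np

def qualitative_trends(x, win=20, band=0.01):
    """Map each window of a signal to '+', '-', or '0' according to the sign
    of its least-squares slope, with a dead-band for 'steady'."""
    symbols = []
    t = np.arange(win)
    for i in range(0, len(x) - win + 1, win):
        slope = np.polyfit(t, x[i:i + win], 1)[0]
        symbols.append('+' if slope > band else '-' if slope < -band else '0')
    return ''.join(symbols)

# e.g. a pressure trace drifting down then recovering might yield '--0++',
# which a sliding-window matcher can compare against each QSIM model's
# predicted qualitative sequence.
```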
NASA Astrophysics Data System (ADS)
Milner, K. R.; Shaw, B. E.; Gilchrist, J. J.; Jordan, T. H.
2017-12-01
Probabilistic seismic hazard analysis (PSHA) is typically performed by combining an earthquake rupture forecast (ERF) with a set of empirical ground motion prediction equations (GMPEs). ERFs have typically relied on observed fault slip rates and scaling relationships to estimate the rate of large earthquakes on pre-defined fault segments, either ignoring or relying on expert opinion to set the rates of multi-fault or multi-segment ruptures. Version 3 of the Uniform California Earthquake Rupture Forecast (UCERF3) is a significant step forward, replacing expert opinion and fault segmentation with an inversion approach that matches observations better than prior models while incorporating multi-fault ruptures. UCERF3 is a statistical model, however, and doesn't incorporate the physics of earthquake nucleation, rupture propagation, and stress transfer. We examine the feasibility of replacing UCERF3, or components therein, with physics-based rupture simulators such as the Rate-State Earthquake Simulator (RSQSim), developed by Dieterich & Richards-Dinger (2010). RSQSim simulations on the UCERF3 fault system produce catalogs of seismicity that match long term rates on major faults, and produce remarkable agreement with UCERF3 when carried through to PSHA calculations. Averaged over a representative set of sites, the RSQSim-UCERF3 hazard-curve differences are comparable to the small differences between UCERF3 and its predecessor, UCERF2. The hazard-curve agreement between the empirical and physics-based models provides substantial support for the PSHA methodology. RSQSim catalogs include many complex multi-fault ruptures, which we compare with the UCERF3 rupture-plausibility metrics as well as recent observations. Complications in generating physically plausible kinematic descriptions of multi-fault ruptures have thus far prevented us from using UCERF3 in the CyberShake physics-based PSHA platform, which replaces GMPEs with deterministic ground motion simulations. RSQSim produces full slip/time histories that can be directly implemented as sources in CyberShake, without relying on the conditional hypocenter and slip distributions needed for the UCERF models. We also compare RSQSim with time-dependent PSHA calculations based on multi-fault renewal models.
NHPP-Based Software Reliability Models Using Equilibrium Distribution
NASA Astrophysics Data System (ADS)
Xiao, Xiao; Okamura, Hiroyuki; Dohi, Tadashi
Non-homogeneous Poisson processes (NHPPs) have gained much popularity in actual software testing phases to estimate the software reliability, the number of remaining faults in software and the software release timing. In this paper, we propose a new modeling approach for the NHPP-based software reliability models (SRMs) to describe the stochastic behavior of software fault-detection processes. The fundamental idea is to apply the equilibrium distribution to the fault-detection time distribution in NHPP-based modeling. We also develop efficient parameter estimation procedures for the proposed NHPP-based SRMs. Through numerical experiments, it can be concluded that the proposed NHPP-based SRMs outperform the existing ones in many data sets from the perspective of goodness-of-fit and prediction performance.
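For context, a standard NHPP SRM reduces to fitting a mean value function to cumulative fault counts. A minimal sketch with the exponential (Goel-Okumoto) model m(t) = a(1 - e^(-bt)) follows; scipy is assumed, the data are made up, and the equilibrium-distribution variant proposed in the paper would swap in a different m(t):

```python
import numpy as np
from scipy.optimize import curve_fit

def mean_value(t, a, b):
    """Goel-Okumoto NHPP mean value function: expected faults found by time t."""
    return a * (1.0 - np.exp(-b * t))

# Hypothetical testing data: weeks vs. cumulative faults detected
t = np.arange(1, 11, dtype=float)
n = np.array([8, 15, 20, 24, 27, 29, 31, 32, 33, 33], dtype=float)

(a, b), _ = curve_fit(mean_value, t, n, p0=(40.0, 0.2))
print(f"estimated total faults a = {a:.1f}, remaining ~ {a - n[-1]:.1f}")
```

Goodness-of-fit and prediction comparisons of the kind reported in the paper then amount to evaluating such fitted models on held-out portions of the fault-detection data.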
Shen, Changqing; Liu, Fang; Wang, Dong; Zhang, Ao; Kong, Fanrang; Tse, Peter W.
2013-01-01
The condition of locomotive bearings, which are essential components in trains, is crucial to train safety. The Doppler effect significantly distorts acoustic signals at high movement speeds, substantially increasing the difficulty of monitoring locomotive bearings online. In this study, a new Doppler transient model based on acoustic theory and the Laplace wavelet is presented for the identification of fault-related impact intervals embedded in acoustic signals. An envelope spectrum correlation assessment is conducted between the transient model and the real fault signal in the frequency domain to optimize the model parameters. The proposed method can identify the parameters used for the simulated transients (periods in simulated transients) from acoustic signals. Thus, localized bearing faults can be detected successfully based on the identified parameters, particularly the period intervals. The performance of the proposed method is tested on a simulated signal suffering from the Doppler effect. In addition, the proposed method is used to analyze real acoustic signals of locomotive bearings with inner race and outer race faults, respectively. The results confirm that the periods between the transients, which represent locomotive bearing fault characteristics, can be detected successfully. PMID:24253191
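A hedged sketch of the transient-modeling step: a Laplace-wavelet transient repeated at a candidate fault impact interval, which would then be scored against the measured signal by envelope spectrum correlation. The frequency, damping ratio, and function names are illustrative assumptions:

```python
import numpy as np

def laplace_wavelet(t, f=2000.0, zeta=0.05):
    """Single fault-induced transient: an exponentially damped sinusoid (Laplace wavelet), zero for t < 0."""
    w = 2.0 * np.pi * f
    tt = np.maximum(t, 0.0)                                    # avoid overflow for t < 0
    env = np.exp(-(zeta / np.sqrt(1.0 - zeta**2)) * w * tt)
    return np.where(t >= 0.0, env * np.sin(w * tt), 0.0)

def transient_train(fs, duration, period, f=2000.0, zeta=0.05):
    """Periodic train of transients; 'period' is the candidate fault impact interval."""
    t = np.arange(0.0, duration, 1.0 / fs)
    x = np.zeros_like(t)
    for t0 in np.arange(0.0, duration, period):
        x += laplace_wavelet(t - t0, f, zeta)
    return t, x

# Candidate periods are scanned, and the period whose model maximizes the envelope
# spectrum correlation with the measured (Doppler-distorted) signal is retained.
```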
NASA Astrophysics Data System (ADS)
van den Ende, M. P. A.; Chen, J.; Ampuero, J.-P.; Niemeijer, A. R.
2018-05-01
Rate-and-state friction (RSF) is commonly used for the characterisation of laboratory friction experiments, such as velocity-step tests. However, the RSF framework provides little physical basis for the extrapolation of these results to the scales and conditions of natural fault systems, and so open questions remain regarding the applicability of the experimentally obtained RSF parameters for predicting seismic cycle transients. As an alternative to classical RSF, microphysics-based models offer means for interpreting laboratory and field observations, but are generally over-simplified with respect to heterogeneous natural systems. In order to bridge the temporal and spatial gap between the laboratory and nature, we have implemented existing microphysical model formulations into an earthquake cycle simulator. Through this numerical framework, we make a direct comparison between simulations exhibiting RSF-controlled fault rheology, and simulations in which the fault rheology is dictated by the microphysical model. Even though the input parameters for the RSF simulation are directly derived from the microphysical model, the microphysics-based simulations produce significantly smaller seismic event sizes than the RSF-based simulation, and suggest a more stable fault slip behaviour. Our results reveal fundamental limitations in using classical rate-and-state friction for the extrapolation of laboratory results. The microphysics-based approach offers a more complete framework in this respect, and may be used for a more detailed study of the seismic cycle in relation to material properties and fault zone pressure-temperature conditions.
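For readers unfamiliar with the RSF framework the abstract builds on, here is a minimal velocity-step sketch of classical rate-and-state friction with the aging law; the parameter values are illustrative lab-scale assumptions, not those of the study:

```python
import numpy as np
from scipy.integrate import solve_ivp

# Classical RSF at constant normal stress: mu = mu0 + a*ln(V/V0) + b*ln(V0*theta/Dc),
# with the aging-law state evolution d(theta)/dt = 1 - V*theta/Dc.
mu0, a, b, Dc, V0 = 0.6, 0.010, 0.015, 1e-5, 1e-6   # illustrative values (SI units)

def aging_law(t, theta, V):
    return 1.0 - V * theta / Dc

def friction(V, theta):
    return mu0 + a * np.log(V / V0) + b * np.log(V0 * theta / Dc)

# Velocity-step test: steady sliding at V0, then a step to 10*V0
theta0 = Dc / V0                  # steady-state value of theta at V0
sol = solve_ivp(aging_law, (0.0, 60.0), [theta0], args=(10 * V0,), dense_output=True)
t = np.linspace(0.0, 60.0, 200)
mu = friction(10 * V0, sol.sol(t)[0])  # direct effect, then evolution to the new steady state
```

The a and b values above make the patch velocity-weakening (b > a), the regime in which stick-slip instabilities, and hence seismic events, can nucleate.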
Onboard Nonlinear Engine Sensor and Component Fault Diagnosis and Isolation Scheme
NASA Technical Reports Server (NTRS)
Tang, Liang; DeCastro, Jonathan A.; Zhang, Xiaodong
2011-01-01
A method detects and isolates in-flight sensor, actuator, and component faults for advanced propulsion systems. In sharp contrast to many conventional methods, which deal with either sensor faults or component faults, but not both, this method considers sensor faults, actuator faults, and component faults under one systematic and unified framework. The proposed solution consists of two main components: a bank of real-time, nonlinear adaptive fault diagnostic estimators for residual generation, and a residual evaluation module that includes adaptive thresholds and a Transferable Belief Model (TBM)-based residual evaluation scheme. By employing a nonlinear adaptive learning architecture, the developed approach is capable of directly dealing with nonlinear engine models and nonlinear faults without the need for linearization. Software modules have been developed and evaluated with the NASA C-MAPSS engine model. Several typical engine-fault modes, including a subset of sensor/actuator/component faults, were tested in a mild transient operation scenario. The simulation results demonstrated that the algorithm was able to successfully detect and isolate all simulated faults as long as the fault magnitudes were larger than the minimum detectable/isolable sizes, and no misdiagnosis occurred.
An architecture for the development of real-time fault diagnosis systems using model-based reasoning
NASA Technical Reports Server (NTRS)
Hall, Gardiner A.; Schuetzle, James; Lavallee, David; Gupta, Uday
1992-01-01
Presented here is an architecture for implementing real-time telemetry based diagnostic systems using model-based reasoning. First, we describe Paragon, a knowledge acquisition tool for offline entry and validation of physical system models. Paragon provides domain experts with a structured editing capability to capture the physical component's structure, behavior, and causal relationships. We next describe the architecture of the run time diagnostic system. The diagnostic system, written entirely in Ada, uses the behavioral model developed offline by Paragon to simulate expected component states as reflected in the telemetry stream. The diagnostic algorithm traces causal relationships contained within the model to isolate system faults. Since the diagnostic process relies exclusively on the behavioral model and is implemented without the use of heuristic rules, it can be used to isolate unpredicted faults in a wide variety of systems. Finally, we discuss the implementation of a prototype system constructed using this technique for diagnosing faults in a science instrument. The prototype demonstrates the use of model-based reasoning to develop maintainable systems with greater diagnostic capabilities at a lower cost.
Fault detection and diagnosis of photovoltaic systems
NASA Astrophysics Data System (ADS)
Wu, Xing
The rapid growth of the solar industry over the past several years has expanded the significance of photovoltaic (PV) systems. One of the primary aims of research in building-integrated PV systems is to improve the system's efficiency, availability, and reliability. Although much work has been done on technological design to increase a photovoltaic module's efficiency, there is little research so far on fault diagnosis for PV systems. Faults in a PV system, if not detected, may not only reduce power generation, but also threaten the availability and reliability, effectively the "security", of the whole system. In this paper, first, a circuit-based simulation baseline model of a PV system with maximum power point tracking (MPPT) is developed using MATLAB software. MATLAB is one of the most popular tools for integrating computation, visualization, and programming in an easy-to-use modeling environment. Second, data from a PV system at variable surface temperatures and insolation levels under normal operation are collected. The simulation model is then calibrated and improved by comparing modeled I-V and P-V characteristics with measured I-V and P-V characteristics, to make sure the simulated curves are close to the values measured in the experiments. Finally, based on the circuit-based simulation model, PV models of various types of faults are developed by changing conditions or inputs in the MATLAB model, and the I-V and P-V characteristic curves and the time-dependent voltage and current characteristics are characterized for each type of fault. These are developed as benchmark I-V or P-V curves, or prototype transient curves. If a fault occurs in a PV system, polling and comparing the actual measured I-V and P-V characteristic curves with both the normal operational curves and these baseline fault curves will aid in fault diagnosis.
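The abstract's MATLAB model is not reproduced here, but a common circuit-level baseline for such I-V simulations is the single-diode model. The following Python sketch, with assumed module parameters, shows the implicit equation that generates the benchmark curves:

```python
import numpy as np
from scipy.optimize import brentq

def pv_current(V, Iph=8.0, I0=1e-7, n=1.3, Rs=0.2, Rsh=300.0, Ncells=60, T=298.15):
    """Solve the implicit single-diode equation
    I = Iph - I0*(exp((V + I*Rs)/(n*Ncells*Vt)) - 1) - (V + I*Rs)/Rsh for I."""
    Vt = 1.380649e-23 * T / 1.602176634e-19        # thermal voltage kT/q (~25.7 mV)
    def f(I):
        Vd = V + I * Rs                            # voltage across the diode branch
        return Iph - I0 * (np.exp(Vd / (n * Ncells * Vt)) - 1.0) - Vd / Rsh - I
    return brentq(f, -2.0, Iph + 2.0)              # bracketed root find

# Sweep a baseline I-V curve; fault cases (higher Rs, lower Rsh, reduced Iph)
# shift these curves, which is the basis of the benchmark-curve comparison.
volts = np.linspace(0.0, 36.0, 80)
amps = [pv_current(v) for v in volts]
```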
Product quality management based on CNC machine fault prognostics and diagnosis
NASA Astrophysics Data System (ADS)
Kozlov, A. M.; Al-jonid, Kh M.; Kozlov, A. A.; Antar, Sh D.
2018-03-01
This paper presents a new fault classification model and an integrated approach to fault diagnosis which combines the ideas of Neuro-Fuzzy networks (NF), Dynamic Bayesian Networks (DBN), and the Particle Filtering (PF) algorithm on a single platform. In the new model, faults are categorized in two aspects, namely first- and second-degree faults. First-degree faults are instantaneous in nature, whereas second-degree faults are evolutionary and appear as a developing phenomenon that starts from an initial stage, goes through a development stage, and finally ends at a mature stage. These categories of faults have a lifetime which is inversely proportional to a machine tool's life according to a modified version of Taylor's equation. For fault diagnosis, the framework consists of two phases: the first focuses on fault prognosis, which is done online, and the second is concerned with fault diagnosis, which depends on both off-line and on-line modules. In the first phase, a neuro-fuzzy predictor is used to decide whether to initiate condition-based maintenance (CBM) or fault diagnosis, based on the severity of a fault. The second phase only comes into action when an evolving fault goes beyond a critical threshold, called the CBM limit, and a command is issued for fault diagnosis. During this phase, DBN and PF techniques are used as an intelligent fault diagnosis system to determine the severity, time, and location of the fault. The feasibility of this approach was tested in a simulation environment using a CNC machine as a case study, and the results were studied and analyzed.
A Performance Prediction Model for a Fault-Tolerant Computer During Recovery and Restoration
NASA Technical Reports Server (NTRS)
Obando, Rodrigo A.; Stoughton, John W.
1995-01-01
The modeling and design of a fault-tolerant multiprocessor system is addressed. Of interest is the behavior of the system during recovery and restoration after a fault has occurred. The multiprocessor systems are based on the Algorithm to Architecture Mapping Model (ATAMM), and the fault considered is the death of a processor. The developed model is useful for determining performance bounds of the system during recovery and restoration. The performance bounds include the time to recover from the fault, the time to restore the system, and any permanent delay in the input-to-output latency after the system has regained steady state. An implementation of an ATAMM-based computer was developed for a four-processor generic VHSIC spaceborne computer (GVSC) as the target system. A simulation of the GVSC was also written, based on the code used in the ATAMM Multicomputer Operating System (AMOS). The simulation is used to verify the new model for tracking the propagation of the delay through the system and predicting the behavior of the transient state of recovery and restoration. The model is shown to accurately predict the transient behavior of an ATAMM-based multicomputer during recovery and restoration.
Achieving Agreement in Three Rounds With Bounded-Byzantine Faults
NASA Technical Reports Server (NTRS)
Malekpour, Mahyar R.
2015-01-01
A three-round algorithm is presented that guarantees agreement in a system of K greater than or equal to 3F + 1 nodes, where K is the number of nodes and F is the maximum number of simultaneous faults in the network, provided each faulty node induces no more than F faults and each good node experiences no more than F faults. The algorithm is based on the Oral Message algorithm of Lamport et al., is scalable with respect to the number of nodes in the system, and applies equally to the traditional node-fault model and the link-fault model. We also present a mechanical verification of the algorithm, focusing on verifying the correctness of a bounded model of the algorithm as well as confirming claims of determinism.
Modeling, Detection, and Disambiguation of Sensor Faults for Aerospace Applications
NASA Technical Reports Server (NTRS)
Balaban, Edward; Saxena, Abhinav; Bansal, Prasun; Goebel, Kai F.; Curran, Simon
2009-01-01
Sensor faults continue to be a major hurdle for systems health management to reach its full potential. At the same time, few recorded instances of sensor faults exist. It is equally difficult to seed particular sensor faults. Therefore, research is underway to better understand the different fault modes seen in sensors and to model the faults. The fault models can then be used in simulated sensor fault scenarios to ensure that algorithms can distinguish between sensor faults and system faults. The paper illustrates the work with data collected from an electro-mechanical actuator in an aerospace setting, equipped with temperature, vibration, current, and position sensors. The most common sensor faults, such as bias, drift, scaling, and dropout were simulated and injected into the experimental data, with the goal of making these simulations as realistic as feasible. A neural network based classifier was then created and tested on both experimental data and the more challenging randomized data sequences. Additional studies were also conducted to determine sensitivity of detection and disambiguation efficacy to severity of fault conditions.
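A minimal sketch of the fault-injection step described above, covering the four common sensor fault modes the paper names; the magnitudes and function names are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(0)

def inject_fault(signal, mode, t0, bias=0.5, drift_rate=0.01, scale=1.2):
    """Inject a common sensor fault into clean data, starting at sample index t0."""
    y = np.asarray(signal, dtype=float).copy()
    n = len(y)
    if mode == "bias":
        y[t0:] += bias                              # constant offset
    elif mode == "drift":
        y[t0:] += drift_rate * np.arange(n - t0)    # slowly growing offset
    elif mode == "scaling":
        y[t0:] *= scale                             # gain change
    elif mode == "dropout":
        y[t0:] = y[t0]                              # signal freezes at the last value
    return y

# Example: inject a drift into a noisy position trace
clean = np.sin(np.linspace(0, 10, 1000)) + 0.02 * rng.standard_normal(1000)
faulty = inject_fault(clean, "drift", t0=400)
```

Faulty traces generated this way can then be mixed with genuine system-fault data to test whether a classifier disambiguates the two, as the study does with its neural network classifier.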
Shell Tectonics: A Mechanical Model for Strike-slip Displacement on Europa
NASA Technical Reports Server (NTRS)
Rhoden, Alyssa Rose; Wurman, Gilead; Huff, Eric M.; Manga, Michael; Hurford, Terry A.
2012-01-01
We introduce a new mechanical model for producing tidally-driven strike-slip displacement along preexisting faults on Europa, which we call shell tectonics. This model differs from previous models of strike-slip on icy satellites by incorporating a Coulomb failure criterion, approximating a viscoelastic rheology, determining the slip direction based on the gradient of the tidal shear stress rather than its sign, and quantitatively determining the net offset over many orbits. This model allows us to predict the direction of net displacement along faults and determine relative accumulation rate of displacement. To test the shell tectonics model, we generate global predictions of slip direction and compare them with the observed global pattern of strike-slip displacement on Europa in which left-lateral faults dominate far north of the equator, right-lateral faults dominate in the far south, and near-equatorial regions display a mixture of both types of faults. The shell tectonics model reproduces this global pattern. Incorporating a small obliquity into calculations of tidal stresses, which are used as inputs to the shell tectonics model, can also explain regional differences in strike-slip fault populations. We also discuss implications for fault azimuths, fault depth, and Europa's tectonic history.
NASA Astrophysics Data System (ADS)
Akinci, A.; Pace, B.
2017-12-01
In this study, we discuss the seismic hazard variability of peak ground acceleration (PGA) at the 475-year return period in the Southern Apennines of Italy. The uncertainty and parametric sensitivity are presented to quantify the impact of several fault parameters on ground motion predictions for the 10% in 50-year exceedance hazard. A time-independent PSHA model is constructed based on the long-term recurrence behavior of seismogenic faults, adopting the characteristic earthquake model for those sources capable of rupturing the entire fault segment with a single maximum magnitude. The fault-based source model uses the dimensions and slip rates of mapped faults to develop magnitude-frequency estimates for characteristic earthquakes. Variability of each selected fault parameter is represented by a truncated normal distribution, described by a standard deviation about a mean value. A Monte Carlo approach, based on random balanced sampling of a logic tree, is used to capture the uncertainty in the seismic hazard calculations. For generating both uncertainty and sensitivity maps, we perform 200 simulations for each of the fault parameters. The results are synthesized both in frequency-magnitude distributions of the modeled faults and in different maps: the overall uncertainty maps provide a confidence interval for the PGA values, and the parameter uncertainty maps determine the sensitivity of the hazard assessment to the variability of every logic-tree branch. The logic-tree branches analyzed through the Monte Carlo approach are maximum magnitude, fault length, fault width, fault dip, and slip rate. The overall variability is determined by varying all of these parameters simultaneously in the hazard calculations, while the sensitivity to each parameter is determined by varying that fault parameter while fixing the others. In this study, however, we do not investigate the sensitivity of the mean hazard results to the choice of different GMPEs. The distribution of possible seismic hazard results is illustrated by a 95% confidence factor map, which indicates the dispersion about the mean value, and a coefficient of variation map, which shows the percent variability. The results of our study clearly illustrate the influence of active fault parameters on probabilistic seismic hazard maps.
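A hedged sketch of the sampling scheme described above: each fault parameter is drawn from a truncated normal, and a characteristic magnitude is computed from the rupture dimensions. The numbers and the Wells & Coppersmith style area scaling are assumptions, not the study's exact relations:

```python
import numpy as np
from scipy.stats import truncnorm

def sample_truncated(mean, sd, lo, hi, n, rng):
    """Truncated normal sample for one fault parameter (e.g., slip rate in mm/yr)."""
    a, b = (lo - mean) / sd, (hi - mean) / sd
    return truncnorm.rvs(a, b, loc=mean, scale=sd, size=n, random_state=rng)

def char_magnitude(length_km, width_km):
    """Characteristic Mw from rupture area, Wells & Coppersmith style (all fault types)."""
    return 4.07 + 0.98 * np.log10(length_km * width_km)

rng = np.random.default_rng(42)
N = 200  # simulations per parameter, as in the study
slip = sample_truncated(1.0, 0.3, 0.2, 2.0, N, rng)        # mm/yr (illustrative)
length = sample_truncated(30.0, 5.0, 15.0, 45.0, N, rng)   # km (illustrative)
Mmax = char_magnitude(length, 12.0)  # vary length, fix width, to isolate its effect
```

Varying all parameters together gives the overall uncertainty map; re-running with all but one parameter fixed gives that parameter's sensitivity map.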
An Event-Based Approach to Distributed Diagnosis of Continuous Systems
NASA Technical Reports Server (NTRS)
Daigle, Matthew; Roychoudhurry, Indranil; Biswas, Gautam; Koutsoukos, Xenofon
2010-01-01
Distributed fault diagnosis solutions are becoming necessary due to the complexity of modern engineering systems, and the advent of smart sensors and computing elements. This paper presents a novel event-based approach for distributed diagnosis of abrupt parametric faults in continuous systems, based on a qualitative abstraction of measurement deviations from the nominal behavior. We systematically derive dynamic fault signatures expressed as event-based fault models. We develop a distributed diagnoser design algorithm that uses these models for designing local event-based diagnosers based on global diagnosability analysis. The local diagnosers each generate globally correct diagnosis results locally, without a centralized coordinator, and by communicating a minimal number of measurements between themselves. The proposed approach is applied to a multi-tank system, and results demonstrate a marked improvement in scalability compared to a centralized approach.
Ren, Junjie; Zhang, Shimin
2013-01-01
The recurrence interval of large earthquakes on an active fault zone is an important parameter in assessing seismic hazard. The 2008 Wenchuan earthquake (Mw 7.9) occurred on the central Longmen Shan fault zone and ruptured the Yingxiu-Beichuan fault (YBF) and the Guanxian-Jiangyou fault (GJF). However, there is a considerable discrepancy among recurrence intervals of large earthquakes in preseismic and postseismic estimates based on slip rates and paleoseismologic results. Post-seismic trenches showed that the central Longmen Shan fault zone probably experiences events similar to the 2008 quake, suggesting a characteristic earthquake model. In this paper, we use the published seismogenic model of the 2008 earthquake, based on Global Positioning System (GPS) and Interferometric Synthetic Aperture Radar (InSAR) data, and construct a characteristic seismic moment accumulation/release model to estimate the recurrence interval of large earthquakes on the central Longmen Shan fault zone. Our results show that the seismogenic zone accommodates a moment rate of (2.7 ± 0.3) × 10¹⁷ N m/yr, and a recurrence interval of 3900 ± 400 yrs is necessary for accumulation of strain energy equivalent to the 2008 earthquake. This study provides a preferred interval estimation of large earthquakes for seismic hazard analysis in the Longmen Shan region.
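The recurrence estimate reduces to a ratio of characteristic moment to moment accumulation rate; a one-line check using the quoted rate and an assumed characteristic moment of about 1.05 × 10²¹ N m for the 2008 event:

```python
# Recurrence interval = characteristic seismic moment / moment accumulation rate.
M0_char = 1.05e21       # N*m, assumed moment of the 2008 event in the cited model
moment_rate = 2.7e17    # N*m/yr, the rate quoted in the abstract
print(M0_char / moment_rate)  # ~3900 yr, consistent with the reported interval
```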
Latent component-based gear tooth fault detection filter using advanced parametric modeling
NASA Astrophysics Data System (ADS)
Ettefagh, M. M.; Sadeghi, M. H.; Rezaee, M.; Chitsaz, S.
2009-10-01
In this paper, a new parametric model-based filter is proposed for gear tooth fault detection. The design of the filter consists of identifying the most appropriate latent component (LC) of the undamaged gearbox signal by analyzing the instant modules (IMs) and instant frequencies (IFs), and then using the component with the lowest IM as the filter output for detecting gearbox faults. The filter parameters are estimated using LC theory, in which an advanced parametric modeling method has been implemented. The proposed method is applied to signals extracted from a simulated gearbox for detection of simulated gear faults. In addition, the method is used for quality inspection of produced Nissan-Junior vehicle gearboxes by gear profile error detection on an industrial test bed. For evaluation purposes, the proposed method is compared with previous parametric TAR/AR-based filters, in which the parametric model residual is considered the filter output and Yule-Walker and Kalman filters are implemented for estimating the parameters. The results confirm the high performance of the new fault detection method.
Deformation induced microtwins and stacking faults in aluminum single crystal.
Han, W Z; Cheng, G M; Li, S X; Wu, S D; Zhang, Z F
2008-09-12
Microtwins and stacking faults in a plastically deformed aluminum single crystal were successfully observed by high-resolution transmission electron microscopy. The occurrence of these microtwins and stacking faults is directly related to the specially designed crystallographic orientation, because they had not been observed before in pure aluminum single crystals or polycrystals. Based on this new finding, we propose a universal dislocation-based model to judge whether the nucleation of deformation twins and stacking faults is favored in various face-centered-cubic metals, in terms of the critical stress for dislocation glide or twinning, by considering intrinsic factors such as stacking fault energy, crystallographic orientation, and grain size. The finding of deformation-induced microtwins and stacking faults in an aluminum single crystal, together with the proposed model, should be of interest to a broad community.
Research on criticality analysis method of CNC machine tools components under fault rate correlation
NASA Astrophysics Data System (ADS)
Gui-xiang, Shen; Xian-zhuo, Zhao; Zhang, Ying-zhi; Chen-yu, Han
2018-02-01
In order to determine the key components of CNC machine tools under fault rate correlation, a system component criticality analysis method is proposed. Based on fault mechanism analysis, the component fault relations are determined, and an adjacency matrix is introduced to describe them. Then, the fault structure relation is organized hierarchically using the interpretive structural model (ISM). Assuming that the impact of a fault obeys a Markov process, the fault association matrix is described and transformed, and the PageRank algorithm is used to determine the relative influence values; combined with component fault rates under time correlation, a comprehensive fault rate can be obtained. Based on the fault mode frequency and fault influence, the criticality of the components under fault rate correlation is determined, and the key components are identified, providing a sound basis for formulating reliability assurance measures. Finally, taking machining centers as an example, the effectiveness of the method is verified.
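A minimal sketch of the PageRank step applied to a component fault adjacency matrix; the four-component example and the damping factor are illustrative assumptions:

```python
import numpy as np

def pagerank(A, d=0.85, tol=1e-10):
    """Power iteration on a column-stochastic transition matrix built from the
    component fault adjacency matrix A (A[i, j] = 1 if a fault in j affects i)."""
    n = A.shape[0]
    cols = A.sum(axis=0)
    safe = np.where(cols == 0, 1.0, cols)
    P = np.where(cols > 0, A / safe, 1.0 / n)    # dangling components spread uniformly
    r = np.full(n, 1.0 / n)
    while True:
        r_new = (1.0 - d) / n + d * (P @ r)
        if np.abs(r_new - r).sum() < tol:
            return r_new
        r = r_new

# Illustrative 4-component subsystem: fault propagation paths between components
A = np.array([[0, 1, 1, 0],
              [0, 0, 1, 0],
              [0, 0, 0, 1],
              [1, 0, 0, 0]], dtype=float)
influence = pagerank(A)  # relative influence values, to be weighted by fault rates
```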
A fault isolation method based on the incidence matrix of an augmented system
NASA Astrophysics Data System (ADS)
Chen, Changxiong; Chen, Liping; Ding, Jianwan; Wu, Yizhong
2018-03-01
In this paper, a new approach is proposed for isolating faults and quickly identifying the redundant sensors of a system. By introducing fault signals as additional state variables, an augmented system model is constructed from the original system model, the fault signals, and the sensor measurement equations. The structural properties of the augmented system model are provided. From the viewpoint of evaluating fault variables, the calculating correlations of the fault variables in the system can be found, which imply the fault isolation properties of the system. Compared with previous isolation approaches, the highlights of the new approach are that it can quickly find the faults that can be isolated using exclusive residuals and, at the same time, can identify the redundant sensors in the system, both of which are useful for the design of a diagnosis system. A simulation of a four-tank system is reported to validate the proposed method.
A benchmark for fault tolerant flight control evaluation
NASA Astrophysics Data System (ADS)
Smaili, H.; Breeman, J.; Lombaerts, T.; Stroosma, O.
2013-12-01
A large transport aircraft simulation benchmark (REconfigurable COntrol for Vehicle Emergency Return - RECOVER) has been developed within the GARTEUR (Group for Aeronautical Research and Technology in Europe) Flight Mechanics Action Group 16 (FM-AG(16)) on Fault Tolerant Control (2004-2008) for the integrated evaluation of fault detection and identification (FDI) and reconfigurable flight control strategies. The benchmark includes a suitable set of assessment criteria and failure cases, based on reconstructed accident scenarios, to assess the potential of new adaptive control strategies to improve aircraft survivability. The application of reconstruction and modeling techniques, based on accident flight data, has resulted in high-fidelity nonlinear aircraft and fault models to evaluate new Fault Tolerant Flight Control (FTFC) concepts and their real-time performance to accommodate in-flight failures.
NASA Astrophysics Data System (ADS)
Bhagat, Satish; Wijeyewickrema, Anil C.
2017-04-01
This paper reports on an investigation of the seismic response of base-isolated reinforced concrete buildings, which considers various isolation system parameters under bidirectional near-fault and far-fault motions. Three-dimensional models of 4-, 8-, and 12-story base-isolated buildings with nonlinear effects in the isolation system and the superstructure are investigated, and nonlinear response history analysis is carried out. The bounding values of isolation system properties that incorporate the aging effect of isolators are also taken into account, as is the current state of practice in the design and analysis of base-isolated buildings. The response indicators of the buildings are studied for near-fault and far-fault motions weight-scaled to represent the design earthquake (DE) level and the risk-targeted maximum considered earthquake (MCER) level. Results of the nonlinear response history analyses indicate no structural damage under DE-level motions for near-fault and far-fault motions and for MCER-level far-fault motions, whereas minor structural damage is observed under MCER-level near-fault motions. Results of the base-isolated buildings are compared with their fixed-base counterparts. Significant reduction of the superstructure response of the 12-story base-isolated building compared to the fixed-base condition indicates that base isolation can be effectively used in taller buildings to enhance performance. Additionally, the applicability of a rigid superstructure to predict the isolator displacement demand is also investigated. It is found that the isolator displacements can be estimated accurately using a rigid body model for the superstructure for the buildings considered.
Reliability model derivation of a fault-tolerant, dual, spare-switching, digital computer system
NASA Technical Reports Server (NTRS)
1974-01-01
A computer based reliability projection aid, tailored specifically for application in the design of fault-tolerant computer systems, is described. Its more pronounced characteristics include the facility for modeling systems with two distinct operational modes, measuring the effect of both permanent and transient faults, and calculating conditional system coverage factors. The underlying conceptual principles, mathematical models, and computer program implementation are presented.
Model-Based Diagnosis and Prognosis of a Water Recycling System
NASA Technical Reports Server (NTRS)
Roychoudhury, Indranil; Hafiychuk, Vasyl; Goebel, Kai Frank
2013-01-01
A water recycling system (WRS) deployed at NASA Ames Research Center's Sustainability Base (an energy efficient office building that integrates some novel technologies developed for space applications) will serve as a testbed for long duration testing of next generation spacecraft water recycling systems for future human spaceflight missions. This system cleans graywater (waste water collected from sinks and showers) and recycles it into clean water. Like all engineered systems, the WRS is prone to standard degradation due to regular use, as well as other faults. Diagnostic and prognostic applications will be deployed on the WRS to ensure its safe, efficient, and correct operation. The diagnostic and prognostic results can be used to enable condition-based maintenance to avoid unplanned outages, and perhaps extend the useful life of the WRS. Diagnosis involves detecting when a fault occurs, isolating the root cause of the fault, and identifying the extent of damage. Prognosis involves predicting when the system will reach its end of life irrespective of whether an abnormal condition is present or not. In this paper, first, we develop a physics model of both nominal and faulty system behavior of the WRS. Then, we apply an integrated model-based diagnosis and prognosis framework to the simulation model of the WRS for several different fault scenarios to detect, isolate, and identify faults, and predict the end of life in each fault scenario, and present the experimental results.
Fuzzy model-based observers for fault detection in CSTR.
Ballesteros-Moncada, Hazael; Herrera-López, Enrique J; Anzurez-Marín, Juan
2015-11-01
Given the vast variety of fuzzy model-based observers reported in the literature, which would be the proper one to use for fault detection in a class of chemical reactor? In this study, four fuzzy model-based observers for sensor fault detection of a Continuous Stirred Tank Reactor were designed and compared. The designs include (i) a Luenberger fuzzy observer, (ii) a Luenberger fuzzy observer with sliding modes, (iii) a Walcott-Zak fuzzy observer, and (iv) an Utkin fuzzy observer. A negative fault signal, an oscillating fault signal, and a bounded random noise signal with a maximum value of ±0.4 were used to evaluate and compare the performance of the fuzzy observers. The Utkin fuzzy observer showed the best performance under the tested conditions.
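A crisp (single-model) Luenberger observer illustrates the residual-based detection principle these fuzzy observers share; a Takagi-Sugeno fuzzy observer would blend several such local observers by membership weights. The matrices below are illustrative assumptions, and the control input term is omitted for brevity:

```python
import numpy as np

A = np.array([[-1.0, 0.5], [0.2, -2.0]])   # assumed local linear CSTR dynamics
C = np.array([[1.0, 0.0]])                 # one measured state (e.g., concentration)
L = np.array([[2.0], [0.5]])               # observer gain; A - L C is stable here

def observer_step(xhat, y, dt=0.01):
    """Euler step of x_hat' = A x_hat + L (y - C x_hat);
    the residual y - C x_hat is the quantity monitored for sensor faults."""
    residual = y - (C @ xhat).item()
    xhat = xhat + dt * (A @ xhat + L.ravel() * residual)
    return xhat, residual

# A residual persistently above a noise-calibrated threshold (e.g., beyond the
# ±0.4 noise bound used in the study) would indicate a sensor fault.
```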
Achieving Agreement in Three Rounds with Bounded-Byzantine Faults
NASA Technical Reports Server (NTRS)
Malekpour, Mahyar R.
2017-01-01
A three-round algorithm is presented that guarantees agreement in a system of K greater than or equal to 3F + 1 nodes, provided each faulty node induces no more than F faults and each good node experiences no more than F faults, where F is the maximum number of simultaneous faults in the network. The algorithm is based on the Oral Message algorithm of Lamport, Shostak, and Pease, is scalable with respect to the number of nodes in the system, and applies equally to the traditional node-fault model and the link-fault model. We also present a mechanical verification of the algorithm, focusing on verifying the correctness of a bounded model of the algorithm as well as confirming claims of determinism.
Adaptively Adjusted Event-Triggering Mechanism on Fault Detection for Networked Control Systems.
Wang, Yu-Long; Lim, Cheng-Chew; Shi, Peng
2016-12-08
This paper studies the problem of adaptively adjusted event-triggering mechanism-based fault detection for a class of discrete-time networked control systems (NCSs), with application to aircraft dynamics. By taking into account the fault occurrence detection progress and the fault occurrence probability, and by introducing an adaptively adjusted event-triggering parameter, a novel event-triggering mechanism is proposed to achieve efficient utilization of the communication network bandwidth. Both the sensor-to-control-station and the control-station-to-actuator network-induced delays are taken into account. The event-triggered sensor and the event-triggered control station are utilized simultaneously to establish new network-based closed-loop models for the NCS subject to faults. Based on the established models, the event-triggered simultaneous design of the fault detection filter (FDF) and controller is presented. A new algorithm for handling the adaptively adjusted event-triggering parameter is proposed. Performance analysis verifies the effectiveness of the adaptively adjusted event-triggering mechanism and of the simultaneous design of the FDF and controller.
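A hedged sketch of the event-triggering idea: transmit a sample only when the deviation from the last transmitted sample exceeds a threshold, and adapt that threshold using the estimated fault occurrence probability. The specific adaptation law below is an illustrative assumption, not the paper's algorithm:

```python
import numpy as np

def should_transmit(x, x_last, sigma):
    """Event-trigger test: send a new sample only when the deviation from the
    last transmitted state exceeds a sigma-weighted quadratic threshold."""
    e = x - x_last
    return float(e @ e) > sigma * float(x @ x)

def adapt_sigma(sigma, fault_prob, s_min=0.01, s_max=0.2):
    """Illustrative adaptation: shrink the threshold (transmit more often) as the
    estimated fault occurrence probability rises, so detection gets more data."""
    return float(np.clip(s_max - (s_max - s_min) * fault_prob, s_min, s_max))

# Usage: at each sample, sigma = adapt_sigma(sigma, p_fault);
# transmit only if should_transmit(x, x_last, sigma) is True.
```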
Fault feature analysis of cracked gear based on LOD and analytical-FE method
NASA Astrophysics Data System (ADS)
Wu, Jiateng; Yang, Yu; Yang, Xingkai; Cheng, Junsheng
2018-01-01
At present, there are two main approaches to gear fault diagnosis. One is model-based gear dynamic analysis; the other is signal-based gear vibration diagnosis. In this paper, a method for fault feature analysis of gear cracks is presented, which combines the advantages of dynamic modeling and signal processing. First, a new time-frequency analysis method called local oscillatory-characteristic decomposition (LOD) is proposed, which has the attractive feature of extracting fault characteristics efficiently and accurately. Second, an analytical-finite element (analytical-FE) method, called the assist-stress intensity factor (assist-SIF) gear contact model, is put forward to calculate the time-varying mesh stiffness (TVMS) under different crack states. Based on a dynamic model of the gear system with 6 degrees of freedom, the dynamic simulation response was obtained for different tooth crack depths. For the dynamic model, the corresponding relation between the characteristic parameters and the degree of tooth cracking is established under a specific condition. On the basis of the methods mentioned above, a novel gear tooth root crack diagnosis method that combines the LOD with the analytical-FE approach is proposed. Furthermore, empirical mode decomposition (EMD) and ensemble empirical mode decomposition (EEMD) are contrasted with the LOD on gear crack fault vibration signals. The analysis results indicate that the proposed method is effective and feasible for tooth crack stiffness calculation and gear tooth crack fault diagnosis.
Growth trishear model and its application to the Gilbertown graben system, southwest Alabama
Jin, G.; Groshong, R.H.; Pashin, J.C.
2009-01-01
Fault-propagation folding associated with an upward propagating fault in the Gilbertown graben system is revealed by well-based 3-D subsurface mapping and dipmeter analysis. The fold is developed in the Selma chalk, which is an oil reservoir along the southern margin of the graben. Area-depth-strain analysis suggests that the Cretaceous strata were growth units, the Jurassic strata were pregrowth units, and the graben system is detached in the Louann Salt. The growth trishear model has been applied in this paper to study the evolution and kinematics of extensional fault-propagation folding. Models indicate that the propagation to slip (p/s) ratio of the underlying fault plays an important role in governing the geometry of the resulting extensional fault-propagation fold. With a greater p/s ratio, the fold is more localized in the vicinity of the propagating fault. The extensional fault-propagation fold in the Gilbertown graben is modeled by both a compactional and a non-compactional growth trishear model. Both models predict a similar geometry of the extensional fault-propagation fold. The trishear model with compaction best predicts the fold geometry.
Li, Shaobo; Liu, Guokai; Tang, Xianghong; Lu, Jianguang; Hu, Jianjun
2017-07-28
Intelligent machine health monitoring and fault diagnosis are becoming increasingly important for modern manufacturing industries. Current fault diagnosis approaches mostly depend on expert-designed features for building prediction models. In this paper, we propose IDSCNN, a novel bearing fault diagnosis algorithm based on ensemble deep convolutional neural networks and an improved Dempster-Shafer theory based evidence fusion. The convolutional neural networks take as inputs the root mean square (RMS) maps of the FFT (Fast Fourier Transform) features of the vibration signals from two sensors. The improved D-S evidence theory is implemented via a distance matrix computed from the evidences and a modified Gini index. Extensive evaluations of the IDSCNN on the Case Western Reserve Dataset show that our algorithm achieves better fault diagnosis performance than existing machine learning methods by fusing complementary or conflicting evidences from different models and sensors and by adapting to different load conditions.
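The improved fusion rule is not reproduced here, but the classical Dempster combination it builds on can be sketched compactly; the sensor mass assignments below are illustrative assumptions:

```python
from itertools import product

def dempster_combine(m1, m2):
    """Dempster's rule for two mass functions over frozenset focal elements."""
    combined, conflict = {}, 0.0
    for (a, wa), (b, wb) in product(m1.items(), m2.items()):
        inter = a & b
        if inter:
            combined[inter] = combined.get(inter, 0.0) + wa * wb
        else:
            conflict += wa * wb                      # mass assigned to empty intersections
    return {k: v / (1.0 - conflict) for k, v in combined.items()}

# Two CNNs (one per vibration sensor) partially disagree on a bearing state:
F = frozenset
m_sensor1 = {F({"inner_race"}): 0.7, F({"inner_race", "outer_race"}): 0.3}
m_sensor2 = {F({"outer_race"}): 0.4, F({"inner_race", "outer_race"}): 0.6}
print(dempster_combine(m_sensor1, m_sensor2))
```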
An Ontology for Identifying Cyber Intrusion Induced Faults in Process Control Systems
NASA Astrophysics Data System (ADS)
Hieb, Jeffrey; Graham, James; Guan, Jian
This paper presents an ontological framework that permits formal representations of process control systems, including elements of the process being controlled and the control system itself. A fault diagnosis algorithm based on the ontological model is also presented. The algorithm can identify traditional process elements as well as control system elements (e.g., IP network and SCADA protocol) as fault sources. When these elements are identified as a likely fault source, the possibility exists that the process fault is induced by a cyber intrusion. A laboratory-scale distillation column is used to illustrate the model and the algorithm. Coupled with a well-defined statistical process model, this fault diagnosis approach provides cyber security enhanced fault diagnosis information to plant operators and can help identify that a cyber attack is underway before a major process failure is experienced.
PV Systems Reliability Final Technical Report: Ground Fault Detection
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lavrova, Olga; Flicker, Jack David; Johnson, Jay
We have examined ground faults in photovoltaic (PV) arrays and the efficacy of fuses, residual current detection (RCD), current sense monitoring/relays (CSM), isolation/insulation resistance (Riso) monitoring, and Ground Fault Detection and Isolation (GFID), using simulations based on a SPICE (Simulation Program with Integrated Circuit Emphasis) ground fault circuit model, experimental ground faults installed on real arrays, and theoretical equations.
Model Based Inference for Wire Chafe Diagnostics
NASA Technical Reports Server (NTRS)
Schuet, Stefan R.; Wheeler, Kevin R.; Timucin, Dogan A.; Wysocki, Philip F.; Kowalski, Marc Edward
2009-01-01
Presentation for Aging Aircraft conference covering chafing fault diagnostics using Time Domain Reflectometry. Laboratory setup and experimental methods are presented, along with initial results that summarize fault modeling and detection capabilities.
Source characterization and dynamic fault modeling of induced seismicity
NASA Astrophysics Data System (ADS)
Lui, S. K. Y.; Young, R. P.
2017-12-01
In recent years there have been increasing concerns worldwide that industrial activities in the subsurface can cause or trigger damaging earthquakes. In order to effectively mitigate the damaging effects of induced seismicity, the key is to better understand the source physics of induced earthquakes, which remains elusive at present. Furthermore, an improved understanding of induced-earthquake physics is pivotal for assessing large-magnitude earthquake triggering. A better quantification of the possible causes of induced earthquakes can be achieved through numerical simulations. The fault model used in this study is governed by the empirically derived rate-and-state friction laws, featuring a velocity-weakening (VW) patch embedded in a large velocity-strengthening (VS) region. Outside of that, the fault slips at the background loading rate. The model is fully dynamic, with all wave effects resolved, and is able to resolve spontaneous long-term slip history on a fault segment at all stages of seismic cycles. An earlier study using this model established that aseismic slip plays a major role in the triggering of small repeating earthquakes. This study presents a series of cases with earthquakes occurring on faults with different frictional properties and fluid-induced stress perturbations. The effects on both the overall seismicity rate and the fault slip behavior are investigated, and the causal relationship between the pre-slip pattern prior to the event and the induced source characteristics is discussed. Based on the simulation results, the subsequent step is to select specific cases for laboratory experiments, which allow well-controlled variables and fault parameters. Ultimately, the aim is to provide better constraints on important parameters for induced earthquakes based on numerical modeling and laboratory data, and hence to contribute to a physics-based induced-earthquake hazard assessment.
NASA Astrophysics Data System (ADS)
Tavani, Stefano; Corradetti, Amerigo; Billi, Andrea
2016-05-01
Image-based 3D modeling has recently opened the way to the use of virtual outcrop models in geology. An intriguing application of this method involves the production of orthorectified images of outcrops using almost any user-defined point of view, so that photorealistic cross-sections suitable for numerous geological purposes and measurements can be easily generated. These purposes include the accurate quantitative analysis of fault-fold relationships starting from imperfectly oriented and partly inaccessible real outcrops. We applied the method of image-based 3D modeling and orthorectification to a case study from the northern Apennines, Italy, where an incipient extensional fault affecting well-layered limestones is exposed on a 10-m-high, barely accessible cliff. Through a few simple steps, we constructed a high-quality image-based 3D model of the outcrop. In the model, we made a series of measurements including fault and bedding attitudes, which allowed us to derive the bedding-fault intersection direction. We then used this direction as the viewpoint to obtain a distortion-free photorealistic cross-section, on which we measured bed dips and thicknesses as well as fault stratigraphic separations. These measurements allowed us to identify a slight difference (i.e. only 0.5°) between the hangingwall and footwall cutoff angles. We show that the hangingwall strain required to compensate the upward-decreasing displacement of the fault was accommodated by this 0.5° rotation (i.e. folding) and coeval 0.8% thickening of strata in the hangingwall relative to footwall strata. This evidence is consistent with trishear fault-propagation folding. Our results emphasize the importance of the viewpoint in structural geology and therefore the potential of using orthorectified virtual outcrops.
Analysis of a hardware and software fault tolerant processor for critical applications
NASA Technical Reports Server (NTRS)
Dugan, Joanne B.
1993-01-01
Computer systems for critical applications must be designed to tolerate software faults as well as hardware faults. A unified approach to tolerating hardware and software faults is characterized by classifying faults in terms of duration (transient or permanent) rather than source (hardware or software). Errors arising from transient faults can be handled through masking or voting, but errors arising from permanent faults require system reconfiguration to bypass the failed component. Most errors which are caused by software faults can be considered transient, in that they are input-dependent. Software faults are triggered by a particular set of inputs. Quantitative dependability analysis of systems which exhibit a unified approach to fault tolerance can be performed by a hierarchical combination of fault tree and Markov models. A methodology for analyzing hardware and software fault tolerant systems is applied to the analysis of a hypothetical system, loosely based on the Fault Tolerant Parallel Processor. The models consider both transient and permanent faults, hardware and software faults, independent and related software faults, automatic recovery, and reconfiguration.
NASA Astrophysics Data System (ADS)
Zhang, Yanhua; Clennell, Michael B.; Delle Piane, Claudio; Ahmed, Shakil; Sarout, Joel
2016-12-01
This generic 2D elastic-plastic modelling investigated the reactivation of a small isolated and critically-stressed fault in carbonate rocks at a reservoir depth level for fluid depletion and normal-faulting stress conditions. The model properties and boundary conditions are based on field and laboratory experimental data from a carbonate reservoir. The results show that a pore pressure perturbation of -25 MPa by depletion can lead to the reactivation of the fault and parts of the surrounding damage zones, producing normal-faulting downthrows and strain localization. The mechanism triggering fault reactivation in a carbonate field is the increase of shear stresses with pore-pressure reduction, due to the decrease of the absolute horizontal stress, which leads to an expanded Mohr's circle and mechanical failure, consistent with the predictions of previous poroelastic models. Two scenarios for fault and damage-zone permeability development are explored: (1) large permeability enhancement of a sealing fault upon reactivation, and (2) fault and damage zone permeability development governed by effective mean stress. In the first scenario, the fault becomes highly permeable to across- and along-fault fluid transport, removing local pore pressure highs/lows arising from the presence of the initially sealing fault. In the second scenario, reactivation induces small permeability enhancement in the fault and parts of damage zones, followed by small post-reactivation permeability reduction. Such permeability changes do not appear to change the original flow capacity of the fault or modify the fluid flow velocity fields dramatically.
A Model-Based Probabilistic Inversion Framework for Wire Fault Detection Using TDR
NASA Technical Reports Server (NTRS)
Schuet, Stefan R.; Timucin, Dogan A.; Wheeler, Kevin R.
2010-01-01
Time-domain reflectometry (TDR) is one of the standard methods for diagnosing faults in electrical wiring and interconnect systems, with a long-standing history focused mainly on hardware development of both high-fidelity systems for laboratory use and portable hand-held devices for field deployment. While these devices can easily assess distance to hard faults such as sustained opens or shorts, their ability to assess subtle but important degradation such as chafing remains an open question. This paper presents a unified framework for TDR-based chafing fault detection in lossy coaxial cables by combining an S-parameter based forward modeling approach with a probabilistic (Bayesian) inference algorithm. Results are presented for the estimation of nominal and faulty cable parameters from laboratory data.
Sensor fault diagnosis of aero-engine based on divided flight status.
Zhao, Zhen; Zhang, Jun; Sun, Yigang; Liu, Zhexu
2017-11-01
Fault diagnosis and safety analysis of an aero-engine have attracted more and more attention in modern society, as engine safety directly affects the flight safety of an aircraft. In this paper, the problem of sensor fault diagnosis is investigated for an aero-engine over the whole flight process. Considering that the aero-engine works in different statuses through the whole flight process, a flight status division-based sensor fault diagnosis method is presented to improve the fault diagnosis precision for the aero-engine. First, the aero-engine statuses are partitioned according to normal sensor data over the whole flight process using a clustering algorithm. Based on that, a diagnosis model is built for each status using the principal component analysis algorithm. Finally, the sensors are monitored using the built diagnosis models by identifying the current aero-engine status. The simulation results illustrate the effectiveness of the proposed method.
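A minimal sketch of the status-division pipeline, assuming k-means for the clustering step and the squared prediction error (Q statistic) as the PCA monitoring index; the dimensions and function names are assumptions:

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.decomposition import PCA

def fit_status_models(X_normal, n_status=3, n_keep=4):
    """Cluster normal flight data into statuses, then fit one PCA model per status."""
    km = KMeans(n_clusters=n_status, n_init=10, random_state=0).fit(X_normal)
    models = {s: PCA(n_components=n_keep).fit(X_normal[km.labels_ == s])
              for s in range(n_status)}
    return km, models

def spe(x, pca):
    """Squared prediction error (Q statistic) of one sample under a PCA model."""
    xr = pca.inverse_transform(pca.transform(x.reshape(1, -1)))
    return float(((x - xr.ravel()) ** 2).sum())

# Online monitoring: identify the current status, then test the sample against
# that status's model; an SPE above its control limit flags a sensor fault.
# km, models = fit_status_models(X_normal)
# s = km.predict(x.reshape(1, -1))[0]; alarm = spe(x, models[s]) > limit[s]
```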
A New Seismic Hazard Model for Mainland China
NASA Astrophysics Data System (ADS)
Rong, Y.; Xu, X.; Chen, G.; Cheng, J.; Magistrale, H.; Shen, Z. K.
2017-12-01
We are developing a new seismic hazard model for Mainland China by integrating historical earthquake catalogs, geological faults, geodetic GPS data, and geology maps. To build the model, we construct an Mw-based homogeneous historical earthquake catalog spanning from 780 B.C. to present, create fault models from active fault data, and derive a strain rate model based on the most complete GPS measurements and a new strain derivation algorithm. We divide China and the surrounding regions into about 20 large seismic source zones. For each zone, a tapered Gutenberg-Richter (TGR) magnitude-frequency distribution is used to model the seismic activity rates. The a- and b-values of the TGR distribution are calculated using observed earthquake data, while the corner magnitude is constrained independently using the seismic moment rate inferred from the geodetically-based strain rate model. Small and medium sized earthquakes are distributed within the source zones following the location and magnitude patterns of historical earthquakes. Some of the larger earthquakes are distributed onto active faults, based on their geological characteristics such as slip rate, fault length, down-dip width, and various paleoseismic data. The remaining larger earthquakes are then placed into the background. A new set of magnitude-rupture scaling relationships is developed based on earthquake data from China and vicinity. We evaluate and select appropriate ground motion prediction equations by comparing them with observed ground motion data and performing residual analysis. To implement the modeling workflow, we develop a tool that builds upon the functionalities of GEM's Hazard Modeler's Toolkit. The GEM OpenQuake software is used to calculate seismic hazard at various ground motion periods and various return periods. To account for site amplification, we construct a site condition map based on geology. The resulting new seismic hazard maps can be used for seismic risk analysis and management.
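A hedged sketch of the tapered Gutenberg-Richter (TGR) rate model named above, in a Kagan-style moment parameterization; the zone rate, b-value, and corner magnitude are illustrative assumptions:

```python
import numpy as np

def tgr_rate(M0, a_rate, M0_t, beta, M0_corner):
    """Tapered Gutenberg-Richter: annual rate of events with seismic moment >= M0,
    a Pareto power law with an exponential taper at the corner moment."""
    return a_rate * (M0_t / M0) ** beta * np.exp((M0_t - M0) / M0_corner)

def mw_to_moment(mw):
    """Hanks-Kanamori conversion to seismic moment in N*m."""
    return 10.0 ** (1.5 * mw + 9.05)

# Illustrative zone: rate 0.5/yr above Mw 5, b = 1 (so beta = 2b/3), corner Mw 8
rate_m7 = tgr_rate(mw_to_moment(7.0), 0.5, mw_to_moment(5.0),
                   2.0 / 3.0, mw_to_moment(8.0))
```

In the workflow the abstract describes, the a- and b-values come from the observed catalog, while the corner magnitude is constrained independently by the geodetic moment rate of each zone.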
DOE Office of Scientific and Technical Information (OSTI.GOV)
Yang, Yu; Guo, Jianqiu; Goue, Ouloide
Recently, we reported on the formation of overlapping rhombus-shaped stacking faults from scratches left over from chemical mechanical polishing during high-temperature annealing of a PVT-grown 4H-SiC wafer. These stacking faults are restricted to the highly N-doped areas of the wafer. The type of these stacking faults was determined to be Shockley stacking faults by analyzing the behavior of their area contrast in synchrotron white-beam X-ray topography studies. A model was proposed to explain the formation mechanism of the rhombus-shaped stacking faults based on double Shockley fault nucleation and propagation. In this paper, we have experimentally verified this model by characterizing the configuration of the bounding partials of the stacking faults on both surfaces using synchrotron topography in back-reflection geometry. As predicted by the model, on both the Si and C faces, the leading partials bounding the rhombus-shaped stacking faults are 30° Si-core and the trailing partials are 30° C-core. Finally, using high-resolution transmission electron microscopy, we have verified that the enclosed stacking fault is of the double Shockley type.
Pseudo-dynamic source characterization accounting for rough-fault effects
NASA Astrophysics Data System (ADS)
Galis, Martin; Thingbaijam, Kiran K. S.; Mai, P. Martin
2016-04-01
Broadband ground-motion simulations, ideally for frequencies up to ~10Hz or higher, are important for earthquake engineering; for example, seismic hazard analysis for critical facilities. An issue with such simulations is realistic generation of radiated wave-field in the desired frequency range. Numerical simulations of dynamic ruptures propagating on rough faults suggest that fault roughness is necessary for realistic high-frequency radiation. However, simulations of dynamic ruptures are too expensive for routine applications. Therefore, simplified synthetic kinematic models are often used. They are usually based on rigorous statistical analysis of rupture models inferred by inversions of seismic and/or geodetic data. However, due to limited resolution of the inversions, these models are valid only for low-frequency range. In addition to the slip, parameters such as rupture-onset time, rise time and source time functions are needed for complete spatiotemporal characterization of the earthquake rupture. But these parameters are poorly resolved in the source inversions. To obtain a physically consistent quantification of these parameters, we simulate and analyze spontaneous dynamic ruptures on rough faults. First, by analyzing the impact of fault roughness on the rupture and seismic radiation, we develop equivalent planar-fault kinematic analogues of the dynamic ruptures. Next, we investigate the spatial interdependencies between the source parameters to allow consistent modeling that emulates the observed behavior of dynamic ruptures capturing the rough-fault effects. Based on these analyses, we formulate a framework for pseudo-dynamic source model, physically consistent with the dynamic ruptures on rough faults.
NASA Astrophysics Data System (ADS)
Akiyama, S.; Kawaji, K.; Fujihara, S.
2013-12-01
Since fault fracturing due to an earthquake can simultaneously cause ground motion and a tsunami, it is appropriate to evaluate both the ground motion and the tsunami with a single fault model. However, separate source models are commonly used in ground motion simulation and tsunami simulation, because of the difficulty of evaluating both phenomena simultaneously. Many source models for the 2011 off the Pacific coast of Tohoku Earthquake have been proposed from inversion analyses of seismic observations or of tsunami observations. Most of these models show similar features, in which a large amount of slip is located at the shallower part of the fault area near the Japan Trench. This indicates that the ground motion and the tsunami can be evaluated by a single source model. Therefore, we examine the possibility of tsunami prediction using a fault model estimated from seismic observation records. In this study, we carry out tsunami simulations using the displacement field of oceanic crustal movements, which is calculated from a ground motion simulation of the 2011 off the Pacific coast of Tohoku Earthquake. We use two fault models by Yoshida et al. (2011), based on the teleseismic body wave and on the strong ground motion records, respectively. Although the fault models share a common feature, the amount of slip near the Japan Trench is larger in the fault model from the strong ground motion records than in that from the teleseismic body wave. First, large-scale ground motion simulations applying those fault models are performed for the whole of eastern Japan with the voxel-type finite element method. The synthetic waveforms computed from the simulations are generally consistent with the observation records of the K-NET (Kinoshita (1998)) and KiK-net stations (Aoi et al. (2000)) deployed by the National Research Institute for Earth Science and Disaster Prevention (NIED). Next, tsunami simulations are performed by finite difference calculation based on shallow water theory. The initial wave height for tsunami generation is estimated from the vertical displacement of the ocean bottom due to the crustal movements, which is obtained from the ground motion simulation mentioned above. The results of the tsunami simulations are compared with observations from GPS wave gauges to evaluate the validity of tsunami prediction using a fault model based on seismic observation records.
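To make the tsunami step concrete, here is a minimal 1-D linear shallow-water finite-difference sketch on a staggered grid, with the initial wave height taken from a prescribed seafloor vertical displacement; the depth, domain size, and Gaussian uplift are placeholders, not values from the study.

```python
import numpy as np

g, H = 9.81, 4000.0                  # gravity (m/s^2), uniform ocean depth (m)
L, nx = 400e3, 400                   # domain length (m), number of cells
dx = L / nx
dt = 0.8 * dx / np.sqrt(g * H)       # time step satisfying the CFL condition
x = (np.arange(nx) + 0.5) * dx

# Initial wave height = coseismic vertical displacement of the ocean bottom,
# here an illustrative 2 m Gaussian uplift about 10 km wide.
eta = 2.0 * np.exp(-((x - L / 2) ** 2) / (2 * (10e3) ** 2))
u = np.zeros(nx + 1)                 # velocities on staggered (cell-face) points

for _ in range(400):                 # forward-backward explicit time stepping
    u[1:-1] -= dt * g * (eta[1:] - eta[:-1]) / dx
    eta -= dt * H * (u[1:] - u[:-1]) / dx

print("max wave height after propagation: %.2f m" % eta.max())
```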
Ma, Jian; Lu, Chen; Liu, Hongmei
2015-01-01
The aircraft environmental control system (ECS) is a critical aircraft system, which provides the appropriate environmental conditions to ensure the safe transport of air passengers and equipment. The functionality and reliability of ECS have received increasing attention in recent years. The heat exchanger is a particularly significant component of the ECS, because its failure decreases the system’s efficiency, which can lead to catastrophic consequences. Fault diagnosis of the heat exchanger is necessary to prevent risks. However, two problems hinder the implementation of the heat exchanger fault diagnosis in practice. First, the actual measured parameter of the heat exchanger cannot effectively reflect the fault occurrence, whereas the heat exchanger faults are usually depicted by utilizing the corresponding fault-related state parameters that cannot be measured directly. Second, both the traditional Extended Kalman Filter (EKF) and the EKF-based Double Model Filter have certain disadvantages, such as sensitivity to modeling errors and difficulties in selection of initialization values. To solve the aforementioned problems, this paper presents a fault-related parameter adaptive estimation method based on strong tracking filter (STF) and Modified Bayes classification algorithm for fault detection and failure mode classification of the heat exchanger, respectively. Heat exchanger fault simulation is conducted to generate fault data, through which the proposed methods are validated. The results demonstrate that the proposed methods are capable of providing accurate, stable, and rapid fault diagnosis of the heat exchanger. PMID:25823010
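The strong tracking filter can be thought of as a Kalman filter whose predicted covariance is inflated by a time-varying fading factor when innovations grow, which keeps the gain responsive to abrupt parameter changes. The following scalar sketch illustrates that mechanism on a synthetic step fault; the model, tuning constants, and the simplified one-dimensional fading-factor formula are illustrative, not the paper's heat-exchanger formulation.

```python
import numpy as np

rng = np.random.default_rng(1)
F, Hm, Q, R = 1.0, 1.0, 1e-4, 0.04    # random-walk parameter, direct measurement
x_hat, P = 0.0, 1.0
rho, V = 0.95, None                   # forgetting factor / smoothed innovation variance

truth = np.concatenate([np.zeros(100), 0.8 * np.ones(100)])  # abrupt fault at k = 100
for x_true in truth:
    z = Hm * x_true + rng.normal(scale=np.sqrt(R))
    gamma = z - Hm * F * x_hat                        # innovation
    V = gamma**2 if V is None else (rho * V + gamma**2) / (1 + rho)
    # fading factor > 1 inflates the predicted covariance when innovations grow
    lam = max(1.0, (V - R - Q) / (Hm * F * P * F * Hm))
    P_pred = lam * F * P * F + Q
    K = P_pred * Hm / (Hm * P_pred * Hm + R)
    x_hat = F * x_hat + K * gamma
    P = (1.0 - K * Hm) * P_pred

print("estimate after the step fault: %.2f" % x_hat)  # tracks the 0.8 shift
```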
NASA Technical Reports Server (NTRS)
Bird, P.; Baumgardner, J.
1984-01-01
To determine the correct fault rheology of the Transverse Ranges area of California, a new finite element to represent faults and a mantle drag element are introduced into a set of 63 simulation models of anelastic crustal strain. It is shown that a slip-rate-weakening rheology for faults is not valid in California. Assuming that mantle drag effects on the base of the crust are minimal, the optimal coefficient of friction in the seismogenic portion of the fault zones is 0.4-0.6 (less than the Byerlee's law value assumed to apply elsewhere). Depending on how the southern California upper mantle seismic velocity anomaly is interpreted, model results are improved or degraded. It is found that the location of the mantle plate boundary is the most important secondary parameter, and that the best model is either a low-stress model (fault friction = 0.3) or a high-stress model (fault friction = 0.85), each of which has strong mantle drag. It is concluded that at least the fastest moving faults in southern California have a low friction coefficient (approximately 0.3) because they contain low-strength hydrated clay gouges throughout the low-temperature seismogenic zone.
Hybrid Model-Based and Data-Driven Fault Detection and Diagnostics for Commercial Buildings
DOE Office of Scientific and Technical Information (OSTI.GOV)
Frank, Stephen; Heaney, Michael; Jin, Xin
Commercial buildings often experience faults that produce undesirable behavior in building systems. Building faults waste energy, decrease occupants' comfort, and increase operating costs. Automated fault detection and diagnosis (FDD) tools for buildings help building owners discover and identify the root causes of faults in building systems, equipment, and controls. Proper implementation of FDD has the potential to simultaneously improve comfort, reduce energy use, and narrow the gap between actual and optimal building performance. However, conventional rule-based FDD requires expensive instrumentation and valuable engineering labor, which limit deployment opportunities. This paper presents a hybrid, automated FDD approach that combines building energy models and statistical learning tools to detect and diagnose faults noninvasively, using minimal sensors, with little customization. We compare and contrast the performance of several hybrid FDD algorithms for a small security building. Our results indicate that the algorithms can detect and diagnose several common faults, but more work is required to reduce false positive rates and improve diagnosis accuracy.
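A minimal sketch of the hybrid pattern: a stand-in "energy model" predicts normal consumption, and a statistical classifier maps the model-vs-meter residuals to fault labels. The fault classes, offsets, and noise levels below are synthetic, not the paper's building data.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(2)
outdoor_temp = rng.uniform(0, 35, size=1000)
baseline = 50 + 2.0 * outdoor_temp                    # modeled normal energy use

labels = rng.integers(0, 3, size=1000)                # 0 normal, 1 and 2 fault types
measured = baseline + np.array([0.0, 15.0, 30.0])[labels] \
           + rng.normal(scale=3.0, size=1000)

residual = (measured - baseline).reshape(-1, 1)       # model-vs-meter residual
clf = RandomForestClassifier(random_state=0).fit(residual[:800], labels[:800])
print("holdout accuracy: %.2f" % clf.score(residual[800:], labels[800:]))
```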
Yin, Shen; Gao, Huijun; Qiu, Jianbin; Kaynak, Okyay
2017-11-01
Data-driven fault detection plays an important role in industrial systems due to its applicability in cases where physical models are unknown. In fault detection, disturbances must be taken into account as an inherent characteristic of processes. Nevertheless, fault detection for nonlinear processes with deterministic disturbances still receives little attention, especially in the data-driven field. To solve this problem, a just-in-time learning-based data-driven (JITL-DD) fault detection method for nonlinear processes with deterministic disturbances is proposed in this paper. JITL-DD employs a JITL scheme for process description with local model structures to cope with process dynamics and nonlinearity. The proposed method provides a data-driven fault detection solution for nonlinear processes with deterministic disturbances, and offers inherent online adaptation and high fault detection accuracy. Two nonlinear systems, i.e., a numerical example and a sewage treatment process benchmark, are employed to show the effectiveness of the proposed method.
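The core JITL mechanism can be sketched in a few lines: no global model is kept; for each query a local linear model is fitted on its nearest neighbours in a normal-operation archive, and a large residual flags a fault. The archive, neighbourhood size, and threshold below are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(3)
U = rng.uniform(-1, 1, size=(2000, 2))                 # archived process inputs
Y = np.sin(U[:, 0]) + U[:, 1] ** 2 + rng.normal(scale=0.01, size=2000)

def jitl_predict(u, k=50):
    idx = np.argsort(np.linalg.norm(U - u, axis=1))[:k]   # k nearest neighbours
    A = np.hstack([U[idx], np.ones((k, 1))])              # local linear model
    coef, *_ = np.linalg.lstsq(A, Y[idx], rcond=None)
    return np.append(u, 1.0) @ coef

u_new = np.array([0.3, -0.2])
y_expected = np.sin(0.3) + (-0.2) ** 2
for y_meas in (y_expected, y_expected + 0.5):          # healthy, then biased sensor
    r = abs(y_meas - jitl_predict(u_new))
    print("residual %.3f -> %s" % (r, "FAULT" if r > 0.1 else "ok"))
```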
NASA Astrophysics Data System (ADS)
Gable, C. W.; Fialko, Y.; Hager, B. H.; Plesch, A.; Williams, C. A.
2006-12-01
More realistic models of crustal deformation are possible due to advances in measurement and modeling capabilities. This study integrates various data to constrain a finite element model of stress and strain in the vicinity of the 1992 Landers earthquake and the 1999 Hector Mine earthquake. The model geometry incorporates the Southern California Earthquake Center (SCEC) Community Fault Model (CFM) to define the fault geometry. The Hector Mine fault is represented by a single surface that follows the trace of the Hector Mine fault, is vertical, and has variable depth. The fault associated with the Landers earthquake is a set of seven surfaces that capture the geometry of the splays and echelon offsets of the fault. A three-dimensional finite element mesh of tetrahedral elements is built that closely maintains the geometry of these fault surfaces. The spatially variable coseismic slip on faults is prescribed based on an inversion of geodetic (Synthetic Aperture Radar and Global Positioning System) data. Time integration of stress and strain is modeled with the finite element code Pylith. As a first step, the methodology for incorporating all of these data is described. Results for the time history of stress and strain transfer between 1992 and 1999 are analyzed, as well as the time history of deformation from 1999 to the present.
Eastern Denali Fault surface trace map, eastern Alaska and Yukon, Canada
Bender, Adrian M.; Haeussler, Peter J.
2017-05-04
We map the 385-kilometer (km) long surface trace of the right-lateral, strike-slip Denali Fault between the Totschunda-Denali Fault intersection in Alaska, United States and the village of Haines Junction, Yukon, Canada. In Alaska, digital elevation models based on light detection and ranging and interferometric synthetic aperture radar data enabled our fault mapping at scales of 1:2,000 and 1:10,000, respectively. Lacking such resources in Yukon, we developed new structure-from-motion digital photogrammetry products from legacy aerial photos to map the fault surface trace at a scale of 1:10,000 east of the international border. The section of the fault that we map, referred to as the Eastern Denali Fault, did not rupture during the 2002 Denali Fault earthquake (moment magnitude 7.9). Seismologic, geodetic, and geomorphic evidence, along with a paleoseismic record of past ground-rupturing earthquakes, demonstrate Holocene and contemporary activity on the fault, however. This map of the Eastern Denali Fault surface trace complements other data sets by providing an openly accessible digital interpretation of the location, length, and continuity of the fault’s surface trace based on the accompanying digital topography dataset. Additionally, the digitized fault trace may provide geometric constraints useful for modeling earthquake scenarios and related seismic hazard.
Toward Building a New Seismic Hazard Model for Mainland China
NASA Astrophysics Data System (ADS)
Rong, Y.; Xu, X.; Chen, G.; Cheng, J.; Magistrale, H.; Shen, Z.
2015-12-01
At present, the only publicly available seismic hazard model for mainland China was generated by Global Seismic Hazard Assessment Program in 1999. We are building a new seismic hazard model by integrating historical earthquake catalogs, geological faults, geodetic GPS data, and geology maps. To build the model, we construct an Mw-based homogeneous historical earthquake catalog spanning from 780 B.C. to present, create fault models from active fault data using the methodology recommended by Global Earthquake Model (GEM), and derive a strain rate map based on the most complete GPS measurements and a new strain derivation algorithm. We divide China and the surrounding regions into about 20 large seismic source zones based on seismotectonics. For each zone, we use the tapered Gutenberg-Richter (TGR) relationship to model the seismicity rates. We estimate the TGR a- and b-values from the historical earthquake data, and constrain corner magnitude using the seismic moment rate derived from the strain rate. From the TGR distributions, 10,000 to 100,000 years of synthetic earthquakes are simulated. Then, we distribute small and medium earthquakes according to locations and magnitudes of historical earthquakes. Some large earthquakes are distributed on active faults based on characteristics of the faults, including slip rate, fault length and width, and paleoseismic data, and the rest to the background based on the distributions of historical earthquakes and strain rate. We evaluate available ground motion prediction equations (GMPE) by comparison to observed ground motions. To apply appropriate GMPEs, we divide the region into active and stable tectonics. The seismic hazard will be calculated using the OpenQuake software developed by GEM. To account for site amplifications, we construct a site condition map based on geology maps. The resulting new seismic hazard map can be used for seismic risk analysis and management, and business and land-use planning.
Software reliability through fault-avoidance and fault-tolerance
NASA Technical Reports Server (NTRS)
Vouk, Mladen A.; Mcallister, David F.
1993-01-01
Strategies and tools for the testing, risk assessment and risk control of dependable software-based systems were developed. Part of this project consists of studies to enable the transfer of technology to industry, for example the risk management techniques for safety-conscious systems. Theoretical investigations of Boolean and Relational Operator (BRO) testing strategy were conducted for condition-based testing. The Basic Graph Generation and Analysis tool (BGG) was extended to fully incorporate several variants of the BRO metric. Single- and multi-phase risk, coverage and time-based models are being developed to provide additional theoretical and empirical basis for estimation of the reliability and availability of large, highly dependable software. A model for software process and risk management was developed. The use of cause-effect graphing for software specification and validation was investigated. Lastly, advanced software fault-tolerance models were studied to provide alternatives and improvements in situations where simple software fault-tolerance strategies break down.
Fault detection and diagnosis using neural network approaches
NASA Technical Reports Server (NTRS)
Kramer, Mark A.
1992-01-01
Neural networks can be used to detect and identify abnormalities in real-time process data. Two basic approaches can be used, the first based on training networks using data representing both normal and abnormal modes of process behavior, and the second based on statistical characterization of the normal mode only. Given data representative of process faults, radial basis function networks can effectively identify failures. This approach is often limited by the lack of fault data, but can be facilitated by process simulation. The second approach employs elliptical and radial basis function neural networks and other models to learn the statistical distributions of process observables under normal conditions. Analytical models of failure modes can then be applied in combination with the neural network models to identify faults. Special methods can be applied to compensate for sensor failures, to produce real-time estimation of missing or failed sensors based on the correlations codified in the neural network.
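A minimal sketch of the second (normal-mode-only) approach: score observations by their average Gaussian radial-basis activation around centres learned from normal data, and flag low-density points as abnormal. The centre count, kernel width, and 1% threshold are illustrative choices, not the paper's networks.

```python
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(4)
normal = rng.normal(size=(1000, 4))          # normal-mode process observables

centers = KMeans(n_clusters=10, n_init=10, random_state=0).fit(normal).cluster_centers_

def density(x, width=1.0):
    d2 = ((centers - x) ** 2).sum(axis=1)    # squared distance to each centre
    return np.exp(-d2 / (2 * width ** 2)).mean()

threshold = np.quantile([density(x) for x in normal], 0.01)
print(density(rng.normal(size=4)) > threshold)   # typical point: True (normal)
print(density(np.full(4, 5.0)) > threshold)      # far outlier: False (flagged)
```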
NASA Astrophysics Data System (ADS)
Rundle, P. B.; Rundle, J. B.; Morein, G.; Donnellan, A.; Turcotte, D.; Klein, W.
2004-12-01
The research community is rapidly moving towards the development of an earthquake forecast technology based on the use of complex, system-level earthquake fault system simulations. Using these topologically and dynamically realistic simulations, it is possible to develop ensemble forecasting methods similar to those used in weather and climate research. To effectively carry out such a program, one needs 1) a topologically realistic model to simulate the fault system; 2) data sets to constrain the model parameters through a systematic program of data assimilation; 3) a computational technology making use of modern paradigms of high performance and parallel computing systems; and 4) software to visualize and analyze the results. In particular, we focus attention on a new version of our code Virtual California (version 2001) in which we model all of the major strike slip faults in California, from the Mexico-California border to the Mendocino Triple Junction. Virtual California is a "backslip model", meaning that the long term rate of slip on each fault segment in the model is matched to the observed rate. We use the historic data set of earthquakes larger than magnitude M > 6 to define the frictional properties of 650 fault segments (degrees of freedom) in the model. To compute the dynamics and the associated surface deformation, we use message passing as implemented in the MPICH standard distribution on a Beowulf cluster consisting of >10 CPUs. We will also report results from implementing the code on significantly larger machines so that we can begin to examine much finer spatial scales of resolution, and to assess scaling properties of the code. We present results of simulations both as static images and as mpeg movies, so that the dynamical aspects of the computation can be assessed by the viewer. We compute a variety of statistics from the simulations, including magnitude-frequency relations, and compare these with data from real fault systems. We report recent results on the use of Virtual California for probabilistic earthquake forecasting for several sub-groups of major faults in California. These methods have the advantage that system-level fault interactions are explicitly included, as well as laboratory-based friction laws.
NASA Astrophysics Data System (ADS)
Polverino, Pierpaolo; Pianese, Cesare; Sorrentino, Marco; Marra, Dario
2015-04-01
The paper focuses on the design of a procedure for the development of an on-field diagnostic algorithm for solid oxide fuel cell (SOFC) systems. The diagnosis design phase relies on an in-depth analysis of the mutual interactions among all system components, exploiting the physical knowledge of the SOFC system as a whole. This phase consists of the Fault Tree Analysis (FTA), which identifies the correlations among possible faults and their corresponding symptoms at the system component level. The main outcome of the FTA is an inferential isolation tool (Fault Signature Matrix - FSM), which univocally links the faults to the symptoms detected during system monitoring. In this work the FTA is considered as a starting point to develop an improved FSM. Making use of a model-based investigation, a fault-to-symptom dependency study is performed. To this purpose a dynamic model, previously developed by the authors, is exploited to simulate the system under faulty conditions. Five faults are simulated, one for the stack and four occurring at the balance-of-plant (BOP) level. Moreover, the robustness of the FSM design is increased by exploiting symptom thresholds defined for the investigation of the quantitative effects of the simulated faults on the affected variables.
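Conceptually, an FSM is just a binary fault-to-symptom incidence matrix, and isolation amounts to matching the observed symptom vector against its columns, as in this illustrative sketch; the fault and symptom names are invented, not the paper's five faults.

```python
import numpy as np

faults = ["stack degradation", "blower fault", "leak", "reformer fault", "valve fault"]
symptoms = ["stack voltage low", "air flow low", "pressure drop", "temperature high"]

# FSM[i, j] = 1 if fault j is expected to produce symptom i
FSM = np.array([[1, 0, 1, 1, 0],
                [0, 1, 0, 0, 1],
                [1, 0, 1, 0, 1],
                [1, 0, 0, 1, 0]])

observed = np.array([1, 0, 1, 1])                # thresholded monitor outputs

matches = (FSM == observed[:, None]).all(axis=0) # exact-signature column match
print([f for f, m in zip(faults, matches) if m]) # -> ['stack degradation']
```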
Satellite Fault Diagnosis Using Support Vector Machines Based on a Hybrid Voting Mechanism
Yang, Shuqiang; Zhu, Xiaoqian; Jin, Songchang; Wang, Xiang
2014-01-01
The satellite fault diagnosis has an important role in enhancing the safety, reliability, and availability of the satellite system. However, the problem of enormous parameters and multiple faults poses a challenge to satellite fault diagnosis. The interactions between parameters and misclassifications from multiple faults will increase the false alarm rate and the false negative rate. On the other hand, for each satellite fault, there is not enough fault data for training, which degrades the performance of most classification algorithms. In this paper, we propose an improved SVM based on a hybrid voting mechanism (HVM-SVM) to deal with the problems of enormous parameters, multiple faults, and small samples. Extensive experimental results show that the accuracy of fault diagnosis using HVM-SVM is improved. PMID:25215324
Tembe, S.; Lockner, D.; Wong, T.-F.
2009-01-01
Analysis of field data has led different investigators to conclude that the San Andreas Fault (SAF) has either anomalously low frictional sliding strength (μ < 0.2) or strength consistent with standard laboratory tests (μ > 0.6). Arguments for the apparent weakness of the SAF generally hinge on conceptual models involving intrinsically weak gouge or elevated pore pressure within the fault zone. Some models assert that weak gouge and/or high pore pressure exist under static conditions while others consider strength loss or fluid pressure increase due to rapid coseismic fault slip. The present paper is composed of three parts. First, we develop generalized equations, based on and consistent with the Rice (1992) fault zone model, to relate stress orientation and magnitude to depth-dependent coefficient of friction and pore pressure. Second, we present temperature- and pressure-dependent friction measurements from wet illite-rich fault gouge extracted from San Andreas Fault Observatory at Depth (SAFOD) phase 1 core samples and from weak minerals associated with the San Andreas Fault. Third, we reevaluate the state of stress on the San Andreas Fault in light of new constraints imposed by SAFOD borehole data. Pure talc (μ ≈ 0.1) had the lowest strength considered and was sufficiently weak to satisfy weak-fault heat flow and stress orientation constraints with hydrostatic pore pressure. Other fault gouges showed a systematic increase in strength with increasing temperature and pressure. In this case, heat flow and stress orientation constraints would require elevated pore pressure and, in some cases, fault zone pore pressure in excess of vertical stress. Copyright 2009 by the American Geophysical Union.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Breuker, M.S.; Braun, J.E.
This paper presents a detailed evaluation of the performance of a statistical, rule-based fault detection and diagnostic (FDD) technique presented by Rossi and Braun (1997). Steady-state and transient tests were performed on a simple rooftop air conditioner over a range of conditions and fault levels. The steady-state data without faults were used to train models that predict outputs for normal operation. The transient data with faults were used to evaluate FDD performance. The effect of a number of design variables on FDD sensitivity for different faults was evaluated and two prototype systems were specified for more complete evaluation. Good performance was achieved in detecting and diagnosing five faults using only six temperatures (2 input and 4 output) and linear models. The performance improved by about a factor of two when ten measurements (three input and seven output) and higher order models were used. This approach for evaluating and optimizing the performance of the statistical, rule-based FDD technique could be used as a design and evaluation tool when applying this FDD method to other packaged air-conditioning systems. Furthermore, the approach could also be modified to evaluate the performance of other FDD methods.
NASA Astrophysics Data System (ADS)
Zuza, A. V.; Yin, A.; Lin, J. C.
2015-12-01
Parallel evenly-spaced strike-slip faults are prominent in the southern San Andreas fault system, as well as other settings along plate boundaries (e.g., the Alpine fault) and within continental interiors (e.g., the North Anatolian, central Asian, and northern Tibetan faults). In southern California, the parallel San Jacinto, Elsinore, Rose Canyon, and San Clemente faults to the west of the San Andreas are regularly spaced at ~40 km. In the Eastern California Shear Zone, east of the San Andreas, faults are spaced at ~15 km. These characteristic spacings provide unique mechanical constraints on how the faults interact. Despite the common occurrence of parallel strike-slip faults, the fundamental questions of how and why these fault systems form remain unanswered. We address this issue by using the stress shadow concept of Lachenbruch (1961)—developed to explain extensional joints by using the stress-free condition on the crack surface—to present a mechanical analysis of the formation of parallel strike-slip faults that relates fault spacing and brittle-crust thickness to fault strength, crustal strength, and the crustal stress state. We discuss three independent models: (1) a fracture mechanics model, (2) an empirical stress-rise function model embedded in a plastic medium, and (3) an elastic-plate model. The assumptions and predictions of these models are quantitatively tested using scaled analogue sandbox experiments that show that strike-slip fault spacing is linearly related to the brittle-crust thickness. We derive constraints on the mechanical properties of the southern San Andreas strike-slip faults and fault-bounded crust (e.g., local fault strength and crustal/regional stress) given the observed fault spacing and brittle-crust thickness, which is obtained by defining the base of the seismogenic zone with high-resolution earthquake data. Our models allow direct comparison of the parallel faults in the southern San Andreas system with other similar strike-slip fault systems, both on Earth and throughout the solar system (e.g., the Tiger Stripe Fractures on Enceladus).
Model-Based Fault Diagnosis: Performing Root Cause and Impact Analyses in Real Time
NASA Technical Reports Server (NTRS)
Figueroa, Jorge F.; Walker, Mark G.; Kapadia, Ravi; Morris, Jonathan
2012-01-01
Generic, object-oriented fault models, built according to causal-directed graph theory, have been integrated into an overall software architecture dedicated to monitoring and predicting the health of mission- critical systems. Processing over the generic fault models is triggered by event detection logic that is defined according to the specific functional requirements of the system and its components. Once triggered, the fault models provide an automated way for performing both upstream root cause analysis (RCA), and for predicting downstream effects or impact analysis. The methodology has been applied to integrated system health management (ISHM) implementations at NASA SSC's Rocket Engine Test Stands (RETS).
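A toy version of the causal-directed-graph mechanism: root cause analysis walks the graph upstream from a detected event, while impact analysis walks downstream. The component names below are invented for illustration, not from the NASA SSC implementation.

```python
edges = {                        # cause -> directly affected components
    "valve": ["feed line"],
    "feed line": ["combustion chamber"],
    "combustion chamber": ["thrust", "chamber temp sensor"],
}
parents = {}                     # reverse graph for upstream traversal
for src, dsts in edges.items():
    for dst in dsts:
        parents.setdefault(dst, []).append(src)

def walk(node, graph):           # transitive closure in one direction
    found, stack = set(), [node]
    while stack:
        for nxt in graph.get(stack.pop(), []):
            if nxt not in found:
                found.add(nxt)
                stack.append(nxt)
    return found

print("possible root causes:", walk("combustion chamber", parents))  # upstream RCA
print("predicted impacts:   ", walk("feed line", edges))             # downstream effects
```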
On boundary-element models of elastic fault interaction
NASA Astrophysics Data System (ADS)
Becker, T. W.; Schott, B.
2002-12-01
We present the freely available, modular, and UNIX command-line based boundary-element program interact. It is yet another implementation of Crouch and Starfield's (1983) 2-D and Okada's (1992) half-space solutions for constant slip on planar fault segments in an elastic medium. Using unconstrained or non-negative, standard-package matrix routines, the code can solve for slip distributions on faults given stress boundary conditions, or vice versa, both in a local or global reference frame. Based on examples of complex fault geometries from structural geology, we discuss the effects of different stress boundary conditions on the predicted slip distributions of interacting fault systems. Such one-step calculations can be useful to estimate the moment-release efficiency of alternative fault geometries, and so to evaluate the likelihood that either system may be realized in nature. A further application of the program is the simulation of cyclic fault rupture based on simple static-kinetic friction laws. We comment on two issues: first, that of the appropriate rupture algorithm. Cellular models of seismicity often employ an exhaustive rupture scheme: fault cells fail if some critical stress is reached, then cells slip once-only by a given amount, and subsequently the redistributed stress is used to check for triggered activations on other cells. We show that this procedure can lead to artificial complexity in seismicity if time-to-failure is not calculated carefully, because of numerical noise. Second, we address the question of whether foreshocks can be viewed as direct expressions of a simple statistical distribution of frictional strength on individual faults. Repetitive failure models based on a random distribution of frictional coefficients initially show irregular seismicity. By repeatedly selecting weaker patches, the fault then evolves into a quasi-periodic cycle. Each time, the pre-mainshock events build up the cumulative moment release in a non-linear fashion. These temporal seismicity patterns roughly resemble the accelerated moment-release features which are sometimes observed in nature.
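A minimal sketch of the one-step idea behind such boundary-element codes: an influence matrix maps slip on each segment to stress change on every segment, and slip is solved for given stress boundary conditions, optionally with a non-negativity constraint. The matrix and stresses below are made-up numbers, not output of interact or the Crouch and Starfield kernels.

```python
import numpy as np
from scipy.optimize import nnls

K = np.array([[1.0, -0.2],        # self-stiffness and interaction terms
              [-0.2, 1.0]])
stress = np.array([1.5, -0.5])    # resolved driving stress; segment 2 is clamped

slip_unconstrained = np.linalg.solve(K, stress)       # allows back-slip
slip_nonnegative, _ = nnls(K, stress)                 # forbids back-slip
print(slip_unconstrained, slip_nonnegative)
```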
Curry, Magdalena A. E.; Barnes, Jason B.; Colgan, Joseph P.
2016-01-01
Common fault growth models diverge in predicting how faults accumulate displacement and lengthen through time. A paucity of field-based data documenting the lateral component of fault growth hinders our ability to test these models and fully understand how natural fault systems evolve. Here we outline a framework for using apatite (U-Th)/He thermochronology (AHe) to quantify the along-strike growth of faults. To test our framework, we first use a transect in the normal fault-bounded Jackson Mountains in the Nevada Basin and Range Province, then apply the new framework to the adjacent Pine Forest Range. We combine new and existing cross sections with 18 new and 16 existing AHe cooling ages to determine the spatiotemporal variability in footwall exhumation and evaluate models for fault growth. Three age-elevation transects in the Pine Forest Range show that rapid exhumation began along the range-front fault between approximately 15 and 11 Ma at rates of 0.2–0.4 km/Myr, ultimately exhuming approximately 1.5–5 km. The ages of rapid exhumation identified at each transect lie within data uncertainty, indicating concomitant onset of faulting along strike. We show that even in the case of growth by fault-segment linkage, the fault would achieve its modern length within 3–4 Myr of onset. Comparison with the Jackson Mountains highlights the inadequacies of spatially limited sampling. A constant fault-length growth model is the best explanation for our thermochronology results. We advocate that low-temperature thermochronology can be further utilized to better understand and quantify fault growth with broader implications for seismic hazard assessments and the coevolution of faulting and topography.
NASA Astrophysics Data System (ADS)
Bistacchi, A.; Mittempergher, S.; Di Toro, G.; Smith, S. A. F.; Garofalo, P. S.
2017-12-01
The 600 m-thick, strike-slip Gole Larghe Fault Zone (GLFZ) experienced several hundred seismic slip events at c. 8 km depth, well documented by numerous pseudotachylytes; it was then exhumed and is now exposed in beautiful and very continuous outcrops. The fault zone was also characterized by hydrous fluid flow during the seismic cycle, demonstrated by alteration halos and precipitation of hydrothermal minerals in veins and cataclasites. We have characterized the GLFZ with > 2 km of scanlines and semi-automatic mapping of faults and fractures on several photogrammetric 3D Digital Outcrop Models (3D DOMs). This allowed us to obtain 3D Discrete Fracture Network (DFN) models, based on robust probability density functions for the parameters of fault and fracture sets, and to simulate the fault zone's hydraulic properties. In addition, the correlation between evidence of fluid flow and the fault/fracture network parameters has been studied with a geostatistical approach, allowing us to generate more realistic time-varying permeability models of the fault zone. Based on this dataset, we have developed a FEM hydraulic model of the GLFZ spanning some tens of years, covering one seismic event and a postseismic period. The highest permeability is attained in the syn- to early post-seismic period, when fractures are (re)opened by off-fault deformation; permeability then decreases in the postseismic period due to fracture sealing. The flow model yields a flow pattern consistent with the observed alteration/mineralization pattern and a marked channelling of fluid flow in the inner part of the fault zone, due to permeability anisotropy related to the spatial arrangement of different fracture sets. Amongst possible seismological applications of our study, we will discuss the possibility of evaluating the coseismic fracture intensity due to off-fault damage, and the heterogeneity and evolution of mechanical parameters due to fluid-rock interaction.
Upper-crustal structure of the inner Continental Borderland near Long Beach, California
Baher, S.; Fuis, G.; Sliter, R.; Normark, W.R.
2005-01-01
A new P-wave velocity/structural model for the inner Continental Borderland (ICB) region was developed for the area near Long Beach, California. It combines controlled-source seismic reflection and refraction data collected during the 1994 Los Angeles Region Seismic Experiment (LARSE), multichannel seismic reflection data collected by the U.S. Geological Survey (1998-2000), and nearshore borehole stratigraphy. Based on lateral velocity contrasts and stratigraphic variation determined from borehole data, we are able to locate major faults such as the Cabrillo, Palos Verdes, THUMS-Huntington Beach, and Newport Inglewood fault zones, along with minor faults such as the slope fault, Avalon knoll, and several other yet unnamed faults. Catalog seismicity (1975-2002) plotted on our preferred velocity/structural model shows recent seismicity is located on 16 out of our 24 faults, providing evidence for continuing concern with respect to the existing seismic-hazard estimates. Forward modeling of P-wave arrival times on the LARSE line 1 resulted in a four-layer model that better resolves the stratigraphy and geologic structures of the ICB and also provides tighter constraints on the upper-crustal velocity structure than previous modeling of the LARSE data. There is a correlation between the structural horizons identified in the reflection data with the velocity interfaces determined from forward modeling of refraction data. The strongest correlation is between the base of velocity layer 1 of the refraction model and the base of the planar sediment beneath the shelf and slope determined by the reflection model. Layers 2 and 3 of the velocity model loosely correlate with the diffractive crust layer, locally interpreted as Catalina Schist.
A signal-based fault detection and classification method for heavy haul wagons
NASA Astrophysics Data System (ADS)
Li, Chunsheng; Luo, Shihui; Cole, Colin; Spiryagin, Maksym; Sun, Yanquan
2017-12-01
This paper proposes a signal-based fault detection and isolation (FDI) system for heavy haul wagons considering the special requirements of low cost and robustness. The sensor network of the proposed system consists of just two accelerometers mounted on the front left and rear right of the carbody. Seven fault indicators (FIs) are proposed based on the cross-correlation analyses of the sensor-collected acceleration signals. Bolster spring fault conditions are focused on in this paper, including two different levels (small faults and moderate faults) and two locations (faults in the left and right bolster springs of the first bogie). A fully detailed dynamic model of a typical 40t axle load heavy haul wagon is developed to evaluate the deterioration of dynamic behaviour under proposed fault conditions and demonstrate the detectability of the proposed FDI method. Even though the fault conditions considered in this paper did not deteriorate the wagon dynamic behaviour dramatically, the proposed FIs show great sensitivity to the bolster spring faults. The most effective and efficient FIs are chosen for fault detection and classification. Analysis results indicate that it is possible to detect changes in bolster stiffness of ±25% and identify the fault location.
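One way to picture a cross-correlation-based fault indicator: a healthy wagon's two carbody accelerometers see nearly the same track excitation, so their normalized correlation is high, while a degraded bolster spring lets an extra body mode appear at one corner and the correlation drops. The signals below are synthetic placeholders, not the paper's seven FIs.

```python
import numpy as np

rng = np.random.default_rng(5)
t = np.linspace(0, 10, 2000)
track = np.sin(2 * np.pi * 1.5 * t)                    # common track excitation

def carbody_pair(fault=0.0):
    # a degraded bolster spring lets an extra body mode appear at one corner
    mode = np.sin(2 * np.pi * 0.8 * t + 1.0)
    front_left = track + rng.normal(scale=0.1, size=t.size)
    rear_right = track + fault * mode + rng.normal(scale=0.1, size=t.size)
    return front_left, rear_right

def fi(a, b):                                          # normalized zero-lag correlation
    a, b = a - a.mean(), b - b.mean()
    return (a @ b) / np.sqrt((a @ a) * (b @ b))

print("healthy FI: %.3f" % fi(*carbody_pair(0.0)))     # close to 1
print("faulty  FI: %.3f" % fi(*carbody_pair(1.0)))     # noticeably lower
```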
Kinematics of the New Madrid seismic zone, central United States, based on stepover models
Pratt, Thomas L.
2012-01-01
Seismicity in the New Madrid seismic zone (NMSZ) of the central United States is generally attributed to a stepover structure in which the Reelfoot thrust fault transfers slip between parallel strike-slip faults. However, some arms of the seismic zone do not fit this simple model. Comparison of the NMSZ with an analog sandbox model of a restraining stepover structure explains all of the arms of seismicity as only part of the extensive pattern of faults that characterizes stepover structures. Computer models show that the stepover structure may form because differences in the trends of lower crustal shearing and inherited upper crustal faults make a step between en echelon fault segments the easiest path for slip in the upper crust. The models predict that the modern seismicity occurs only on a subset of the faults in the New Madrid stepover structure, that only the southern part of the stepover structure ruptured in the A.D. 1811–1812 earthquakes, and that the stepover formed because the trends of older faults are not the same as the current direction of shearing.
NASA Astrophysics Data System (ADS)
Marchandon, Mathilde; Vergnolle, Mathilde; Sudhaus, Henriette; Cavalié, Olivier
2018-02-01
In this study, we reestimate the source model of the 1997 Mw 7.2 Zirkuh earthquake (northeastern Iran) by jointly optimizing intermediate-field Interferometric Synthetic Aperture Radar (InSAR) data and near-field optical correlation data using a two-step fault modeling procedure. First, we estimate the geometry of the multisegmented Abiz fault using a genetic algorithm. Then, we discretize the fault segments into subfaults and invert the data to image the slip distribution on the fault. Our joint-data model, although similar to the InSAR-based model to first order, highlights differences in the fault dip and slip distribution. Our preferred model is ~80° west dipping in the northern part of the fault, ~75° east dipping in the southern part, and shows three disconnected high-slip zones separated by low-slip zones. The low-slip zones are located where the Abiz fault shows geometric complexities and where the aftershocks are located. We interpret this rough slip distribution as three asperities separated by geometrical barriers that impede rupture propagation. Finally, no shallow slip deficit is found for the overall rupture except on the central segment, where it could be due to off-fault deformation in Quaternary deposits.
Probabilistic evaluation of on-line checks in fault-tolerant multiprocessor systems
NASA Technical Reports Server (NTRS)
Nair, V. S. S.; Hoskote, Yatin V.; Abraham, Jacob A.
1992-01-01
The analysis of fault-tolerant multiprocessor systems that use concurrent error detection (CED) schemes is much more difficult than the analysis of conventional fault-tolerant architectures. Various analytical techniques have been proposed to evaluate CED schemes deterministically. However, these approaches are based on worst-case assumptions related to the failure of system components. Often, the evaluation results do not reflect the actual fault tolerance capabilities of the system. A probabilistic approach to evaluate the fault detecting and locating capabilities of on-line checks in a system is developed. The various probabilities associated with the checking schemes are identified and used in the framework of the matrix-based model. Based on these probabilistic matrices, estimates for the fault tolerance capabilities of various systems are derived analytically.
NASA Astrophysics Data System (ADS)
Wang, Lei; Bai, Bing; Li, Xiaochun; Liu, Mingze; Wu, Haiqing; Hu, Shaobin
2016-07-01
Induced seismicity and fault reactivation associated with fluid injection and depletion have been reported in hydrocarbon, geothermal, and waste fluid injection fields worldwide. Here, we establish an analytical model to assess fault reactivation surrounding a reservoir during fluid injection and extraction that considers the stress concentrations at the fault tips and the effects of fault length. In this model, induced stress analysis in a full space under the plane strain condition is implemented based on Eshelby's theory of inclusions in terms of a homogeneous, isotropic, and poroelastic medium. The stress intensity factor concept from linear elastic fracture mechanics is adopted as an instability criterion for pre-existing faults in the surrounding rocks. To characterize fault reactivation caused by fluid injection and extraction, we define a new index, the "fault reactivation factor" η, which can be interpreted as an index of fault stability per unit fluid pressure change within a reservoir resulting from injection or extraction. The critical fluid pressure change within a reservoir is also determined by the superposition principle using the in situ stress surrounding a fault. Our parameter sensitivity analyses show that the fault reactivation tendency is strongly sensitive to fault location, fault length, fault dip angle, and Poisson's ratio of the surrounding rock. Our case study demonstrates that the proposed model focuses on the mechanical behavior of the whole fault, unlike conventional methodologies. The proposed method can be applied to engineering cases related to injection and depletion within a reservoir owing to its efficient computational implementation.
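Schematically, and not in the paper's exact notation, such a criterion treats the pre-existing fault as a crack whose tips concentrate the injection-induced stress change. For a 2-D fault of half-length a loaded by a uniform resolved shear stress change, a mode-II sketch of the instability condition reads:

```latex
% Schematic mode-II stress intensity at the tips of a fault of half-length a
% under a uniform induced shear stress change \Delta\tau; reactivation is
% flagged when the intensity reaches the fracture toughness K_{IIc}.
K_{\mathrm{II}} = \Delta\tau\,\sqrt{\pi a}, \qquad
\text{reactivation when } K_{\mathrm{II}} \ge K_{\mathrm{IIc}}
```

This form makes explicit why the reactivation tendency grows with fault length as well as with the pressure-induced stress change, consistent with the sensitivity analysis described above.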
Self-adaptive Fault-Tolerance of HLA-Based Simulations in the Grid Environment
NASA Astrophysics Data System (ADS)
Huang, Jijie; Chai, Xudong; Zhang, Lin; Li, Bo Hu
The objects of an HLA-based simulation can access model services to update their attributes. However, a grid server may become overloaded and refuse to let the model service handle object accesses. Because these objects accessed this model service during the last simulation loop and their intermediate states are stored on this server, such a refusal may terminate the simulation. A fault-tolerance mechanism must therefore be introduced into simulations. However, traditional fault-tolerance methods cannot meet this need, because the transmission latency between a federate and the RTI in a grid environment varies from several hundred milliseconds to several seconds. By adding model service URLs to the OMT and expanding the HLA services and model services with additional interfaces, this paper proposes a self-adaptive fault-tolerance mechanism for simulations based on the characteristics of federates accessing model services. Benchmark experiments indicate that the expanded HLA/RTI can make simulations run self-adaptively in the grid environment.
NASA Astrophysics Data System (ADS)
Bense, V. F.; Gleeson, T.; Loveless, S. E.; Bour, O.; Scibek, J.
2013-12-01
Deformation along faults in the shallow crust (< 1 km) introduces permeability heterogeneity and anisotropy, which has an important impact on processes such as regional groundwater flow, hydrocarbon migration, and hydrothermal fluid circulation. Fault zones have the capacity to be hydraulic conduits connecting shallow and deep geological environments, but simultaneously the fault cores of many faults often form effective barriers to flow. The direct evaluation of the impact of faults to fluid flow patterns remains a challenge and requires a multidisciplinary research effort of structural geologists and hydrogeologists. However, we find that these disciplines often use different methods with little interaction between them. In this review, we document the current multi-disciplinary understanding of fault zone hydrogeology. We discuss surface- and subsurface observations from diverse rock types from unlithified and lithified clastic sediments through to carbonate, crystalline, and volcanic rocks. For each rock type, we evaluate geological deformation mechanisms, hydrogeologic observations and conceptual models of fault zone hydrogeology. Outcrop observations indicate that fault zones commonly have a permeability structure suggesting they should act as complex conduit-barrier systems in which along-fault flow is encouraged and across-fault flow is impeded. Hydrogeological observations of fault zones reported in the literature show a broad qualitative agreement with outcrop-based conceptual models of fault zone hydrogeology. Nevertheless, the specific impact of a particular fault permeability structure on fault zone hydrogeology can only be assessed when the hydrogeological context of the fault zone is considered and not from outcrop observations alone. To gain a more integrated, comprehensive understanding of fault zone hydrogeology, we foresee numerous synergistic opportunities and challenges for the discipline of structural geology and hydrogeology to co-evolve and address remaining challenges by co-locating study areas, sharing approaches and fusing data, developing conceptual models from hydrogeologic data, numerical modeling, and training interdisciplinary scientists.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Taniguchi, Chisato; Ichimura, Aiko; Ohtani, Noboru, E-mail: ohtani.noboru@kwansei.ac.jp
The formation of basal plane stacking faults in heavily nitrogen-doped 4H-SiC crystals was theoretically investigated. A novel theoretical model based on the so-called quantum well action mechanism was proposed; the model considers several factors, which were overlooked in a previously proposed model, and provides a detailed explanation of the annealing-induced formation of double layer Shockley-type stacking faults in heavily nitrogen-doped 4H-SiC crystals. We further revised the model to consider the carrier distribution in the depletion regions adjacent to the stacking fault and successfully explained the shrinkage of stacking faults during annealing at even higher temperatures. The model also succeeded in accounting for the aluminum co-doping effect in heavily nitrogen-doped 4H-SiC crystals, in that the stacking fault formation is suppressed when aluminum acceptors are co-doped in the crystals.
NASA Astrophysics Data System (ADS)
Pinzuti, P.; Mignan, A.; King, G. C.
2009-12-01
Mechanical stretching models have been previously proposed to explain the process of continental break-up through the example of the Asal Rift, Djibouti, one of the few places where the early stages of seafloor spreading can be observed. In these models, deformation is distributed starting at the base of a shallow seismogenic zone, in which sub-vertical normal faults are responsible for subsidence whereas cracks accommodate extension. Alternative models suggest that extension results from localized magma injection, with normal faults accommodating extension and subsidence above the maximum reach of the magma column. In these magmatic intrusion models, normal faults have dips of 45-55° and root into dikes. Using mechanical and kinematic concepts and vertical profiles of normal fault scarps from a levelling campaign in the Asal Rift, where normal faults appear sub-vertical at surface level, we discuss the creation and evolution of normal faults in massive fractured rocks (basalt). We suggest that the observed fault scarps correspond to sub-vertical en echelon structures and that, at greater depth, these scarps combine and give birth to dipping normal faults. Finally, the geometry of faulting between the Fieale volcano and Lake Asal in the Asal Rift can be simply related to the depth of diking, which in turn can be related to magma supply. This new view supports the magmatic intrusion model of the early stages of continental break-up.
Lockner, David A.; Tembe, Cheryl; Wong, Teng-fong
2009-01-01
Analysis of field data has led different investigators to conclude that the San Andreas Fault (SAF) has either anomalously low frictional sliding strength (μ < 0.2) or strength consistent with standard laboratory tests (μ > 0.6). Arguments for the apparent weakness of the SAF generally hinge on conceptual models involving intrinsically weak gouge or elevated pore pressure within the fault zone. Some models assert that weak gouge and/or high pore pressure exist under static conditions while others consider strength loss or fluid pressure increase due to rapid coseismic fault slip. The present paper is composed of three parts. First, we develop generalized equations, based on and consistent with the Rice (1992) fault zone model to relate stress orientation and magnitude to depth-dependent coefficient of friction and pore pressure. Second, we present temperature- and pressure-dependent friction measurements from wet illite-rich fault gouge extracted from San Andreas Fault Observatory at Depth (SAFOD) phase 1 core samples and from weak minerals associated with the San Andreas Fault. Third, we reevaluate the state of stress on the San Andreas Fault in light of new constraints imposed by SAFOD borehole data. Pure talc (μ ≈ 0.1) had the lowest strength considered and was sufficiently weak to satisfy weak fault heat flow and stress orientation constraints with hydrostatic pore pressure. Other fault gouges showed a systematic increase in strength with increasing temperature and pressure. In this case, heat flow and stress orientation constraints would require elevated pore pressure and, in some cases, fault zone pore pressure in excess of vertical stress.
Vibration signal models for fault diagnosis of planet bearings
NASA Astrophysics Data System (ADS)
Feng, Zhipeng; Ma, Haoqun; Zuo, Ming J.
2016-05-01
Rolling element bearings are key components of planetary gearboxes. Among them, the motion of planet bearings is very complex, encompassing spinning and revolution. Therefore, planet bearing vibrations are highly intricate and their fault characteristics are completely different from those of fixed-axis case, making planet bearing fault diagnosis a difficult topic. In order to address this issue, we derive the explicit equations for calculating the characteristic frequency of outer race, rolling element and inner race fault, considering the complex motion of planet bearings. We also develop the planet bearing vibration signal model for each fault case, considering the modulation effects of load zone passing, time-varying angle between the gear pair mesh and fault induced impact force, as well as the time-varying vibration transfer path. Based on the developed signal models, we derive the explicit equations of Fourier spectrum in each fault case, and summarize the vibration spectral characteristics respectively. The theoretical derivations are illustrated by numerical simulation, and further validated experimentally and all the three fault cases (i.e. outer race, rolling element and inner race localized fault) are diagnosed.
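For orientation, the classical fixed-axis bearing defect frequencies are computed below; the paper's contribution is deriving the planet-bearing analogues, which additionally account for the bearing's revolution around the sun gear and are not reproduced here. The geometry values in the example call are illustrative.

```python
import numpy as np

def bearing_fault_freqs(fr, n_rollers, d, D, phi=0.0):
    """fr: shaft speed (Hz); d: roller diameter; D: pitch diameter; phi: contact angle (rad)."""
    r = (d / D) * np.cos(phi)
    return {
        "outer race (BPFO)": 0.5 * n_rollers * fr * (1 - r),
        "inner race (BPFI)": 0.5 * n_rollers * fr * (1 + r),
        "rolling element (BSF)": 0.5 * (D / d) * fr * (1 - r ** 2),
    }

for name, f in bearing_fault_freqs(fr=25.0, n_rollers=13, d=0.008, D=0.04).items():
    print("%s: %.1f Hz" % (name, f))
```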
Real-time inversions for finite fault slip models and rupture geometry based on high-rate GPS data
Minson, Sarah E.; Murray, Jessica R.; Langbein, John O.; Gomberg, Joan S.
2015-01-01
We present an inversion strategy capable of using real-time high-rate GPS data to simultaneously solve for a distributed slip model and fault geometry in real time as a rupture unfolds. We employ Bayesian inference to find the optimal fault geometry and the distribution of possible slip models for that geometry using a simple analytical solution. By adopting an analytical Bayesian approach, we can solve this complex inversion problem (including calculating the uncertainties on our results) in real time. Furthermore, since the joint inversion for distributed slip and fault geometry can be computed in real time, the time required to obtain a source model of the earthquake does not depend on the computational cost. Instead, the time required is controlled by the duration of the rupture and the time required for information to propagate from the source to the receivers. We apply our modeling approach, called Bayesian Evidence-based Fault Orientation and Real-time Earthquake Slip, to the 2011 Tohoku-oki earthquake, 2003 Tokachi-oki earthquake, and a simulated Hayward fault earthquake. In all three cases, the inversion recovers the magnitude, spatial distribution of slip, and fault geometry in real time. Since our inversion relies on static offsets estimated from real-time high-rate GPS data, we also present performance tests of various approaches to estimating quasi-static offsets in real time. We find that the raw high-rate time series are the best data to use for determining the moment magnitude of the event, but slightly smoothing the raw time series helps stabilize the inversion for fault geometry.
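The analytical core is the linear-Gaussian posterior: for a fixed fault geometry with Green's-function matrix G, Gaussian data noise, and a Gaussian prior on slip, the posterior mean and covariance are closed form. The sketch below illustrates this; G, the data, and the covariances are random placeholders, not real earthquake inputs.

```python
import numpy as np

rng = np.random.default_rng(6)
G = rng.normal(size=(30, 6))                 # station offsets per unit slip, 6 patches
slip_true = np.array([0.0, 0.5, 1.2, 0.9, 0.2, 0.0])
d = G @ slip_true + rng.normal(scale=0.01, size=30)   # noisy static offsets

Cd_inv = np.eye(30) / 0.01**2                # data precision (1 cm noise, say)
Cm_inv = np.eye(6) / 1.0**2                  # zero-mean Gaussian slip prior

post_cov = np.linalg.inv(G.T @ Cd_inv @ G + Cm_inv)
post_mean = post_cov @ (G.T @ Cd_inv @ d)

print("posterior mean slip:", np.round(post_mean, 2))
print("1-sigma uncertainty:", np.round(np.sqrt(np.diag(post_cov)), 3))
```

Because this solve reduces to a small matrix inversion, it can be repeated each time new offsets arrive, and candidate fault geometries can be ranked by their Bayesian evidence.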
NASA Astrophysics Data System (ADS)
Pinzuti, Paul; Mignan, Arnaud; King, Geoffrey C. P.
2010-10-01
Tectonic-stretching models have been previously proposed to explain the process of continental break-up through the example of the Asal Rift, Djibouti, one of the few places where the early stages of seafloor spreading can be observed. In these models, deformation is distributed starting at the base of a shallow seismogenic zone, in which sub-vertical normal faults are responsible for subsidence whereas cracks accommodate extension. Alternative models suggest that extension results from localised magma intrusion, with normal faults accommodating extension and subsidence only above the maximum reach of the magma column. In these magmatic rifting models, or so-called magmatic intrusion models, normal faults have dips of 45-55° and root into dikes. Vertical profiles of normal fault scarps from a levelling campaign in the Asal Rift, where normal faults appear sub-vertical at the surface, have been analysed to discuss the creation and evolution of normal faults in massive fractured rocks (basalt lava flows), using mechanical and kinematic concepts. We show that the studied normal fault planes actually have average dips ranging between 45° and 65° and are characterised by an irregular stepped form. We suggest that these normal fault scarps correspond to sub-vertical en echelon structures, and that, at greater depth, these scarps combine and give birth to dipping normal faults. The results of our analysis are compatible with the magmatic intrusion models rather than the tectonic-stretching models. The geometry of faulting between the Fieale volcano and Lake Asal in the Asal Rift can be simply related to the depth of diking, which in turn can be related to magma supply. This new view supports the magmatic intrusion model of the early stages of continental break-up.
NASA Technical Reports Server (NTRS)
Harper, Richard
1989-01-01
In a fault-tolerant parallel computer, a functional programming model can facilitate distributed checkpointing, error recovery, load balancing, and graceful degradation. Such a model has been implemented on the Draper Fault-Tolerant Parallel Processor (FTPP). When used in conjunction with the FTPP's fault detection and masking capabilities, this implementation results in a graceful degradation of system performance after faults. Three graceful degradation algorithms have been implemented and are presented. A user interface has been implemented which requires minimal cognitive overhead by the application programmer, masking such complexities as the system's redundancy, distributed nature, variable complement of processing resources, load balancing, fault occurrence and recovery. This user interface is described and its use demonstrated. The applicability of the functional programming style to the Activation Framework, a paradigm for intelligent systems, is then briefly described.
Aspects of modelling the tectonics of large volcanoes on the terrestrial planets
NASA Technical Reports Server (NTRS)
Mcgovern, Patrick J.; Solomon, Sean C.
1993-01-01
Analytic solutions for the response of planetary lithospheres to volcanic loads have been used to model faulting and infer elastic plate thicknesses. Predictions of the distribution of faulting around volcanic loads, based on the application of Anderson's criteria for faulting to the results of the models, do not agree well with observations. Such models also do not give the stress state and stress history within the edifice. The effects of episodic load growth can, however, be treated; when these effects are included, the models give much better agreement with observations.
Reliability modeling of fault-tolerant computer based systems
NASA Technical Reports Server (NTRS)
Bavuso, Salvatore J.
1987-01-01
Digital fault-tolerant computer-based systems have become commonplace in military and commercial avionics. These systems hold the promise of increased availability, reliability, and maintainability over conventional analog-based systems through the application of replicated digital computers arranged in fault-tolerant configurations. Three tightly coupled factors of paramount importance, ultimately determining the viability of these systems, are reliability, safety, and profitability. Reliability, the major driver, affects virtually every aspect of design, packaging, and field operations, and eventually produces profit for commercial applications or increased national security. However, the utilization of digital computer systems makes the task of producing credible reliability assessments a formidable one for the reliability engineer. The root of the problem lies in the digital computer's unique adaptability to changing requirements, its computational power, and its ability to test itself efficiently. Addressed here are the nuances of modeling the reliability of systems with large state sizes, in the Markov sense, which result from replicated redundant hardware, and the modeling of factors which can reduce reliability without a concomitant depletion of hardware. Advanced fault-handling models are described, and methods of acquiring and measuring parameters for these models are delineated.
TWT transmitter fault prediction based on ANFIS
NASA Astrophysics Data System (ADS)
Li, Mengyan; Li, Junshan; Li, Shuangshuang; Wang, Wenqing; Li, Fen
2017-11-01
Fault prediction is an important component of health management and plays an important role in guaranteeing the reliability of complex electronic equipment. The transmitter is a unit with a high failure rate, and degradation of TWT cathode performance is a common transmitter fault. In this paper, a model based on a set of key parameters of the TWT is proposed. By choosing proper parameters and applying adaptive neural network training, this method, combined with the analytic hierarchy process (AHP), provides a useful reference for the overall health assessment of TWT transmitters.
Development of Hydrologic Characterization Technology of Fault Zones -- Phase I, 2nd Report
DOE Office of Scientific and Technical Information (OSTI.GOV)
Karasaki, Kenzi; Onishi, Tiemi; Black, Bill
2009-03-31
This is the year-end report of the 2nd year of the NUMO-LBNL collaborative project: Development of Hydrologic Characterization Technology of Fault Zones under the NUMO-DOE/LBNL collaboration agreement, the task description of which can be found in Appendix 3. A literature survey of published information on the relationship between geologic and hydrologic characteristics of faults was conducted. The survey concluded that it may be possible to classify faults by indicators based on various geometric and geologic attributes that may indirectly relate to the hydrologic properties of faults. Analysis of existing information on the Wildcat Fault and its surrounding geology was performed. The Wildcat Fault is thought to be a strike-slip fault with a thrust component that runs along the eastern boundary of the Lawrence Berkeley National Laboratory. It is believed to be part of the Hayward Fault system but is considered inactive. Three trenches were excavated at carefully selected locations, mainly based on information from past investigative work inside the LBNL property. At least one fault was encountered in all three trenches. Detailed trench mapping was conducted by CRIEPI (Central Research Institute for Electric Power Industries) and LBNL scientists. Some intriguing and puzzling discoveries were made that may contradict previously published work. Predictions are made regarding the hydrologic properties of the Wildcat Fault based on the analysis of fault structure. Preliminary conceptual models of the Wildcat Fault were proposed. The Wildcat Fault appears to have multiple splays, and some low-angled faults may be part of the flower structure. In parallel, surface geophysical investigations were conducted using electrical resistivity survey and seismic reflection profiling along three lines on the north and south of the LBNL site. Because of the steep terrain, it was difficult to find optimum locations for survey lines, as it is desirable for them to be as straight as possible. One interpretation suggests that the Wildcat Fault is westerly dipping, which could imply that it merges with the Hayward Fault at depth. However, due to the complex geology of the Berkeley Hills, multiple interpretations of the geophysical surveys are possible. An effort to construct a 3D GIS model is under way. The model will be used not so much to visualize the existing data, because only surface data are available thus far, but to investigate possible abutment relations of the buried formations offset by the fault. A 3D model would be useful for 'what if' scenario testing to aid the selection of borehole drilling locations and configurations. Based on the information available thus far, a preliminary plan for borehole drilling is outlined. The basic strategy is to first drill boreholes on both sides of the fault without penetrating it. Borehole tests will be conducted in these boreholes to estimate the properties of the fault. Possibly a slanted borehole will be drilled later to intersect the fault and confirm the findings from the boreholes that do not intersect it. Finally, the lessons learned from conducting the trenching and geophysical surveys are listed. These lessons should be invaluable for NUMO when it conducts preliminary investigations at yet-to-be-selected candidate sites in Japan.
NASA Astrophysics Data System (ADS)
Arriola, David; Thielecke, Frank
2017-09-01
Electromechanical actuators have become a key technology for the onset of power-by-wire flight control systems in the next generation of commercial aircraft. The design of robust control and monitoring functions for these devices, capable of mitigating the effects of safety-critical faults, is essential in order to achieve the required level of fault tolerance. A primary flight control system comprising two electromechanical actuators nominally operating in active-active mode is considered. A set of five signal-based monitoring functions are designed using a detailed model of the system under consideration, which includes non-linear parasitic effects, measurement and data acquisition effects, and actuator faults. Robust detection thresholds are determined based on the analysis of parametric and input uncertainties. The designed monitoring functions are verified by simulation and experimentally, through the injection of faults in the validated model and in a test rig representative of the actuation system, respectively. They guarantee robust and efficient fault detection and isolation with a low risk of false alarms, additionally enabling the correct reconfiguration of the system for enhanced operational availability. In 98% of the performed experiments and simulations, the correct faults were detected and confirmed within the time objectives set.
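A common pattern behind such signal-based monitoring functions is a residual check against a robust threshold combined with a persistence (confirmation) requirement, trading a small detection delay for a low false-alarm rate. A generic sketch, not the paper's implementation; names and values are illustrative:

```python
def monitor(residuals, threshold, confirm_samples):
    """Flag a fault when |residual| exceeds a robust threshold for
    `confirm_samples` consecutive samples; the persistence check
    suppresses false alarms from noise and transients."""
    count = 0
    for k, r in enumerate(residuals):
        count = count + 1 if abs(r) > threshold else 0
        if count >= confirm_samples:
            return k  # sample index at which the fault is confirmed
    return None  # no fault confirmed
```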
Three-dimensional curved grid finite-difference modelling for non-planar rupture dynamics
NASA Astrophysics Data System (ADS)
Zhang, Zhenguo; Zhang, Wei; Chen, Xiaofei
2014-11-01
In this study, we present a new method for simulating the 3-D dynamic rupture process occurring on a non-planar fault. The method is based on the curved-grid finite-difference method (CG-FDM) proposed by Zhang & Chen and Zhang et al. to simulate the propagation of seismic waves in media with arbitrary irregular surface topography. While keeping the advantages of conventional FDM, namely computational efficiency and easy implementation, the CG-FDM is also flexible in modelling complex fault geometries using general curvilinear grids, and thus is able to model the rupture dynamics of a fault with complex geometry, such as an obliquely dipping fault, a non-planar fault, a fault with a step-over, or fault branching, even in the presence of irregular topography. The accuracy and robustness of this new method have been validated by comparison with the previous results of Day et al. and with benchmarks for rupture dynamics simulations. Finally, two simulations of rupture dynamics with complex fault geometry, a non-planar fault and a fault rupturing a free surface with topography, are presented. An interesting phenomenon was observed: topography can weaken the tendency for supershear transition to occur when rupture breaks out at a free surface. Undoubtedly, this new method provides an effective, or at least an alternative, tool to simulate the rupture dynamics of a complex non-planar fault, and can be applied to model the rupture dynamics of a real earthquake with complex geometry.
NASA Astrophysics Data System (ADS)
Chen, Jian; Randall, Robert Bond; Peeters, Bart
2016-06-01
Artificial Neural Networks (ANNs) have the potential to solve the problem of automated diagnostics of piston slap faults, but the critical issue for the successful application of ANN is the training of the network by a large amount of data in various engine conditions (different speed/load conditions in normal condition, and with different locations/levels of faults). On the other hand, the latest simulation technology provides a useful alternative in that the effect of clearance changes may readily be explored without recourse to cutting metal, in order to create enough training data for the ANNs. In this paper, based on some existing simplified models of piston slap, an advanced multi-body dynamic simulation software was used to simulate piston slap faults with different speeds/loads and clearance conditions. Meanwhile, the simulation models were validated and updated by a series of experiments. Three-stage network systems are proposed to diagnose piston faults: fault detection, fault localisation and fault severity identification. Multi Layer Perceptron (MLP) networks were used in the detection stage and severity/prognosis stage and a Probabilistic Neural Network (PNN) was used to identify which cylinder has faults. Finally, it was demonstrated that the networks trained purely on simulated data can efficiently detect piston slap faults in real tests and identify the location and severity of the faults as well.
NASA Astrophysics Data System (ADS)
Gülerce, Zeynep; Buğra Soyman, Kadir; Güner, Barış; Kaymakci, Nuretdin
2017-12-01
This contribution provides an updated planar seismic source characterization (SSC) model to be used in the probabilistic seismic hazard assessment (PSHA) for Istanbul. It defines planar rupture systems for the four main segments of the North Anatolian fault zone (NAFZ) that are critical for the PSHA of Istanbul: segments covering the rupture zones of the 1999 Kocaeli and Düzce earthquakes, central Marmara, and Ganos/Saros segments. In each rupture system, the source geometry is defined in terms of fault length, fault width, fault plane attitude, and segmentation points. Activity rates and the magnitude recurrence models for each rupture system are established by considering geological and geodetic constraints and are tested based on the observed seismicity that is associated with the rupture system. Uncertainty in the SSC model parameters (e.g., b value, maximum magnitude, slip rate, weights of the rupture scenarios) is considered, whereas the uncertainty in the fault geometry is not included in the logic tree. To acknowledge the effect of earthquakes that are not associated with the defined rupture systems on the hazard, a background zone is introduced and the seismicity rates in the background zone are calculated using smoothed-seismicity approach. The state-of-the-art SSC model presented here is the first fully documented and ready-to-use fault-based SSC model developed for the PSHA of Istanbul.
Liu, Chunbo; Pan, Feng; Li, Yun
2016-07-29
Glutamate is of great importance in the food and pharmaceutical industries, yet there is still a lack of effective statistical approaches for fault diagnosis in the fermentation process of glutamate. To date, the statistical approach based on the generalized additive model (GAM) and bootstrap has not been used for fault diagnosis in fermentation processes, much less the fermentation process of glutamate with small sample sets. A combined approach of GAM and bootstrap was developed for online fault diagnosis in the fermentation process of glutamate with small sample sets. GAM was first used to model the relationship between glutamate production and different fermentation parameters using online data from four normal fermentation experiments of glutamate. The fitted GAM with fermentation time, dissolved oxygen, oxygen uptake rate and carbon dioxide evolution rate captured 99.6% of the variance of glutamate production during the fermentation process. Bootstrap was then used to quantify the uncertainty of the estimated production of glutamate from the fitted GAM using a 95% confidence interval. The proposed approach was then used for online fault diagnosis in abnormal fermentation processes of glutamate, and a fault was declared when the estimated production of glutamate fell outside the 95% confidence interval. The online fault diagnosis based on the proposed approach identified not only the start of a fault in the fermentation process, but also its end, when the fermentation conditions returned to normal. The proposed approach required only a small sample set from normal fermentation experiments to establish the model, and thereafter only online recorded data on fermentation parameters for fault diagnosis in the fermentation process of glutamate. The proposed approach based on GAM and bootstrap provides a new and effective way for fault diagnosis in the fermentation process of glutamate with small sample sets.
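The detection logic can be sketched as follows: fit a smooth model of production versus fermentation time on normal batches, build a bootstrap 95% band around the fit, and declare a fault whenever an estimate leaves the band. The sketch below substitutes a simple polynomial smoother for the GAM and uses a residual bootstrap; all names and settings are illustrative:

```python
import numpy as np

rng = np.random.default_rng(0)

def fit_smoother(t, y, deg=4):
    # Stand-in for the GAM: a polynomial smoother in fermentation time.
    return np.poly1d(np.polyfit(t, y, deg))

def bootstrap_band(t, y, n_boot=500, deg=4):
    """Residual-bootstrap 95% band around the fitted curve."""
    f = fit_smoother(t, y, deg)
    resid = y - f(t)
    preds = np.empty((n_boot, len(t)))
    for b in range(n_boot):
        y_b = f(t) + rng.choice(resid, size=len(t), replace=True)
        preds[b] = fit_smoother(t, y_b, deg)(t)
    lo, hi = np.percentile(preds, [2.5, 97.5], axis=0)
    return f, lo, hi

def is_fault(y_new, lo_k, hi_k):
    # Fault flag: observation outside the band at its time point.
    return not (lo_k <= y_new <= hi_k)
```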
A fault injection experiment using the AIRLAB Diagnostic Emulation Facility
NASA Technical Reports Server (NTRS)
Baker, Robert; Mangum, Scott; Scheper, Charlotte
1988-01-01
The preparation for, conduct of, and results of a simulation-based fault injection experiment conducted using the AIRLAB Diagnostic Emulation facilities are described. One objective of this experiment was to determine the effectiveness of the diagnostic self-test sequences used to uncover latent faults in a logic network providing the key fault tolerance features for a flight control computer. Another objective was to develop methods, tools, and techniques for conducting the experiment. More than 1600 faults were injected into a logic gate level model of the Data Communicator/Interstage (C/I). For each fault injected, diagnostic self-test sequences consisting of over 300 test vectors were supplied to the C/I model as inputs. For each test vector within a test sequence, the outputs from the C/I model were compared to the outputs of a fault-free C/I. If the outputs differed, the fault was considered detectable for the given test vector. These results were then analyzed to determine the effectiveness of the test sequences. The results established the coverage of the self-test diagnostics, identified areas in the C/I logic where the tests did not locate faults, and suggested opportunities for reducing fault latency.
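The core measurement in such an experiment is diagnostic coverage: the fraction of injected faults whose outputs differ from the fault-free (golden) model for at least one test vector. A minimal sketch; the model interface and data structures are hypothetical:

```python
def coverage(model, fault_list, test_vectors, golden_outputs):
    """Fraction of injected faults detected by the self-test sequence.

    model(fault, vec)  -> outputs of the faulty circuit for one test vector
    golden_outputs[i]  -> fault-free outputs for test_vectors[i]
    """
    detected = 0
    for fault in fault_list:
        # A fault is detectable if any vector exposes a differing output.
        if any(model(fault, v) != golden_outputs[i]
               for i, v in enumerate(test_vectors)):
            detected += 1
    return detected / len(fault_list)
```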
Simulation Based Earthquake Forecasting with RSQSim
NASA Astrophysics Data System (ADS)
Gilchrist, J. J.; Jordan, T. H.; Dieterich, J. H.; Richards-Dinger, K. B.
2016-12-01
We are developing a physics-based forecasting model for earthquake ruptures in California. We employ the 3D boundary element code RSQSim to generate synthetic catalogs with millions of events that span up to a million years. The simulations incorporate rate-state fault constitutive properties in complex, fully interacting fault systems. The Unified California Earthquake Rupture Forecast Version 3 (UCERF3) model and data sets are used for calibration of the catalogs and specification of fault geometry. Fault slip rates match the UCERF3 geologic slip rates and catalogs are tuned such that earthquake recurrence matches the UCERF3 model. Utilizing the Blue Waters Supercomputer, we produce a suite of million-year catalogs to investigate the epistemic uncertainty in the physical parameters used in the simulations. In particular, values of the rate- and state-friction parameters a and b, the initial shear and normal stress, as well as the earthquake slip speed, are varied over several simulations. In addition to testing multiple models with homogeneous values of the physical parameters, the parameters a, b, and the normal stress are varied with depth as well as in heterogeneous patterns across the faults. Cross validation of UCERF3 and RSQSim is performed within the SCEC Collaboratory for Interseismic Simulation and Modeling (CISM) to determine the effect of the uncertainties in physical parameters observed in the field and measured in the lab on the uncertainties in probabilistic forecasting. We are particularly interested in the short-term hazards of multi-event sequences due to complex faulting and multi-fault ruptures.
Jiang, Quansheng; Shen, Yehu; Li, Hua; Xu, Fengyu
2018-01-24
Feature recognition and fault diagnosis play an important role in equipment safety and the stable operation of rotating machinery. In order to cope with the complexity of the vibration signals of rotating machinery, a feature fusion model based on information entropy and a probabilistic neural network is proposed in this paper. The new method first uses information entropy theory to extract three kinds of characteristic entropy from the vibration signals, namely singular spectrum entropy, power spectrum entropy, and approximate entropy. Then the feature fusion model is constructed to classify and diagnose the fault signals. The proposed approach can combine comprehensive information from different aspects and is more sensitive to the fault features. The experimental results on simulated fault signals verified the better performance of our proposed approach. On real two-span rotor data, the fault detection accuracy of the new method is more than 10% higher compared with methods using the three kinds of information entropy separately. The new approach is proved to be an effective fault recognition method for rotating machinery.
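Of the three entropy features, the power spectrum entropy is the most compact to state: treat the normalized power spectrum as a probability distribution and take its Shannon entropy. A minimal sketch (singular spectrum entropy and approximate entropy follow the same feature-extraction pattern with different underlying distributions):

```python
import numpy as np

def power_spectrum_entropy(x):
    """Shannon entropy of the normalized power spectrum of signal x,
    one of the three entropy features fused by the model."""
    psd = np.abs(np.fft.rfft(x)) ** 2
    p = psd / psd.sum()          # treat the spectrum as a distribution
    p = p[p > 0]                 # avoid log(0)
    return -np.sum(p * np.log(p))
```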
NASA Astrophysics Data System (ADS)
Bezzeghoud, M.; Dimitro, D.; Ruegg, J. C.; Lammali, K.
1995-09-01
Since 1980, most of the papers published on the El Asnam earthquake concern the geological and seismological aspects of the fault zone. Only one paper, published by Ruegg et al. (1982), constrains the faulting mechanism with geodetic measurements. The purpose of this paper is to reexamine the faulting mechanism of the 1954 and 1980 events by modelling the associated vertical movements. For this purpose we used all available data, and particularly those of the levelling profiles along the Algiers-Oran railway that has been remeasured after each event. The comparison between 1905 and 1976 levelling data shows observed vertical displacement that could have been induced by the 1954 earthquake. On the basis of the 1954 and 1980 levelling data, we propose a possible model for the 1954 and 1980 fault systems. Our 1954 fault model is parallel to the 1980 main thrust fault, with an offset of 6 km towards the west. The 1980 dislocation model proposed in this study is based on a variable slip dislocation model and explains the observed surface break displacements given by Yielding et al. (1981). The Dewey (1991) and Avouac et al. (1992) models are compared with our dislocation model and discussed in this paper.
Eberhart-Phillips, D.; Michael, A.J.
1998-01-01
Three-dimensional Vp and Vp/Vs velocity models for the Loma Prieta region were developed from the inversion of local travel time data (21,925 P arrivals and 1,116 S arrivals) from earthquakes, refraction shots, and blasts recorded on 1700 stations from the Northern California Seismic Network and numerous portable seismograph deployments. The velocity and density models and microearthquake hypocenters reveal a complex structure that includes a San Andreas fault extending to the base of the seismogenic layer. A body with high Vp extends the length of the rupture and fills the 5 km wide volume between the Loma Prieta mainshock rupture and the San Andreas and Sargent faults. We suggest that this body controls both the pattern of background seismicity on the San Andreas and Sargent faults and the extent of rupture during the mainshock, thus explaining how the background seismicity outlined the along-strike and depth extent of the mainshock rupture on a different fault plane 5 km away. New aftershock focal mechanisms, based on three-dimensional ray tracing through the velocity model, support a heterogeneous postseismic stress field and cannot resolve a uniform fault-normal compression. The subvertical (or steeply dipping) San Andreas fault and the fault surfaces that ruptured in the 1989 Loma Prieta earthquake are both parts of the San Andreas fault zone, and this section of the fault zone does not have a single type of characteristic event.
Software dependability in the Tandem GUARDIAN system
NASA Technical Reports Server (NTRS)
Lee, Inhwan; Iyer, Ravishankar K.
1995-01-01
Based on extensive field failure data for Tandem's GUARDIAN operating system, this paper discusses the evaluation of the dependability of operational software. Software faults considered are major defects that result in processor failures and invoke backup processes to take over. The paper categorizes the underlying causes of software failures and evaluates the effectiveness of the process pair technique in tolerating software faults. A model to describe the impact of software faults on the reliability of an overall system is proposed. The model is used to evaluate the significance of key factors that determine software dependability and to identify areas for improvement. An analysis of the data shows that about 77% of processor failures that are initially considered due to software are confirmed as software problems. The analysis shows that the use of process pairs to provide checkpointing and restart (originally intended for tolerating hardware faults) allows the system to tolerate about 75% of reported software faults that result in processor failures. The loose coupling between processors, which results in the backup execution (the processor state and the sequence of events) being different from the original execution, is a major reason for the measured software fault tolerance. Over two-thirds (72%) of measured software failures are recurrences of previously reported faults. Modeling based on the data shows that, in addition to reducing the number of software faults, software dependability can be enhanced by reducing the recurrence rate.
NASA Astrophysics Data System (ADS)
Yang, H.; Moresi, L. N.
2017-12-01
The San Andreas fault forms a dominant component of the transform boundary between the Pacific and the North American plate. The density and strength of the complex accretionary margin is very heterogeneous. Based on the density structure of the lithosphere in the SW United States, we utilize a 3D finite element thermomechanical, viscoplastic model (Underworld2) to simulate deformation in the San Andreas Fault system. The purpose of the model is to examine the role of the big bend in the existing geometry; in particular, the big bend of the fault is imposed as an initial condition in our model. We first test the strength of the fault by comparing the surface principal stresses from our numerical model with the in situ tectonic stress. The best-fit model requires an extremely weak fault (friction coefficient < 0.1). To first order, there is a significant density difference between the Great Valley and the adjacent Mojave block. The Great Valley block is much colder and denser (by >200 kg/m3) than the surrounding blocks. In contrast, other geophysical surveys indicate that the Mojave block has lost its mafic lower crust. Our model shows strong strain localization at the boundary between the two blocks, which is an analogue for the Garlock fault. High-density lower-crustal material of the Great Valley tends to under-thrust beneath the Transverse Ranges near the big bend. This motion is likely to rotate the fault plane from the initial vertical direction to dip to the southwest. For the straight section north of the big bend, the fault is nearly vertical. The geometry of the fault plane is consistent with field observations.
NASA Technical Reports Server (NTRS)
Rinehart, Aidan W.; Simon, Donald L.
2015-01-01
This paper presents a model-based architecture for performance trend monitoring and gas path fault diagnostics designed for analyzing streaming transient aircraft engine measurement data. The technique analyzes residuals between sensed engine outputs and model predicted outputs for fault detection and isolation purposes. Diagnostic results from the application of the approach to test data acquired from an aircraft turbofan engine are presented. The approach is found to avoid false alarms when presented nominal fault-free data. Additionally, the approach is found to successfully detect and isolate gas path seeded-faults under steady-state operating scenarios although some fault misclassifications are noted during engine transients. Recommendations for follow-on maturation and evaluation of the technique are also presented.
What does fault tolerant Deep Learning need from MPI?
DOE Office of Scientific and Technical Information (OSTI.GOV)
Amatya, Vinay C.; Vishnu, Abhinav; Siegel, Charles M.
Deep Learning (DL) algorithms have become the de facto Machine Learning (ML) algorithm for large scale data analysis. DL algorithms are computationally expensive -- even distributed DL implementations which use MPI require days of training (model learning) time on commonly studied datasets. Long running DL applications become susceptible to faults -- requiring development of a fault tolerant system infrastructure, in addition to fault tolerant DL algorithms. This raises an important question: What is needed from MPI for designing fault tolerant DL implementations? In this paper, we address this problem for permanent faults. We motivate the need for a fault tolerant MPI specification by an in-depth consideration of recent innovations in DL algorithms and their properties, which drive the need for specific fault tolerance features. We present an in-depth discussion on the suitability of different parallelism types (model, data and hybrid); a need (or lack thereof) for check-pointing of any critical data structures; and most importantly, consideration for several fault tolerance proposals (user-level fault mitigation (ULFM), Reinit) in MPI and their applicability to fault tolerant DL implementations. We leverage a distributed memory implementation of Caffe, currently available under the Machine Learning Toolkit for Extreme Scale (MaTEx). We implement our approaches by extending MaTEx-Caffe to use a ULFM-based implementation. Our evaluation using the ImageNet dataset and AlexNet neural network topology demonstrates the effectiveness of the proposed fault tolerant DL implementation using OpenMPI based ULFM.
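Independent of the MPI-level recovery mechanism chosen (ULFM or Reinit), the application-side ingredients are data-parallel gradient reduction plus periodic checkpointing of the model state. A toy mpi4py sketch of that pattern, with dummy gradients standing in for backpropagation; the actual failure-recovery path, and any ULFM-specific calls, are deliberately not shown:

```python
import pickle
import numpy as np
from mpi4py import MPI

comm = MPI.COMM_WORLD
rank = comm.Get_rank()

def allreduce_mean(grad):
    """Data-parallel gradient averaging across ranks."""
    total = np.empty_like(grad)
    comm.Allreduce(grad, total, op=MPI.SUM)
    return total / comm.Get_size()

def checkpoint(step, weights, every=100):
    """Periodic checkpoint so training can restart after a permanent
    fault; the ULFM/Reinit recovery path itself is not sketched here."""
    if step % every == 0 and rank == 0:
        with open(f"ckpt_{step}.pkl", "wb") as f:
            pickle.dump(weights, f)

# Toy loop: each rank owns a shard of the data (gradients are dummies).
w = np.zeros(10)
for step in range(1000):
    local_grad = np.random.randn(10)      # stand-in for backprop
    w -= 0.01 * allreduce_mean(local_grad)
    checkpoint(step, w)
```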
NASA Astrophysics Data System (ADS)
Cui, Lingli; Gong, Xiangyang; Zhang, Jianyu; Wang, Huaqing
2016-12-01
The quantitative diagnosis of rolling bearing fault severity is particularly crucial for making proper maintenance decisions. Aiming at the fault features of rolling bearings, a novel double-dictionary matching pursuit (DDMP) for fault extent evaluation of rolling bearings based on the Lempel-Ziv complexity (LZC) index is proposed in this paper. In order to match the features of rolling bearing faults, an impulse time-frequency dictionary and a modulation dictionary are constructed to form the double dictionary, using the method of parameterized function models. A novel matching pursuit method is then proposed based on this double dictionary. For rolling bearing vibration signals with different fault sizes, the signals are decomposed and reconstructed by the DDMP. After noise reduction and signal reconstruction, the LZC index is introduced to realize the fault extent evaluation. Applications of this method to experimental fault signals of bearing outer races and inner races with different degrees of damage show that the proposed method can effectively realize fault extent evaluation.
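The LZC index used for fault extent evaluation is commonly computed by binarizing the (denoised) signal about its median and counting new patterns with the Kaspar-Schuster LZ76 algorithm, normalized so that the index grows with signal complexity. A minimal sketch, assuming the median-binarization convention (the paper's exact normalization may differ):

```python
import numpy as np

def lzc_index(x):
    """Normalized Lempel-Ziv complexity (Kaspar-Schuster LZ76 count)
    of signal x, binarized about its median."""
    s = (np.asarray(x) > np.median(x)).astype(np.uint8)
    n = len(s)
    c, l, i, k, k_max = 1, 1, 0, 1, 1
    while True:
        if s[i + k - 1] == s[l + k - 1]:
            k += 1
            if l + k > n:
                c += 1
                break
        else:
            k_max = max(k, k_max)
            i += 1
            if i == l:          # no longer match found in the prefix
                c += 1          # count a new component
                l += k_max
                if l + 1 > n:
                    break
                i, k, k_max = 0, 1, 1
            else:
                k = 1
    return c * np.log2(n) / n   # larger index = more complex signal
```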
NASA Astrophysics Data System (ADS)
Crowell, B.; Melgar, D.
2017-12-01
The 2016 Mw 7.8 Kaikoura earthquake is one of the most complex earthquakes in recent history, rupturing across at least 10 disparate faults with varying faulting styles, and exhibiting intricate surface deformation patterns. The complexity of this event has motivated the need for multidisciplinary geophysical studies to get at the underlying source physics to better inform earthquake hazards models in the future. However, events like Kaikoura beg the question of how well (or how poorly) such earthquakes can be modeled automatically in real-time and still satisfy the general public and emergency managers. To investigate this question, we perform a retrospective real-time GPS analysis of the Kaikoura earthquake with the G-FAST early warning module. We first perform simple point source models of the earthquake using peak ground displacement scaling and a coseismic offset based centroid moment tensor (CMT) inversion. We predict ground motions based on these point sources as well as simple finite faults determined from source scaling studies, and validate against true recordings of peak ground acceleration and velocity. Secondly, we perform a slip inversion based upon the CMT fault orientations and forward model near-field tsunami maximum expected wave heights to compare against available tide gauge records. We find remarkably good agreement between recorded and predicted ground motions when using a simple fault plane, with the majority of disagreement in ground motions being attributable to local site effects, not earthquake source complexity. Similarly, the near-field tsunami maximum amplitude predictions match tide gauge records well. We conclude that even though our models for the Kaikoura earthquake are devoid of rich source complexities, the CMT driven finite fault is a good enough "average" source and provides useful constraints for rapid forecasting of ground motion and near-field tsunami amplitudes.
An online outlier identification and removal scheme for improving fault detection performance.
Ferdowsi, Hasan; Jagannathan, Sarangapani; Zawodniok, Maciej
2014-05-01
Measured data or states for a nonlinear dynamic system are usually contaminated by outliers. Identifying and removing outliers makes the data (or system states) more trustworthy and reliable, since outliers in the measured data (or states) can cause missed or false alarms during fault diagnosis. In addition, faults can make the system states nonstationary, calling for a novel analytical model-based fault detection (FD) framework. In this paper, an online outlier identification and removal (OIR) scheme is proposed for a nonlinear dynamic system. Since the dynamics of the system can experience unknown changes due to faults, traditional observer-based techniques cannot be used to remove the outliers. The OIR scheme uses a neural network (NN) to estimate the actual system states from measured system states involving outliers. With this method, outlier detection is performed online at each time instant by finding the difference between the estimated and the measured states and comparing its median with its standard deviation over a moving time window. The NN weight update law in OIR is designed such that the detected outliers have no effect on the state estimation, which is subsequently used for model-based fault diagnosis. In addition, since the OIR estimator cannot distinguish between faulty and healthy operating conditions, a separate model-based observer is designed for fault diagnosis, which uses the OIR scheme as a preprocessing unit to improve the FD performance. The stability analysis of both the OIR and fault diagnosis schemes is presented. Finally, a three-tank benchmarking system and a simple linear system are used to verify the proposed scheme in simulations, and the scheme is then applied on an axial piston pump testbed. The scheme can be applied to nonlinear systems whose dynamics and underlying distribution of states are subject to change due to both unknown faults and operating conditions.
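The window-based outlier test can be sketched directly: compute the residual between measured and estimated states, then flag samples whose residual deviates too far from the moving-window statistics. The sketch below is a simplified variant (comparing each residual's deviation from the window median against a multiple of the window standard deviation); the window length and multiplier are illustrative, not the paper's tuned values:

```python
import numpy as np
from collections import deque

def outlier_flags(measured, estimated, window=50, k=3.0):
    """Flag a sample as an outlier when its residual deviates from the
    moving-window median by more than k window standard deviations."""
    buf, flags = deque(maxlen=window), []
    for m, e in zip(measured, estimated):
        r = m - e                      # residual vs. NN state estimate
        buf.append(r)
        med, std = np.median(buf), np.std(buf)
        flags.append(len(buf) == window and abs(r - med) > k * std)
    return flags
```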
An Integrated Framework for Model-Based Distributed Diagnosis and Prognosis
NASA Technical Reports Server (NTRS)
Bregon, Anibal; Daigle, Matthew J.; Roychoudhury, Indranil
2012-01-01
Diagnosis and prognosis are necessary tasks for system reconfiguration and fault-adaptive control in complex systems. Diagnosis consists of detection, isolation and identification of faults, while prognosis consists of prediction of the remaining useful life of systems. This paper presents a novel integrated framework for model-based distributed diagnosis and prognosis, where system decomposition is used to enable the diagnosis and prognosis tasks to be performed in a distributed way. We show how different submodels can be automatically constructed to solve the local diagnosis and prognosis problems. We illustrate our approach using a simulated four-wheeled rover for different fault scenarios. Our experiments show that our approach correctly performs distributed fault diagnosis and prognosis in an efficient and robust manner.
NASA Astrophysics Data System (ADS)
Sanny, Teuku A.
2017-07-01
The objective of this study is to determine the boundary between the Lembang and Cimandiri faults and to characterize the surrounding area. For the detailed study we used three methodologies: (1) surface deformation modeling using the Boundary Element Method (BEM), (2) Controlled Source Audio-frequency Magnetotellurics (CSAMT), and (3) refraction seismic tomography. Based on the surface deformation modeling with BEM, the Lembang fault has a dominant displacement in the east direction. The eastward displacement of the northern fault block is smaller than that of the southern fault block, which indicates that the fault blocks move left-laterally relative to each other. From this we conclude that the Lembang fault in this area has a left-lateral strike-slip component. The western part of the Lembang fault moves in the west direction, unlike the eastern part, which moves in the east direction. The stress distribution map of the Lembang fault shows a difference between its eastern and western segments. The displacement distribution maps along the x- and y-directions show a lineament oriented in the northeast-southwest direction right at Tangkuban Perahu Mountain. The displacement pattern of the Cimandiri fault indicates that it is divided into two segments: the eastern segment has a left-lateral strike-slip component, while the western segment has a right-lateral strike-slip component. Based on the displacement distribution map along the y-direction, a lineament oriented in the northwest-southeast direction is observed at the western segment of the Cimandiri fault. The displacement along the x- and y-directions between the Lembang and Cimandiri faults is nearly zero, indicating that the two faults are not connected to each other. Refraction seismic tomography characterizes the Cimandiri fault as a normal fault. Based on the CSAMT method, the Lembang fault is a normal fault whose segments differ in dip, forming a graben structure.
NASA Astrophysics Data System (ADS)
Vanneste, Kris; Vleminckx, Bart; Camelbeeck, Thierry
2016-04-01
The Lower Rhine Graben (LRG) is one of the few regions in intraplate NW Europe where seismic activity can be linked to active faults, yet probabilistic seismic hazard assessments of this region have hitherto been based on area-source models, in which the LRG is modeled as a single or a small number of seismotectonic zones with uniform seismicity. While fault-based PSHA has become common practice in more active regions of the world (e.g., California, Japan, New Zealand, Italy), knowledge of active faults has been lagging behind in other regions, due to incomplete tectonic inventory, low level of seismicity, lack of systematic fault parameterization, or a combination thereof. The past few years, efforts are increasingly being directed to the inclusion of fault sources in PSHA in these regions as well, in order to predict hazard on a more physically sound basis. In Europe, the EC project SHARE ("Seismic Hazard Harmonization in Europe", http://www.share-eu.org/) represented an important step forward in this regard. In the frame of this project, we previously compiled the first parameterized fault model for the LRG that can be applied in PSHA. We defined 15 fault sources based on major stepovers, bifurcations, gaps, and important changes in strike, dip direction or slip rate. Based on the available data, we were able to place reasonable bounds on the parameters required for time-independent PSHA: length, width, strike, dip, rake, slip rate, and maximum magnitude. With long-term slip rates remaining below 0.1 mm/yr, the LRG can be classified as a low-deformation-rate structure. Information on recurrence interval and elapsed time since the last major earthquake is lacking for most faults, impeding time-dependent PSHA. We consider different models to construct the magnitude-frequency distribution (MFD) of each fault: a slip-rate constrained form of the classical truncated Gutenberg-Richter MFD (Anderson & Luco, 1983) versus a characteristic MFD following Youngs & Coppersmith (1985). The summed Anderson & Luco fault MFDs show a remarkably good agreement with the MFD obtained from the historical and instrumental catalog for the entire LRG, whereas the summed Youngs & Coppersmith MFD clearly underpredicts low to moderate magnitudes, but yields higher occurrence rates for M > 6.3 than would be obtained by simple extrapolation of the catalog MFD. The moment rate implied by the Youngs & Coppersmith MFDs is about three times higher, but is still within the range allowed by current GPS uncertainties. Using the open-source hazard engine OpenQuake (http://openquake.org/), we compute hazard maps for return periods of 475, 2475, and 10,000 yr, and for spectral periods of 0 s (PGA) and 1 s. We explore the impact of various parameter choices, such as MFD model, GMPE distance metric, and inclusion of a background zone to account for lower magnitudes, and we also compare the results with hazard maps based on area-source models. References: Anderson, J. G., and J. E. Luco (1983), Consequences of slip rate constraints on earthquake occurrence relations, Bull. Seismol. Soc. Am., 73(2), 471-496. Youngs, R. R., and K. J. Coppersmith (1985), Implications of fault slip rates and earthquake recurrence models to probabilistic seismic hazard estimates, Bull. Seismol. Soc. Am., 75(4), 939-964.
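The slip-rate constraint at the heart of both MFD models is a moment balance: the summed moment release of the magnitude-frequency distribution must match the fault's moment accumulation rate mu * A * slip_rate. A simplified binned sketch in the spirit of the Anderson & Luco (1983) construction (not their exact closed form; the shear modulus and example fault dimensions are assumptions):

```python
import numpy as np

MU = 3.0e10  # crustal shear modulus [Pa], a common assumption

def m0(mw):
    """Seismic moment [N m] from moment magnitude (Hanks & Kanamori)."""
    return 10.0 ** (1.5 * mw + 9.05)

def truncated_gr_rates(slip_rate, area, b, m_min, m_max, dm=0.1):
    """Annual rates per magnitude bin for a truncated Gutenberg-Richter
    MFD, scaled so the released moment balances mu * A * slip_rate."""
    mags = np.arange(m_min, m_max + dm / 2, dm)
    rel = 10.0 ** (-b * mags)              # relative GR bin rates
    moment_rate = MU * area * slip_rate    # [N m / yr]
    scale = moment_rate / np.sum(rel * m0(mags))
    return mags, scale * rel

# Example: a 40 km x 15 km fault slipping at 0.1 mm/yr (1e-4 m/yr)
mags, rates = truncated_gr_rates(1e-4, 40e3 * 15e3, b=0.9,
                                 m_min=4.0, m_max=6.8)
```

A characteristic-style MFD in the sense of Youngs & Coppersmith (1985) spends the same moment budget differently, concentrating rate near m_max, which is why it underpredicts moderate magnitudes while yielding higher rates for the largest events.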
NASA Astrophysics Data System (ADS)
Xu, Jiuping; Zhong, Zhengqiang; Xu, Lei
2015-10-01
In this paper, an integrated-system-health-management-oriented adaptive fault diagnostics model for avionics is proposed. With avionics becoming increasingly complicated, precise and comprehensive avionics fault diagnostics has become an extremely complicated task. For the proposed fault diagnostic system, specific approaches, such as the artificial immune system, the intelligent agents system and the Dempster-Shafer evidence theory, are used to conduct in-depth avionics fault diagnostics. Through this proposed fault diagnostic system, efficient and accurate diagnostics can be achieved. A numerical example is conducted to apply the proposed hybrid diagnostics to a set of radar transmitters on an avionics system and to illustrate that the proposed system and model have the ability to achieve efficient and accurate fault diagnostics. By analyzing the diagnostic system's feasibility and practicality, the advantages of this system are demonstrated.
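The evidence-fusion step of such a system typically uses Dempster's rule of combination, which merges two basic probability assignments and renormalizes by the mass assigned to conflicting hypotheses. A minimal sketch with illustrative component names, not the paper's implementation:

```python
from itertools import product

def dempster_combine(m1, m2):
    """Dempster's rule of combination for two basic probability
    assignments over frozenset focal elements."""
    combined, conflict = {}, 0.0
    for (a, wa), (b, wb) in product(m1.items(), m2.items()):
        inter = a & b
        if inter:
            combined[inter] = combined.get(inter, 0.0) + wa * wb
        else:
            conflict += wa * wb        # mass on contradictory evidence
    return {k: v / (1.0 - conflict) for k, v in combined.items()}

# Two monitors weighing in on a radar transmitter fault (names invented)
m1 = {frozenset({"PSU"}): 0.6, frozenset({"PSU", "TWT"}): 0.4}
m2 = {frozenset({"TWT"}): 0.5, frozenset({"PSU", "TWT"}): 0.5}
print(dempster_combine(m1, m2))
```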
Design for dependability: A simulation-based approach. Ph.D. Thesis, 1993
NASA Technical Reports Server (NTRS)
Goswami, Kumar K.
1994-01-01
This research addresses issues in simulation-based system-level dependability analysis of fault-tolerant computer systems. The issues and difficulties of providing a general simulation-based approach for system-level analysis are discussed, and a methodology that addresses these issues is presented. The proposed methodology is designed to permit the study of a wide variety of architectures under various fault conditions. It permits detailed functional modeling of architectural features such as sparing policies, repair schemes, routing algorithms as well as other fault-tolerant mechanisms, and it allows the execution of actual application software. One key benefit of this approach is that the behavior of a system under faults does not have to be pre-defined, as is normally done. Instead, a system can be simulated in detail and injected with faults to determine its failure modes. The thesis describes how object-oriented design is used to incorporate this methodology into a general purpose design and fault injection package called DEPEND. A software model is presented that uses abstractions of application programs to study the behavior and effect of software on hardware faults in the early design stage when actual code is not available. Finally, an acceleration technique that combines hierarchical simulation, time acceleration algorithms and hybrid simulation to reduce simulation time is introduced.
NASA Astrophysics Data System (ADS)
Kuriyama, M.; Kumamoto, T.; Fujita, M.
2005-12-01
The 1995 Hyogo-ken Nambu Earthquake near Kobe, Japan, spurred research on strong motion prediction. To mitigate damage caused by large earthquakes, a highly precise method of predicting future strong motion waveforms is required. In this study, we applied the empirical Green's function method to forward modeling in order to simulate strong ground motion in the Noubi Fault zone and examine issues related to strong motion prediction for large faults. Source models for the scenario earthquakes were constructed using the recipe of strong motion prediction (Irikura and Miyake, 2001; Irikura et al., 2003). To calculate the asperity area ratio of a large fault zone, the results of a scaling model, a scaling model with 22% asperity by area, and a cascade model were compared, and several rupture starting points and segmentation parameters were examined for certain cases. A small earthquake (Mw 4.6) that occurred in northern Fukui Prefecture in 2004 was used as the empirical Green's function, and the source spectrum of this small event was found to agree with the omega-square scaling law. The Nukumi, Neodani, and Umehara segments of the 1891 Noubi Earthquake were targeted in the present study. The positions of the asperity areas and rupture starting points were based on the horizontal displacement distributions reported by Matsuda (1974) and the fault branching pattern and rupture direction model proposed by Nakata and Goto (1998). Asymmetry in the damage maps for the Noubi Earthquake was then examined. We compared the maximum horizontal velocities for cases with different rupture starting points. In one case, rupture started at the center of the Nukumi Fault, while in another, rupture started at the southeastern edge of the Umehara Fault; the scaling model showed an approximately 2.1-fold difference between these cases at observation point FKI005 of K-NET. This difference is considered to relate to the directivity effect associated with the direction of rupture propagation. Moreover, the horizontal velocities obtained with the cascade model fell more than one standard deviation below the empirical relation of Si and Midorikawa (1999). The scaling and cascade models showed an approximately 6.4-fold difference at observation point GIF020 for the case in which the rupture started along the southeastern edge of the Umehara Fault. This difference is significantly large in comparison with the effect of different rupture starting points, and shows that it is important to base scenario earthquake assumptions on active fault datasets before establishing the source characterization model. The distribution map of seismic intensity for the 1891 Noubi Earthquake also suggests that the synthetic waveforms in the southeastern Noubi Fault zone may be underestimated. Our results indicate that outer fault parameters (e.g., earthquake moment) related to the construction of scenario earthquakes influence strong motion prediction more strongly than inner fault parameters such as the rupture starting point. Based on these methods, we will predict strong motion for the approximately 140 to 150 km long Itoigawa-Shizuoka Tectonic Line.
Automated Generation of Fault Management Artifacts from a Simple System Model
NASA Technical Reports Server (NTRS)
Kennedy, Andrew K.; Day, John C.
2013-01-01
Our understanding of off-nominal behavior - failure modes and fault propagation - in complex systems is often based purely on engineering intuition; specific cases are assessed in an ad hoc fashion as a (fallible) fault management engineer sees fit. This work is an attempt to provide a more rigorous approach to this understanding and assessment by automating the creation of a fault management artifact, the Failure Modes and Effects Analysis (FMEA) through querying a representation of the system in a SysML model. This work builds off the previous development of an off-nominal behavior model for the upcoming Soil Moisture Active-Passive (SMAP) mission at the Jet Propulsion Laboratory. We further developed the previous system model to more fully incorporate the ideas of State Analysis, and it was restructured in an organizational hierarchy that models the system as layers of control systems while also incorporating the concept of "design authority". We present software that was developed to traverse the elements and relationships in this model to automatically construct an FMEA spreadsheet. We further discuss extending this model to automatically generate other typical fault management artifacts, such as Fault Trees, to efficiently portray system behavior, and depend less on the intuition of fault management engineers to ensure complete examination of off-nominal behavior.
NASA Astrophysics Data System (ADS)
Lien, Tzuyi; Cheng, Ching-Chung; Hwang, Cheinway; Crossley, David
2014-09-01
We develop a new hydrology and gravimetry-based method to assess whether or not a local fault may be active. We take advantage of an existing superconducting gravimeter (SG) station and a comprehensive groundwater network in Hsinchu to apply the method to the Hsinchu Fault (HF) across the Hsinchu Science Park, whose industrial output accounts for 10% of Taiwan's gross domestic product. The HF is suspected to pose seismic hazards to the park, but its existence and structure are not clear. The a priori geometry of the HF is translated into boundary conditions imposed in the hydrodynamic model. By varying the fault's location, depth, and including a secondary wrench fault, we construct five hydrodynamic models to estimate groundwater variations, which are evaluated by comparing groundwater levels and SG observations. The results reveal that the HF contains a low hydraulic conductivity core and significantly impacts groundwater flows in the aquifers. Imposing the fault boundary conditions leads to about 63-77% reduction in the differences between modeled and observed values (both water level and gravity). The test with fault depth shows that the HF's most recent slip occurred in the beginning of Holocene, supplying a necessary (but not sufficient) condition that the HF is currently active. A portable SG can act as a virtual borehole well for model assessment at critical locations of a suspected active fault.
NASA Astrophysics Data System (ADS)
Zielke, O.; Arrowsmith, J.
2007-12-01
In order to determine the magnitude of pre-historic earthquakes, surface rupture length and average and maximum surface displacement are utilized, assuming that an earthquake of a specific size will cause surface features of correlated size. The well known Wells and Coppersmith (1994) paper and other studies defined empirical relationships between these and other parameters, based on historic events with independently known magnitude and rupture characteristics. However, these relationships show relatively large standard deviations, and they are based on only a small number of events. To improve these first-order empirical relationships, the observation location relative to the rupture extent within the regional tectonic framework should be accounted for. This, however, cannot be done from natural seismicity alone because of the limited size of datasets on large earthquakes. We have developed the numerical model FIMozFric, based on derivations by Okada (1992), to create synthetic seismic records for a given fault or fault system under the influence of either slip or stress boundary conditions. Our model features A) the introduction of an upper and lower aseismic zone, B) a simple Coulomb friction law, C) bulk parameters simulating fault heterogeneity, and D) a fault interaction algorithm handling the large number of fault patches (typically 5,000-10,000). The joint implementation of these features produces well-behaved synthetic seismic catalogs and realistic relationships among magnitude and surface rupture characteristics, which are well within the error of the results of Wells and Coppersmith (1994). Furthermore, we use the synthetic seismic records to show that the relationships between magnitude and rupture characteristics are a function of the observation location within the regional tectonic framework. The model presented here can provide paleoseismologists with a tool to improve magnitude estimates from surface rupture characteristics by incorporating the regional and local structural context, which can be determined in the field: assuming a paleoseismologist measures the offset along a fault caused by an earthquake, our model can be used to determine the probability distribution of magnitudes capable of producing the observed offset, accounting for the regional tectonic setting and observation location.
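For comparison, the first-order empirical relationships that such a model aims to refine are simple log-linear regressions. A sketch using the commonly quoted Wells and Coppersmith (1994) all-slip-type coefficients (values copied from the published regressions; verify against the paper before use):

```python
import math

def mw_from_srl(srl_km):
    """Moment magnitude from surface rupture length [km]:
    M = 5.08 + 1.16 * log10(SRL), all slip types, sigma ~ 0.28."""
    return 5.08 + 1.16 * math.log10(srl_km)

def mw_from_avg_disp(ad_m):
    """Moment magnitude from average displacement [m]:
    M = 6.93 + 0.82 * log10(AD), all slip types."""
    return 6.93 + 0.82 * math.log10(ad_m)

print(mw_from_srl(40.0), mw_from_avg_disp(1.5))
```

The synthetic-catalog approach replaces these single global curves with location-aware probability distributions of magnitude for a given observed offset.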
Chen, Wen; Chowdhury, Fahmida N; Djuric, Ana; Yeh, Chih-Ping
2014-09-01
This paper provides a new design of robust fault detection for turbofan engines with adaptive controllers. The critical issue is that the adaptive controllers can suppress faulty effects such that the actual system outputs remain at the pre-specified values, making it difficult to detect faults/failures. To solve this problem, a Total Measurable Fault Information Residual (ToMFIR) technique with the aid of system transformation is adopted to detect faults in turbofan engines with adaptive controllers. This design is a ToMFIR-redundancy-based robust fault detection scheme. The ToMFIR is first introduced and existing results are summarized. The detailed design process of the ToMFIRs is presented, and a turbofan engine model is simulated to verify the effectiveness of the proposed ToMFIR-based fault-detection strategy.
Sinusoidal synthesis based adaptive tracking for rotating machinery fault detection
NASA Astrophysics Data System (ADS)
Li, Gang; McDonald, Geoff L.; Zhao, Qing
2017-01-01
This paper presents a novel Sinusoidal Synthesis Based Adaptive Tracking (SSBAT) technique for vibration-based rotating machinery fault detection. The proposed SSBAT algorithm is an adaptive time series technique that makes use of both frequency- and time-domain information of vibration signals. Such information is incorporated in a time-varying dynamic model. Signal tracking is then realized by applying adaptive sinusoidal synthesis to the vibration signal. A modified Least-Squares (LS) method is adopted to estimate the model parameters. In addition to tracking, the proposed vibration synthesis model is mainly used as a linear time-varying predictor. The health condition of the rotating machine is monitored by checking the residual between the predicted and measured signal. The SSBAT method takes advantage of the sinusoidal nature of vibration signals and transforms the nonlinear problem into a linear adaptive problem in the time domain based on a state-space realization. It has a low computational burden and does not need a priori knowledge of the machine under the no-fault condition, which makes the algorithm ideal for on-line fault detection. The method is validated using both numerical simulation and practical application data. Meanwhile, the fault detection results are compared with the commonly adopted autoregressive (AR) and autoregressive Minimum Entropy Deconvolution (ARMED) methods to verify the feasibility and performance of the SSBAT method.
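The detection idea can be illustrated with an ordinary (time-invariant) least-squares sinusoidal synthesis: fit sinusoids at the expected frequencies and monitor the residual between the measured and synthesized signal. This is a simplified stand-in for SSBAT's time-varying adaptive estimator; frequencies and signal below are illustrative:

```python
import numpy as np

def sinusoid_ls_fit(t, y, freqs):
    """Least-squares fit of y(t) by a mean plus sinusoids at the
    given frequencies; returns the synthesized (predicted) signal."""
    cols = [np.ones_like(t)]
    for f in freqs:
        cols += [np.cos(2 * np.pi * f * t), np.sin(2 * np.pi * f * t)]
    A = np.column_stack(cols)
    coef, *_ = np.linalg.lstsq(A, y, rcond=None)
    return A @ coef

# Health check: residual energy between measured and synthesized signal
t = np.linspace(0, 1, 2000)
y = np.sin(2 * np.pi * 30 * t) + 0.05 * np.random.randn(t.size)
resid = y - sinusoid_ls_fit(t, y, freqs=[30.0])
print(np.sqrt(np.mean(resid ** 2)))  # rises when a fault appears
```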
Dislocation model for aseismic fault slip in the transverse ranges of Southern California
NASA Technical Reports Server (NTRS)
Cheng, A.; Jackson, D. D.; Matsuura, M.
1985-01-01
Geodetic data at a plate boundary can reveal the pattern of subsurface displacements that accompany plate motion. These displacements are modelled as the sum of rigid block motion and the elastic effects of frictional interaction between blocks. The frictional interactions are represented by uniform dislocation on each of several rectangular fault patches. The block velocities and fault parameters are then estimated from geodetic data. A Bayesian inversion procedure employs prior estimates based on geological and seismological data. The method is applied to the Transverse Ranges, using prior geological and seismological data and geodetic data from the USGS trilateration networks. Geodetic data imply a displacement rate of about 20 mm/yr across the San Andreas Fault, while the geologic estimates exceed 30 mm/yr. The prior model and the final estimates both imply about 10 mm/yr of crustal shortening normal to the trend of the San Andreas Fault. Aseismic fault motion is a major contributor to plate motion. The geodetic data can help to identify faults that are suffering rapid stress accumulation; in the Transverse Ranges those faults are the San Andreas and the Santa Susana.
Software For Fault-Tree Diagnosis Of A System
NASA Technical Reports Server (NTRS)
Iverson, Dave; Patterson-Hine, Ann; Liao, Jack
1993-01-01
The Fault Tree Diagnosis System (FTDS) computer program is an automated diagnostic program that identifies likely causes of a specified failure on the basis of information represented in system-reliability mathematical models known as fault trees. It is a modified implementation of the failure-cause-identification phase of Narayanan's and Viswanadham's methodology for knowledge acquisition and reasoning in analyzing failures of systems. The knowledge base of if/then rules is replaced with an object-oriented fault-tree representation. This enhancement yields more efficient identification of causes of failures and enables dynamic updating of the knowledge base. Written in C, C++, and Common LISP.
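A minimal sketch of what an object-oriented fault-tree representation looks like. The original FTDS is written in C/C++/Common LISP and its classes are not public, so this Python rendering and the event probabilities are purely illustrative:

```python
# Object-oriented fault-tree sketch: gates and basic events share an interface,
# so the tree can be traversed, evaluated, and updated dynamically.
class Event:
    def __init__(self, name, prob=0.0):
        self.name, self.prob = name, prob
    def probability(self):
        return self.prob

class AndGate(Event):
    def __init__(self, name, children):
        super().__init__(name)
        self.children = children
    def probability(self):
        # Independent basic events assumed.
        p = 1.0
        for c in self.children:
            p *= c.probability()
        return p

class OrGate(Event):
    def __init__(self, name, children):
        super().__init__(name)
        self.children = children
    def probability(self):
        q = 1.0
        for c in self.children:
            q *= 1.0 - c.probability()
        return 1.0 - q

pump = Event("pump fails", 1e-3)
valve = Event("valve stuck", 5e-4)
sensor = Event("sensor dead", 2e-3)
top = OrGate("loss of flow", [AndGate("both paths", [pump, valve]), sensor])
print(top.probability())
```

Because each node is an object, a basic-event probability can be changed at run time and the top event simply re-evaluated, which is the "dynamic updating" advantage the abstract mentions.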
NASA Astrophysics Data System (ADS)
Ulrich, T.; Gabriel, A. A.
2016-12-01
The geometry of faults is subject to a large degree of uncertainty. As buried structures that are not directly observable, their complex shapes may only be inferred from surface traces, if available, or through geophysical methods such as reflection seismology. As a consequence, most studies aiming at assessing the potential hazard of faults rely on idealized fault models based on observable large-scale features. Yet real faults are known to be wavy at all scales, their geometric features presenting similar statistical properties from the micro to the regional scale. The influence of roughness on the earthquake rupture process is currently a driving topic in the computational seismology community. From the numerical point of view, rough-fault problems are challenging and require optimized codes able to run efficiently on high-performance computing infrastructure while simultaneously handling complex geometries. Physically, simulated ruptures hosted by rough faults appear to be much closer, in terms of complexity, to source models inverted from observations. Incorporating fault geometry on all scales may thus be crucial to model realistic earthquake source processes and to estimate seismic hazard more accurately. In this study, we use the software package SeisSol, based on an ADER-Discontinuous Galerkin scheme, to run our numerical simulations. SeisSol solves the spontaneous dynamic earthquake rupture problem and the wave propagation problem with high-order accuracy in space and time, efficiently on large-scale machines. The influence of fault roughness on dynamic rupture style (e.g. onset of supershear transition, rupture front coherence, propagation of self-healing pulses, etc.) at different length scales is investigated by analyzing ruptures on faults of varying roughness spectral content. In particular, we investigate the existence of a minimum roughness length scale, relative to the rupture's inherent length scales, below which the rupture ceases to be sensitive to roughness. Finally, the effect of fault geometry on near-field ground motions is considered. Our simulations feature classical linear slip weakening on the fault and a viscoplastic constitutive model off the fault. The benefits of using a more elaborate fast velocity-weakening friction law will also be considered.
NASA Technical Reports Server (NTRS)
Abbott, Kathy
1990-01-01
The objective of the research in this area of fault management is to develop and implement a decision aiding concept for diagnosing faults, especially faults which are difficult for pilots to identify, and to develop methods for presenting the diagnosis information to the flight crew in a timely and comprehensible manner. The requirements for the diagnosis concept were identified by interviewing pilots, analyzing actual incident and accident cases, and examining psychology literature on how humans perform diagnosis. The diagnosis decision aiding concept developed based on those requirements takes abnormal sensor readings as input, as identified by a fault monitor. Based on these abnormal sensor readings, the diagnosis concept identifies the cause or source of the fault and all components affected by the fault. This concept was implemented for diagnosis of aircraft propulsion and hydraulic subsystems in a computer program called Draphys (Diagnostic Reasoning About Physical Systems). Draphys is unique in two important ways. First, it uses models of both functional and physical relationships in the subsystems. Using both models enables the diagnostic reasoning to identify the fault propagation as the faulted system continues to operate, and to diagnose physical damage. Second, Draphys reasons about the behavior of the faulted system over time, to eliminate possibilities as more information becomes available, and to update the system status as more components are affected by the fault. The crew interface research is examining display issues associated with presenting diagnosis information to the flight crew. One study examined issues for presenting system status information. One lesson learned from that study was that pilots found fault situations to be more complex if they involved multiple subsystems. Another was that pilots could identify the faulted systems more quickly if the system status was presented in pictorial or text format. Another study is currently under way to examine pilot mental models of the aircraft subsystems and their use in diagnosis tasks. Future research plans include piloted simulation evaluation of the diagnosis decision aiding concepts and crew interface issues. Information is given in viewgraph form.
A New Kinematic Model for Polymodal Faulting: Implications for Fault Connectivity
NASA Astrophysics Data System (ADS)
Healy, D.; Rizzo, R. E.
2015-12-01
Conjugate, or bimodal, fault patterns dominate the geological literature on shear failure. Based on Anderson's (1905) application of the Mohr-Coulomb failure criterion, these patterns have been interpreted from all tectonic regimes, including normal, strike-slip and thrust (reverse) faulting. However, a fundamental limitation of the Mohr-Coulomb failure criterion - and others that assume faults form parallel to the intermediate principal stress - is that only plane strain can result from slip on the conjugate faults. Deformation in the Earth, by contrast, is widely accepted as being three-dimensional, with truly triaxial stresses and strains. Polymodal faulting, with three or more sets of faults forming and slipping simultaneously, can generate three-dimensional strains from truly triaxial stresses. Laboratory experiments and outcrop studies have verified the occurrence of polymodal fault patterns in nature. The connectivity of polymodal fault networks differs significantly from that of conjugate fault networks, and this presents challenges to our understanding of faulting and an opportunity to improve our understanding of seismic hazards and fluid flow. Polymodal fault patterns will, in general, have more connected nodes in 2D (and more branch lines in 3D) than comparable conjugate (bimodal) patterns. The anisotropy of permeability is therefore expected to be very different in rocks with polymodal fault patterns than in those with conjugate fault patterns, and this has implications for the development of hydrocarbon reservoirs, the genesis of ore deposits and the management of aquifers. In this contribution, I assess the published evidence and models for polymodal faulting before presenting a novel kinematic model for general triaxial strain in the brittle field.
On-line diagnosis of inter-turn short circuit fault for DC brushed motor.
Zhang, Jiayuan; Zhan, Wei; Ehsani, Mehrdad
2018-06-01
Extensive research effort has been made in fault diagnosis of motors and related components such as windings and ball bearings. In this paper, a new model of the inter-turn short circuit fault for DC brushed motors is proposed that includes the short circuit ratio and the short circuit resistance. A first-principle model is derived for motors with an inter-turn short circuit fault. A statistical model based on the Hidden Markov Model is developed for fault diagnosis purposes. This new method not only allows detection of a motor winding short circuit fault, it can also provide estimation of the fault severity, as indicated by estimates of the short circuit ratio and the short circuit resistance. The estimated fault severity can be used for making appropriate decisions in response to the fault condition. The feasibility of the proposed methodology is studied for inter-turn short circuits of DC brushed motors using simulation in the MATLAB/Simulink environment. In addition, it is shown that the proposed methodology is reliable in the presence of small random noise in the system parameters and measurements. Copyright © 2018 ISA. Published by Elsevier Ltd. All rights reserved.
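A deliberately simplified lumped-parameter sketch of a DC motor with a shorted winding fraction gives a feel for the fault's effect. The (1 − μ) scalings, the fault-loop equation, and all parameter values are assumptions for illustration, not the paper's first-principle model:

```python
# Euler simulation of a lumped DC motor with a shorted winding fraction mu.
# Shorting a fraction of turns scales the effective resistance, inductance and
# back-EMF constant; the shorted loop carries a circulating current i_f through
# the short-circuit resistance rf, producing a braking torque.
def simulate(mu=0.0, rf=0.05, T=1.0, dt=1e-5):
    R, L, k, J, b, V = 1.0, 5e-3, 0.1, 1e-3, 1e-4, 12.0   # assumed parameters
    Re, Le, ke = (1 - mu) * R, (1 - mu) ** 2 * L, (1 - mu) * k
    i = w = i_f = 0.0
    for _ in range(int(T / dt)):
        di = (V - Re * i - ke * w) / Le
        dif = 0.0 if mu == 0 else (mu * k * w - rf * i_f) / max(mu**2 * L, 1e-9)
        dw = (ke * i - mu * k * i_f - b * w) / J
        i, i_f, w = i + dt * di, i_f + dt * dif, w + dt * dw
    return i, w   # armature current (A), speed (rad/s)

print("healthy:", simulate(mu=0.0))
print("faulted:", simulate(mu=0.1))
```

Comparing the healthy and faulted trajectories of current and speed is what gives an HMM-style classifier something to learn from; the severity enters through μ and rf, mirroring the two fault parameters named above.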
Development of an On-board Failure Diagnostics and Prognostics System for Solid Rocket Booster
NASA Technical Reports Server (NTRS)
Smelyanskiy, Vadim N.; Luchinsky, Dmitry G.; Osipov, Vyatcheslav V.; Timucin, Dogan A.; Uckun, Serdar
2009-01-01
We develop a case breach model for the on-board fault diagnostics and prognostics system for subscale solid-rocket boosters (SRBs). The model development was motivated by recent ground firing tests, in which a deviation of measured time-traces from the predicted time-series was observed. A modified model takes into account the nozzle ablation, including the effect of roughness of the nozzle surface, the geometry of the fault, and erosion and burning of the walls of the hole in the metal case. The derived low-dimensional performance model (LDPM) of the fault can reproduce the observed time-series data very well. To verify the performance of the LDPM we build a FLUENT model of the case breach fault and demonstrate a good agreement between theoretical predictions based on the analytical solution of the model equations and the results of the FLUENT simulations. We then incorporate the derived LDPM into an inferential Bayesian framework and verify performance of the Bayesian algorithm for the diagnostics and prognostics of the case breach fault. It is shown that the obtained LDPM allows one to track parameters of the SRB during flight in real time, to diagnose a case breach fault, and to predict its future development. The application of the method to fault diagnostics and prognostics (FD&P) of other SRB fault modes is discussed.
Current Sensor Fault Diagnosis Based on a Sliding Mode Observer for PMSM Driven Systems
Huang, Gang; Luo, Yi-Ping; Zhang, Chang-Fan; Huang, Yi-Shan; Zhao, Kai-Hui
2015-01-01
This paper proposes a current sensor fault detection method based on a sliding mode observer for the torque closed-loop control system of interior permanent magnet synchronous motors. First, a sliding mode observer based on the extended flux linkage is built to simplify the motor model, which effectively eliminates the phenomenon of salient poles and the dependence on the direct axis inductance parameter, and can also be used for real-time calculation of feedback torque. Then a sliding mode current observer is constructed in αβ coordinates to generate the fault residuals of the phase current sensors. The method can accurately identify abrupt gain faults and slow-variation offset faults in faulty sensors in real time. The generated residuals of the designed fault detection system are not affected by the unknown input or the structure of the observer, and the theoretical derivation and the stability proof are concise and simple. RT-LAB real-time simulation is used to build a hardware-in-the-loop simulation model. The simulation and experimental results demonstrate the feasibility and effectiveness of the proposed method. PMID:25970258
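The residual mechanism can be shown on a first-order stand-in: in sliding mode, the low-pass-filtered switching term (the equivalent injection) reconstructs the sensor fault. The paper's observer is built on the extended flux linkage of the IPMSM in αβ coordinates; everything below (RL plant, gains, fault size) is assumed:

```python
import numpy as np

# Bare-bones sliding-mode observer for one current channel with a sensor
# gain fault. In sliding mode the low-passed switching term z must compensate
# the model mismatch, so z jumps when the sensor gain changes.
R, L, dt = 0.5, 1e-3, 1e-5
k_smo, tau_f = 5000.0, 5e-3          # switching gain, filter time constant
t = np.arange(0.0, 0.2, dt)
v = 10.0 * np.sin(2 * np.pi * 50 * t)     # phase voltage

i_true = i_hat = z = 0.0
z_log = np.zeros(t.size)
for n in range(t.size):
    i_true += dt * (v[n] - R * i_true) / L
    gain = 0.7 if t[n] > 0.1 else 1.0     # abrupt sensor gain fault at t = 0.1 s
    err = gain * i_true - i_hat           # residual on the measured current
    inj = k_smo * np.sign(err)
    i_hat += dt * ((v[n] - R * i_hat) / L + inj)
    z += (dt / tau_f) * (inj - z)         # low-pass: equivalent injection
    z_log[n] = z

print("pre-fault  mean |z|:", np.abs(z_log[t < 0.1]).mean())
print("post-fault mean |z|:", np.abs(z_log[t > 0.1]).mean())
```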
NASA Astrophysics Data System (ADS)
Hoprich, M.; Decker, K.; Grasemann, B.; Sokoutis, D.; Willingshofer, E.
2009-04-01
Former analog modeling of pull-apart basins dealt with different sidestep geometries, the symmetry and ratio between velocities of moving blocks, the ratio between ductile base and model thickness, the ratio between fault stepover and model thickness, and their influence on basin evolution. In all these models the pull-apart basin is deformed over an even detachment. The Vienna basin, however, is considered a classical thin-skinned pull-apart basin with a rather peculiar basement structure. Deformation and basin evolution are believed to be limited to the brittle upper crust above the Alpine-Carpathian floor thrust. The latter is not a planar detachment surface, but has a ramp-shaped topography draping the underlying former passive continental margin. In order to estimate the effects of this special geometry, nine experiments were performed and the resulting structures were compared with the Vienna basin. The key parameters for the models (fault and basin geometry, detachment depth and topography) were inferred from a 3D GoCad model of the natural Vienna basin, which was compiled from seismic data, wells and geological cross sections. The experiments were scaled 1:100,000 ("Ramberg-scaling" for brittle rheology) and built of quartz sand (300 µm grain size). An average depth of 6 km (6 cm) was calculated for the basal detachment, distances between the bounding strike-slip faults of 40 km (40 cm) and a finite length of the natural basin of 200 km were estimated (initial model length: 100 cm). The following parameters were changed through the experimental process: (1) syntectonic sedimentation; (2) the stepover angle between bounding strike-slip faults and the basal velocity discontinuity; (3) moving of one or both fault blocks (producing an asymmetrical or symmetrical basin); (4) inclination of the basal detachment surface by 5°; (5) installation of 2- and 3-ramp systems at the detachment; (6) simulation of a ductile detachment through a 0.4 cm thick PDMS layer at the basin floor. The surface of the model was photographed after each deformation increment through the experiment. Pictures of serial cross sections cut through the models in their final state every 4 cm were also taken and interpreted. The formation of en-echelon normal faults with relay ramps is observed in all models. These faults are arranged at an acute angle to the basin borders, according to a Riedel geometry. In the case of an asymmetric basin they emerge within the non-moving fault block. Substantial differences between the models are the number, the distance and the angle of these Riedel faults, the length of the bounding strike-slip faults and the cross-basin symmetry. A flat detachment produces straight fault traces, whereas inclined detachments (or inclined ramps) lead to "bending" of the normal faults, rollover and growth strata thickening towards the faults. Positions and sizes of depocenters also vary, with depocenters preferably developing above ramp-flat transitions. Depocenter thicknesses increase with ramp heights. A similar relation apparently exists in the natural Vienna basin, which shows ramp-like structures in the detachment just underneath large faults like the Steinberg normal fault and the associated depocenters. The 3-ramp model also reveals segmentation of the basin above the lowermost ramp. The evolving structure is comparable to the Wiener Neustadt sub-basin in the southern part of the Vienna basin, which is underlain by a topographical high of the detachment.
Cross sections through the ductile model show a strong disintegration into a horst-and-graben basin. The thin silicone putty base influences the overlying strata in such a way that the basin - unlike the "dry" sand models - becomes very flat and shallow. The top view shows an irregular basin shape and none of the rhombohedral geometry which characterises the Vienna basin. The ductile base also leads to a symmetrical distribution of deformation on both fault blocks, even though only one fault block is moved. The stepover angle, the influence of gravitation in a ramp or inclined system and the strain accommodation by a viscous silicone layer can be summarized as the factors controlling the characteristics of the models.
Deformation pattern during normal faulting: A sequential limit analysis
NASA Astrophysics Data System (ADS)
Yuan, X. P.; Maillot, B.; Leroy, Y. M.
2017-02-01
We model in 2-D the formation and development of half-graben faults above a low-angle normal detachment fault. The model, based on a "sequential limit analysis" accounting for mechanical equilibrium and energy dissipation, simulates the incremental deformation of a frictional, cohesive, and fluid-saturated rock wedge above the detachment. Two modes of deformation, gravitational collapse and tectonic collapse, are revealed, which compare well with the results of the critical Coulomb wedge theory. We additionally show that the fault and the axial surface of the half-graben rotate as topographic subsidence increases. This progressive rotation causes some of the footwall material to be sheared and to enter the hanging wall, creating a specific region called the foot-to-hanging wall (FHW). The model also allows the introduction of additional effects, such as weakening of the faults once they have slipped and sedimentation in their hanging wall. These processes are shown to control the size of the FHW region and the number of fault-bounded blocks it eventually contains. Fault weakening tends to make fault rotation more discontinuous, and this results in the FHW zone containing multiple blocks of intact material separated by faults. By compensating the topographic subsidence of the half-graben, sedimentation tends to slow the fault rotation, and this results in the reduction of the size of the FHW zone and of its number of fault-bounded blocks. We apply the new approach to reproduce the faults observed along a seismic line in the Southern Jeanne d'Arc Basin, Grand Banks, offshore Newfoundland. There, a single block exists in the hanging wall of the principal fault. The model explains this situation well, provided that a slow sedimentation rate is assumed in the Lower Jurassic, followed by an increasing rate over time as the main detachment fault grew.
Ji, C.; Helmberger, D.V.; Wald, D.J.
2004-01-01
Slip histories for the 2002 M7.9 Denali fault, Alaska, earthquake are derived rapidly from global teleseismic waveform data. Three models, constructed in successive phases, progressively improve the match to the waveform data and the recovery of rupture details. In the first model (Phase I), analogous to an automated solution, a simple fault plane is fixed based on the preliminary Harvard Centroid Moment Tensor mechanism and the epicenter provided by the Preliminary Determination of Epicenters. This model is then updated (Phase II) by implementing a more realistic fault geometry inferred from Digital Elevation Model topography, and further (Phase III) by using the calibrated P-wave and SH-wave arrival times derived from modeling of the nearby 2002 M6.7 Nenana Mountain earthquake. These models are used to predict the peak ground velocity and the shaking intensity field in the fault vicinity. The procedure to estimate local strong motion could be automated and used for global real-time earthquake shaking and damage assessment. © 2004, Earthquake Engineering Research Institute.
The mechanics of fault-bend folding and tear-fault systems in the Niger Delta
NASA Astrophysics Data System (ADS)
Benesh, Nathan Philip
This dissertation investigates the mechanics of fault-bend folding using the discrete element method (DEM) and explores the nature of tear-fault systems in the deep-water Niger Delta fold-and-thrust belt. In Chapter 1, we employ the DEM to investigate the development of growth structures in anticlinal fault-bend folds. This work was inspired by observations that growth strata in active folds show a pronounced upward decrease in bed dip, in contrast to traditional kinematic fault-bend fold models. Our analysis shows that the modeled folds grow largely by parallel folding as specified by the kinematic theory; however, the process of folding over a broad axial surface zone yields a component of fold growth by limb rotation that is consistent with the patterns observed in natural folds. This result has important implications for how growth structures can be used to constrain slip and paleo-earthquake ages on active blind-thrust faults. In Chapter 2, we expand our DEM study to investigate the development of a wider range of fault-bend folds. We examine the influence of mechanical stratigraphy and quantitatively compare our models with the relationships between fold and fault shape prescribed by the kinematic theory. While the synclinal fault-bend models closely match the kinematic theory, the modeled anticlinal fault-bend folds show robust behavior that is distinct from the kinematic theory. Specifically, we observe that modeled structures maintain a linear relationship between fold shape (gamma) and fault-horizon cutoff angle (theta), rather than expressing the non-linear relationship with two distinct modes of anticlinal folding that is prescribed by the kinematic theory. These observations lead to a revised quantitative relationship for fault-bend folds that can serve as a useful interpretation tool. Finally, in Chapter 3, we examine the 3D relationships of tear- and thrust-fault systems in the western, deep-water Niger Delta. Using 3D seismic reflection data and new map-based structural restoration techniques, we find that the tear faults have distinct displacement patterns that distinguish them from conventional strike-slip faults and reflect their roles in accommodating displacement gradients within the fold-and-thrust belt.
Zeng, Yuehua; Shen, Zheng-Kang
2016-01-01
We invert Global Positioning System (GPS) velocity data to estimate fault slip rates in California using a fault‐based crustal deformation model with geologic constraints. The model assumes buried elastic dislocations across the region using Uniform California Earthquake Rupture Forecast Version 3 (UCERF3) fault geometries. New GPS velocity and geologic slip‐rate data were compiled by the UCERF3 deformation working group. The result of least‐squares inversion shows that the San Andreas fault slips at 19–22 mm/yr along Santa Cruz to the North Coast, 25–28 mm/yr along the central California creeping segment to the Carrizo Plain, 20–22 mm/yr along the Mojave, and 20–24 mm/yr along the Coachella to the Imperial Valley. Modeled slip rates are 7–16 mm/yr lower than the preferred geologic rates from the central California creeping section to the San Bernardino North section. For the Bartlett Springs section, fault slip rates of 7–9 mm/yr fall within the geologic bounds but are twice the preferred geologic rates. For the central and eastern Garlock, inverted slip rates of 7.5 and 4.9 mm/yr, respectively, match closely with the geologic rates. For the western Garlock, however, our result suggests a low slip rate of 1.7 mm/yr. Along the eastern California shear zone and southern Walker Lane, our model shows a cumulative slip rate of 6.2–6.9 mm/yr across its east–west transects, an ∼1 mm/yr increase over the geologic estimates. For the off‐coast faults of central California, from Hosgri to San Gregorio, fault slips are modeled at 1–5 mm/yr, similar to the lower geologic bounds. For the off‐fault deformation, the total moment rate amounts to 0.88×10¹⁹ N·m/yr, with fast straining regions found around the Mendocino triple junction, Transverse Ranges and Garlock fault zones, Landers and Brawley seismic zones, and farther south. The overall California moment rate is 2.76×10¹⁹ N·m/yr, a 16% increase compared with the UCERF2 model.
Lognormal Approximations of Fault Tree Uncertainty Distributions.
El-Shanawany, Ashraf Ben; Ardron, Keith H; Walker, Simon P
2018-01-26
Fault trees are used in reliability modeling to create logical models of fault combinations that can lead to undesirable events. The output of a fault tree analysis (the top event probability) is expressed in terms of the failure probabilities of basic events that are input to the model. Typically, the basic event probabilities are not known exactly, but are modeled as probability distributions; therefore, the top event probability is also represented as an uncertainty distribution. Monte Carlo methods are generally used for evaluating the uncertainty distribution, but such calculations are computationally intensive and do not readily reveal the dominant contributors to the uncertainty. In this article, a closed-form approximation for the fault tree top event uncertainty distribution is developed, which is applicable when the uncertainties in the basic events of the model are lognormally distributed. The results of the approximate method are compared with results from two sampling-based methods: namely, the Monte Carlo method and the Wilks method based on order statistics. It is shown that the closed-form expression can provide a reasonable approximation to results obtained by Monte Carlo sampling, without incurring the computational expense. The Wilks method is found to be a useful means of providing an upper bound for the percentiles of the uncertainty distribution while being computationally inexpensive compared with full Monte Carlo sampling. The lognormal approximation method and the Wilks method appear attractive, practical alternatives for the evaluation of uncertainty in the output of fault trees and similar multilinear models. © 2018 Society for Risk Analysis.
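The comparison the article makes can be mimicked in a few lines: sample lognormal basic-event probabilities, propagate them through the gate logic, and compare Monte Carlo percentiles with a single fitted lognormal. Note that the fit below simply matches log-moments of the samples; the article's closed-form approximation is derived analytically, and the gate structure and error factors here are invented:

```python
import numpy as np

rng = np.random.default_rng(1)
N = 100_000

def lognormal(median, error_factor, size):
    """Lognormal samples parameterised by median and 95th-percentile error factor."""
    sigma = np.log(error_factor) / 1.645
    return rng.lognormal(np.log(median), sigma, size)

# Three basic events with illustrative medians and error factors.
p1 = lognormal(1e-3, 3, N)
p2 = lognormal(5e-4, 3, N)
p3 = lognormal(2e-3, 10, N)

# Top event: (1 AND 2) OR 3, using rare-event arithmetic.
top = p1 * p2 + p3

# Compare Monte Carlo percentiles with a single lognormal fitted to the samples.
mu, sigma = np.log(top).mean(), np.log(top).std()
for q, zq in ((5, -1.645), (50, 0.0), (95, 1.645)):
    print(f"P{q}: MC={np.percentile(top, q):.3e}  lognormal={np.exp(mu + sigma*zq):.3e}")
```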
Estimation of Faults in DC Electrical Power System
NASA Technical Reports Server (NTRS)
Gorinevsky, Dimitry; Boyd, Stephen; Poll, Scott
2009-01-01
This paper demonstrates a novel optimization-based approach to estimating fault states in a DC power system. Potential faults changing the circuit topology are included along with faulty measurements. Our approach can be considered as a relaxation of the mixed estimation problem. We develop a linear model of the circuit and pose a convex problem for estimating the faults and other hidden states. A sparse fault vector solution is computed by using ℓ1 regularization. The solution is computed reliably and efficiently, and gives accurate diagnostics on the faults. We demonstrate a real-time implementation of the approach for an instrumented electrical power system testbed, the ADAPT testbed at NASA ARC. The estimates are computed in milliseconds on a PC. The approach performs well despite unmodeled transients and other modeling uncertainties present in the system.
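The sparse-recovery idea can be sketched with plain ISTA on a generic linear model. The paper poses a richer convex circuit model and solves it with standard solvers; the matrix, fault vector, and penalty weight below are invented for illustration:

```python
import numpy as np

# Sparse fault estimation sketch: measurements y = A x + noise, where the
# unknown fault vector x is sparse. An l1 penalty recovers it via ISTA
# (iterative shrinkage-thresholding).
rng = np.random.default_rng(2)
m, n = 40, 100
A = rng.standard_normal((m, n)) / np.sqrt(m)
x_true = np.zeros(n)
x_true[[7, 42]] = [1.5, -2.0]                  # two simultaneous faults
y = A @ x_true + 0.01 * rng.standard_normal(m)

lam = 0.05
step = 1.0 / np.linalg.norm(A, 2) ** 2         # 1/L, L = Lipschitz constant
x = np.zeros(n)
for _ in range(500):                           # ISTA iterations
    g = x - step * A.T @ (A @ x - y)           # gradient step on the quadratic
    x = np.sign(g) * np.maximum(np.abs(g) - step * lam, 0.0)   # soft threshold
print("estimated fault support:", np.flatnonzero(np.abs(x) > 0.1))
```

The l1 term is what makes the estimate sparse, so the nonzero entries of x point directly at the faulted components, which is the diagnostic output the abstract describes.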
Experiments in fault tolerant software reliability
NASA Technical Reports Server (NTRS)
Mcallister, David F.; Tai, K. C.; Vouk, Mladen A.
1987-01-01
The reliability of voting was evaluated in a fault-tolerant software system for small output spaces. The effectiveness of the back-to-back testing process was investigated. Version 3.0 of the RSDIMU-ATS, a semi-automated test bed for certification testing of RSDIMU software, was prepared and distributed. Software reliability estimation methods based on non-random sampling are being studied. The investigation of existing fault-tolerance models was continued and formulation of new models was initiated.
NASA Astrophysics Data System (ADS)
Liu, Chun; Jiang, Bin; Zhang, Ke
2018-03-01
This paper investigates the attitude and position tracking control problem for Lead-Wing close formation systems in the presence of loss of effectiveness and lock-in-place or hardover failures. In close formation flight, Wing unmanned aerial vehicle movements are influenced by vortex effects of the neighbouring Lead unmanned aerial vehicle. This allows modelling of the aerodynamic coupling vortex effects and linearisation based on the optimal close formation geometry. The linearised Lead-Wing close formation model is transformed into nominal robust H-infinity models with respect to Mach hold, Heading hold, and Altitude hold autopilots, and a static feedback H-infinity controller is designed to guarantee effective tracking of attitude and position while manoeuvring the Lead unmanned aerial vehicle. Based on the H-infinity control design, an integrated multiple-model adaptive fault identification and reconfigurable fault-tolerant control scheme is developed to guarantee asymptotic stability of the closed-loop system, error signal boundedness, and attitude and position tracking properties. Simulation results for Lead-Wing close formation systems validate the efficiency of the proposed integrated multiple-model adaptive control algorithm.
A method of real-time fault diagnosis for power transformers based on vibration analysis
NASA Astrophysics Data System (ADS)
Hong, Kaixing; Huang, Hai; Zhou, Jianping; Shen, Yimin; Li, Yujie
2015-11-01
In this paper, a novel probability-based classification model is proposed for real-time fault detection of power transformers. First, the transformer vibration principle is introduced, and two effective feature extraction techniques are presented. Next, the details of the classification model based on support vector machine (SVM) are shown. The model also includes a binary decision tree (BDT) which divides transformers into different classes according to health state. The trained model produces posterior probabilities of membership to each predefined class for a tested vibration sample. During the experiments, the vibrations of transformers under different conditions are acquired, and the corresponding feature vectors are used to train the SVM classifiers. The effectiveness of this model is illustrated experimentally on typical in-service transformers. The consistency between the results of the proposed model and the actual condition of the test transformers indicates that the model can be used as a reliable method for transformer fault detection.
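The classifier stage can be sketched with off-the-shelf tools: a probability-producing SVM at one node of the health-state decision tree. The features below are random stand-ins, not the paper's vibration features, and the two-class node shown is only the root of the BDT:

```python
import numpy as np
from sklearn.svm import SVC

# First BDT node: healthy vs. faulty. Deeper nodes would split fault types
# the same way. Feature vectors here are synthetic placeholders for the
# vibration features described in the abstract.
rng = np.random.default_rng(3)
X_healthy = rng.normal(0.0, 1.0, (100, 8))
X_faulty = rng.normal(1.5, 1.0, (100, 8))
X = np.vstack([X_healthy, X_faulty])
y = np.array([0] * 100 + [1] * 100)

clf = SVC(kernel="rbf", probability=True).fit(X, y)   # posterior-producing SVM
sample = rng.normal(1.4, 1.0, (1, 8))
print("posterior P(healthy), P(faulty):", clf.predict_proba(sample)[0])
```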
Distributed bearing fault diagnosis based on vibration analysis
NASA Astrophysics Data System (ADS)
Dolenc, Boštjan; Boškoski, Pavle; Juričić, Đani
2016-01-01
Distributed bearing faults appear under various circumstances, for example due to electroerosion or the progression of localized faults. Bearings with distributed faults tend to generate more complex vibration patterns than those with localized faults. Despite the frequent occurrence of such faults, their diagnosis has attracted limited attention. This paper examines a method for the diagnosis of distributed bearing faults employing vibration analysis. The vibrational patterns generated are modeled by incorporating the geometrical imperfections of the bearing components. Comparing envelope spectra of vibration signals shows that one can distinguish between localized and distributed faults. Furthermore, a diagnostic procedure for the detection of distributed faults is proposed. This is evaluated on several bearings with naturally born distributed faults, which are compared with fault-free bearings and bearings with localized faults. It is shown experimentally that features extracted from vibrations in fault-free, localized and distributed fault conditions form clearly separable clusters, thus enabling diagnosis.
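A standard way to expose such periodicities, and a plausible reading of the envelope spectra mentioned above, is Hilbert-transform envelope analysis. The fault frequency, resonance, and noise level below are assumed for illustration:

```python
import numpy as np
from scipy.signal import hilbert

# Envelope-spectrum sketch: the envelope of a resonance-band vibration signal
# exposes bearing fault periodicities hidden under the carrier frequency.
fs = 20_000                       # sampling rate, Hz
t = np.arange(0, 1.0, 1 / fs)
bpfo = 87.0                       # assumed outer-race fault frequency, Hz

# Fault impacts: a 3 kHz resonance amplitude-modulated at the fault frequency.
carrier = np.sin(2 * np.pi * 3000 * t)
impacts = (1 + np.sin(2 * np.pi * bpfo * t)) * carrier
x = impacts + 0.5 * np.random.randn(t.size)

envelope = np.abs(hilbert(x))
spec = np.abs(np.fft.rfft(envelope - envelope.mean()))
freqs = np.fft.rfftfreq(t.size, 1 / fs)
print("envelope-spectrum peak near", freqs[np.argmax(spec[freqs < 500])], "Hz")
```

For a localized fault the envelope spectrum shows sharp lines at the fault frequency and its harmonics; the distributed faults discussed above smear these lines, which is the distinguishing feature the authors exploit.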
Ding, Ming; Zhu, Qianlong
2016-01-01
Hardware protection and control action are two kinds of low voltage ride-through technical proposals widely used in permanent magnet synchronous generators (PMSGs). This paper proposes an innovative clustering concept for the equivalent modeling of a PMSG-based wind power plant (WPP), in which the impacts of both the chopper protection and the coordinated control of active and reactive powers are taken into account. First, the post-fault DC link voltage is selected as a concentrated expression of unit parameters, incoming wind and electrical distance to a fault point, to reflect the transient characteristics of PMSGs. Next, we provide an effective method for calculating the post-fault DC link voltage based on the pre-fault wind energy and the terminal voltage dip. Third, PMSGs are divided into groups by analyzing the calculated DC link voltages, without any clustering algorithm. Finally, PMSGs of the same group are aggregated into one rescaled PMSG to realize the transient equivalent modeling of the PMSG-based WPP. Using the DIgSILENT PowerFactory simulation platform, the efficiency and accuracy of the proposed equivalent model are tested against the traditional equivalent WPP and the detailed WPP. The simulation results show that the proposed equivalent model can be used to analyze offline electromechanical transients in power systems.
Mapping apparent stress and energy radiation over fault zones of major earthquakes
McGarr, A.; Fletcher, Joe B.
2002-01-01
Using published slip models for five major earthquakes, 1979 Imperial Valley, 1989 Loma Prieta, 1992 Landers, 1994 Northridge, and 1995 Kobe, we produce maps of apparent stress and radiated seismic energy over their fault surfaces. The slip models, obtained by inverting seismic and geodetic data, entail the division of the fault surfaces into many subfaults for which the time histories of seismic slip are determined. To estimate the seismic energy radiated by each subfault, we measure the near-fault seismic-energy flux from the time-dependent slip there and then multiply by a function of rupture velocity to obtain the corresponding energy that propagates into the far-field. This function, the ratio of far-field to near-fault energy, is typically less than 1/3, inasmuch as most of the near-fault energy remains near the fault and is associated with permanent earthquake deformation. Adding the energy contributions from all of the subfaults yields an estimate of the total seismic energy, which can be compared with independent energy estimates based on seismic-energy flux measured in the far-field, often at teleseismic distances. Estimates of seismic energy based on slip models are robust, in that different models, for a given earthquake, yield energy estimates that are in close agreement. Moreover, the slip-model estimates of energy are generally in good accord with independent estimates by others, based on regional or teleseismic data. Apparent stress is estimated for each subfault by dividing the corresponding seismic moment into the radiated energy. Distributions of apparent stress over an earthquake fault zone show considerable heterogeneity, with peak values that are typically about double the whole-earthquake values (based on the ratio of seismic energy to seismic moment). The range of apparent stresses estimated for subfaults of the events studied here is similar to the range of apparent stresses for earthquakes in continental settings, with peak values of about 8 MPa in each case. For earthquakes in compressional tectonic settings, peak apparent stresses at a given depth are substantially greater than corresponding peak values from events in extensional settings; this suggests that crustal strength, inferred from laboratory measurements, may be a limiting factor. Lower bounds on shear stresses inferred from the apparent stress distribution of the 1995 Kobe earthquake are consistent with tectonic-stress estimates reported by Spudich et al. (1998), based partly on slip-vector rake changes.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Martin, M.
2000-04-01
This project is the first evaluation of model-based diagnostics for hydraulic robot systems. A greater understanding of fault detection for hydraulic robots has been gained, and a new theoretical fault detection model has been developed and evaluated.
A Novel Mittag-Leffler Kernel Based Hybrid Fault Diagnosis Method for Wheeled Robot Driving System.
Yuan, Xianfeng; Song, Mumin; Zhou, Fengyu; Chen, Zhumin; Li, Yan
2015-01-01
The wheeled robots have been successfully applied in many aspects, such as industrial handling vehicles and wheeled service robots. To improve the safety and reliability of wheeled robots, this paper presents a novel hybrid fault diagnosis framework based on Mittag-Leffler kernel (ML-kernel) support vector machine (SVM) and Dempster-Shafer (D-S) fusion. Using sensor data sampled under different running conditions, the proposed approach initially establishes multiple principal component analysis (PCA) models for fault feature extraction. The fault feature vectors are then applied to train the probabilistic SVM (PSVM) classifiers that arrive at a preliminary fault diagnosis. To improve the accuracy of the preliminary results, a novel ML-kernel based PSVM classifier is proposed in this paper, and the positive definiteness of the ML-kernel is proved as well. The basic probability assignments (BPAs) are defined based on the preliminary fault diagnosis results and their confidence values. Eventually, the final fault diagnosis result is achieved by the fusion of the BPAs. Experimental results show that the proposed framework not only is capable of detecting and identifying the faults in the robot driving system, but also has better performance in stability and diagnosis accuracy compared with the traditional methods.
Sliding Mode Fault Tolerant Control with Adaptive Diagnosis for Aircraft Engines
NASA Astrophysics Data System (ADS)
Xiao, Lingfei; Du, Yanbin; Hu, Jixiang; Jiang, Bin
2018-03-01
In this paper, a novel sliding mode fault tolerant control method is presented for aircraft engine systems with uncertainties and disturbances, on the basis of an adaptive diagnostic observer. Taking both sensor faults and actuator faults into account, a general model of aircraft engine control systems subject to uncertainties and disturbances is considered. The corresponding augmented dynamic model is then established in order to facilitate the fault diagnosis and fault tolerant controller design. Next, a suitable detection observer is designed to detect the faults effectively. Through an adaptive diagnostic observer and based on a sliding mode strategy, the sliding mode fault tolerant controller is constructed. Robust stabilization is discussed and the closed-loop system can be stabilized robustly. It is also proven that the adaptive diagnostic observer output errors and the fault estimates converge exponentially to a set, with a convergence rate greater than a value that can be adjusted by choosing design parameters properly. Simulation on a twin-shaft aircraft engine verifies the applicability of the proposed fault tolerant control method.
Modelling induced seismicity due to fluid injection
NASA Astrophysics Data System (ADS)
Murphy, S.; O'Brien, G. S.; Bean, C. J.; McCloskey, J.; Nalbant, S. S.
2011-12-01
Injection of fluid into the subsurface alters the stress in the crust and can induce earthquakes. The science of assessing the risk of induced seismicity from such ventures is still in its infancy, despite public concern. We plan to use a fault network model in which stress perturbations due to fluid injection induce earthquakes. We will use this model to investigate the role different operational and geological factors play in increasing seismicity in a fault system due to fluid injection. The model is based on a quasi-dynamic relationship between stress and slip coupled with a rate- and state-dependent friction law. This allows us to model slip on fault interfaces over long periods of time (i.e., years to hundreds of years). With the use of the rate and state friction law, the nature of stress release during slipping can be altered through variation of the frictional parameters. Both seismic and aseismic slip can therefore be simulated. In order to add heterogeneity along the fault plane, a fractal variation in the frictional parameters is used. Fluid injection is simulated using the lattice Boltzmann method, whereby pore pressure diffuses throughout a permeable layer from the point of injection. The stress perturbation this causes on the surrounding fault system is calculated using a quasi-static solution for slip dislocation in an elastic half space. From this model we can generate slip histories and seismicity catalogues covering hundreds of years for predefined fault networks near fluid injection sites. Given that rupture is a highly non-linear process, comparison between models with different input parameters (e.g. fault network statistics and injection rates) will be based on system-wide features (such as the Gutenberg-Richter b-values), rather than specific seismic events. Our ultimate aim is that our model produces seismic catalogues similar to those observed over real injection sites. Such validation would pave the way to probabilistic estimation of reactivation risk for injection sites using such models. Preliminary results from this model will be presented.
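The triggering side of such a model can be caricatured in a few lines: pressure diffusing from a constant-rate point source, followed by an effective-stress Coulomb check on fault patches. This replaces the abstract's lattice Boltzmann and rate-and-state machinery with the simplest possible stand-ins; all scalings and values are schematic:

```python
import numpy as np
from scipy.special import erfc

D = 0.1                      # hydraulic diffusivity, m^2/s (assumed)
S = 5e8                      # lumped source strength, Pa·m (schematic)

def delta_p(r, t):
    """Pressure change (Pa) at distance r (m), time t (s) after injection starts,
    for a constant-rate point source in a homogeneous diffusive medium."""
    return (S / np.maximum(r, 1.0)) * erfc(r / (2.0 * np.sqrt(D * t)))

f, tau, sigma_n = 0.6, 28e6, 50e6        # friction, shear and normal stress (Pa)
r_patches = np.linspace(10.0, 2000.0, 50)

for t in (86400.0, 30 * 86400.0, 365 * 86400.0):   # 1 day, 1 month, 1 year
    p = delta_p(r_patches, t)
    critical = tau > f * (sigma_n - p)             # effective-stress Coulomb law
    print(f"after {t / 86400:5.0f} days: {critical.sum():2d} patches critical")
```

The expanding pressure front brings progressively more distant patches to failure, which is the basic mechanism behind the growth of induced-seismicity clouds around injection points.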
A method for diagnosing time dependent faults using model-based reasoning systems
NASA Technical Reports Server (NTRS)
Goodrich, Charles H.
1995-01-01
This paper explores techniques to apply model-based reasoning to equipment and systems which exhibit dynamic behavior (that which changes as a function of time). The model-based system of interest is KATE-C (Knowledge based Autonomous Test Engineer) which is a C++ based system designed to perform monitoring and diagnosis of Space Shuttle electro-mechanical systems. Methods of model-based monitoring and diagnosis are well known and have been thoroughly explored by others. A short example is given which illustrates the principle of model-based reasoning and reveals some limitations of static, non-time-dependent simulation. This example is then extended to demonstrate representation of time-dependent behavior and testing of fault hypotheses in that environment.
Abbaspour, Alireza; Aboutalebi, Payam; Yen, Kang K; Sargolzaei, Arman
2017-03-01
A new online detection strategy is developed to detect faults in sensors and actuators of unmanned aerial vehicle (UAV) systems. In this design, the weighting parameters of the Neural Network (NN) are updated by using the Extended Kalman Filter (EKF). Online adaptation of these weighting parameters helps to detect abrupt, intermittent, and incipient faults accurately. We apply the proposed fault detection system to a nonlinear dynamic model of the WVU YF-22 unmanned aircraft for its evaluation. The simulation results show that the new method has better performance in comparison with conventional recurrent neural network-based fault detection strategies. Copyright © 2016 ISA. Published by Elsevier Ltd. All rights reserved.
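For a linear-in-parameters model, the weight-update idea reduces to a plain Kalman filter; for a real neural network, h below would be the Jacobian of the network output with respect to the weights, giving the EKF. All dimensions and noise levels are assumed:

```python
import numpy as np

# Kalman-filter weight adaptation, the degenerate (linear) case of the
# EKF-trained network described above.
rng = np.random.default_rng(4)
n_w = 5
w = np.zeros(n_w)                  # weight estimate
P = np.eye(n_w) * 10.0             # weight covariance
Q, Rm = 1e-6 * np.eye(n_w), 0.05   # process / measurement noise (assumed)

w_true = rng.standard_normal(n_w)
for k in range(500):
    h = rng.standard_normal(n_w)             # regressor / output Jacobian
    y = h @ w_true + np.sqrt(Rm) * rng.standard_normal()
    P = P + Q                                # predict
    K = P @ h / (h @ P @ h + Rm)             # Kalman gain
    w = w + K * (y - h @ w)                  # update on the innovation
    P = P - np.outer(K, h) @ P
print("weight error:", np.linalg.norm(w - w_true))
```

Because the covariance P keeps adapting, the same update tracks abrupt, intermittent, and slowly drifting changes, which is why the authors favor it over fixed-gain recurrent-network training for fault detection.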
Design of on-board Bluetooth wireless network system based on fault-tolerant technology
NASA Astrophysics Data System (ADS)
You, Zheng; Zhang, Xiangqi; Yu, Shijie; Tian, Hexiang
2007-11-01
In this paper, the Bluetooth wireless data transmission technology is applied in an on-board computer system to realize wireless data transmission between peripherals of the micro-satellite integrated electronic system. In view of the high reliability demanded of a micro-satellite, a design of a Bluetooth wireless network based on fault-tolerant technology is introduced. The reliability of two fault-tolerant systems is first estimated using a Markov model, and then the structural design of the fault-tolerant system is introduced. Several protocols are established to make the system operate correctly, and some related problems are listed and analyzed, with emphasis on the fault auto-diagnosis system, the active-standby switch design, and the data-integrity process.
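For the Markov reliability estimate, the classic closed form for an active-standby pair gives a feel for the numbers; the failure rate and switch success probability below are illustrative, not the paper's values:

```python
import numpy as np

# Markov-model reliability of an active-standby pair: with failure rate lam
# per unit and ideal switching, R(t) = exp(-lam*t) * (1 + lam*t); an imperfect
# switch that succeeds with probability c degrades this to
# R(t) = exp(-lam*t) * (1 + c*lam*t).
lam = 1e-4          # failures per hour (assumed)
c = 0.95            # switch success probability (assumed)
t = 8760.0          # one year of operation, hours

r_single = np.exp(-lam * t)
r_standby = np.exp(-lam * t) * (1 + c * lam * t)
print(f"single unit : {r_single:.4f}")
print(f"with standby: {r_standby:.4f}")
```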
A Hybrid Feature Model and Deep-Learning-Based Bearing Fault Diagnosis
Sohaib, Muhammad; Kim, Cheol-Hong; Kim, Jong-Myon
2017-01-01
Bearing fault diagnosis is imperative for the maintenance, reliability, and durability of rotary machines. It can reduce economical losses by eliminating unexpected downtime in industry due to failure of rotary machines. Though widely investigated in the past couple of decades, continued advancement is still desirable to improve upon existing fault diagnosis techniques. Vibration acceleration signals collected from machine bearings exhibit nonstationary behavior due to variable working conditions and multiple fault severities. In the current work, a two-layered bearing fault diagnosis scheme is proposed for the identification of fault pattern and crack size for a given fault type. A hybrid feature pool is used in combination with sparse stacked autoencoder (SAE)-based deep neural networks (DNNs) to perform effective diagnosis of bearing faults of multiple severities. The hybrid feature pool can extract more discriminating information from the raw vibration signals, to overcome the nonstationary behavior of the signals caused by multiple crack sizes. More discriminating information helps the subsequent classifier to effectively classify data into the respective classes. The results indicate that the proposed scheme provides satisfactory performance in diagnosing bearing defects of multiple severities. Moreover, the results also demonstrate that the proposed model outperforms other state-of-the-art algorithms, i.e., support vector machines (SVMs) and backpropagation neural networks (BPNNs). PMID:29232908
NASA Astrophysics Data System (ADS)
Yoshimi, M.; Matsushima, S.; Ando, R.; Miyake, H.; Imanishi, K.; Hayashida, T.; Takenaka, H.; Suzuki, H.; Matsuyama, H.
2017-12-01
We conducted strong ground motion prediction for the active Beppu-Haneyama Fault zone (BHFZ), Kyushu island, southwestern Japan. Since the BHFZ runs through Oita and Beppu cities, strong ground motion as well as fault displacement may severely affect the cities. We constructed a 3-dimensional velocity structure of the sedimentary basin, the Beppu bay basin, where the fault zone runs and where Oita and Beppu cities are located. The minimum shear wave velocity of the 3-D model is 500 m/s. An additional 1-D structure is modeled for sites with softer sediment: the Holocene plain area. We observed, collected, and compiled data obtained from microtremor surveys, ground motion observations, boreholes, etc., including phase velocity and H/V ratio data. The finer structure of the Oita Plain is modeled as a 250 m mesh model, with an empirical relation among N-value, lithology, depth and Vs, using borehole data, and then validated with the phase velocity data obtained by the dense microtremor array observation (Yoshimi et al., 2016). Synthetic ground motion has been calculated with a hybrid technique composed of a stochastic Green's function method (for high-frequency waves), a 3D finite-difference method (for low-frequency waves), and a 1D amplification calculation. Fault geometry has been determined based on reflection surveys and active fault maps. The rake angles are calculated with a dynamic rupture simulation considering three fault segments under a stress field estimated from source mechanisms of earthquakes around the faults (Ando et al., JpGU-AGU2017). Fault parameters such as the average stress drop, the size of asperities, etc., are determined based on the empirical relations proposed by Irikura and Miyake (2001). As a result, strong ground motion exceeding 100 cm/s is predicted on the hanging wall side of the Oita plain. This work is supported by the Comprehensive Research on the Beppu-Haneyama Fault Zone funded by the Ministry of Education, Culture, Sports, Science, and Technology (MEXT), Japan.
NASA Astrophysics Data System (ADS)
Wollherr, Stephanie; Gabriel, Alice-Agnes; Uphoff, Carsten
2018-05-01
The dynamics and potential size of earthquakes depend crucially on rupture transfers between adjacent fault segments. To accurately describe earthquake source dynamics, numerical models can account for realistic fault geometries and rheologies such as nonlinear inelastic processes off the slip interface. We present implementation, verification, and application of off-fault Drucker-Prager plasticity in the open source software SeisSol (www.seissol.org). SeisSol is based on an arbitrary high-order derivative modal Discontinuous Galerkin (ADER-DG) method using unstructured, tetrahedral meshes specifically suited for complex geometries. Two implementation approaches are detailed, modelling plastic failure either employing sub-elemental quadrature points or switching to nodal basis coefficients. At fine fault discretizations the nodal basis approach is up to 6 times more efficient in terms of computational costs while yielding comparable accuracy. Both methods are verified in community benchmark problems and by three dimensional numerical h- and p-refinement studies with heterogeneous initial stresses. We observe no spectral convergence for on-fault quantities with respect to a given reference solution, but rather discuss a limitation to low-order convergence for heterogeneous 3D dynamic rupture problems. For simulations including plasticity, a high fault resolution may be less crucial than commonly assumed, due to the regularization of peak slip rate and an increase of the minimum cohesive zone width. In large-scale dynamic rupture simulations based on the 1992 Landers earthquake, we observe high rupture complexity including reverse slip, direct branching, and dynamic triggering. The spatio-temporal distribution of rupture transfers are altered distinctively by plastic energy absorption, correlated with locations of geometrical fault complexity. Computational cost increases by 7% when accounting for off-fault plasticity in the demonstrating application. Our results imply that the combination of fully 3D dynamic modelling, complex fault geometries, and off-fault plastic yielding is important to realistically capture dynamic rupture transfers in natural fault systems.
Robust Fault Detection and Isolation for Stochastic Systems
NASA Technical Reports Server (NTRS)
George, Jemin; Gregory, Irene M.
2010-01-01
This paper outlines the formulation of a robust fault detection and isolation scheme that can precisely detect and isolate simultaneous actuator and sensor faults for uncertain linear stochastic systems. The given robust fault detection scheme based on the discontinuous robust observer approach would be able to distinguish between model uncertainties and actuator failures and therefore eliminate the problem of false alarms. Since the proposed approach involves precise reconstruction of sensor faults, it can also be used for sensor fault identification and the reconstruction of true outputs from faulty sensor outputs. Simulation results presented here validate the effectiveness of the robust fault detection and isolation system.
NASA Astrophysics Data System (ADS)
Meng, L.; Zhou, L.; Liu, J.
2013-12-01
The April 20, 2013 Ms 7.0 earthquake in Lushan city, Sichuan province of China occurred as the result of east-west oriented reverse-type motion on a north-south striking fault. The source location suggests the event occurred on the southern part of the Longmenshan fault at a depth of 13 km. The Lushan earthquake caused great loss of property and 196 deaths. The maximum intensity is up to VIII to IX at Baoxing and Lushan city, which are located in the meizoseismal area. In this study, we first analyzed the dynamic source process, calculated source spectral parameters, and estimated the strong ground motion in the near-fault field based on Brune's circular source model. A dynamical composite source model (DCSM) was developed further to simulate the near-fault strong ground motion with associated fault rupture properties at Baoxing and Lushan city, respectively. The results indicate frictional undershoot behavior in the dynamic source process of the Lushan earthquake, which differs from the overshoot activity of the Wenchuan earthquake. Based on the simulated near-fault strong ground motion, we describe the intensity distribution of the Lushan earthquake field. The simulated intensity indicates that the maximum intensity value is IX, and the region with intensity VII and above covers almost 16,000 km², which is consistent with the observed intensity published online by the China Earthquake Administration (CEA) on April 25. Moreover, the estimation methods based on empirical relationships and the numerical modeling developed in this study have broad application in strong ground motion prediction and intensity estimation, both for earthquake rescue purposes and for understanding earthquake source processes. Keywords: Lushan Ms 7.0 earthquake; near-fault strong ground motion; DCSM; simulated intensity
Zhou, Shenghan; Qian, Silin; Chang, Wenbing; Xiao, Yiyong; Cheng, Yang
2018-06-14
Timely and accurate state detection and fault diagnosis of rolling element bearings are very critical to ensuring the reliability of rotating machinery. This paper proposes a novel method of rolling bearing fault diagnosis based on a combination of ensemble empirical mode decomposition (EEMD), weighted permutation entropy (WPE) and an improved support vector machine (SVM) ensemble classifier. A hybrid voting (HV) strategy that combines SVM-based classifiers and cloud similarity measurement (CSM) was employed to improve the classification accuracy. First, the WPE value of the bearing vibration signal was calculated to detect the fault. Secondly, if a bearing fault occurred, the vibration signal was decomposed into a set of intrinsic mode functions (IMFs) by EEMD. The WPE values of the first several IMFs were calculated to form the fault feature vectors. Then, the SVM ensemble classifier was composed of binary SVM and the HV strategy to identify the bearing multi-fault types. Finally, the proposed model was fully evaluated by experiments and comparative studies. The results demonstrate that the proposed method can effectively detect bearing faults and maintain a high accuracy rate of fault recognition when a small number of training samples are available.
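The WPE feature itself is easy to state precisely; a minimal implementation of one common weighting variant (weights from embedding-vector variance) is sketched below, with the embedding order and test signals chosen for illustration:

```python
import math
import numpy as np

def weighted_permutation_entropy(x, m=4, tau=1):
    """Weighted permutation entropy: ordinal patterns of order m, each pattern
    weighted by the variance of its embedding vector; normalised to [0, 1]."""
    n = len(x) - (m - 1) * tau
    weights = {}
    for i in range(n):
        v = x[i:i + (m - 1) * tau + 1:tau]
        key = tuple(np.argsort(v))                  # ordinal pattern
        weights[key] = weights.get(key, 0.0) + np.var(v)
    p = np.array(list(weights.values()))
    p /= p.sum()
    return float(-(p * np.log(p)).sum() / math.log(math.factorial(m)))

rng = np.random.default_rng(5)
print("noise:", weighted_permutation_entropy(rng.standard_normal(4096)))    # high
print("tone :", weighted_permutation_entropy(np.sin(0.1 * np.arange(4096))))  # low
```

Broadband (fault-impulsive) vibration visits many ordinal patterns and scores high, while regular rotation scores low, which is why the WPE of the raw signal and of the EEMD-derived IMFs works as a fault feature.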
Automatic Detection of Electric Power Troubles (ADEPT)
NASA Technical Reports Server (NTRS)
Wang, Caroline; Zeanah, Hugh; Anderson, Audie; Patrick, Clint; Brady, Mike; Ford, Donnie
1988-01-01
ADEPT is an expert system that integrates knowledge from three different suppliers to offer an advanced fault-detection system, and is designed for two modes of operation: real-time fault isolation and simulated modeling. Real time fault isolation of components is accomplished on a power system breadboard through the Fault Isolation Expert System (FIES II) interface with a rule system developed in-house. Faults are quickly detected and displayed and the rules and chain of reasoning optionally provided on a Laser printer. This system consists of a simulated Space Station power module using direct-current power supplies for Solar arrays on three power busses. For tests of the system's ability to locate faults inserted via switches, loads are configured by an INTEL microcomputer and the Symbolics artificial intelligence development system. As these loads are resistive in nature, Ohm's Law is used as the basis for rules by which faults are located. The three-bus system can correct faults automatically where there is a surplus of power available on any of the three busses. Techniques developed and used can be applied readily to other control systems requiring rapid intelligent decisions. Simulated modelling, used for theoretical studies, is implemented using a modified version of Kennedy Space Center's KATE (Knowledge-Based Automatic Test Equipment), FIES II windowing, and an ADEPT knowledge base. A load scheduler and a fault recovery system are currently under development to support both modes of operation.
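The Ohm's-law rule basis reduces to a one-line check per branch; the bus voltage, load table, and tolerance below are invented for illustration:

```python
# Each resistive load should draw V/R; a measured current far from that
# value flags a fault on its branch, mirroring the rule base described above.
BUS_VOLTAGE = 120.0
loads = {            # name: (rated resistance in ohms, measured current in A)
    "heater": (60.0, 2.01),
    "pump":   (30.0, 3.98),
    "lamp":   (240.0, 0.02),     # expected 0.5 A: open-circuit suspect
}

for name, (resistance, measured) in loads.items():
    expected = BUS_VOLTAGE / resistance
    if abs(measured - expected) > 0.1 * expected:
        print(f"fault suspected on {name}: expected {expected:.2f} A, "
              f"measured {measured:.2f} A")
```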
Qualitative Fault Isolation of Hybrid Systems: A Structural Model Decomposition-Based Approach
NASA Technical Reports Server (NTRS)
Bregon, Anibal; Daigle, Matthew; Roychoudhury, Indranil
2016-01-01
Quick and robust fault diagnosis is critical to ensuring safe operation of complex engineering systems. A large number of techniques are available to provide fault diagnosis in systems with continuous dynamics. However, many systems in aerospace and industrial environments are best represented as hybrid systems that consist of discrete behavioral modes, each with its own continuous dynamics. These hybrid dynamics make the on-line fault diagnosis task computationally more complex due to the large number of possible system modes and the existence of autonomous mode transitions. This paper presents a qualitative fault isolation framework for hybrid systems based on structural model decomposition. The fault isolation is performed by analyzing the qualitative information of the residual deviations. However, in hybrid systems this process becomes complex due to the possible existence of observation delays, which can cause observed deviations to be inconsistent with the expected deviations for the current mode of the system. The great advantage of structural model decomposition is that (i) it allows residuals to be designed so that each responds to only a subset of the faults, and (ii) every time a mode change occurs, only a subset of the residuals needs to be reconfigured, thus reducing the complexity of the reasoning process for isolation purposes. To demonstrate and test the validity of our approach, we use an electric circuit simulation as the case study.
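A toy version of the signature-matching step, with invented faults and two residuals; note how a residual still at nominal remains consistent with a fault whose deviation may simply not have been observed yet (the delay issue raised above).

```python
# Qualitative fault isolation against a fault signature matrix (illustrative).
signatures = {                      # expected deviation of residuals (r1, r2):
    "valve_stuck": ("+", "0"),      # '+' high, '-' low, '0' no deviation
    "sensor_bias": ("+", "+"),
    "leak":        ("-", "+"),
}

def consistent_faults(observed):
    # A fault stays in the candidate set if every *deviated* residual matches its
    # signature; '0' observations are kept consistent (deviation may be delayed).
    return [f for f, sig in signatures.items()
            if all(o in ("0", s) for o, s in zip(observed, sig))]

print(consistent_faults(("+", "0")))   # -> ['valve_stuck', 'sensor_bias']
```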
NASA Astrophysics Data System (ADS)
Graymer, R. W.; Simpson, R. W.
2014-12-01
Graymer and Simpson (2013, AGU Fall Meeting) showed that in a simple 2D multi-fault system (vertical, parallel, strike-slip faults bounding blocks without strong material property contrasts) slip rate on block-bounding faults can be reasonably estimated by the difference between the mean velocity of adjacent blocks if the ratio of the effective locking depth to the distance between the faults is 1/3 or less ("effective" locking depth is a synthetic parameter taking into account actual locking depth, fault creep, and material properties of the fault zone). To check the validity of that observation for a more complex 3D fault system and a realistic distribution of observation stations, we developed a synthetic suite of GPS velocities from a dislocation model, with station location and fault parameters based on the San Francisco Bay region. Initial results show that if the effective locking depth is set at the base of the seismogenic zone (about 12-15 km), about 1/2 the interfault distance, the resulting synthetic velocity observations, when clustered, do a poor job of returning the input fault slip rates. However, if the apparent locking depth is set at 1/2 the distance to the base of the seismogenic zone, or about 1/4 the interfault distance, the synthetic velocity field does a good job of returning the input slip rates except where the fault is in a strong restraining orientation relative to block motion or where block velocity is not well defined (for example west of the northern San Andreas Fault where there are no observations to the west in the ocean). The question remains as to where in the real world a low effective locking depth could usefully model fault behavior. Further tests are planned to define the conditions where average cluster-defined block velocities can be used to reliably estimate slip rates on block-bounding faults. These rates are an important ingredient in earthquake hazard estimation, and another tool to provide them should be useful.
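The 2-D test can be caricatured in a few lines: two parallel strike-slip faults follow the classical arctan interseismic profile (Savage and Burford, 1973), and we check how the difference of adjacent block-mean velocities recovers the input slip rate as the locking-depth-to-spacing ratio grows. This is a deliberately minimal stand-in for the 3-D dislocation suite described above, with invented numbers.

```python
# Toy 2-D check of slip-rate recovery from block-mean velocities, assuming the
# Savage-Burford arctan profile for each fault; parameters are illustrative.
import numpy as np

def velocity(x, W, D, s=1.0):
    """Fault-parallel velocity from faults at x=0 and x=W, slip rate s, locking depth D."""
    return (s / np.pi) * (np.arctan(x / D) + np.arctan((x - W) / D))

W = 30.0                                     # fault spacing, km
for ratio in (1/6, 1/3, 1/2):                # effective locking depth / spacing
    D = ratio * W
    left = velocity(np.linspace(-W, 0.0, 500), W, D).mean()
    right = velocity(np.linspace(0.0, W, 500), W, D).mean()
    print(f"D/W = {ratio:.2f}: recovered slip rate = {right - left:.2f} (input 1.00)")
# Recovery degrades as D/W grows, echoing the ~1/3 threshold noted above.
```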
Shi, Xiaojie; Wang, Zhiqiang; Liu, Bo; ...
2014-05-16
This paper presents the analysis and control of a modular multilevel converter (MMC)-based HVDC transmission system under three possible single-line-to-ground fault conditions, with special focus on the investigation of their different fault characteristics. Considering positive-, negative-, and zero-sequence components in both arm voltages and currents, the generalized instantaneous power of a phase unit is derived theoretically according to the equivalent circuit model of the MMC under unbalanced conditions. Based on this model, a novel double-line-frequency dc-voltage ripple suppression control is proposed. This controller, together with the negative- and zero-sequence current control, could enhance the overall fault-tolerant capability of the HVDC system without additional cost. To further improve the fault-tolerant capability, the operation performance of the HVDC system with and without single-phase switching is discussed and compared in detail. Lastly, simulation results from a three-phase MMC-HVDC system generated with MATLAB/Simulink are provided to support the theoretical analysis and proposed control schemes.
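The unbalanced-fault analysis rests on sequence decomposition, which is compact enough to sketch; the phasor values below are made up to mimic a single-line-to-ground fault on phase A.

```python
# Fortescue (symmetrical-component) decomposition of three-phase phasors.
import numpy as np

a = np.exp(2j * np.pi / 3)                       # 120-degree rotation operator
T = np.array([[1, 1,    1   ],
              [1, a,    a**2],
              [1, a**2, a   ]]) / 3

def sequence_components(Va, Vb, Vc):
    """Return (zero, positive, negative) sequence phasors."""
    return T @ np.array([Va, Vb, Vc])

# A depressed phase-A voltage, as in a single-line-to-ground fault:
V0, V1, V2 = sequence_components(0.2, np.exp(-2j*np.pi/3), np.exp(2j*np.pi/3))
print(abs(V0), abs(V1), abs(V2))   # nonzero zero/negative sequences reveal the unbalance
```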
NASA Astrophysics Data System (ADS)
Sun, Y.; Luo, G.
2017-12-01
Seismicity in a region is usually characterized by earthquake clusters and earthquake migration along its major fault zones. However, we do not fully understand why and how earthquake clusters and the spatio-temporal migration of earthquakes occur. The northeastern Tibetan Plateau is a good example for investigating these problems. In this study, we construct and use a three-dimensional viscoelastoplastic finite-element model to simulate earthquake cycles and the spatio-temporal migration of earthquakes along major fault zones in the northeastern Tibetan Plateau. We calculate stress evolution and fault interactions, and explore the effects of topographic loading and the viscosity of the middle-lower crust and upper mantle on model results. Model results show that earthquakes and fault interactions increase Coulomb stress on neighboring faults or segments, accelerating future earthquakes in this region. Thus, earthquakes occur sequentially in a short time, leading to regional earthquake clusters. Through long-term evolution, stresses on some seismogenic faults that are far apart may almost simultaneously reach the critical state of fault failure, probably also leading to regional earthquake clusters and earthquake migration. Based on our model's synthetic seismic catalog and paleoseismic data, we analyze the probability of earthquake migration between major faults in the northeastern Tibetan Plateau. We find that following the 1920 M 8.5 Haiyuan earthquake and the 1927 M 8.0 Gulang earthquake, the next big event (M≥7) in the northeastern Tibetan Plateau would be most likely to occur on the Haiyuan fault.
Modeling and characterization of partially inserted electrical connector faults
NASA Astrophysics Data System (ADS)
Tokgöz, Çağatay; Dardona, Sameh; Soldner, Nicholas C.; Wheeler, Kevin R.
2016-03-01
Faults within electrical connectors are prominent in avionics systems due to improper installation, corrosion, aging, and strained harnesses. These faults usually start off as undetectable with existing inspection techniques and increase in magnitude during the component lifetime. Detection and modeling of these faults are significantly more challenging than hard failures such as open and short circuits. Hence, enabling the capability to locate and characterize the precursors of these faults is critical for timely preventive maintenance and mitigation well before hard failures occur. In this paper, an electrical connector model based on a two-level nonlinear least squares approach is proposed. The connector is first characterized as a transmission line, broken into key components such as the pin, socket, and connector halves. Then, the fact that the resonance frequencies of the connector shift as insertion depth changes from a fully inserted to a barely touching contact is exploited. The model precisely captures these shifts by varying only two length parameters. It is demonstrated that the model accurately characterizes a partially inserted connector.
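As a hedged, one-parameter caricature of the resonance-shift idea (the paper fits two length parameters in a two-level nonlinear least-squares model), one can treat the mated connector as a transmission-line stub and recover its electrical length from shifted resonances:

```python
# Fit an electrical length from resonance frequencies f_n = n*c / (2*L*sqrt(eps_r));
# eps_r and all data are assumed values for illustration.
import numpy as np
from scipy.optimize import least_squares

C0, EPS_R = 299792458.0, 2.1

def resonances(L, n=np.arange(1, 6)):
    return n * C0 / (2.0 * L * np.sqrt(EPS_R))

rng = np.random.default_rng(0)
measured = resonances(0.153) * (1 + 1e-3 * rng.standard_normal(5))  # synthetic, L = 153 mm

fit = least_squares(lambda p: resonances(p[0]) - measured, x0=[0.10])
print(f"recovered electrical length: {fit.x[0] * 1e3:.1f} mm")
# A partially inserted pin changes the effective length slightly, shifting every f_n.
```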
Kinematics of shallow backthrusts in the Seattle fault zone, Washington State
Pratt, Thomas L.; Troost, K.G.; Odum, Jackson K.; Stephenson, William J.
2015-01-01
Near-surface thrust fault splays and antithetic backthrusts at the tips of major thrust fault systems can distribute slip across multiple shallow fault strands, complicating earthquake hazard analyses based on studies of surface faulting. The shallow expression of the fault strands forming the Seattle fault zone of Washington State shows the structural relationships and interactions between such fault strands. Paleoseismic studies document an ∼7000 yr history of earthquakes on multiple faults within the Seattle fault zone, with some backthrusts inferred to rupture in small (M ∼5.5–6.0) earthquakes at times other than during earthquakes on the main thrust faults. We interpret seismic-reflection profiles to show three main thrust faults, one of which is a blind thrust fault directly beneath downtown Seattle, and four small backthrusts within the Seattle fault zone. We then model fault slip, constrained by shallow deformation, to show that the Seattle fault forms a fault propagation fold rather than the alternatively proposed roof thrust system. Fault slip modeling shows that back-thrust ruptures driven by moderate (M ∼6.5–6.7) earthquakes on the main thrust faults are consistent with the paleoseismic data. The results indicate that paleoseismic data from the back-thrust ruptures reveal the times of moderate earthquakes on the main fault system, rather than indicating smaller (M ∼5.5–6.0) earthquakes involving only the backthrusts. Estimates of cumulative shortening during known Seattle fault zone earthquakes support the inference that the Seattle fault has been the major seismic hazard in the northern Cascadia forearc in the late Holocene.
Phase response curves for models of earthquake fault dynamics
DOE Office of Scientific and Technical Information (OSTI.GOV)
Franović, Igor, E-mail: franovic@ipb.ac.rs; Kostić, Srdjan; Perc, Matjaž
We systematically study the effects of external perturbations on models describing earthquake fault dynamics. The latter are based on the framework of the Burridge-Knopoff spring-block system, including the cases of a simple mono-block fault, as well as the paradigmatic complex faults made up of two identical or distinct blocks. The blocks exhibit relaxation oscillations, which are representative of the stick-slip behavior typical of earthquake dynamics. Our analysis is carried out by determining the phase response curves of first and second order. For a mono-block fault, we consider the impact of a single and two successive pulse perturbations, further demonstrating how the profile of phase response curves depends on the fault parameters. For a homogeneous two-block fault, our focus is on the scenario where each of the blocks is influenced by a single pulse, whereas for heterogeneous faults, we analyze how the response of the system depends on whether the stimulus is applied to the block having a shorter or a longer oscillation period.
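A single-block Burridge-Knopoff loop shows the relaxation oscillations in question; friction levels, stiffness, and rates below are dimensionless illustrations, and the phase-response machinery itself is omitted.

```python
# Minimal mono-block spring-slider with static/dynamic friction (stick-slip cycle).
import numpy as np

k, m, v_plate = 1.0, 1.0, 1e-2       # spring stiffness, block mass, driver velocity
F_static, F_dynamic = 1.0, 0.5       # stick and slip friction levels
dt, x, v, t, load_point, events = 1e-2, 0.0, 0.0, 0.0, 0.0, []

for _ in range(200_000):
    t += dt
    load_point += v_plate * dt
    spring = k * (load_point - x)
    if v == 0.0 and spring <= F_static:
        continue                                  # stuck: purely elastic loading
    v += dt * (spring - F_dynamic) / m            # sliding: dynamic friction resists
    x += v * dt
    if v <= 0.0:                                  # block re-sticks; one "earthquake"
        v = 0.0
        events.append(t)

print(f"{len(events)} slip events, mean recurrence {np.mean(np.diff(events)):.1f}")
```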
Renewal models and coseismic stress transfer in the Corinth Gulf, Greece, fault system
NASA Astrophysics Data System (ADS)
Console, Rodolfo; Falcone, Giuseppe; Karakostas, Vassilis; Murru, Maura; Papadimitriou, Eleftheria; Rhoades, David
2013-07-01
We model interevent times and Coulomb static stress transfer on the rupture segments along the Corinth Gulf extension zone, a region with a wealth of observations on strong-earthquake recurrence behavior. From the available information on past seismic activity, we have identified eight segments without significant overlapping that are aligned along the southern boundary of the Corinth rift. We aim to test whether strong earthquakes on these segments are characterized by some kind of time-predictable behavior, rather than by complete randomness. The rationale for time-predictable behavior is based on the characteristic earthquake hypothesis, the necessary ingredients of which are a known faulting geometry and slip rate. The tectonic loading rate is characterized by slip of 6 mm/yr on the westernmost fault segment, diminishing to 4 mm/yr on the easternmost segment, based on the most reliable geodetic data. In this study, we employ statistical and physical modeling to account for stress transfer among these fault segments. The statistical modeling is based on the definition of a probability density distribution of the interevent times for each segment. Both the Brownian Passage-Time (BPT) and Weibull distributions are tested. The time-dependent hazard rate thus obtained is then modified by the inclusion of a permanent physical effect due to the Coulomb static stress change caused by failure of neighboring faults since the latest characteristic earthquake on the fault of interest. The validity of the renewal model is assessed retrospectively, using the data of the last 300 years, by comparison with a plain time-independent Poisson model, by means of statistical tools including the Relative Operating Characteristic diagram, the R-score, the probability gain and the log-likelihood ratio. We treat the uncertainties in the parameters of each examined fault source, such as linear dimensions, depth of the fault center, focal mechanism, recurrence time, coseismic slip, and aperiodicity of the statistical distribution, by a Monte Carlo technique. The Monte Carlo samples for all these parameters are drawn from a uniform distribution within their uncertainty limits. We find that the BPT and Weibull renewal models yield comparable results, and both perform significantly better than the Poisson hypothesis. No clear performance enhancement is achieved by the introduction of the Coulomb static stress change into the renewal model.
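The renewal step is easy to sketch: the conditional probability of rupture in the next dT years given elapsed time T under a BPT distribution. Mean recurrence and aperiodicity below are illustrative, not the segment values used in the study.

```python
# Conditional rupture probability under a Brownian Passage-Time (BPT) renewal model.
import numpy as np

def bpt_pdf(t, mu, alpha):
    """BPT density with mean recurrence mu and aperiodicity alpha."""
    return np.sqrt(mu / (2*np.pi * alpha**2 * t**3)) * \
           np.exp(-(t - mu)**2 / (2.0 * mu * alpha**2 * t))

def conditional_prob(T, dT, mu, alpha, n=200_000):
    t = np.linspace(1e-6, T + dT, n)
    cdf = np.cumsum(bpt_pdf(t, mu, alpha)) * (t[1] - t[0])   # crude numerical CDF
    F = lambda x: np.interp(x, t, cdf)
    return (F(T + dT) - F(T)) / (1.0 - F(T))

print(conditional_prob(T=250.0, dT=30.0, mu=300.0, alpha=0.5))
# A Coulomb stress step from a neighboring rupture can be folded in as a clock
# advance: replace T by T + delta_CFS / (tectonic stressing rate).
```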
NASA Astrophysics Data System (ADS)
Bi, Haiyun; Zheng, Wenjun; Ge, Weipeng; Zhang, Peizhen; Zeng, Jiangyuan; Yu, Jingxing
2018-03-01
Reconstruction of the along-fault slip distribution provides an insight into the long-term rupture patterns of a fault, thereby enabling more accurate assessment of its future behavior. The increasing wealth of high-resolution topographic data, such as Light Detection and Ranging and photogrammetric digital elevation models, allows us to better constrain the slip distribution, thus greatly improving our understanding of fault behavior. The South Heli Shan Fault is a major active fault on the northeastern margin of the Tibetan Plateau. In this study, we built a 2 m resolution digital elevation model of the South Heli Shan Fault based on high-resolution GeoEye-1 stereo satellite imagery and then measured 302 vertical displacements along the fault, which increased the measurement density of previous field surveys by a factor of nearly 5. The cumulative displacements show an asymmetric distribution along the fault, comprising three major segments. An increasing trend from west to east indicates that the fault has likely propagated westward over its lifetime. The topographic relief of Heli Shan shows an asymmetry similar to the measured cumulative slip distribution, suggesting that the uplift of Heli Shan may result mainly from the long-term activity of the South Heli Shan Fault. Furthermore, the cumulative displacements divide into discrete clusters along the fault, indicating that the fault has ruptured in several large earthquakes. By constraining the slip-length distribution of each rupture, we found that the events do not support a characteristic recurrence model for the fault.
Robust Fault Detection of Wind Energy Conversion Systems Based on Dynamic Neural Networks
Talebi, Nasser; Sadrnia, Mohammad Ali; Darabi, Ahmad
2014-01-01
Occurrence of faults in wind energy conversion systems (WECSs) is inevitable. In order to detect faults at the appropriate time, avoid heavy economic losses, ensure safe system operation, prevent damage to adjacent relevant systems, and facilitate timely repair of failed components, a fault detection system (FDS) is required. Recurrent neural networks (RNNs) have gained a noticeable position in FDSs and have been widely used for modeling of complex dynamical systems. One method for designing an FDS is to prepare a dynamic neural model emulating the normal system behavior. By comparing the outputs of the real system and the neural model, incidence of faults can be identified. In this paper, by utilizing a comprehensive dynamic model which contains both mechanical and electrical components of the WECS, an FDS is suggested using dynamic RNNs. The presented FDS detects faults of the generator's angular velocity sensor, pitch angle sensors, and pitch actuators. Robustness of the FDS is achieved by employing an adaptive threshold. Simulation results show that the proposed scheme is capable of detecting the faults quickly and has very low false and missed alarm rates. PMID:24744774
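The adaptive threshold that gives the scheme its robustness can be illustrated on a synthetic residual; the window length and multiplier k are invented.

```python
# Residual alarm with an adaptive (moving mean + k * moving std) threshold.
import numpy as np

def adaptive_alarms(residual, window=50, k=4.0):
    alarms = np.zeros(len(residual), dtype=bool)
    for i in range(window, len(residual)):
        past = residual[i - window:i]
        alarms[i] = abs(residual[i]) > np.mean(np.abs(past)) + k * np.std(past)
    return alarms

rng = np.random.default_rng(0)
r = 0.1 * rng.standard_normal(1000)
r[600:] += 1.0                                    # injected pitch-sensor bias
print(np.argmax(adaptive_alarms(r)))              # first alarm index, at/near 600
```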
Pollitz, F.F.; Schwartz, D.P.
2008-01-01
We construct a viscoelastic cycle model of plate boundary deformation that includes the effects of time-dependent interseismic strain accumulation, coseismic strain release, and viscoelastic relaxation of the substrate beneath the seismogenic crust. For a given fault system, time-averaged stress changes at any point (not on a fault) are constrained to zero; that is, kinematic consistency is enforced for the fault system. The dates of last rupture, mean recurrence times, and the slip distributions of the (assumed) repeating ruptures are key inputs into the viscoelastic cycle model. This simple formulation allows construction of stress evolution at all points in the plate boundary zone for purposes of probabilistic seismic hazard analysis (PSHA). Stress evolution is combined with a Coulomb failure stress threshold at representative points on the fault segments to estimate the times of their respective future ruptures. In our PSHA we consider uncertainties in a four-dimensional parameter space: the rupture periodicities, slip distributions, times of last earthquake (for prehistoric ruptures), and Coulomb failure stress thresholds. We apply this methodology to the San Francisco Bay region using a recently determined fault chronology of area faults. Assuming single-segment rupture scenarios, we find that future rupture probabilities of area faults in the coming decades are highest for the southern Hayward, Rodgers Creek, and northern Calaveras faults. This conclusion is qualitatively similar to that of the Working Group on California Earthquake Probabilities, but the probabilities derived here are significantly higher. Given that fault rupture probabilities are highly model-dependent, no single model should be used to assess time-dependent rupture probabilities. We suggest that several models, including the present one, be used in a comprehensive PSHA methodology, as was done by the Working Group on California Earthquake Probabilities.
NASA Astrophysics Data System (ADS)
Graymer, R. W.; Simpson, R. W.; Jachens, R. C.; Ponce, D. A.; Phelps, G. A.; Watt, J. T.; Wentworth, C. M.
2007-12-01
For the purpose of estimating seismic hazard, the Calaveras and Hayward Faults have been considered as separate structures and analyzed and segmented based largely on their surface-trace geometry and the extent of the 1868 Hayward Fault earthquake. Recent relocations of earthquakes and 3-D geologic mapping have shown, however, that at depths associated with large earthquakes (>5 km) the fault geology and geometry are quite different from those at the surface. Using deep fault geometry inferred from these studies, we treat the Hayward and Calaveras Faults as a single system and divide the system into segments that differ from the previously accepted segments as follows: 1. The Hayward Fault connects directly to the central Calaveras Fault at depth, as opposed to the 5 km wide restraining stepover zone of multiple imbricate oblique right-lateral reverse faults at the surface east of Fremont and San Jose (between about 37.25°-37.6°N). 2. The segment boundary between the Hayward, central Calaveras, and northern Calaveras is based on their Y-shaped intersection at depth near 37.40°N, 121.76°W (Cherry Flat Reservoir), about 8 km south of the previously accepted central-northern Calaveras Fault segment boundary. 3. The central Calaveras Fault is divided near 37.14°N, 121.56°W (southern end of Anderson Lake) into two subsegments based on a large discontinuity at depth seen in relocated seismicity. 4. The Hayward Fault is divided near 37.85°N, 122.23°W (Lake Temescal) into two segments based on a large contrast in fault face geology. This segmentation is similar to that based on the extent of the 1868 fault rupture, but is now related to an underlying geologic cause. The direct connection of the Hayward and central Calaveras Faults at depth suggests that earthquakes larger than those previously modeled should be considered (~M6.9 for the southern Hayward, ~M7.2 for the southern Hayward plus northern central Calaveras). A NEHRP study by Witter and others (2003; NEHRP 03HQGR0098) suggested evidence for large surface ruptures on the northern central Calaveras, but that work is not peer-reviewed and there is little or no other paleoseismic or geodetic data from the stepover zone or northern central Calaveras Fault (all commonly cited data are from the southern central Calaveras Fault), so the sparse surface data neither demand nor preclude our interpretation. The additional segmentation of the central Calaveras Fault proposed here may explain the observation that this segment seems to generate characteristic moderate (~M6.0-6.5) earthquakes rather than the larger ~M6.9 earthquakes that could be generated by rupture of the previously defined longer central Calaveras segment. Better information regarding fault-plane geometry and the 3-D distribution of rock properties adjacent to the faults at seismogenic depths should help us revise proposed segmentation models of other faults for seismic hazard analyses.
NASA Technical Reports Server (NTRS)
Rogers, William H.; Schutte, Paul C.
1993-01-01
Advanced fault management aiding concepts for commercial pilots are being developed in a research program at NASA Langley Research Center. One aim of this program is to re-evaluate current design principles for display of fault information to the flight crew: (1) from a cognitive engineering perspective and (2) in light of the availability of new types of information generated by advanced fault management aids. The study described in this paper specifically addresses principles for organizing fault information for display to pilots based on their mental models of fault management.
NASA Astrophysics Data System (ADS)
Riegel, H. B.; Zambrano, M.; Jablonska, D.; Emanuele, T.; Agosta, F.; Mattioni, L.; Rustichelli, A.
2017-12-01
The hydraulic properties of fault zones depend upon the individual contributions of the damage zone and the fault core. The damage zone is generally characterized by means of fracture analysis and modelling implementing multiple approaches, for instance the discrete fracture network model, the continuum model, and the channel network model. Conversely, the fault core is more difficult to characterize because it is normally composed of fine-grained material generated by friction and wear. If the dimensions of the fault core allow it, porosity and permeability are normally studied by means of laboratory analysis; otherwise, by two-dimensional microporosity analysis and in situ measurements of permeability (e.g. with a micro-permeameter). In this study, a combined approach consisting of fracture modeling, three-dimensional microporosity analysis, and computational fluid dynamics was applied to characterize the hydraulic properties of fault zones. The studied fault zones crosscut a well-cemented heterolithic succession (sandstones and mudstones) and vary in terms of fault core thickness and composition, fracture properties, kinematics (normal or strike-slip), and displacement. These characteristics produce various splay and fault core behaviors. The alternation of sandstone and mudstone layers is responsible for the concurrent occurrence of brittle (fractures) and ductile (clay smearing) deformation. When these alternating layers are faulted, they produce fault cores which act as conduits or barriers for fluid migration. For the damage zones, accurate field data acquisition and stochastic modeling were used to determine the hydraulic properties of the rock volume in relation to the surrounding, undamaged host rock. For the fault cores, the three-dimensional pore network quantitative analysis based on X-ray microtomography images includes porosity, pore connectivity, and specific surface area. In addition, the images were used to perform computational fluid simulation (lattice-Boltzmann multi-relaxation-time method) and estimate the permeability. These results will be useful for understanding the deformation process and hydraulic properties across meter-scale damage zones.
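The three pore-network metrics named above reduce, on a binary micro-CT volume, to a few array operations; the synthetic volume below stands in for a real fault-core image.

```python
# Porosity, pore connectivity, and a voxel estimate of specific surface area.
import numpy as np
from scipy import ndimage

rng = np.random.default_rng(0)
vol = rng.random((100, 100, 100)) < 0.15           # True = pore voxel (synthetic)

porosity = vol.mean()

labels, n_clusters = ndimage.label(vol)            # 6-connected pore clusters
largest = np.bincount(labels.ravel())[1:].max() if n_clusters else 0
connectivity = largest / vol.sum()                 # fraction of pores in biggest cluster

# pore/solid interface area per unit volume, counted as mismatched voxel faces
views = (vol, np.swapaxes(vol, 0, 1), np.swapaxes(vol, 0, 2))
faces = sum(np.count_nonzero(v[:-1] != v[1:]) for v in views)
print(porosity, connectivity, faces / vol.size)
```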
Models of recurrent strike-slip earthquake cycles and the state of crustal stress
NASA Technical Reports Server (NTRS)
Lyzenga, Gregory A.; Raefsky, Arthur; Mulligan, Stephanie G.
1991-01-01
Numerical models of the strike-slip earthquake cycle, assuming a viscoelastic asthenosphere coupling model, are examined. The time-dependent simulations incorporate a stress-driven fault, which leads to tectonic stress fields and earthquake recurrence histories that are mutually consistent. Single-fault simulations with constant far-field plate motion lead to a nearly periodic earthquake cycle and a distinctive spatial distribution of crustal shear stress. The predicted stress distribution includes a local minimum in stress at depths less than typical seismogenic depths. The width of this stress 'trough' depends on the magnitude of crustal stress relative to asthenospheric drag stresses. The models further predict a local near-fault stress maximum at greater depths, sustained by the cyclic transfer of strain from the elastic crust to the ductile asthenosphere. Models incorporating both low-stress and high-stress fault strength assumptions are examined, under Newtonian and non-Newtonian rheology assumptions. Model results suggest a preference for low-stress (a shear stress level of about 10 MPa) fault models, in agreement with previous estimates based on heat flow measurements and other stress indicators.
Mansouri, Majdi; Nounou, Mohamed N; Nounou, Hazem N
2017-09-01
In our previous work, we demonstrated the effectiveness of the linear multiscale principal component analysis (PCA)-based moving window (MW)-generalized likelihood ratio test (GLRT) technique over the classical PCA and multiscale principal component analysis (MSPCA)-based GLRT methods. The developed fault detection algorithm provided optimal properties by maximizing the detection probability for a particular false alarm rate (FAR) with different window sizes. However, most real systems are nonlinear, so the linear PCA method cannot adequately handle the nonlinearity. Thus, in this paper, we first apply a nonlinear PCA to obtain an accurate principal component of a set of data and handle a wide range of nonlinearities using the kernel principal component analysis (KPCA) model, which is among the most popular nonlinear statistical methods. Second, we extend the MW-GLRT technique to one that applies exponential weights to the residuals in the moving window (instead of equal weighting), as this can further improve fault detection performance by reducing the FAR through an exponentially weighted moving average (EWMA). The developed detection method, called EWMA-GLRT, provides improved properties, such as smaller missed detection rates and FARs and a smaller average run length. The idea behind the developed EWMA-GLRT is to compute a new GLRT statistic that integrates current and previous data information in a decreasing exponential fashion, giving more weight to the more recent data. This provides a more accurate estimation of the GLRT statistic and a stronger memory that enables better decision making with respect to fault detection. Therefore, in this paper, a KPCA-based EWMA-GLRT method is developed and applied in practice to improve fault detection in biological phenomena modeled by S-systems and to enhance monitoring of the process mean, combining the advantages of the proposed EWMA-GLRT fault detection chart with the KPCA model. It is used to enhance fault detection in the Cad System in E. coli model by monitoring some of the key variables involved in this model, such as enzymes, transport proteins, regulatory proteins, lysine, and cadaverine. The results demonstrate the effectiveness of the proposed KPCA-based EWMA-GLRT method over the Q, GLRT, EWMA, Shewhart, and moving window-GLRT methods. The detection performance is assessed and evaluated in terms of FAR, missed detection rates, and average run length (ARL1) values.
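The exponential down-weighting at the heart of the EWMA statistic fits in a few lines; lambda and the control-limit multiplier below are textbook choices, not the paper's tuned values, and the GLRT and KPCA stages are omitted.

```python
# EWMA control chart on a residual sequence (illustrative parameters).
import numpy as np

def ewma_alarms(residuals, lam=0.2, L=3.5):
    sigma = np.std(residuals[:200])                 # noise level from a fault-free run
    limit = L * sigma * np.sqrt(lam / (2.0 - lam))  # steady-state EWMA control limit
    z, alarms = 0.0, np.zeros(len(residuals), dtype=bool)
    for i, r in enumerate(residuals):
        z = lam * r + (1.0 - lam) * z               # exponentially weighted residual mean
        alarms[i] = abs(z) > limit
    return alarms

rng = np.random.default_rng(0)
r = 0.5 * rng.standard_normal(800)
r[500:] += 0.8                                      # injected mean shift (fault)
print(np.argmax(ewma_alarms(r)))                    # first alarm, typically just after 500
```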
Intelligent Operation and Maintenance of Micro-grid Technology and System Development
NASA Astrophysics Data System (ADS)
Fu, Ming; Song, Jinyan; Zhao, Jingtao; Du, Jian
2018-01-01
In order to achieve intelligent micro-grid operation and management, the micro-grid operation and maintenance knowledge base is studied. Based on advanced Petri net theory, a fault diagnosis model of the micro-grid is established, and an intelligent diagnosis and analysis method for micro-grid faults is put forward. On this basis, the functional system and architecture of the intelligent operation and maintenance system of the micro-grid are studied, and the micro-grid fault diagnosis function is introduced in detail. Finally, the system is deployed on the micro-grid of a park, and micro-grid fault diagnosis and analysis is carried out based on micro-grid operation data. The system operation and maintenance function interface is displayed, which verifies the correctness and reliability of the system.
Orientation Effects in Fault Reactivation in Geological CO2 Sequestration
NASA Astrophysics Data System (ADS)
Castelletto, N.; Ferronato, M.; Gambolati, G.; Janna, C.; Teatini, P.
2012-12-01
Geological CO2 sequestration remains one of the most promising options for reducing greenhouse gas emissions. The accurate simulation of the complex coupled physical processes occurring during the injection and post-injection stages represents a key issue for investigating the feasibility and safety of sequestration. The fluid-dynamical and geochemical aspects of sequestering CO2 underground have been widely debated in the scientific literature over more than a decade. Recently, the importance of geomechanical processes has been widely recognized. In the present modeling study, we focus on fault reactivation induced by injection, an essential aspect of evaluating CO2 sequestration projects that must be adequately investigated to avoid the generation of preferential leakage paths for CO2 and the related risk of induced seismicity. We use a geomechanical model based on the structural equations of poroelasticity solved by the Finite Element (FE) - Interface Element (IE) approach. Standard FEs are used to represent a continuum, while IEs prove especially suited to assess the relative displacements of adjacent elements, such as the opening and slippage of existing faults or the generation of new fractures [1]. The IEs allow for the modeling of fault mechanics using an elasto-plastic constitutive law based on the Mohr-Coulomb failure criterion. We analyze the reactivation of a single fault in a synthetic reservoir by varying the fault orientation and size, the hydraulic conductivity of the faulted zone, the initial vertical and horizontal stress state, and the Mohr-Coulomb parameters (i.e., friction angle and cohesion). References: [1] Ferronato, M., G. Gambolati, C. Janna, and P. Teatini (2008), Numerical modeling of regional faults in land subsidence prediction above gas/oil reservoirs, Int. J. Numer. Anal. Methods Geomech., 32, 633-657.
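A hedged sketch of the Mohr-Coulomb screening such a model applies on each fault element: resolve the stress tensor on the fault plane and test shear stress against frictional strength at the current pore pressure. All stress values and angles below are invented.

```python
# Mohr-Coulomb slip check on a plane with normal n under stress tensor sigma (MPa).
import numpy as np

def mohr_coulomb_slips(sigma, pore_p, normal, cohesion, friction_deg):
    n = normal / np.linalg.norm(normal)
    traction = sigma @ n
    sigma_n = traction @ n                          # normal stress (compression positive)
    tau = np.linalg.norm(traction - sigma_n * n)    # resolved shear stress
    strength = cohesion + np.tan(np.radians(friction_deg)) * (sigma_n - pore_p)
    return tau > strength, tau, strength

sigma = np.diag([60.0, 45.0, 30.0])                 # principal stresses
normal = np.array([np.sin(np.radians(60)), 0.0, np.cos(np.radians(60))])  # 60-deg dip
for p in (20.0, 40.0):                              # pore pressure rises with injection
    slips, tau, strength = mohr_coulomb_slips(sigma, p, normal, 0.0, 30.0)
    print(f"p = {p} MPa: tau = {tau:.1f}, strength = {strength:.1f}, slips = {slips}")
```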
The Role of Deep Creep in the Timing of Large Earthquakes
NASA Astrophysics Data System (ADS)
Sammis, C. G.; Smith, S. W.
2012-12-01
The observed temporal clustering of the world's largest earthquakes has been largely discounted for two reasons: (a) it is consistent with Poisson clustering, and (b) no physical mechanism leading to such clustering has been proposed. This lack of a mechanism arises primarily because the static stress transfer mechanism, commonly used to explain aftershocks and the clustering of large events on localized fault networks, does not work at global distances. However, there is recent observational evidence that the surface waves from large earthquakes trigger non-volcanic tremor at the base of distant fault zones at global distances. Based on these observations, we develop a simple non-linear coupled oscillator model that shows how the triggering of such tremor can lead to the synchronization of large earthquakes on a global scale. A basic assumption of the model is that induced tremor is a proxy for deep creep that advances the seismic cycle of the fault. We support this hypothesis by demonstrating that the 2010 Maule, Chile and the 2011 Fukushima, Japan earthquakes, which have been shown to induce tremor on the Parkfield segment of the San Andreas Fault, also produced changes in off-fault seismicity that are spatially and temporally consistent with episodes of deep creep on the fault. The observed spatial pattern can be simulated using an Okada dislocation model for deep creep (below 20 km) on the fault plane, in which the slip rate decreases from north to south, consistent with surface creep measurements, and deepens south of the "Parkfield asperity" as indicated by recent tremor locations. The model predicts that the off-fault events should have reverse mechanisms, consistent with the observed topography.
The co-seismic slip distribution of the Landers earthquake
Freymueller, J.; King, N.E.; Segall, P.
1994-01-01
We derived a model for the co-seismic slip distribution on the faults which ruptured during the Landers earthquake sequence of 28 June 1992. The model is based on the inversion of surface geodetic measurements, primarily vector displacements measured using the Global Positioning System (GPS). The inversion procedure assumes that the slip distribution is to some extent smooth and purely right-lateral strike slip. For a given fault geometry, a family of solutions of varying smoothness can be generated. We choose the optimal model from this family based on cross-validation, which measures the predictive power of the data, and the trade-off of misfit and roughness. Solutions which give roughly equal weight to misfit and smoothness are preferred and have certain features in common: (1) there are two main patches of slip, on the Johnson Valley fault, and on the Homestead Valley, Emerson, and Camp Rock faults; (2) virtually all slip is in the upper 10 to 12 km; and (3) the model reproduces the general features of the geologically measured surface displacements, without prior constraints on the surface slip. In all models, regardless of smoothing, very little slip is required on the fault that represents the Big Bear event, and the total moment of the Landers event is 9 × 10^19 N-m. The nearly simultaneous rupture of multiple distinct faults suggests that much of the crust in this region must have been close to failure prior to the earthquake.
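The smoothness-constrained inversion can be skeletonized as a damped least-squares problem; G and d below are synthetic stand-ins (in the real problem G holds elastic Green's functions relating patch slip to GPS displacements), and cross-validation would pick the smoothing weight.

```python
# Smoothed slip inversion: minimize ||G m - d||^2 + beta^2 ||L m||^2.
import numpy as np

n_patch, n_obs = 20, 60
rng = np.random.default_rng(0)
G = rng.normal(size=(n_obs, n_patch))               # stand-in Green's functions
true_slip = np.exp(-0.5 * ((np.arange(n_patch) - 8) / 3.0) ** 2)
d = G @ true_slip + 0.05 * rng.normal(size=n_obs)

L = np.diff(np.eye(n_patch), n=2, axis=0)           # second-difference roughness operator

def invert(beta):
    A = np.vstack([G, beta * L])
    b = np.concatenate([d, np.zeros(L.shape[0])])
    return np.linalg.lstsq(A, b, rcond=None)[0]

for beta in (0.1, 1.0, 10.0):                       # trade-off: misfit vs roughness
    m = invert(beta)
    print(beta, np.linalg.norm(G @ m - d), np.linalg.norm(L @ m))
```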
NASA Astrophysics Data System (ADS)
Paya, B. A.; Esat, I. I.; Badi, M. N. M.
1997-09-01
The purpose of condition monitoring and fault diagnostics is to detect and distinguish faults occurring in machinery, in order to provide a significant improvement in plant economy, reduce operational and maintenance costs and improve the level of safety. The condition of a model drive-line, consisting of various interconnected rotating parts, including an actual vehicle gearbox, two bearing housings, and an electric motor, all connected via flexible couplings and loaded by a disc brake, was investigated. This model drive-line was run in its normal condition, and then single and multiple faults were introduced intentionally to the gearbox and to one of the bearing housings. These single and multiple faults studied on the drive-line were typical bearing and gear faults which may develop during normal and continuous operation of this kind of rotating machinery. This paper presents the investigation carried out in order to study both bearing and gear faults, introduced first separately as single faults and then together as multiple faults to the drive-line. The real time-domain vibration signals obtained from the drive-line were preprocessed by wavelet transforms for the neural network to perform fault detection and identify the exact kinds of fault occurring in the model drive-line. It is shown that by using multilayer artificial neural networks on sets of data preprocessed by wavelet transforms, single and multiple faults were successfully detected and classified into distinct groups.
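A compressed sketch of the wavelet-preprocessing-plus-network pipeline, assuming the PyWavelets and scikit-learn packages; the synthetic signals and three-class labels merely stand in for measured drive-line vibrations.

```python
# Wavelet-band log-energies as features for a multilayer-perceptron classifier.
import numpy as np
import pywt
from sklearn.neural_network import MLPClassifier

def wavelet_features(signal, wavelet="db4", level=4):
    coeffs = pywt.wavedec(signal, wavelet, level=level)
    return np.log([np.sum(c**2) for c in coeffs])   # one energy per decomposition band

rng = np.random.default_rng(1)
X = np.array([wavelet_features(rng.standard_normal(1024) * (1 + label))
              for label in (0, 1, 2) for _ in range(50)])
y = np.repeat([0, 1, 2], 50)          # normal / bearing fault / gear fault (synthetic)

clf = MLPClassifier(hidden_layer_sizes=(16,), max_iter=2000, random_state=0).fit(X, y)
print("training accuracy:", clf.score(X, y))
```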
Tsai, Jason S-H; Hsu, Wen-Teng; Lin, Long-Guei; Guo, Shu-Mei; Tann, Joseph W
2014-01-01
A modified nonlinear autoregressive moving average with exogenous inputs (NARMAX) model-based state-space self-tuner with fault tolerance is proposed in this paper for the unknown nonlinear stochastic hybrid system with a direct transmission matrix from input to output. Through the off-line observer/Kalman filter identification method, one has a good initial guess of the modified NARMAX model to reduce the on-line system identification process time. Then, based on the modified NARMAX-based system identification, a corresponding adaptive digital control scheme is presented for the unknown continuous-time nonlinear system, with an input-output direct transmission term, which also has measurement and system noises and inaccessible system states. In addition, an effective state-space self-tuner with a fault-tolerance scheme is presented for the unknown multivariable stochastic system. A quantitative criterion is suggested by comparing the innovation process error estimated by the Kalman filter estimation algorithm, so that a weighting-matrix resetting technique, which adjusts and resets the covariance matrices of the parameter estimate obtained by the Kalman filter estimation algorithm, is utilized to achieve parameter estimation for faulty system recovery. Consequently, the proposed method can effectively cope with partially abrupt and/or gradual system faults and input failures through fault detection.
NASA Astrophysics Data System (ADS)
Abe, Steffen; Krieger, Lars; Deckert, Hagen
2017-04-01
The changes of fluid pressures related to the injection of fluids into the deep underground, for example during geothermal energy production, can potentially reactivate faults and thus cause induced seismic events. Therefore, an important aspect in the planning and operation of such projects, in particular in densely populated regions such as the Upper Rhine Graben in Germany, is the estimation and mitigation of the induced seismic risk. The occurrence of induced seismicity depends on a combination of hydraulic properties of the underground, mechanical and geometric parameters of the fault, and the fluid injection regime. In this study we are therefore employing a numerical model to investigate the impact of fluid pressure changes on the dynamics of the faults and the resulting seismicity. The approach combines a model of the fluid flow around a geothermal well based on a 3D finite difference discretisation of the Darcy-equation with a 2D block-slider model of a fault. The models are coupled so that the evolving pore pressure at the relevant locations of the hydraulic model is taken into account in the calculation of the stick-slip dynamics of the fault model. Our modelling approach uses two subsequent modelling steps. Initially, the fault model is run by applying a fixed deformation rate for a given duration and without the influence of the hydraulic model in order to generate the background event statistics. Initial tests have shown that the response of the fault to hydraulic loading depends on the timing of the fluid injection relative to the seismic cycle of the fault. Therefore, multiple snapshots of the fault's stress- and displacement state are generated from the fault model. In a second step, these snapshots are then used as initial conditions in a set of coupled hydro-mechanical model runs including the effects of the fluid injection. This set of models is then compared with the background event statistics to evaluate the change in the probability of seismic events. The event data such as location, magnitude, and source characteristics can be used as input for numerical wave propagation models. This allows the translation of seismic event statistics generated by the model into ground shaking probabilities.
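The hydraulic half of such a coupling can be caricatured as explicit finite-difference pressure diffusion around an injection cell (homogeneous Darcy flow); all parameters below are illustrative, and the block-slider side is omitted.

```python
# Explicit 2-D pore-pressure diffusion with a constant injection source term.
import numpy as np

nx, ny, D, dt, dx = 101, 101, 1.0, 0.2, 1.0        # grid, diffusivity, step sizes
p = np.zeros((nx, ny))
inj = (nx // 2, ny // 2)

for _ in range(2000):                               # dt*D/dx**2 = 0.2 <= 0.25: stable
    lap = (np.roll(p, 1, 0) + np.roll(p, -1, 0) +
           np.roll(p, 1, 1) + np.roll(p, -1, 1) - 4.0 * p) / dx**2
    p += dt * D * lap
    p[inj] += dt * 0.5                              # injection well
    p[0, :] = p[-1, :] = p[:, 0] = p[:, -1] = 0.0   # far-field (Dirichlet) boundary

print(p[inj], p[inj[0] + 20, inj[1]])               # pressure decays away from the well
# The evolving p at fault cells would lower effective normal stress in the
# block-slider friction law, modulating its stick-slip timing.
```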
Functional Fault Modeling Conventions and Practices for Real-Time Fault Isolation
NASA Technical Reports Server (NTRS)
Ferrell, Bob; Lewis, Mark; Perotti, Jose; Oostdyk, Rebecca; Brown, Barbara
2010-01-01
The purpose of this paper is to present the conventions, best practices, and processes that were established based on the prototype development of a Functional Fault Model (FFM) for a Cryogenic System that would be used for real-time Fault Isolation in a Fault Detection, Isolation, and Recovery (FDIR) system. The FDIR system is envisioned to perform health management functions for both a launch vehicle and the ground systems that support the vehicle during checkout and launch countdown by using a suite of complementary software tools that alert operators to anomalies and failures in real-time. The FFMs were created offline but would eventually be used by a real-time reasoner to isolate faults in a Cryogenic System. Through their development and review, a set of modeling conventions and best practices were established. The prototype FFM development also provided a pathfinder for future FFM development processes. This paper documents the rationale and considerations for robust FFMs that can easily be transitioned to a real-time operating environment.
NASA Astrophysics Data System (ADS)
Chartier, Thomas; Scotti, Oona; Boiselet, Aurelien; Lyon-Caen, Hélène
2016-04-01
Including faults in probabilistic seismic hazard assessment tends to increase the degree of uncertainty in the results due to the intrinsically uncertain nature of fault data. This is especially the case in the low-to-moderate-seismicity regions of Europe, where slow-slipping faults are difficult to characterize. In order to better understand the key parameters that control the uncertainty in fault-related hazard computations, we propose to build an analytic tool that provides a clear link between the different components of the fault-related hazard computations and their impact on the results. This will allow identifying the important parameters that need to be better constrained in order to reduce the resulting uncertainty in hazard, and will also provide a more hazard-oriented strategy for collecting relevant fault parameters in the field. The tool is illustrated through the example of the West Corinth rift fault models. Recent work performed in the gulf has shown the complexity of the normal faulting system that is accommodating the extensional deformation of the rift. A logic-tree approach is proposed to account for this complexity and the multiplicity of scientifically defendable interpretations. At the nodes of the logic tree, different options that could be considered at each step of the fault-related seismic hazard computation are explored. The first nodes represent the uncertainty in the geometries of the faults and their slip rates, which can derive from different data and methodologies. The subsequent node explores, for a given geometry/slip rate of faults, different earthquake rupture scenarios that may occur in the complex network of faults. The idea is to allow the possibility of several fault segments breaking together in a single rupture scenario. To build these multiple-fault-segment scenarios, two approaches are considered: one based on simple rules (i.e. minimum distance between faults) and a second that relies on physically-based simulations. The following nodes represent, for each rupture scenario, different rupture forecast models (i.e. characteristic or Gutenberg-Richter) and, for a given rupture forecast, two probability models commonly used in seismic hazard assessment: Poissonian or time-dependent. The final node represents an exhaustive set of ground motion prediction equations chosen to be compatible with the region. Finally, the expected probability of exceeding a given ground motion level is computed at each site. Results will be discussed for a few specific localities of the West Corinth Gulf.
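The closing step, aggregating branch results through the logic tree and converting annual exceedance rates into probabilities, is compact enough to sketch; the branch rates and weights below are invented.

```python
# Logic-tree aggregation of exceedance probabilities (illustrative numbers).
import numpy as np

branches = [            # (weight, annual rate of exceeding the target ground motion)
    (0.40, 1 / 475.0),  # e.g., characteristic model, Poissonian branch
    (0.35, 1 / 900.0),  # Gutenberg-Richter branch
    (0.25, 1 / 300.0),  # multi-fault-segment rupture branch
]
assert abs(sum(w for w, _ in branches) - 1.0) < 1e-9

t = 50.0                                            # exposure time, years
p_mean = sum(w * (1.0 - np.exp(-rate * t)) for w, rate in branches)
print(f"mean probability of exceedance in {t:.0f} yr: {p_mean:.3f}")
```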
An Integrated Crustal Dynamics Simulator
NASA Astrophysics Data System (ADS)
Xing, H. L.; Mora, P.
2007-12-01
Numerical modelling offers an outstanding opportunity to gain an understanding of crustal dynamics and complex crustal system behaviour. This presentation describes our long-term, ongoing effort on finite-element-based computational model and software development to simulate interacting fault systems for earthquake forecasting. An R-minimum-strategy-based finite-element computational model and software tool, PANDAS, for modelling 3-dimensional nonlinear frictional contact behaviour between multiple deformable bodies with an arbitrarily-shaped contact element strategy has been developed by the authors, which builds up a virtual laboratory to simulate interacting fault systems including crustal boundary conditions and various nonlinearities (e.g. from frictional contact, materials, geometry and thermal coupling). It has been successfully applied to large-scale computing of complex nonlinear phenomena in non-continuum media involving nonlinear frictional instability, multiple material properties and complex geometries on supercomputers, such as the South Australia (SA) interacting fault system, the South California fault model and the Sumatra subduction model. It has also been extended to simulate the hot fractured rock (HFR) geothermal reservoir system, in collaboration with Geodynamics Ltd, which is constructing the first geothermal reservoir system in Australia, and to model tsunami generation induced by earthquakes. Both efforts are supported by the Australian Research Council.
Field, Edward H.
2015-01-01
A methodology is presented for computing elastic‐rebound‐based probabilities in an unsegmented fault or fault system, which involves computing along‐fault averages of renewal‐model parameters. The approach is less biased and more self‐consistent than a logical extension of that applied most recently for multisegment ruptures in California. It also enables the application of magnitude‐dependent aperiodicity values, which the previous approach does not. Monte Carlo simulations are used to analyze long‐term system behavior, which is generally found to be consistent with that of physics‐based earthquake simulators. Results cast doubt that recurrence‐interval distributions at points on faults look anything like traditionally applied renewal models, a fact that should be considered when interpreting paleoseismic data. We avoid such assumptions by changing the "probability of what" question (from offset at a point to the occurrence of a rupture, assuming it is the next event to occur). The new methodology is simple, although not perfect in terms of recovering long‐term rates in Monte Carlo simulations. It represents a reasonable, improved way to represent first‐order elastic‐rebound predictability, assuming it is there in the first place, and for a system that clearly exhibits other unmodeled complexities, such as aftershock triggering.
Sun, Y.; Tong, C.; Trainor-Guitten, W. J.; ...
2012-12-20
The risk of CO2 leakage from a deep storage reservoir into a shallow aquifer through a fault is assessed and studied using physics-specific computer models. The hypothetical CO2 geological sequestration system is composed of three subsystems: a deep storage reservoir, a fault in caprock, and a shallow aquifer, which are modeled respectively by considering sub-domain-specific physics. Supercritical CO2 is injected into the reservoir subsystem with uncertain permeabilities of reservoir, caprock, and aquifer, uncertain fault location, and injection rate (as a decision variable). The simulated pressure and CO2/brine saturation are connected to the fault-leakage model as a boundary condition. CO2 and brine fluxes from the fault-leakage model at the fault outlet are then imposed in the aquifer model as a source term. Moreover, uncertainties are propagated from the deep reservoir model, to the fault-leakage model, and eventually to the geochemical model in the shallow aquifer, thus contributing to risk profiles. To quantify the uncertainties and assess leakage-relevant risk, we propose a global sampling-based method to allocate sub-dimensions of uncertain parameters to sub-models. The risk profiles are defined and related to CO2 plume development for pH value and total dissolved solids (TDS) below the EPA's Maximum Contaminant Levels (MCL) for drinking water quality. A global sensitivity analysis is conducted to select the parameters most sensitive to the risk profiles. The resulting uncertainty of pH- and TDS-defined aquifer volume, which is impacted by CO2 and brine leakage, mainly results from the uncertainty of fault permeability. Subsequently, high-resolution, reduced-order models of risk profiles are developed as functions of all the decision variables and uncertain parameters in all three subsystems.
A Fault Diagnosis Methodology for Gear Pump Based on EEMD and Bayesian Network
Liu, Zengkai; Liu, Yonghong; Shan, Hongkai; Cai, Baoping; Huang, Qing
2015-01-01
This paper proposes a fault diagnosis methodology for a gear pump based on the ensemble empirical mode decomposition (EEMD) method and the Bayesian network. Essentially, the presented scheme is a multi-source information fusion based methodology. Compared with conventional fault diagnosis with only EEMD, the proposed method is able to take advantage of all useful information besides sensor signals. The presented diagnostic Bayesian network consists of a fault layer, a fault feature layer and a multi-source information layer. Vibration signals from sensor measurement are decomposed by the EEMD method and the energies of intrinsic mode functions (IMFs) are calculated as fault features. These features are added into the fault feature layer in the Bayesian network. The other sources of useful information are added to the information layer. The generalized three-layer Bayesian network can be developed by fully incorporating faults and fault symptoms as well as other useful information such as naked-eye inspection and maintenance records, so that diagnostic accuracy and capacity can be improved. The proposed methodology is applied to the fault diagnosis of a gear pump, and the structure and parameters of the Bayesian network are established. Compared with artificial neural network and support vector machine classification algorithms, the proposed model has the best diagnostic performance when only sensor data are used. A case study has demonstrated that some information from human observation or system repair records is very helpful for fault diagnosis. The method is effective and efficient in diagnosing faults based on uncertain, incomplete information. PMID:25938760
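A minimal discrete stand-in for the three-layer fusion: a prior over fault states updated by an IMF-energy feature and a maintenance-record cue via Bayes' rule. All probability tables below are invented.

```python
# Multi-source evidence fusion with Bayes' rule (illustrative tables).
import numpy as np

faults = ["normal", "gear_wear", "bearing_fault"]
prior = np.array([0.90, 0.06, 0.04])

p_energy_high = np.array([0.05, 0.70, 0.60])     # P(high IMF1 energy | fault)
p_recent_repair = np.array([0.10, 0.50, 0.40])   # P(recent repair record | fault)

def posterior(energy_high, recent_repair):
    like = np.where(energy_high, p_energy_high, 1 - p_energy_high) * \
           np.where(recent_repair, p_recent_repair, 1 - p_recent_repair)
    post = prior * like
    return post / post.sum()

print(dict(zip(faults, posterior(True, True).round(3))))
# Fusing the repair record sharpens the diagnosis relative to the signal alone.
```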
A comparative study of sensor fault diagnosis methods based on observer for ECAS system
NASA Astrophysics Data System (ADS)
Xu, Xing; Wang, Wei; Zou, Nannan; Chen, Long; Cui, Xiaoli
2017-03-01
The performance and practicality of an electronically controlled air suspension (ECAS) system depend heavily on the state information supplied by various sensors, yet sensor faults occur frequently. Based on a non-linearized 3-DOF 1/4 vehicle model, different methods of fault detection and isolation (FDI) are used to diagnose sensor faults in the ECAS system. The approaches considered are the extended Kalman filter (EKF), with its concise algorithm; the strong tracking filter (STF), with robust tracking ability; and the cubature Kalman filter (CKF), with high numerical precision. Each of the three filters is used to design a state observer for the ECAS system under typical sensor faults and noise. Results show that all three approaches can successfully detect and isolate faults despite environmental noise, although FDI time delay and fault sensitivity differ among the algorithms; compared with the EKF and STF, the CKF delivers the best FDI performance for sensor faults in the ECAS system.
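A minimal sketch of the residual-based FDI idea common to the three observers, using a plain linear Kalman filter in place of the paper's EKF/STF/CKF and a fixed innovation gate; all matrices and the threshold are assumptions for illustration.

```python
import numpy as np

def kf_residual_fdi(z_seq, A, C, Q, R, x0, P0, gate=3.0):
    """Flag sensor faults when the normalized innovation exceeds a fixed gate.
    Linear Kalman filter stand-in for the paper's EKF/STF/CKF observers."""
    x, P = x0, P0
    flags = []
    for z in z_seq:
        # predict
        x = A @ x
        P = A @ P @ A.T + Q
        # innovation and its covariance
        nu = z - C @ x
        S = C @ P @ C.T + R
        d = nu / np.sqrt(np.diag(S))      # per-sensor normalized residual
        flags.append(np.abs(d) > gate)    # True marks a suspected sensor fault
        # update
        K = P @ C.T @ np.linalg.inv(S)
        x = x + K @ nu
        P = (np.eye(len(x)) - K @ C) @ P
    return np.array(flags)
```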
Formal Techniques for Synchronized Fault-Tolerant Systems
NASA Technical Reports Server (NTRS)
DiVito, Ben L.; Butler, Ricky W.
1992-01-01
We present the formal verification of synchronizing aspects of the Reliable Computing Platform (RCP), a fault-tolerant computing system for digital flight control applications. The RCP uses NMR-style redundancy to mask faults and internal majority voting to purge the effects of transient faults. The system design has been formally specified and verified using the EHDM verification system. Our formalization is based on an extended state machine model incorporating snapshots of local processors' clocks.
Fault Diagnosis Strategies for SOFC-Based Power Generation Plants
Costamagna, Paola; De Giorgi, Andrea; Gotelli, Alberto; Magistri, Loredana; Moser, Gabriele; Sciaccaluga, Emanuele; Trucco, Andrea
2016-01-01
The success of distributed power generation by plants based on solid oxide fuel cells (SOFCs) is hindered by reliability problems that can be mitigated through an effective fault detection and isolation (FDI) system. However, the numerous operating conditions under which such plants can operate and the random size of the possible faults make identifying damaged plant components from the physical variables measured in the plant very difficult. In this context, we assess two classical FDI strategies (model-based with a fault signature matrix and data-driven with statistical classification) and their combination. For this assessment, a quantitative model of the SOFC-based plant, which is able to simulate regular and faulty conditions, is used. Moreover, a hybrid approach based on the random forest (RF) classification method is introduced, owing to its practical advantages, to discriminate between regular and faulty situations. Working with a common dataset, the FDI performances obtained using the aforementioned strategies, with different sets of monitored variables, are observed and compared. We conclude that the hybrid FDI strategy, realized by combining a model-based scheme with a statistical classifier, outperforms the other strategies. In addition, the inclusion of two physical variables that should be measured inside the SOFCs can significantly improve the FDI performance, despite the actual difficulty in performing such measurements. PMID:27556472
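The data-driven stage can be sketched with an off-the-shelf random forest; a minimal stand-in assuming scikit-learn, not the authors' implementation.

```python
from sklearn.ensemble import RandomForestClassifier  # stands in for the paper's RF classifier

def train_fdi_classifier(X, y):
    """X: monitored plant variables (n_samples, n_features); y: class labels
    (regular operation or one of the fault modes). This data-driven stage can
    then be combined with a model-based fault signature matrix in a hybrid
    FDI scheme."""
    rf = RandomForestClassifier(n_estimators=200, random_state=0)
    rf.fit(X, y)
    return rf
```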
Comparison of Observed Spatio-temporal Aftershock Patterns with Earthquake Simulator Results
NASA Astrophysics Data System (ADS)
Kroll, K.; Richards-Dinger, K. B.; Dieterich, J. H.
2013-12-01
Due to the complex nature of faulting in southern California, knowledge of rupture behavior near fault step-overs is of critical importance to properly quantify and mitigate seismic hazards. Estimates of earthquake probability are complicated by the uncertainty that a rupture will stop at or jump a fault step-over, which affects both the magnitude and frequency of occurrence of earthquakes. In recent years, earthquake simulators and dynamic rupture models have begun to address the effects of complex fault geometries on earthquake ground motions and rupture propagation. Early models incorporated vertical faults with highly simplified geometries. Many current studies examine the effects of varied fault geometry, fault step-overs, and fault bends on rupture patterns; however, these works are limited by the small numbers of integrated fault segments and simplified orientations. The previous work of Kroll et al., 2013 on the northern extent of the 2010 El Mayor-Cucapah rupture in the Yuha Desert region uses precise aftershock relocations to show an area of complex conjugate faulting within the step-over region between the Elsinore and Laguna Salada faults. Here, we employ an innovative approach of incorporating this fine-scale fault structure, defined through seismological, geologic and geodetic means, in the physics-based earthquake simulator RSQSim to explore the effects of fine-scale structures on stress transfer and rupture propagation and to examine the mechanisms that control aftershock activity and local triggering of other large events. We run simulations with primary fault structures in the state of California and northern Baja California and incorporate complex secondary faults in the Yuha Desert region. These models produce aftershock activity that enables comparison between the observed and predicted distributions and allows for examination of the mechanisms that control them. We investigate how the spatial and temporal distributions of aftershocks are affected by changes to model parameters such as shear and normal stress, rate-and-state frictional properties, fault geometry, and slip rate.
SCADA-based Operator Support System for Power Plant Equipment Fault Forecasting
NASA Astrophysics Data System (ADS)
Mayadevi, N.; Ushakumari, S. S.; Vinodchandra, S. S.
2014-12-01
Power plant equipment must be monitored closely to prevent failures from disrupting plant availability. Online monitoring technology integrated with hybrid forecasting techniques can be used to prevent plant equipment faults. A self-learning rule-based expert system is proposed in this paper for fault forecasting in power plants controlled by a supervisory control and data acquisition (SCADA) system. Self-learning utilizes associative data mining algorithms on the SCADA history database to form new rules that can dynamically update the knowledge base of the rule-based expert system. In this study, a number of popular associative learning algorithms are considered for rule formation. Data mining results show that the Tertius algorithm is best suited for developing a learning engine for power plants. For real-time monitoring of the plant condition, graphical models are constructed by K-means clustering. To build a time-series forecasting model, a multilayer perceptron (MLP) is used. Once created, the models are updated in the model library to provide an adaptive environment for the proposed system. A graphical user interface (GUI) illustrates the variation of all sensor values affecting a particular alarm/fault, as well as the step-by-step procedure for avoiding critical situations and consequent plant shutdown. The forecasting performance is evaluated by computing the mean absolute error and root mean square error of the predictions.
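The time-series forecasting step can be illustrated with a lagged-input MLP; a minimal sketch assuming scikit-learn, with the lag length and network size chosen arbitrarily rather than taken from the paper.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor  # stands in for the paper's MLP forecaster

def fit_sensor_forecaster(series, lags=10):
    """Train an MLP to predict the next SCADA sensor value from the previous
    `lags` values; mean absolute / RMS error of its predictions mirrors the
    evaluation metrics used in the paper."""
    series = np.asarray(series, float)
    X = np.array([series[i:i + lags] for i in range(len(series) - lags)])
    y = series[lags:]
    model = MLPRegressor(hidden_layer_sizes=(32,), max_iter=2000, random_state=0)
    model.fit(X, y)
    return model
```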
Characterization of Model-Based Reasoning Strategies for Use in IVHM Architectures
NASA Technical Reports Server (NTRS)
Poll, Scott; Iverson, David; Patterson-Hine, Ann
2003-01-01
Open architectures are gaining popularity for Integrated Vehicle Health Management (IVHM) applications due to the diversity of subsystem health monitoring strategies in use and the need to integrate a variety of techniques at the system health management level. The basic concept of an open architecture suggests that whatever monitoring or reasoning strategy a subsystem wishes to deploy, the system architecture will support the needs of that subsystem and will be capable of transmitting subsystem health status across subsystem boundaries and up to the system level for system-wide fault identification and diagnosis. There is a need to understand the capabilities of various reasoning engines and how they, coupled with intelligent monitoring techniques, can support fault detection and system level fault management. Researchers in IVHM at NASA Ames Research Center are supporting the development of an IVHM system for liquefying-fuel hybrid rockets. In the initial stage of this project, a few readily available reasoning engines were studied to assess candidate technologies for application in next generation launch systems. Three tools representing the spectrum of model-based reasoning approaches, from a quantitative simulation based approach to a graph-based fault propagation technique, were applied to model the behavior of the Hybrid Combustion Facility testbed at Ames. This paper summarizes the characterization of the modeling process for each of the techniques.
Power flow analysis and optimal locations of resistive type superconducting fault current limiters.
Zhang, Xiuchang; Ruiz, Harold S; Geng, Jianzhao; Shen, Boyang; Fu, Lin; Zhang, Heng; Coombs, Tim A
2016-01-01
Based on conventional approaches for the integration of resistive-type superconducting fault current limiters (SFCLs) in electric distribution networks, SFCL models largely rely on the insertion of a step or exponential resistance that is determined by a predefined quenching time. In this paper, we expand the scope of the aforementioned models by considering the actual behaviour of an SFCL in terms of the temperature-dependent power-law relation between the electric field and the current density that is characteristic of high-temperature superconductors. Our results are compared to the step-resistance models for the sake of discussion and clarity of the conclusions. Both SFCL models were integrated into a power system model built to the UK power standard, to study the impact of these protection strategies on the performance of the overall electricity network. As a representative renewable energy source, a 90 MVA wind farm was considered for the simulations. Three fault conditions were simulated, and the figures for the fault current reduction predicted by both fault-current-limiting models were compared in terms of multiple current measuring points and allocation strategies. Consequently, we have shown that incorporating the E-J characteristics and thermal properties of the superconductor at the simulation level of electric power systems is crucial for estimating reliability and determining the optimal locations of resistive-type SFCLs in distributed power networks. Our results may assist decision making by distribution network operators regarding investment in and promotion of SFCL technologies, as it is possible to determine the maximum number of SFCLs necessary to protect against different fault conditions at multiple locations.
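The contrast between the two SFCL models can be made concrete in a few lines; a sketch of the power-law E-J characteristic against the conventional step resistance, with illustrative values of Ec and n, and the temperature dependence of Jc omitted for brevity.

```python
import numpy as np

def sfcl_electric_field(j, jc, ec=1e-4, n=25.0):
    """Power-law E-J characteristic of a high-temperature superconductor:
    E = Ec * (|J|/Jc)**n * sign(J), with Ec = 1e-4 V/m the usual criterion.
    The temperature dependence of Jc is omitted in this sketch."""
    j = np.asarray(j, float)
    return ec * (np.abs(j) / jc) ** n * np.sign(j)

def step_resistance(t, t_quench, r_max):
    """Conventional step model: zero resistance until a predefined quench time."""
    return r_max if t >= t_quench else 0.0
```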
An earthquake instability model based on faults containing high fluid-pressure compartments
Lockner, D.A.; Byerlee, J.D.
1995-01-01
It has been proposed that large strike-slip faults such as the San Andreas contain water in seal-bounded compartments. Arguments based on heat flow and stress orientation suggest that in most of the compartments, the water pressure is so high that the average shear strength of the fault is less than 20 MPa. We propose a variation of this basic model in which most of the shear stress on the fault is supported by a small number of compartments where the pore pressure is relatively low. As a result, the fault gouge in these compartments is compacted and lithified and has a high undisturbed strength. When one of these locked regions fails, the system made up of the neighboring high and low pressure compartments can become unstable. Material in the high fluid pressure compartments is initially underconsolidated since the low effective confining pressure has retarded compaction. As these compartments are deformed, fluid pressure remains nearly unchanged so that they offer little resistance to shear. The low pore pressure compartments, however, are overconsolidated and dilate as they are sheared. Decompression of the pore fluid in these compartments lowers fluid pressure, increasing effective normal stress and shear strength. While this effect tends to stabilize the fault, it can be shown that this dilatancy hardening can be more than offset by displacement weakening of the fault (i.e., the drop from peak to residual strength). If the surrounding rock mass is sufficiently compliant to produce an instability, slip will propagate along the fault until the shear fracture runs into a low-stress region. Frictional heating and the accompanying increase in fluid pressure that are suggested to occur during shearing of the fault zone will act as additional destabilizers. However, significant heating occurs only after a finite amount of slip and therefore is more likely to contribute to the energetics of rupture propagation than to the initiation of the instability. We present results of a one-dimensional dynamic Burridge-Knopoff-type model to demonstrate various aspects of the fluid-assisted fault instability described above. In the numerical model, the fault is represented by a series of blocks and springs, with fault rheology expressed by static and dynamic friction. In addition, the fault surface of each block has associated with it pore pressure, porosity and permeability. All of these variables are allowed to evolve with time, resulting in a wide range of phenomena related to fluid diffusion, dilatancy, compaction and heating. These phenomena include creep events, diffusion-controlled precursors, triggered earthquakes, foreshocks, aftershocks, and multiple earthquakes. While the simulations have limitations inherent to 1-D fault models, they demonstrate that the fluid compartment model can, in principle, provide the rich assortment of phenomena that have been associated with earthquakes. © 1995 Birkhäuser Verlag.
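A heavily reduced, quasi-static sketch of a Burridge-Knopoff-type chain is given below to show the stick-slip mechanics; the pore-pressure, porosity, permeability and heating variables central to the paper are deliberately left out, and all parameter values are illustrative.

```python
import numpy as np

def bk_quasistatic(n=100, events=200, f_s=1.0, f_d=0.6, alpha=0.5, seed=0):
    """Quasi-static Burridge-Knopoff-type chain: uniform tectonic loading,
    failure when the force on a block reaches static friction f_s, stress drop
    to the dynamic level f_d, with a fraction alpha of the drop passed to the
    two neighbors. The fluid variables of the paper's model are omitted."""
    rng = np.random.default_rng(seed)
    f = rng.uniform(0.0, f_s, n)              # initial force on each block
    sizes = []
    for _ in range(events):
        f += f_s - f.max()                    # load until the weakest block fails
        failing, size = f >= f_s, 0
        while failing.any():
            idx = np.where(failing)[0]
            size += len(idx)
            drop = f[idx] - f_d               # static-to-dynamic stress drop
            f[idx] = f_d
            for i, d in zip(idx, drop):       # redistribute to nearest neighbors
                if i > 0:
                    f[i - 1] += alpha * d / 2
                if i < n - 1:
                    f[i + 1] += alpha * d / 2
            failing = f >= f_s
        sizes.append(size)
    return np.array(sizes)                    # avalanche sizes ~ event magnitudes
```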
From Geodetic Imaging of Seismic and Aseismic Fault Slip to Dynamic Modeling of the Seismic Cycle
NASA Astrophysics Data System (ADS)
Avouac, Jean-Philippe
2015-05-01
Understanding the partitioning of seismic and aseismic fault slip is central to seismotectonics as it ultimately determines the seismic potential of faults. Thanks to advances in tectonic geodesy, it is now possible to develop kinematic models of the spatiotemporal evolution of slip over the seismic cycle and to determine the budget of seismic and aseismic slip. Studies of subduction zones and continental faults have shown that aseismic creep is common and sometimes prevalent within the seismogenic depth range. Interseismic coupling is generally observed to be spatially heterogeneous, defining locked patches of stress accumulation, to be released in future earthquakes or aseismic transients, surrounded by creeping areas. Clay-rich tectonites, high temperature, and elevated pore-fluid pressure seem to be key factors promoting aseismic creep. The generally logarithmic time evolution of afterslip is a distinctive feature of creeping faults that suggests a logarithmic dependency of fault friction on slip rate, as observed in laboratory friction experiments. Most faults can be considered to be paved with interlaced patches where the friction law is either rate-strengthening, inhibiting seismic rupture propagation, or rate-weakening, allowing for earthquake nucleation. The rate-weakening patches act as asperities on which stress builds up in the interseismic period; they might rupture collectively in a variety of ways. The pattern of interseismic coupling can help constrain the return period of the maximum-magnitude earthquake based on the requirement that seismic and aseismic slip sum to match long-term slip. Dynamic models of the seismic cycle based on this conceptual model can be tuned to reproduce geodetic and seismological observations. The promise and pitfalls of using such models to assess seismic hazard are discussed.
NASA Astrophysics Data System (ADS)
Ulrich, Thomas; Gabriel, Alice-Agnes
2017-04-01
Natural fault geometries are subject to a large degree of uncertainty. Their geometrical structure is not directly observable and may only be inferred from surface traces or geophysical measurements. Most studies aiming at assessing the potential seismic hazard of natural faults rely on idealised shaped models, based on observable large-scale features. Yet, real faults are wavy at all scales, their geometric features presenting similar statistical properties from the micro to the regional scale. Dynamic rupture simulations aim to capture the observed complexity of earthquake sources and ground-motions. From a numerical point of view, incorporating rough faults in such simulations is challenging - it requires optimised codes able to run efficiently on high-performance computers and simultaneously handle complex geometries. Physics-based rupture dynamics hosted by rough faults appear to be much closer to source models inverted from observation in terms of complexity. Moreover, the simulated ground-motions present many similarities with observed ground-motion records. Thus, such simulations may foster our understanding of earthquake source processes and help derive more accurate seismic hazard estimates. In this presentation, the software package SeisSol (www.seissol.org), based on an ADER-Discontinuous Galerkin scheme, is used to solve the spontaneous dynamic earthquake rupture problem. The usage of tetrahedral unstructured meshes naturally allows for complicated fault geometries. However, SeisSol's high-order discretisation in time and space is not particularly suited for small-scale fault roughness. We will demonstrate modelling conditions under which SeisSol resolves rupture dynamics on rough faults accurately. The strong impact of the geometric gradient of the fault surface on the rupture process is then shown in 3D simulations. Next, the benefits of explicitly modelling fault curvature and roughness, as distinct from prescribing heterogeneous initial stress conditions on a planar fault, are demonstrated. Furthermore, we show that rupture extent, rupture front coherency and rupture speed are highly dependent on the initial amplitude of stress acting on the fault, defined by the normalized prestress factor R, the ratio of the potential stress drop to the breakdown stress drop. The effects of fault complexity are particularly pronounced for lower R. By low-pass filtering a rough fault at several cut-off wavelengths, we then try to capture rupture complexity using a simplified fault geometry. We find that equivalent source dynamics can only be obtained using a scarcely filtered fault associated with a reduced stress level. To investigate the wavelength-dependent roughness effect, the fault geometry is bandpass-filtered over several spectral ranges. We show that geometric fluctuations cause rupture velocity fluctuations of similar length scale. The impact of fault geometry is especially pronounced when the rupture front velocity is near supershear. Roughness fluctuations significantly smaller than the rupture front's characteristic dimension (the cohesive zone size) affect only macroscopic rupture properties, thus posing a minimum length scale limiting the required resolution of 3D fault complexity. Lastly, the effect of fault curvature and roughness on the simulated ground-motions is assessed.
Despite employing a simple linear slip-weakening friction law, the simulated ground-motions compare well with estimates from ground-motion prediction equations, even at relatively high frequencies.
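For reference, the prestress factor R quoted above can be written explicitly; a minimal formulation assuming linear slip-weakening friction, with τ0 the initial shear traction, σn the normal traction, and μs, μd the static and dynamic friction coefficients (symbols introduced here for illustration):

```latex
R = \frac{\Delta\tau_{\mathrm{potential}}}{\Delta\tau_{\mathrm{breakdown}}}
  = \frac{\tau_0 - \mu_d \sigma_n}{(\mu_s - \mu_d)\,\sigma_n}
```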
NASA Astrophysics Data System (ADS)
Agarwal, Smriti; Singh, Dharmendra
2016-04-01
Millimeter wave (MMW) frequency has emerged as an efficient tool for different stand-off imaging applications. In this paper, we have dealt with a novel MMW imaging application, i.e., non-invasive packaged goods quality estimation for industrial quality monitoring applications. An active MMW imaging radar operating at 60 GHz has been ingeniously designed for concealed fault estimation. Ceramic tiles covered with commonly used packaging cardboard were used as concealed targets for undercover fault classification. A comparison of computer vision-based state-of-the-art feature extraction techniques, viz., discrete Fourier transform (DFT), wavelet transform (WT), principal component analysis (PCA), gray level co-occurrence texture (GLCM), and histogram of oriented gradient (HOG), has been made with respect to their efficient and differentiable feature vector generation capability for undercover target fault classification. An extensive number of experiments were performed with different ceramic tile fault configurations, viz., vertical crack, horizontal crack, random crack, and diagonal crack, along with non-faulty tiles. Further, an independent algorithm validation was performed, demonstrating classification accuracies of 80, 86.67, 73.33, and 93.33 % for the DFT, WT, PCA, GLCM, and HOG feature-based artificial neural network (ANN) classifier models, respectively. Classification results show good capability of the HOG feature extraction technique for non-destructive quality inspection with appreciably low false alarm rates as compared to other techniques. Thereby, a robust and optimal image feature-based neural network classification model has been proposed for non-invasive, automatic fault monitoring for financially and commercially competitive industrial growth.
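The HOG-plus-ANN pipeline can be sketched with common libraries; a minimal stand-in assuming scikit-image and scikit-learn, with hyperparameters chosen arbitrarily rather than taken from the paper.

```python
import numpy as np
from skimage.feature import hog                    # scikit-image HOG extractor
from sklearn.neural_network import MLPClassifier   # stands in for the paper's ANN

def hog_features(images):
    """HOG feature vectors for a stack of grayscale MMW images."""
    return np.array([hog(im, orientations=9, pixels_per_cell=(8, 8),
                         cells_per_block=(2, 2)) for im in images])

def train_fault_classifier(images, labels):
    """Train an MLP on HOG features; labels encode crack type or non-faulty."""
    clf = MLPClassifier(hidden_layer_sizes=(64,), max_iter=1000, random_state=0)
    clf.fit(hog_features(images), labels)
    return clf
```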
Integrating physically based simulators with Event Detection Systems: Multi-site detection approach.
Housh, Mashor; Ohar, Ziv
2017-03-01
The fault detection (FD) problem in control theory concerns monitoring a system to identify when a fault has occurred. Two approaches can be distinguished for FD: signal-processing-based FD and model-based FD. The former concerns developing algorithms to infer faults directly from sensors' readings, while the latter uses a simulation model of the real system to analyze the discrepancy between sensors' readings and the values expected from the simulation model. Most contamination Event Detection Systems (EDSs) for water distribution systems have followed signal-processing-based FD, which relies on analyzing the signals from monitoring stations independently of each other, rather than evaluating all stations simultaneously within an integrated network. In this study, we show that a model-based EDS, which utilizes physically based water quality and hydraulics simulation models, can outperform a signal-processing-based EDS. We also show that the model-based EDS can facilitate the development of a Multi-Site EDS (MSEDS), which analyzes the data from all the monitoring stations simultaneously within an integrated network. The advantage of the joint analysis in the MSEDS is expressed by increased detection accuracy (more true positive alarms and fewer false alarms) and shorter detection time.
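The joint multi-site test can be illustrated by pooling standardized residuals between measured and model-simulated values; a toy statistic, not the authors' MSEDS classifier.

```python
import numpy as np

def multisite_alarm(measured, simulated, sigma, k=3.0):
    """Joint residual test across all monitoring stations: `measured` holds the
    sensor readings, `simulated` the expected values from the physically based
    hydraulics/water-quality model, `sigma` the per-station noise scales.
    One alarm is raised from the combined statistic instead of thresholding
    each station independently."""
    r = (np.asarray(measured) - np.asarray(simulated)) / np.asarray(sigma)
    stat = float(r @ r)
    return stat, stat > k * len(r)   # joint statistic and alarm decision
```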
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zhang, Shuoting; Liu, Bo; Zheng, Sheng; ...
2018-01-01
A transmission line emulator has been developed to flexibly represent interconnected ac lines under normal operating conditions in a voltage source converter (VSC)-based power system emulation platform. As the most serious short-circuit fault condition, the three-phase short-circuit fault emulation is essential for power system studies. Here, this paper proposes a model to realize three-phase short-circuit fault emulation at different locations along a single transmission line or one of several parallel-connected transmission lines. At the same time, a combination method is proposed to eliminate the undesired transients caused by the current-reference step changes while switching between the fault state and the normal state. Experimental results verify the developed transmission line emulator's three-phase short-circuit fault emulation capability.
Synthetic Earthquake Statistics From Physical Fault Models for the Lower Rhine Embayment
NASA Astrophysics Data System (ADS)
Brietzke, G. B.; Hainzl, S.; Zöller, G.
2012-04-01
As of today, seismic risk and hazard estimates mostly use purely empirical, stochastic models of earthquake fault systems tuned specifically to the vulnerable areas of interest. Although such models allow for reasonable risk estimates, they fail to provide a link between the observed seismicity and the underlying physical processes. Solving a state-of-the-art, fully dynamic description set of all relevant physical processes related to earthquake fault systems is likely not useful, since it comes with a large number of degrees of freedom, poor constraints on its model parameters, and a huge computational effort. Here, quasi-static and quasi-dynamic physical fault simulators provide a compromise between physical completeness and computational affordability and aim at providing a link between basic physical concepts and the statistics of seismicity. Within the framework of quasi-static and quasi-dynamic earthquake simulators we investigate a model of the Lower Rhine Embayment (LRE) that is based upon seismological and geological data. We present and discuss statistics of the spatio-temporal behavior of generated synthetic earthquake catalogs with respect to simplification (e.g. simple two-fault cases) as well as to complication (e.g. hidden faults, geometric complexity, heterogeneities of constitutive parameters).
NASA Technical Reports Server (NTRS)
Stoughton, John W.; Obando, Rodrigo A.
1993-01-01
The modeling and design of a fault-tolerant multiprocessor system is addressed. In particular, the behavior of the system during recovery and restoration after a fault has occurred is investigated. Given that a multicomputer system is designed using the Algorithm to Architecture Mapping Model (ATAMM), and that a fault (death of a computing resource) occurs during its normal steady-state operation, a model is presented as a viable research tool for predicting the performance bounds of the system during its recovery and restoration phases. Furthermore, the bounds of the performance behavior of the system during this transient mode can be assessed. These bounds include: the time to recover from the fault (t(sub rec)), the time to restore the system (t(sub res)), and whether there is a permanent delay in the system's Time Between Input and Output (TBIO) after the system has reached a steady state. An implementation of an ATAMM-based computer was developed with the Generic VHSIC Spaceborne Computer (GVSC) as the target system. A simulation of the GVSC was also written based on the code used in the ATAMM Multicomputer Operating System (AMOS). The simulation is in turn used to validate the new model's usefulness and accuracy in tracking the propagation of the delay through the system and predicting the behavior in the transient state of recovery and restoration. The model is validated as an accurate method to predict the transient behavior of an ATAMM-based multicomputer during recovery and restoration.
Current Sensor Fault Reconstruction for PMSM Drives
Huang, Gang; Luo, Yi-Ping; Zhang, Chang-Fan; He, Jing; Huang, Yi-Shan
2016-01-01
This paper deals with a current sensor fault reconstruction algorithm for the torque closed-loop drive system of an interior PMSM. First, sensor faults are equated to actuator ones by a new introduced state variable. Then, in αβ coordinates, based on the motor model with active flux linkage, a current observer is constructed with a specific sliding mode equivalent control methodology to eliminate the effects of unknown disturbances, and the phase current sensor faults are reconstructed by means of an adaptive method. Finally, an αβ axis current fault processing module is designed based on the reconstructed value. The feasibility and effectiveness of the proposed method are verified by simulation and experimental tests on the RT-LAB platform. PMID:26840317
NASA Technical Reports Server (NTRS)
Richard, Stephen M.
1992-01-01
A paleogeographic reconstruction of southeastern California and southwestern Arizona at 10 Ma was made based on available geologic and geophysical data. Clockwise rotation of 39 deg was reconstructed in the eastern Transverse Ranges, consistent with paleomagnetic data from late Miocene volcanic rocks, and with slip estimates for left-lateral faults within the eastern Transverse Ranges and NW-trending right lateral faults in the Mojave Desert. This domain of rotated rocks is bounded by the Pinto Mountain fault on the north. In the absence of evidence for rotation of the San Bernardino Mountains or for significant right slip faults within the San Bernardino Mountains, the model requires that the late Miocene Pinto Mountain fault become a thrust fault gaining displacement to the west. The Squaw Peak thrust system of Meisling and Weldon may be a western continuation of this fault system. The Sheep Hole fault bounds the rotating domain on the east. East of this fault an array of NW-trending right slip faults and south-trending extensional transfer zones has produced a basin and range physiography while accumulating up to 14 km of right slip. This maximum is significantly less than the 37.5 km of right slip required in this region by a recent reconstruction of the central Mojave Desert. Geologic relations along the southern boundary of the rotating domain are poorly known, but this boundary is interpreted to involve a series of curved strike slip faults and non-coaxial extension, bounded on the southeast by the Mammoth Wash and related faults in the eastern Chocolate Mountains. Available constraints on timing suggest that Quaternary movement on the Pinto Mountain and nearby faults is unrelated to the rotation of the eastern Transverse Ranges, and was preceded by a hiatus during part of Pliocene time which followed the deformation producing the rotation. The reconstructed Clemens Well fault in the Orocopia Mountains, proposed as a major early Miocene strand of the San Andreas fault, projects eastward towards Arizona, where early Miocene rocks and structures are continuous across its trace. The model predicts a 14 deg clockwise rotation and 55 km extension along the present trace of the San Andreas fault during late Miocene and early Pliocene time. Palinspastic reconstructions of the San Andreas system based on this proposed reconstruction may be significantly modified from current models.
Investigation of advanced fault insertion and simulator methods
NASA Technical Reports Server (NTRS)
Dunn, W. R.; Cottrell, D.
1986-01-01
The cooperative agreement partly supported research leading to the open-literature publication cited. Additional efforts under the agreement included research into fault modeling of semiconductor devices. Results of this research are presented in this report which is summarized in the following paragraphs. As a result of the cited research, it appears that semiconductor failure mechanism data is abundant but of little use in developing pin-level device models. Failure mode data on the other hand does exist but is too sparse to be of any statistical use in developing fault models. What is significant in the failure mode data is that, unlike classical logic, MSI and LSI devices do exhibit more than 'stuck-at' and open/short failure modes. Specifically they are dominated by parametric failures and functional anomalies that can include intermittent faults and multiple-pin failures. The report discusses methods of developing composite pin-level models based on extrapolation of semiconductor device failure mechanisms, failure modes, results of temperature stress testing and functional modeling. Limitations of this model particularly with regard to determination of fault detection coverage and latency time measurement are discussed. Indicated research directions are presented.
Pantea, Michael P.; Cole, James C.
2004-01-01
This report describes a digital, three-dimensional faulted hydrostratigraphic model constructed to represent the geologic framework of the Edwards aquifer system in the area of San Antonio, northern Bexar County, Texas. The model is based on mapped geologic relationships that reflect the complex structures of the Balcones fault zone, detailed lithologic descriptions and interpretations of about 40 principal wells (and qualified data from numerous other wells), and a conceptual model of the gross geometry of the Edwards Group units derived from prior interpretations of depositional environments and paleogeography. The digital model depicts the complicated intersections of numerous major and minor faults in the subsurface, as well as their individual and collective impacts on the continuity of the aquifer-forming units of the Edwards Group and the Georgetown Formation. The model allows for detailed examination of the extent of fault dislocation from place to place, and thus the extent to which the effective cross-sectional area of the aquifer is reduced by faulting. The model also depicts the internal hydrostratigraphic subdivisions of the Edwards aquifer, consisting of three major and eight subsidiary hydrogeologic units. This geologic framework model is useful for visualizing the geologic structures within the Balcones fault zone and the interactions of en-echelon fault strands and flexed connecting fault-relay ramps. The model also aids in visualizing the lateral connections between hydrostratigraphic units of relatively high and low permeability across the fault strands.
Introduction
The Edwards aquifer is the principal source of water for municipal, agricultural, industrial, and military uses by nearly 1.5 million inhabitants of the greater San Antonio, Texas, region (Hovorka and others, 1996; Sharp and Banner, 1997). Discharges from the Edwards aquifer also support local recreation and tourism industries at Barton, Comal, and San Marcos Springs located northeast of San Antonio (Barker and others, 1994), as well as base flow for agricultural applications farther downstream. Average annual discharge from large springs (Comal, San Marcos, Hueco, and others) from the Edwards aquifer was about 365,000 acre-ft from 1934 to 1998, with sizeable fluctuations related to annual variations in rainfall. Withdrawals through pumping have increased steadily from about 250,000 acre-ft during the 1960s to over 400,000 acre-ft in the 1990s in response to population growth, especially in the San Antonio metropolitan area (Slattery and Brown, 1999). Average annual recharge to the system (determined through stream gaging) has also varied considerably with annual rainfall fluctuations, but has been about 635,000 acre-ft over the last several decades.
Inferring Fault Frictional and Reservoir Hydraulic Properties From Injection-Induced Seismicity
NASA Astrophysics Data System (ADS)
Jagalur-Mohan, Jayanth; Jha, Birendra; Wang, Zheng; Juanes, Ruben; Marzouk, Youssef
2018-02-01
Characterizing the rheological properties of faults and the evolution of fault friction during seismic slip are fundamental problems in geology and seismology. Recent increases in the frequency of induced earthquakes have intensified the need for robust methods to estimate fault properties. Here we present a novel approach for estimation of aquifer and fault properties, which combines coupled multiphysics simulation of injection-induced seismicity with adaptive surrogate-based Bayesian inversion. In a synthetic 2-D model, we use aquifer pressure, ground displacements, and fault slip measurements during fluid injection to estimate the dynamic fault friction, the critical slip distance, and the aquifer permeability. Our forward model allows us to observe nonmonotonic evolutions of shear traction and slip on the fault resulting from the interplay of several physical mechanisms, including injection-induced aquifer expansion, stress transfer along the fault, and slip-induced stress relaxation. This interplay provides the basis for a successful joint inversion of induced seismicity, yielding well-informed Bayesian posterior distributions of dynamic friction and critical slip. We uncover an inverse relationship between dynamic friction and critical slip distance, which is in agreement with the small dynamic friction and large critical slip reported during seismicity on mature faults.
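The Bayesian inversion step can be illustrated with a generic random-walk Metropolis sampler over the unknown parameters; a stand-in for the paper's adaptive surrogate-based machinery, with the log-posterior supplied by the user.

```python
import numpy as np

def metropolis(log_post, theta0, step, n=5000, seed=0):
    """Random-walk Metropolis sampler: a generic stand-in for surrogate-based
    Bayesian inversion of parameters such as (dynamic friction, critical slip
    distance, permeability). `log_post` maps a parameter vector to its
    log-posterior density (up to a constant)."""
    rng = np.random.default_rng(seed)
    theta = np.asarray(theta0, float)
    lp = log_post(theta)
    chain = []
    for _ in range(n):
        prop = theta + step * rng.standard_normal(theta.size)
        lp_prop = log_post(prop)
        if np.log(rng.random()) < lp_prop - lp:   # accept with prob min(1, ratio)
            theta, lp = prop, lp_prop
        chain.append(theta.copy())
    return np.array(chain)                        # posterior samples
```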
Object-oriented fault tree models applied to system diagnosis
NASA Technical Reports Server (NTRS)
Iverson, David L.; Patterson-Hine, F. A.
1990-01-01
When a diagnosis system is used in a dynamic environment, such as the distributed computer system planned for use on Space Station Freedom, it must execute quickly and its knowledge base must be easily updated. Representing system knowledge as object-oriented augmented fault trees provides both features. The diagnosis system described here is based on the failure cause identification process of the diagnostic system described by Narayanan and Viswanadham. Their system has been enhanced in this implementation by replacing the knowledge base of if-then rules with an object-oriented fault tree representation. This allows the system to perform its task much faster and facilitates dynamic updating of the knowledge base in a changing diagnosis environment. Accessing the information contained in the objects is more efficient than performing a lookup operation on an indexed rule base. Additionally, the object-oriented fault trees can be easily updated to represent current system status. This paper describes the fault tree representation, the diagnosis algorithm extensions, and an example application of this system. Comparisons are made between the object-oriented fault tree knowledge structure solution and one implementation of a rule-based solution. Plans for future work on this system are also discussed.
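The object-oriented fault tree idea can be sketched with a few classes; an illustrative structure, not the Narayanan-Viswanadham algorithm itself.

```python
class Node:
    """Base node of an object-oriented fault tree."""
    def __init__(self, name, children=()):
        self.name, self.children = name, list(children)

class BasicEvent(Node):
    def __init__(self, name, failed=False):
        super().__init__(name)
        self.failed = failed               # updated from monitored system status
    def occurs(self):
        return self.failed

class AndGate(Node):
    def occurs(self):
        return all(c.occurs() for c in self.children)

class OrGate(Node):
    def occurs(self):
        return any(c.occurs() for c in self.children)

def failure_causes(node):
    """Walk the tree and collect the failed basic events beneath `node`;
    updating a BasicEvent in place re-uses the tree without a rule lookup."""
    if isinstance(node, BasicEvent):
        return [node.name] if node.failed else []
    return [c for child in node.children for c in failure_causes(child)]
```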
NASA Astrophysics Data System (ADS)
Hsu, Y. J.; Yu, S. B.; Loveless, J. P.; Bacolcol, T.; Woessner, J.; Solidum, R., Jr.
2015-12-01
The Sunda plate converges obliquely with the Philippine Sea plate at a rate of ~100 mm/yr, resulting in sinistral slip along the 1300-km-long Philippine fault. Using GPS data from 1998 to 2013 as well as a block modeling approach, we decompose the crustal motion into multiple rotating blocks and elastic deformation associated with fault slip at block boundaries. Our preferred model, composed of 8 blocks, produces a mean residual velocity of 3.4 mm/yr at 93 GPS stations. Estimated long-term slip rates along the Manila subduction zone show a gradual southward decrease from 66 mm/yr at the northwest tip of Luzon to 60 mm/yr at the southern portion of the Manila Trench. We infer a low coupling fraction of 11% offshore northwest Luzon and a coupling fraction of 27% near the subduction of Scarborough Seamount. The accumulated strain along the Manila subduction zone at latitudes 15.5°~18.5°N could be balanced by earthquakes with composite magnitudes of Mw 8.7 and Mw 8.9 based on recurrence intervals of 500 years and 1000 years, respectively. Estimates of sinistral slip rates on the major splay faults of the Philippine fault system in central Luzon increase from east to west: sinistral slip rates are 2 mm/yr on the Dalton fault, 8 mm/yr on the Abra River fault, and 12 mm/yr on the Tubao fault. On the southern segment of the Philippine fault (Digdig fault), we infer left-lateral slip of ~20 mm/yr. The Vigan-Aggao fault in northwest Luzon exhibits significant reverse slip of up to 31 mm/yr, although deformation may be distributed across multiple offshore thrust faults. On the Northern Cordillera fault, we calculate left-lateral slip of ~7 mm/yr. Results of block modeling suggest that the majority of active faults in Luzon are fully locked to a depth of 15-20 km. Inferred moment magnitudes of inland large earthquakes in Luzon fall in the range of Mw 7.0-7.5 based on a recurrence interval of 100 years. Using the long-term plate convergence rate between the Sunda plate and the Philippine Sea plate as well as the seismic moment release rate, we calculate the moment budget for the entire Luzon plate boundary zone, which could be balanced by earthquakes with a composite magnitude of ~Mw 9 based on recurrence intervals of 500-1000 years.
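To make the closing moment-budget arithmetic concrete, a worked example using the standard Hanks-Kanamori relation follows; the rigidity, fault dimensions and slip-deficit values are illustrative assumptions, not numbers from the study.

```python
from math import log10

MU = 3.0e10          # assumed crustal rigidity, Pa

def moment_magnitude(m0_newton_meters):
    """Hanks & Kanamori (1979): Mw = (2/3) * (log10(M0) - 9.1), M0 in N*m."""
    return (2.0 / 3.0) * (log10(m0_newton_meters) - 9.1)

# Illustrative budget: a 300 km x 40 km locked interface accumulating
# 60 mm/yr of slip deficit over a 500-yr recurrence interval.
area = 300e3 * 40e3                     # m^2
slip_deficit = 0.060 * 500              # m of accumulated slip deficit
m0 = MU * area * slip_deficit           # accumulated seismic moment, N*m
print(round(moment_magnitude(m0), 1))   # -> 8.6, i.e., a great earthquake
```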
NASA Astrophysics Data System (ADS)
Howle, J. F.; Bawden, G. W.; Hunter, L. E.; Rose, R. S.
2009-12-01
High-resolution (centimeter-level) three-dimensional point-cloud imagery of offset glacial outwash deposits was collected using ground-based tripod LiDAR (T-LiDAR) to characterize the cumulative fault slip across the recently identified Polaris fault (Hunter et al., 2009) near Truckee, California. The type-section site for the Polaris fault is located 6.5 km east of Truckee, where progressive right-lateral displacement of middle to late Pleistocene deposits is evident. Glacial outwash deposits, aggraded during the Tioga glaciation, form a flat-lying 'fill' terrace on both the north and south sides of the modern Truckee River. During the Tioga deglaciation, melt water incised into the terrace, producing fluvial scarps or terrace risers (Birkeland, 1964). Subsequently, the terrace risers on both banks have been right-laterally offset by the Polaris fault. By using T-LiDAR on an elevated tripod (4.25 m high), we collected 3D high-resolution (thousands of points per square meter; ± 4 mm) point-cloud imagery of the offset terrace risers. Vegetation was removed from the data using commercial software, and large protruding boulders were manually deleted to generate a bare-earth point-cloud dataset with an average data density of over 240 points per square meter. From the bare-earth point cloud we mathematically reconstructed a pristine terrace/scarp morphology on both sides of the fault, defined coupled sets of piercing points, and extracted a corresponding displacement vector. First, the Polaris fault was approximated as a vertical plane that bisects the offset terrace risers, as well as bisecting linear swales and tectonic depressions in the outwash terrace. Then, piercing points to the vertical fault plane were constructed from the geometry of the geomorphic elements on either side of the fault. On each side of the fault, the best-fit modeled outwash plane is projected laterally and the best-fit modeled terrace riser projected upward to a virtual intersection in space, creating a vector. These constructed vectors were projected to intersection with the fault plane, defining statistically significant piercing points. The distance between the coupled set of piercing points, within the plane of the fault, is the cumulative displacement vector. To assess the variability of the modeled geomorphic surfaces, including surface roughness and nonlinearity, we generated a suite of displacement models by systematically incorporating larger areas of the model domain symmetrically about the fault. Preliminary results of 10 models yield an average cumulative displacement of 5.6 m (1 Std Dev = 0.31 m). As previously described, Tioga deglaciation melt water incised into the outwash terrace, leaving terrace risers that were subsequently offset by the Polaris fault. Therefore, the age of the Tioga outwash terrace represents a maximum limiting age of the tectonic displacement. Using regional age constraints of 15 to 13 kya for the Tioga outwash terrace (Benson et al., 1990; Clark and Gillespie, 1997; James et al., 2002) and the above model results, we estimate a preliminary minimum fault slip rate of 0.40 ± 0.05 mm/yr for the Polaris type-section site.
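The geometric core of the piercing-point construction, fitting planes to point-cloud subsets and intersecting projected lines with the fault plane, can be sketched with basic linear algebra; a generic illustration, not the authors' workflow.

```python
import numpy as np

def fit_plane(points):
    """Least-squares plane through an (n, 3) point cloud;
    returns (centroid, unit normal)."""
    c = points.mean(axis=0)
    _, _, vt = np.linalg.svd(points - c)
    return c, vt[-1]                      # normal = direction of least variance

def pierce(p0, d, plane_point, plane_normal):
    """Intersect the line p0 + t*d with a plane: one piercing point."""
    t = np.dot(plane_point - p0, plane_normal) / np.dot(d, plane_normal)
    return p0 + t * d

# Cumulative displacement = distance, within the fault plane, between the
# piercing points constructed from the two sides of the fault:
#   offset = np.linalg.norm(pp_east - pp_west)
```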
Effect of a Near Fault on the Seismic Response of a Base-Isolated Structure with a Soft Storey
NASA Astrophysics Data System (ADS)
Athamnia, B.; Ounis, A.; Abdeddaim, M.
2017-12-01
This study focuses on the soft-storey behavior of RC structures with lead core rubber bearing (LRB) isolation systems under near- and far-fault motions. Under near-fault ground motions, seismic isolation devices might perform poorly because of large isolator displacements caused by the large velocity and displacement pulses associated with such strong motions. In this study, four different structural models have been designed to study the effect of soft-storey behavior under near-fault and far-fault motions. The seismic analysis of the isolated reinforced concrete buildings is carried out using a nonlinear time history analysis method. Inter-story drifts, absolute accelerations, displacements, base shear forces, hysteretic loops and the distribution of plastic hinges are examined as results of the analysis. These results show that the performance of a base-isolated RC structure is more affected by increasing the height of a story under near-fault motion than under far-fault motion.
Constructing constitutive relationships for seismic and aseismic fault slip
Beeler, N.M.
2009-01-01
For the purpose of modeling natural fault slip, a useful result from an experimental fault mechanics study would be a physically-based constitutive relation that well characterizes all the relevant observations. This report describes an approach for constructing such equations. Where possible the construction intends to identify or, at least, attribute physical processes and contact scale physics to the observations such that the resulting relations can be extrapolated in conditions and scale between the laboratory and the Earth. The approach is developed as an alternative but is based on Ruina (1983) and is illustrated initially by constructing a couple of relations from that study. In addition, two example constitutive relationships are constructed; these describe laboratory observations not well-modeled by Ruina's equations: the unexpected shear-induced weakening of silica-rich rocks at high slip speed (Goldsby and Tullis, 2002) and fault strength in the brittle ductile transition zone (Shimamoto, 1986). The examples, provided as illustration, may also be useful for quantitative modeling.
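The laboratory-derived relations referred to here are typically of Dieterich-Ruina rate-and-state form; a minimal sketch of that standard formulation (aging law, illustrative parameter values) follows.

```python
import numpy as np
from scipy.integrate import solve_ivp

def rate_state_mu(v, theta, mu0=0.6, a=0.010, b=0.015, v0=1e-6, dc=1e-5):
    """Dieterich-Ruina rate-and-state friction coefficient:
    mu = mu0 + a*ln(V/V0) + b*ln(V0*theta/Dc)."""
    return mu0 + a * np.log(v / v0) + b * np.log(v0 * theta / dc)

def aging_law(t, theta, v, dc=1e-5):
    """Aging form of the state evolution: d(theta)/dt = 1 - V*theta/Dc."""
    return 1.0 - v * theta / dc

# State (and hence friction) response to a tenfold velocity step, 1e-6 -> 1e-5 m/s;
# initial state is the steady-state value Dc/V0 = 10 s.
sol = solve_ivp(aging_law, (0.0, 10.0), [10.0], args=(1e-5,), max_step=0.01)
mu_path = rate_state_mu(1e-5, sol.y[0])   # friction evolving to its new steady state
```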
Testing the Predictive Power of Coulomb Stress on Aftershock Sequences
NASA Astrophysics Data System (ADS)
Woessner, J.; Lombardi, A.; Werner, M. J.; Marzocchi, W.
2009-12-01
Empirical and statistical models of clustered seismicity are usually strongly stochastic and perceived to be uninformative in their forecasts, since only marginal distributions are used, such as the Omori-Utsu and Gutenberg-Richter laws. In contrast, so-called physics-based aftershock models, based on seismic rate changes calculated from Coulomb stress changes and rate-and-state friction, make more specific predictions: anisotropic stress shadows and multiplicative rate changes. We test the predictive power of models based on Coulomb stress changes against statistical models, including the popular Short Term Earthquake Probabilities and Epidemic-Type Aftershock Sequences models: We score and compare retrospective forecasts on the aftershock sequences of the 1992 Landers, USA, the 1997 Colfiorito, Italy, and the 2008 Selfoss, Iceland, earthquakes. To quantify predictability, we use likelihood-based metrics that test the consistency of the forecasts with the data, including modified and existing tests used in prospective forecast experiments within the Collaboratory for the Study of Earthquake Predictability (CSEP). Our results indicate that a statistical model performs best. Moreover, two Coulomb model classes seem unable to compete: Models based on deterministic Coulomb stress changes calculated from a given fault-slip model, and those based on fixed receiver faults. One model of Coulomb stress changes does perform well and sometimes outperforms the statistical models, but its predictive information is diluted, because of uncertainties included in the fault-slip model. Our results suggest that models based on Coulomb stress changes need to incorporate stochastic features that represent model and data uncertainty.
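The Coulomb ingredient of these models reduces to a one-line stress metric; a sketch with the sign convention stated in the comment and an assumed effective friction value.

```python
def coulomb_stress_change(d_tau, d_sigma_n, mu_eff=0.4):
    """Coulomb failure stress change on a receiver fault:
    dCFS = d_tau + mu' * d_sigma_n, with shear stress change d_tau resolved in
    the slip direction, d_sigma_n positive in extension (unclamping), and
    mu' an assumed effective friction coefficient."""
    return d_tau + mu_eff * d_sigma_n

# Positive dCFS moves the receiver toward failure (promotes aftershocks);
# negative values define the anisotropic stress shadows tested in the paper.
```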
NASA Astrophysics Data System (ADS)
Daout, S.; Jolivet, R.; Lasserre, C.; Doin, M.-P.; Barbot, S.; Tapponnier, P.; Peltzer, G.; Socquet, A.; Sun, J.
2016-04-01
Oblique convergence across Tibet leads to slip partitioning, with the coexistence of strike-slip, normal and thrust motion on major fault systems. A key point is to understand and model how faults interact and accumulate strain at depth. Here, we extract ground deformation across the Haiyuan Fault restraining bend, at the northeastern boundary of the Tibetan plateau, from Envisat radar data spanning the 2001-2011 period. We show that the complexity of the surface displacement field can be explained by the partitioning of a uniform deep-seated convergence. Mountains and sand dunes in the study area make the radar data processing challenging and require the latest developments in processing procedures for Synthetic Aperture Radar interferometry. The processing strategy is based on a small baseline approach. Before unwrapping, we correct for atmospheric phase delays from global atmospheric models and digital elevation model errors. A series of filtering steps is applied to improve the signal-to-noise ratio across high ranges of the Tibetan plateau and the phase unwrapping capability across the fault, required for a reliable estimate of fault movement. We then jointly invert our InSAR time series together with published GPS displacements to test a proposed long-term slip-partitioning model between the Haiyuan and Gulang left-lateral faults and the Qilian Shan thrusts. We explore the geometry of the fault system at depth and associated slip rates using a Bayesian approach and test the consistency of present-day geodetic surface displacements with a long-term tectonic model. We determine a uniform convergence rate of 10 [8.6-11.5] mm yr-1 at an azimuth of N89 [81-97]°E across the whole fault system, with variable partitioning west and east of a major extensional fault-jog (the Tianzhu pull-apart basin). Our 2-D model of two profiles perpendicular to the fault system gives a quantitative understanding of how crustal deformation is accommodated by the various branches of this thrust/strike-slip fault system and demonstrates how the geometry of the Haiyuan fault system controls the partitioning of the deep secular motion.
NASA Astrophysics Data System (ADS)
Chheda, T. D.; Nevitt, J. M.; Pollard, D. D.
2014-12-01
The formation of monoclinal right-lateral kink bands in the Lake Edison granodiorite (central Sierra Nevada, CA) is investigated through field observations and mechanics-based numerical modeling. Vertical faults act as weak surfaces within the granodiorite, and vertical granodiorite slabs bounded by closely-spaced faults curve into a kink. Leucocratic dikes are observed in association with kinking. Measurements were made on maps of the Hilgard, Waterfall, Trail Fork, Kip Camp (Pollard and Segall, 1983b) and Bear Creek kink bands (Martel, 1998). Outcrop-scale geometric parameters such as fault length and spacing, kink angle, and dike width are used to construct a representative geometry to be used in a finite element model. Three orders of fault were classified, with length = 1.8, 7.2 and 28.8 m, and spacing = 0.3, 1.2 and 3.6 m, respectively. The model faults are oriented at 25° to the direction of shortening (horizontal most compressive stress), consistent with measurements of wing crack orientations in the field area. The model also includes a vertical leucocratic dike, oriented perpendicular to the faults and with material properties consistent with aplite. Curvature of the deformed faults across the kink band was used to compare the effects of material properties, strain, and fault and dike geometry. Model results indicate that the presence of the dike, which provides a mechanical heterogeneity, is critical to kinking in these rocks. Keeping properties of the model granodiorite constant, curvature increased with decreasing yield strength and Young's modulus of the dike. Curvature increased significantly as yield strength decreased from 95 to 90 MPa, and below this threshold value, limb rotation for the kink band was restricted to the dike. Changing Poisson's ratio had no significant effect. The addition of small faults between bounding faults, decreasing fault spacing or increasing dike width increases the curvature. Increasing friction along the faults decreases slip, so the shortening is accommodated by more kinking. Analysis of these parameters also gives us insight concerning the kilometer-scale kink band in the Mount Abbot Quadrangle, where the Rosy Finch Shear Zone may provide the mechanical heterogeneity that is necessary to cause kinking.
Failure analysis of energy storage spring in automobile composite brake chamber
NASA Astrophysics Data System (ADS)
Luo, Zai; Wei, Qing; Hu, Xiaofeng
2015-02-01
This paper takes the energy storage spring of the parking brake cavity, a part of the automobile composite brake chamber, as its research object. The fault tree model of the energy storage spring faults that cause parking brake failure was constructed based on the fault tree analysis method. Next, the parking brake failure model of the energy storage spring was established by analyzing the working principle of the composite brake chamber. Finally, the working load and push rod stroke data measured on a comprehensive valve test bed were used to validate the failure model. The experimental results show that the failure model can distinguish whether the energy storage spring is faulty.
Automatic Detection of Electric Power Troubles (ADEPT)
NASA Technical Reports Server (NTRS)
Wang, Caroline; Zeanah, Hugh; Anderson, Audie; Patrick, Clint; Brady, Mike; Ford, Donnie
1988-01-01
Automatic Detection of Electric Power Troubles (ADEPT) is an expert system that integrates knowledge from three different suppliers to offer an advanced fault-detection system. It is designed for two modes of operation: real-time fault isolation and simulated modeling. Real-time fault isolation of components is accomplished on a power system breadboard through the Fault Isolation Expert System (FIES II) interface with a rule system developed in-house. Faults are quickly detected and displayed, and the rules and chain of reasoning optionally provided on a laser printer. This system consists of a simulated space station power module using direct-current power supplies for solar arrays on three power buses. For tests of the system's ability to locate faults inserted via switches, loads are configured by an INTEL microcomputer and the Symbolics artificial intelligence development system. As these loads are resistive in nature, Ohm's Law is used as the basis for rules by which faults are located. The three-bus system can correct faults automatically where there is a surplus of power available on any of the three buses. Techniques developed and used can be applied readily to other control systems requiring rapid intelligent decisions. Simulated modeling, used for theoretical studies, is implemented using a modified version of Kennedy Space Center's KATE (Knowledge-Based Automatic Test Equipment), FIES II windowing, and an ADEPT knowledge base.
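The Ohm's-law rule idea can be sketched directly; a toy stand-in for ADEPT's FIES II rule base, with names and tolerances invented for illustration.

```python
def expected_current(v_bus, r_load):
    """Ohm's law baseline against which measured branch currents are compared."""
    return v_bus / r_load

def locate_faults(v_bus, branches, tol=0.05):
    """Flag branches whose measured current deviates from V/R by more than tol.
    `branches` maps branch name -> (r_load_ohms, i_measured_amps)."""
    faults = {}
    for name, (r, i_meas) in branches.items():
        i_exp = expected_current(v_bus, r)
        if abs(i_meas - i_exp) > tol * abs(i_exp):
            faults[name] = (i_exp, i_meas)   # expected vs. measured current
    return faults

# Example: a shorted load draws far more than V/R predicts.
print(locate_faults(28.0, {"load_a": (14.0, 2.0), "load_b": (14.0, 5.1)}))
```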
Adaptive model-based control systems and methods for controlling a gas turbine
NASA Technical Reports Server (NTRS)
Brunell, Brent Jerome (Inventor); Mathews, Jr., Harry Kirk (Inventor); Kumar, Aditya (Inventor)
2004-01-01
Adaptive model-based control systems and methods are described so that performance and/or operability of a gas turbine in an aircraft engine, power plant, marine propulsion, or industrial application can be optimized under normal, deteriorated, faulted, failed, and/or damaged operation. First, a model of each relevant system or component is created, and the model is adapted to the engine. Then, if/when deterioration, a fault, a failure, or some kind of damage to an engine component or system is detected, that information is input to the model-based control as changes to the model, constraints, objective function, or other control parameters. With all the information about the engine condition and state, and directives on the control goals in terms of an objective function and constraints, the control then solves an optimization so that the optimal control action can be determined and taken. This model and control may be updated in real time to account for engine-to-engine variation, deterioration, damage, faults, and/or failures using optimal corrective control action commands.
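As an illustration of the optimization step, the following hedged sketch poses a one-variable version of the problem with SciPy: maximize a stand-in thrust model subject to a temperature constraint that is tightened after a fault. The engine model, limits, and numbers are invented; the patent's actual models and solver are not given in this abstract.

```python
# Hedged sketch of the optimization step in model-based control: pick the
# control action u that minimizes an objective subject to constraints that
# tighten when a fault is detected. Model, limits, and weights are invented.
import numpy as np
from scipy.optimize import minimize

def thrust(u):          # stand-in engine model: thrust response
    return 100.0 * u[0] - 20.0 * u[0] ** 2

def turbine_temp(u):    # stand-in engine model: temperature response
    return 600.0 + 900.0 * u[0]

temp_limit = 1200.0     # derated after a detected fault, e.g. from 1400.0

objective = lambda u: -thrust(u)                         # maximize thrust
constraints = [{"type": "ineq", "fun": lambda u: temp_limit - turbine_temp(u)}]
result = minimize(objective, x0=np.array([0.5]), method="SLSQP",
                  bounds=[(0.0, 1.0)], constraints=constraints)
print("optimal fuel command:", result.x)
```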
Experimental analysis of computer system dependability
NASA Technical Reports Server (NTRS)
Iyer, Ravishankar K.; Tang, Dong
1993-01-01
This paper reviews an area which has evolved over the past 15 years: experimental analysis of computer system dependability. Methodologies and advances are discussed for three basic approaches used in the area: simulated fault injection, physical fault injection, and measurement-based analysis. The three approaches are suited, respectively, to dependability evaluation in the three phases of a system's life: design phase, prototype phase, and operational phase. Before the discussion of these phases, several statistical techniques used in the area are introduced. For each phase, a classification of research methods or study topics is outlined, followed by discussion of these methods or topics as well as representative studies. The statistical techniques introduced include the estimation of parameters and confidence intervals, probability distribution characterization, and several multivariate analysis methods. Importance sampling, a statistical technique used to accelerate Monte Carlo simulation, is also introduced. The discussion of simulated fault injection covers electrical-level, logic-level, and function-level fault injection methods as well as representative simulation environments such as FOCUS and DEPEND. The discussion of physical fault injection covers hardware, software, and radiation fault injection methods as well as several software and hybrid tools including FIAT, FERRARI, HYBRID, and FINE. The discussion of measurement-based analysis covers measurement and data processing techniques, basic error characterization, dependency analysis, Markov reward modeling, software dependability, and fault diagnosis. The discussion involves several important issues studied in the area, including fault models, fast simulation techniques, workload/failure dependency, correlated failures, and software fault tolerance.
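Since the survey singles out importance sampling as the technique for accelerating Monte Carlo dependability simulation, a small self-contained example may help: estimating a rare failure probability by sampling from a deliberately pessimistic distribution and reweighting. The rates below are illustrative.

```python
# Importance sampling for rare-event estimation: estimate P(lifetime < t) for
# a highly reliable component by sampling from a tilted (shorter-lived)
# exponential distribution and correcting with likelihood ratios.
import numpy as np

rng = np.random.default_rng(0)
lam, t = 1e-4, 1.0                 # true failure rate, mission time
lam_q = 1.0                        # tilted rate used for sampling
n = 100_000

x = rng.exponential(1.0 / lam_q, n)            # draws from proposal q
weights = (lam * np.exp(-lam * x)) / (lam_q * np.exp(-lam_q * x))
estimate = np.mean((x < t) * weights)          # E_q[1{X<t} p(x)/q(x)]
print(estimate, "vs exact", 1 - np.exp(-lam * t))
```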
Technology transfer by means of fault tree synthesis
NASA Astrophysics Data System (ADS)
Batzias, Dimitris F.
2012-12-01
Since Fault Tree Analysis (FTA) attempts to model and analyze the failure processes of engineering systems, it forms a common technique of good industrial practice. By contrast, fault tree synthesis (FTS) refers to the methodology of constructing complex trees either from dendritic modules built ad hoc or from fault trees already used and stored in a Knowledge Base. In both cases, technology transfer takes place in a quasi-inductive mode, from partial to holistic knowledge. In this work, an algorithmic procedure, including 9 activity steps and 3 decision nodes, is developed for performing this transfer effectively when the fault under investigation occurs within one of the later stages of an industrial procedure with several stages in series. The main parts of the algorithmic procedure are: (i) the construction of a local fault tree within the corresponding production stage, where the fault has been detected; (ii) the formation of an interface made of input faults that might occur upstream; (iii) the fuzzy (to account for uncertainty) multicriteria ranking of these faults according to their significance; and (iv) the synthesis of an extended fault tree based on the construction of part (i) and on the local fault tree of the first-ranked fault in part (iii). An implementation is presented, referring to 'uneven sealing of Al anodic film', thus proving the functionality of the developed methodology.
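A minimal sketch of step (iii) might look as follows, with each candidate upstream fault scored by triangular fuzzy numbers on weighted criteria and ranked by weighted centroid; the faults, criteria, and numbers are invented for illustration and do not reproduce the paper's multicriteria scheme.

```python
# Sketch of fuzzy multicriteria ranking of candidate upstream input faults.
# Each score is a triangular fuzzy number (low, mode, high); ranking uses the
# weighted centroid. All names and numbers are purely illustrative.
import numpy as np

criteria_weights = np.array([0.5, 0.3, 0.2])   # e.g. severity, likelihood, cost
faults = {   # fault -> one triangular fuzzy score (l, m, u) per criterion
    "pretreatment residue": [(6, 7, 9), (4, 5, 7), (2, 3, 5)],
    "bath contamination":   [(3, 5, 6), (6, 8, 9), (5, 6, 8)],
}

def centroid(tfn):                  # defuzzify a triangular fuzzy number
    l, m, u = tfn
    return (l + m + u) / 3.0

ranked = sorted(faults, key=lambda f: -np.dot(
    criteria_weights, [centroid(t) for t in faults[f]]))
print("first-ranked fault:", ranked[0])
```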
Simulated fault injection - A methodology to evaluate fault tolerant microprocessor architectures
NASA Technical Reports Server (NTRS)
Choi, Gwan S.; Iyer, Ravishankar K.; Carreno, Victor A.
1990-01-01
A simulation-based fault-injection method for validating fault-tolerant microprocessor architectures is described. The approach uses mixed-mode simulation (electrical/logic analysis) and injects transient errors at run time to assess the resulting fault impact. As an example, a fault-tolerant architecture which models the digital aspects of a dual-channel real-time jet-engine controller is used. The level of effectiveness of the dual configuration with respect to single and multiple transients is measured. The results indicate 100 percent coverage of single transients. Approximately 12 percent of the multiple transients affect both channels; none result in controller failure since two additional levels of redundancy exist.
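The following toy Monte Carlo experiment illustrates the flavor of such a fault-injection campaign: transient bit flips are injected into one of two redundant channels of a stand-in computation and detection by channel comparison is counted. The 16-bit toy function is an assumption, not the mixed-mode electrical/logic simulation of the study.

```python
# Illustrative Monte Carlo fault injection: flip one bit in one of two
# redundant channels and count how often channel comparison catches it.
import random

def channel(x, flip_bit=None):
    y = (3 * x + 7) & 0xFFFF            # stand-in for one control channel
    return y ^ (1 << flip_bit) if flip_bit is not None else y

random.seed(1)
detected = 0
trials = 10_000
for _ in range(trials):                 # single transient: one channel hit
    x = random.randrange(1 << 16)
    a = channel(x, flip_bit=random.randrange(16))
    b = channel(x)
    detected += (a != b)                # mismatch -> transient detected
print("single-transient coverage:", detected / trials)
```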
Attempting to bridge the gap between laboratory and seismic estimates of fracture energy
McGarr, A.; Fletcher, Joe B.; Beeler, N.M.
2004-01-01
To investigate the behavior of the fracture energy associated with expanding the rupture zone of an earthquake, we have used the results of a large-scale, biaxial stick-slip friction experiment to set the parameters of an equivalent dynamic rupture model. This model is determined by matching the fault slip, the static stress drop and the apparent stress. After confirming that the fracture energy associated with this model earthquake is in reasonable agreement with corresponding laboratory values, we can use it to determine fracture energies for earthquakes as functions of stress drop, rupture velocity and fault slip. If we take account of the state of stress at seismogenic depths, the model extrapolation to larger fault slips yields fracture energies that agree with independent estimates by others based on dynamic rupture models for large earthquakes. For fixed stress drop and rupture speed, the fracture energy scales linearly with fault slip.
Numerical modeling of fluid flow in a fault zone: a case of study from Majella Mountain (Italy).
NASA Astrophysics Data System (ADS)
Romano, Valentina; Battaglia, Maurizio; Bigi, Sabina; De'Haven Hyman, Jeffrey; Valocchi, Albert J.
2017-04-01
The study of fluid flow in fractured rocks plays a key role in reservoir management, including CO2 sequestration and waste isolation. We present a numerical model of fluid flow in a fault zone, based on field data acquired in Majella Mountain, in the Central Apennines (Italy). This fault zone is considered a good analogue for the massive presence of fluid migration in the form of tar. Faults are mechanical features and cause permeability heterogeneities in the upper crust, so they strongly influence fluid flow. The distribution of the main components (core, damage zone) can lead the fault zone to act as a conduit, a barrier, or a combined conduit-barrier system. We integrated existing information and our own structural surveys of the area to better identify the major fault features (e.g., type of fractures, statistical properties, geometrical and petro-physical characteristics). In our model the damage zones of the fault are described as a discretely fractured medium, while the core of the fault is treated as a porous one. Our model utilizes the dfnWorks code, a parallelized computational suite developed at Los Alamos National Laboratory (LANL), that generates a three-dimensional Discrete Fracture Network (DFN) of the damage zones of the fault and characterizes its hydraulic parameters. The challenge of the study is the coupling between the discrete domain of the damage zones and the continuum domain of the core. The field investigations and the basic computational workflow will be described, along with preliminary results of fluid flow simulation at the scale of the fault.
NASA Astrophysics Data System (ADS)
Gabriel, Alice; Pelties, Christian
2014-05-01
In this presentation we will demonstrate the benefits of using modern numerical methods to support physics-based ground motion modeling and research. For this purpose, we utilize SeisSol, an arbitrary high-order derivative Discontinuous Galerkin (ADER-DG) scheme, to solve the spontaneous rupture problem with high-order accuracy in space and time using three-dimensional unstructured tetrahedral meshes. We recently verified the method in various advanced test cases of the 'SCEC/USGS Dynamic Earthquake Rupture Code Verification Exercise' benchmark suite, including branching and dipping fault systems, heterogeneous background stresses, bi-material faults, and rate-and-state friction constitutive formulations. Now, we study the dynamic rupture process using 3D meshes of fault systems constructed from geological and geophysical constraints, such as high-resolution topography, 3D velocity models, and fault geometries. Our starting point is a large-scale earthquake dynamic rupture scenario based on the 1994 Northridge blind thrust event in Southern California. Starting from this well documented and extensively studied event, we intend to understand the ground motion, including the relevant high-frequency content, generated from complex fault systems and its variation arising from various physical constraints. For example, our results imply that the Northridge fault geometry favors a pulse-like rupture behavior.
A new model for the initiation, crustal architecture, and extinction of pull-apart basins
NASA Astrophysics Data System (ADS)
van Wijk, J.; Axen, G. J.; Abera, R.
2015-12-01
We present a new model for the origin, crustal architecture, and evolution of pull-apart basins. The model is based on results of three-dimensional upper crustal numerical models of deformation, field observations, and fault theory, and answers many of the outstanding questions related to these rifts. In our model, geometric differences between pull-apart basins are inherited from the initial geometry of the strike-slip fault step, which results from the early geometry of the strike-slip fault system. As strike-slip motion accumulates, pull-apart basins remain stationary with respect to the underlying basement, and the fault tips may propagate beyond the rift basin. Our model predicts that the sediment source areas may thus migrate over time. This implies that, although pull-apart basins lengthen over time, lengthening is accommodated by extension within the pull-apart basin rather than by formation of new faults outside of the rift zone. In this aspect pull-apart basins behave as narrow rifts: with increasing strike-slip the basins deepen, but there is no significant younging outward. We explain why pull-apart basins do not go through the previously proposed geometric evolutionary stages, a progression that has not been documented in nature. Field studies predict that pull-apart basins become extinct when an active basin-crossing fault forms; this is the most likely fate of pull-apart basins, because strike-slip systems tend to straighten. The model predicts which step dimensions favor the formation of such a fault system, and which favor further development of a pull-apart basin into a short seafloor-spreading ridge. The model also shows that rift shoulder uplift is enhanced if the strike-slip rate is larger than the fault-propagation rate. Crustal compression then contributes to uplift of the rift flanks.
A Hybrid Stochastic-Neuro-Fuzzy Model-Based System for In-Flight Gas Turbine Engine Diagnostics
2001-04-05
Margin (ADM) and (ii) Fault Detection Margin (FDM). Key Words: ANFIS, Engine Health Monitoring, Gas Path Analysis, and Stochastic Analysis Adaptive Network... The paper illustrates the application of a hybrid Stochastic-Fuzzy-Inference Model-Based System (StoFIS) to fault diagnostics and prognostics for both... operational history monitored on-line by the engine health management (EHM) system. To capture the complex functional relationships between different
NASA Astrophysics Data System (ADS)
Li, C. H.; Wu, L. C.; Chan, P. C.; Lin, M. L.
2016-12-01
The National Highway No. 3 - Tianliao III Bridge is located in the southwestern Taiwan mudstone area and crosses the Chekualin fault. Since the bridge was opened to traffic, it has been repaired 11 times. To understand the interaction behavior between thrust faulting and the bridge, a discrete element method-based software program, PFC, was applied to conduct a numerical analysis. A 3D model simulating the thrust faulting and the bridge was established, as shown in Fig. 1. In this conceptual model, the length and width were 50 and 10 m, respectively. Part of the box bottom was moveable, simulating the displacement of the thrust fault. The overburden stratum had a height of 5 m with a fault dip angle of 20° (Fig. 2). From bottom to top, the strata were mudstone, clay, and sand, respectively. The uplift was 1 m, which was 20% of the stratum thickness. In accordance with the investigation, the position of the fault tip was set depending on the fault zone, and the bridge deformation was observed (Fig. 3). By setting "Monitoring Balls" in the numerical model to analyze bridge displacement, we determined that the bridge deck deflection increased as the uplift distance increased. Furthermore, the force caused by the loading of the bridge deck and fault dislocation was determined to cause a downward deflection of the P1 and P2 bridge piers. Finally, the fault deflection trajectory of the P4 pier displayed the maximum displacement (Fig. 4). Similar behavior has been observed through numerical simulation as well as field monitoring data. Usage of the discrete element model (PFC3D) to simulate the deformation behavior between thrust faulting and the bridge provided feedback for the design and improved planning of the bridge.
Identifying Model-Based Reconfiguration Goals through Functional Deficiencies
NASA Technical Reports Server (NTRS)
Benazera, Emmanuel; Trave-Massuyes, Louise
2004-01-01
Model-based diagnosis is now advanced to the point where autonomous systems can face uncertain and faulty situations with success. The next step toward more autonomy is to have the system recover itself after faults occur, a process known as model-based reconfiguration. Given a prediction of the nominal behavior of the system and the result of the diagnosis operation, this paper details how to automatically determine the functional deficiencies of the system after faults occur. These deficiencies are characterized in the case of uncertain state estimates. A methodology is then presented to determine the reconfiguration goals based on the deficiencies. Finally, a recovery process interleaves planning and model predictive control to restore the functionalities in prioritized order.
Strike-Slip Fault Patterns on Europa: Obliquity or Polar Wander?
NASA Technical Reports Server (NTRS)
Rhoden, Alyssa Rose; Hurford, Terry A.; Manga, Michael
2011-01-01
Variations in diurnal tidal stress due to Europa's eccentric orbit have been considered as the driver of strike-slip motion along pre-existing faults, but obliquity and physical libration have not been taken into account. The first objective of this work is to examine the effects of obliquity on the predicted global pattern of fault slip directions based on a tidal-tectonic formation model. Our second objective is to test the hypothesis that incorporating obliquity can reconcile theory and observations without requiring polar wander, which was previously invoked to explain the mismatch found between the slip directions of 192 faults on Europa and the global pattern predicted using the eccentricity-only model. We compute predictions for individual, observed faults at their current latitude, longitude, and azimuth with four different tidal models: eccentricity only, eccentricity plus obliquity, eccentricity plus physical libration, and a combination of all three effects. We then determine whether longitude migration, presumably due to non-synchronous rotation, is indicated in observed faults by repeating the comparisons with and without obliquity, this time also allowing longitude translation. We find that a tidal model including an obliquity of 1.2°, along with longitude migration, can predict the slip directions of all observed features in the survey. However, all but four faults can be fit with only 1° of obliquity so the value we find may represent the maximum departure from a lower time-averaged obliquity value. Adding physical libration to the obliquity model improves the accuracy of predictions at the current locations of the faults, but fails to predict the slip directions of six faults and requires additional degrees of freedom. The obliquity model with longitude migration is therefore our preferred model. Although the polar wander interpretation cannot be ruled out from these results alone, the obliquity model accounts for all observations with a value consistent with theoretical expectations and cycloid modeling.
NASA Astrophysics Data System (ADS)
Wang, Rongxi; Gao, Xu; Gao, Jianmin; Gao, Zhiyong; Kang, Jiani
2018-02-01
As one of the most important approaches for analyzing the mechanism of fault pervasion, fault root cause tracing is a powerful and useful tool for detecting the fundamental causes of faults so as to prevent any further propagation and amplification. Focused on the problems arising from the lack of systematic and comprehensive integration, a novel information transfer-based data-driven framework for fault root cause tracing of complex electromechanical systems in the processing industry was proposed, taking into consideration the experience and qualitative analysis of conventional fault root cause tracing methods. Firstly, an improved symbolic transfer entropy method was presented to construct a directed-weighted information model for a specific complex electromechanical system based on the information flow. Secondly, considering the feedback mechanisms in complex electromechanical systems, a method for determining the threshold values of weights was developed to explore the disciplines of fault propagation. Lastly, an iterative method was introduced to identify the fault development process. The fault root cause was traced by analyzing the changes in information transfer between the nodes along the fault propagation pathway. An actual fault root cause tracing application on a complex electromechanical system is used to verify the effectiveness of the proposed framework. A unique fault root cause is obtained regardless of the choice of the initial variable. Thus, the proposed framework can be flexibly and effectively used in fault root cause tracing for complex electromechanical systems in the processing industry, and it forms the foundation of system vulnerability analysis and condition prediction, as well as other engineering applications.
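For readers unfamiliar with the core quantity, the sketch below estimates a symbolic transfer entropy T(Y→X) by symbolizing each series with ordinal patterns and counting transition frequencies; the embedding dimension and the toy coupled series are assumptions, and the authors' improved estimator and directed-weighted model construction are not reproduced.

```python
# Sketch of symbolic transfer entropy: symbolize each series with ordinal
# (permutation) patterns, then estimate T(Y->X) from symbol transitions.
import numpy as np
from collections import Counter

def symbolize(x, m=3):
    # map each length-m window to its ordinal pattern
    return [tuple(np.argsort(x[i:i + m])) for i in range(len(x) - m + 1)]

def symbolic_te(y, x, m=3):
    sx, sy = symbolize(x, m), symbolize(y, m)
    trip = Counter(zip(sx[1:], sx[:-1], sy[:-1]))      # (x_{t+1}, x_t, y_t)
    pair_xy = Counter(zip(sx[:-1], sy[:-1]))
    pair_xx = Counter(zip(sx[1:], sx[:-1]))
    marg = Counter(sx[:-1])
    n = len(sx) - 1
    te = 0.0
    for (x1, x0, y0), c in trip.items():
        p_joint = c / n
        p_cond_xy = c / pair_xy[(x0, y0)]              # p(x1 | x0, y0)
        p_cond_x = pair_xx[(x1, x0)] / marg[x0]        # p(x1 | x0)
        te += p_joint * np.log2(p_cond_xy / p_cond_x)
    return te

rng = np.random.default_rng(0)
y = rng.normal(size=5000)
x = np.roll(y, 1) + 0.5 * rng.normal(size=5000)        # x driven by past y
print("T(y->x) =", symbolic_te(y, x), " T(x->y) =", symbolic_te(x, y))
```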
NASA Astrophysics Data System (ADS)
Petukhin, A.; Galvez, P.; Somerville, P.; Ampuero, J. P.
2017-12-01
We perform earthquake cycle simulations to study the characteristics of source scaling relations and strong ground motions in multi-segmented fault ruptures. For earthquake cycle modeling, a quasi-dynamic solver (QDYN; Luo et al., 2016) is used to nucleate events and the fully dynamic solver (SPECFEM3D; Galvez et al., 2014, 2016) is used to simulate earthquake ruptures. The Mw 7.3 Landers earthquake has been chosen as a target earthquake to validate our methodology. The SCEC fault geometry for the three-segmented Landers rupture is included and extended at both ends to a total length of 200 km. We follow the 2-D spatially correlated Dc distributions of Hillers et al. (2007), which associate the Dc distribution with different degrees of fault maturity. The fault maturity is related to the variability of Dc on a microscopic scale: large variations of Dc represent immature faults, and lower variations of Dc represent mature faults. Moreover, we impose a taper on (a-b) at the fault edges and limit the fault depth to 15 km. Using these settings, earthquake cycle simulations are performed to nucleate seismic events on different sections of the fault, and dynamic rupture modeling is used to propagate the ruptures. The fault segmentation brings complexity into the rupture process. For instance, the change of strike between fault segments enhances strong variations of stress. In fact, Oglesby and Mai (2012) show the normal stress varies from positive (clamping) to negative (unclamping) between fault segments, which leads to favorable or unfavorable conditions for rupture growth. To replicate these complexities and the effect of fault segmentation on the rupture process, we perform earthquake cycles with dynamic rupture modeling and generate events similar to the Mw 7.3 Landers earthquake. We extract the asperities of these events and analyze the scaling relations between rupture area, average slip, and combined area of asperities versus moment magnitude. Finally, the simulated ground motions will be validated by comparison of simulated response spectra with recorded response spectra and with response spectra from ground motion prediction models. This research is sponsored by the Japan Nuclear Regulation Authority.
Modelling earthquake ruptures with dynamic off-fault damage
NASA Astrophysics Data System (ADS)
Okubo, Kurama; Bhat, Harsha S.; Klinger, Yann; Rougier, Esteban
2017-04-01
Earthquake rupture modelling has been developed for producing scenario earthquakes. This includes understanding the source mechanisms and estimating far-field ground motion given a priori constraints such as the fault geometry, the constitutive law of the medium, and the friction law operating on the fault. It is necessary to consider all of the above complexities of a fault system to conduct realistic earthquake rupture modelling. In addition to the complexity of fault geometry in nature, coseismic off-fault damage, which is observed by a variety of geological and seismological methods, plays a considerable role in the resultant ground motion and its spectrum compared to a model with a simple planar fault surrounded by purely elastic media. Ideally all of these complexities should be considered in earthquake modelling. State-of-the-art techniques developed so far, however, cannot treat all of them simultaneously due to a variety of computational restrictions. Therefore, we adopt the combined finite-discrete element method (FDEM), which can effectively deal with pre-existing complex fault geometry such as fault branches and kinks and can describe coseismic off-fault damage generated during the dynamic rupture. The advantage of FDEM is that it can handle a wide range of length scales, from metric to kilometric scale, corresponding to the off-fault damage and complex fault geometry respectively. We used the FDEM-based software tool called HOSSedu (Hybrid Optimization Software Suite - Educational Version) for the earthquake rupture modelling, which was developed by Los Alamos National Laboratory. We first conducted a cross-validation of this new methodology against other conventional numerical schemes such as the finite difference method (FDM), the spectral element method (SEM), and the boundary integral equation method (BIEM), to evaluate the accuracy with various element sizes and artificial viscous damping values. We demonstrate the capability of the FDEM tool for modelling earthquake ruptures. We then modelled earthquake ruptures allowing for coseismic off-fault damage with appropriate fracture nucleation and growth criteria. We studied the effect of different conditions such as rupture speed (sub-Rayleigh or supershear), the orientation of the initial maximum principal stress with respect to the fault, and the magnitude of the initial stress (to mimic depth). The comparison between the sub-Rayleigh and supershear cases shows that the coseismic off-fault damage is enhanced in the supershear case. The orientation of the maximum principal stress also makes a significant difference: dynamic off-fault cracking is more likely to occur on the extensional side of the fault for high principal stress orientations. It is found that the coseismic off-fault damage reduces the rupture speed due to the dissipation of energy by dynamic off-fault cracking generated in the vicinity of the rupture front. In terms of the ground motion amplitude spectra, it is shown that the high-frequency radiation is enhanced by the coseismic off-fault damage, though it is quickly attenuated. This is caused by the intricate superposition of the radiation generated by the off-fault damage and the perturbation of the rupture speed on the main fault.
Fault displacement hazard assessment for nuclear installations based on IAEA safety standards
NASA Astrophysics Data System (ADS)
Fukushima, Y.
2016-12-01
In the IAEA Safety Requirements NS-R-3, a surface fault displacement hazard assessment (FDHA) is required for the siting of nuclear installations. If any capable faults exist at the candidate site, the IAEA recommends the consideration of alternative sites. However, owing to the progress in palaeoseismological investigations, capable faults may be found at existing sites. In such a case, the IAEA recommends evaluating the safety using probabilistic FDHA (PFDHA), an empirical approach based on a still quite limited database. Therefore a basic and crucial improvement is to increase the database. In 2015, the IAEA produced TecDoc-1767 on palaeoseismology as a reference for the identification of capable faults. Another publication, IAEA Safety Report 85 on ground motion simulation based on fault rupture modelling, provides an annex introducing recent PFDHAs and fault displacement simulation methodologies. The IAEA expanded the FDHA project to cover both the probabilistic approach and physics-based fault rupture modelling. The first approach needs a refinement of the empirical methods by building a worldwide database, and the second approach needs to shift from a kinematic to a dynamic scheme. The two approaches can complement each other, since simulated displacements can fill the gaps of a sparse database and geological observations can be used to calibrate the simulations. The IAEA supported a workshop in October 2015 to discuss the existing databases with the aim of creating a common worldwide database, and a consensus on a unified database was reached. The next milestone is to fill the database with as many fault rupture data sets as possible. Another IAEA working group held a workshop in November 2015 to discuss the state of the art in PFDHA as well as simulation methodologies. The two groups joined in a consultancy meeting in February 2016, shared information, identified issues, discussed goals and outputs, and scheduled future meetings. The aim now is to coordinate activities across the whole set of FDHA tasks.
Reliability Assessment for Low-cost Unmanned Aerial Vehicles
NASA Astrophysics Data System (ADS)
Freeman, Paul Michael
Existing low-cost unmanned aerospace systems are unreliable, and engineers must blend reliability analysis with fault-tolerant control in novel ways. This dissertation introduces the University of Minnesota unmanned aerial vehicle flight research platform, a comprehensive simulation and flight test facility for reliability and fault-tolerance research. An industry-standard reliability assessment technique, the failure modes and effects analysis, is performed for an unmanned aircraft. Particular attention is afforded to the control surface and servo-actuation subsystem. Maintaining effector health is essential for safe flight; failures may lead to loss of control incidents. Failure likelihood, severity, and risk are qualitatively assessed for several effector failure modes. Design changes are recommended to improve aircraft reliability based on this analysis. Most notably, the control surfaces are split, providing independent actuation and dual-redundancy. The simulation models for control surface aerodynamic effects are updated to reflect the split surfaces using a first-principles geometric analysis. The failure modes and effects analysis is extended by using a high-fidelity nonlinear aircraft simulation. A trim state discovery is performed to identify the achievable steady, wings-level flight envelope of the healthy and damaged vehicle. Tolerance of elevator actuator failures is studied using familiar tools from linear systems analysis. This analysis reveals significant inherent performance limitations for candidate adaptive/reconfigurable control algorithms used for the vehicle. Moreover, it demonstrates how these tools can be applied in a design feedback loop to make safety-critical unmanned systems more reliable. Control surface impairments that do occur must be quickly and accurately detected. This dissertation also considers fault detection and identification for an unmanned aerial vehicle using model-based and model-free approaches and applies those algorithms to experimental faulted and unfaulted flight test data. Flight tests are conducted with actuator faults that affect the plant input and sensor faults that affect the vehicle state measurements. A model-based detection strategy is designed and uses robust linear filtering methods to reject exogenous disturbances, e.g. wind, while providing robustness to model variation. A data-driven algorithm is developed to operate exclusively on raw flight test data without physical model knowledge. The fault detection and identification performance of these complementary but different methods is compared. Together, enhanced reliability assessment and multi-pronged fault detection and identification techniques can help to bring about the next generation of reliable low-cost unmanned aircraft.
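As a flavor of the model-based detection strategy described above, the sketch below runs a scalar Kalman filter on a nominal first-order model and flags a fault when the smoothed innovation grows; the dynamics, noise levels, injected sensor bias, and threshold are all invented for illustration rather than taken from the dissertation.

```python
# Residual-based fault detection sketch: a scalar Kalman filter tracks a
# nominal model, and a moving average of the innovation is thresholded.
import numpy as np

rng = np.random.default_rng(2)
a = 0.95                               # nominal dynamics: x_{k+1} = a x_k + w_k
Q, R = 0.01, 0.1                       # process / measurement noise variances
x_hat, P = 0.0, 1.0                    # filter estimate and covariance
x = 0.0
residuals = []
for k in range(400):
    x = a * x + rng.normal(0, Q ** 0.5)
    bias = 2.0 if k >= 250 else 0.0    # sensor bias fault injected at k = 250
    z = x + bias + rng.normal(0, R ** 0.5)
    x_hat, P = a * x_hat, a * P * a + Q        # Kalman predict
    residuals.append(z - x_hat)                # innovation
    K = P / (P + R)                            # Kalman update
    x_hat, P = x_hat + K * (z - x_hat), (1 - K) * P

ma = np.convolve(residuals, np.ones(20) / 20, mode="valid")
alarm = np.abs(ma) > 0.5               # threshold on smoothed innovation
print("first alarm at step:", int(np.argmax(alarm)) + 19)
```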
Salehifar, Mehdi; Moreno-Equilaz, Manuel
2016-01-01
Due to its fault tolerance, a multiphase brushless direct current (BLDC) motor can meet the high reliability demands of electric vehicle applications. The voltage-source inverter (VSI) supplying the motor is subject to open-circuit faults. Therefore, it is necessary to design a fault-tolerant (FT) control algorithm with an embedded fault diagnosis (FD) block. In this paper, finite control set model predictive control (FCS-MPC) is developed to implement the fault-tolerant control algorithm of a five-phase BLDC motor. The developed control method is fast, simple, and flexible. An FD method based on information available from the control block is proposed; this method is simple, robust to common transients in the motor, and able to localize multiple open-circuit faults. The proposed FD and FT control algorithms are embedded in a five-phase BLDC motor drive. In order to validate the theory presented, simulations and experiments are conducted on a five-phase two-level VSI supplying a five-phase BLDC motor.
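A hedged sketch of the FCS-MPC idea follows: enumerate the inverter's finite set of switching states, predict next-step phase currents with a discrete RL model, and apply the state that minimizes a current-tracking cost. The two-level five-phase state set and RL parameters are illustrative; the paper's cost function and BLDC model details differ.

```python
# Minimal FCS-MPC sketch for a two-level five-phase voltage-source inverter:
# enumerate 2^5 switching states, predict currents one step ahead with a
# forward-Euler RL model, and pick the state with the lowest tracking cost.
import itertools
import numpy as np

R, L, Ts, Vdc = 0.5, 1e-3, 1e-4, 48.0
i_now = np.zeros(5)                             # present phase currents
i_ref = np.array([2.0, 0.6, -1.6, -1.6, 0.6])   # reference currents

best_cost, best_state = np.inf, None
for s in itertools.product([0, 1], repeat=5):   # finite control set
    v = Vdc * (np.array(s) - np.mean(s))        # phase voltages vs. midpoint
    i_next = i_now + (Ts / L) * (v - R * i_now) # one-step RL prediction
    cost = np.sum((i_ref - i_next) ** 2)        # current-tracking cost
    if cost < best_cost:
        best_cost, best_state = cost, s
print("apply switching state:", best_state)
```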
An earthquake rate forecast for Europe based on smoothed seismicity and smoothed fault contribution
NASA Astrophysics Data System (ADS)
Hiemer, Stefan; Woessner, Jochen; Basili, Roberto; Wiemer, Stefan
2013-04-01
The main objective of project SHARE (Seismic Hazard Harmonization in Europe) is to develop a community-based seismic hazard model for the Euro-Mediterranean region. The logic tree of earthquake rupture forecasts comprises several methodologies including smoothed seismicity approaches. Smoothed seismicity thus represents an alternative concept to express the degree of spatial stationarity of seismicity and provides results that are more objective, reproducible, and testable. Nonetheless, the smoothed-seismicity approach suffers from the common drawback of being generally based on earthquake catalogs alone, i.e. the wealth of knowledge from geology is completely ignored. We present a model that applies the kernel-smoothing method to both past earthquake locations and slip rates on mapped crustal faults and subductions. The result is mainly driven by the data, being independent of subjective delineation of seismic source zones. The core parts of our model are two distinct location probability densities: The first is computed by smoothing past seismicity (using variable kernel smoothing to account for varying data density). The second is obtained by smoothing fault moment rate contributions. The fault moment rates are calculated by summing the moment rate of each fault patch on a fully parameterized and discretized fault as available from the SHARE fault database. We assume that the regional frequency-magnitude distribution of the entire study area is well known and estimate the a- and b-value of a truncated Gutenberg-Richter magnitude distribution based on a maximum likelihood approach that considers the spatial and temporal completeness history of the seismic catalog. The two location probability densities are linearly weighted as a function of magnitude assuming that (1) the occurrence of past seismicity is a good proxy to forecast occurrence of future seismicity and (2) future large-magnitude events occur more likely in the vicinity of known faults. Consequently, the underlying location density of our model depends on the magnitude. We scale the density with the estimated a-value in order to construct a forecast that specifies the earthquake rate in each longitude-latitude-magnitude bin. The model is intended to be one branch of SHARE's logic tree of rupture forecasts and provides rates of events in the magnitude range of 5 <= m <= 8.5 for the entire region of interest and is suitable for comparison with other long-term models in the framework of the Collaboratory for the Study of Earthquake Predictability (CSEP).
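The heart of the method, kernel smoothing of two location densities and their blending, can be sketched in a few lines; the coordinates, bandwidths, and fixed 0.5 weight below stand in for the paper's variable kernels and magnitude-dependent weighting.

```python
# Sketch of the two kernel-smoothed location densities: past epicenters and
# fault moment-rate contributions are smeared with Gaussian kernels onto a
# grid and blended. All coordinates and parameters are illustrative.
import numpy as np

grid_x, grid_y = np.meshgrid(np.linspace(0, 10, 101), np.linspace(0, 10, 101))

def smooth(points, weights, bandwidth):
    density = np.zeros_like(grid_x)
    for (px, py), w in zip(points, weights):
        d2 = (grid_x - px) ** 2 + (grid_y - py) ** 2
        density += w * np.exp(-d2 / (2 * bandwidth ** 2))
    return density / density.sum()

quakes = [(2.0, 3.0), (2.5, 3.2), (7.0, 8.0)]   # past epicenters
faults = [(5.0, 5.0), (5.5, 5.5)]               # fault-patch centroids
seis = smooth(quakes, [1.0] * len(quakes), bandwidth=0.5)
fault_rate = smooth(faults, [3.0, 2.0], bandwidth=0.8)  # moment-rate weights
combined = 0.5 * seis + 0.5 * fault_rate        # magnitude-dependent in paper
print("peak of combined density:", np.unravel_index(combined.argmax(),
                                                    combined.shape))
```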
Zhang, Wei; Peng, Gaoliang; Li, Chuanhao; Chen, Yuanhang; Zhang, Zhujun
2017-01-01
Intelligent fault diagnosis techniques have replaced time-consuming and unreliable human analysis, increasing the efficiency of fault diagnosis. Deep learning models can improve the accuracy of intelligent fault diagnosis with the help of their multilayer nonlinear mapping ability. This paper proposes a novel method named Deep Convolutional Neural Networks with Wide First-layer Kernels (WDCNN). The proposed method uses raw vibration signals as input (data augmentation is used to generate more inputs) and uses wide kernels in the first convolutional layer for extracting features and suppressing high-frequency noise. Small convolutional kernels in the subsequent layers are used for multilayer nonlinear mapping. AdaBN is implemented to improve the domain adaptation ability of the model. The proposed model addresses the currently limited accuracy of CNNs applied to fault diagnosis. WDCNN can not only achieve 100% classification accuracy on normal signals, but also outperform the state-of-the-art DNN model based on frequency features under different working load and noisy environment conditions. PMID:28241451
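A rough PyTorch rendering of the idea of a wide first-layer kernel followed by small kernels is given below; layer sizes and the ten-class head are assumptions, and the paper's exact architecture, AdaBN variant, and data augmentation are not reproduced.

```python
# Sketch of the WDCNN idea: a wide kernel with a large stride in the first
# Conv1d layer, then small-kernel layers for multilayer nonlinear mapping.
import torch
import torch.nn as nn

model = nn.Sequential(
    nn.Conv1d(1, 16, kernel_size=64, stride=16, padding=24),  # wide first layer
    nn.BatchNorm1d(16), nn.ReLU(), nn.MaxPool1d(2),
    nn.Conv1d(16, 32, kernel_size=3, padding=1),              # small kernels
    nn.BatchNorm1d(32), nn.ReLU(), nn.MaxPool1d(2),
    nn.Conv1d(32, 64, kernel_size=3, padding=1),
    nn.BatchNorm1d(64), nn.ReLU(), nn.AdaptiveMaxPool1d(4),
    nn.Flatten(), nn.Linear(64 * 4, 10),                      # 10 fault classes
)

raw = torch.randn(8, 1, 2048)        # batch of raw vibration segments
print(model(raw).shape)              # torch.Size([8, 10])
```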
Pantea, Michael P.; Blome, Charles D.; Clark, Allan K.
2014-01-01
A three-dimensional model of the Camp Stanley Storage Activity area defines and illustrates the surface and subsurface hydrostratigraphic architecture of the military base and adjacent areas to the south and west using EarthVision software. The Camp Stanley model contains 11 hydrostratigraphic units, in descending order: 1 model layer representing the Edwards aquifer; 1 model layer representing the upper Trinity aquifer; 6 model layers representing the informal hydrostratigraphic units that make up the upper part of the middle Trinity aquifer; and 3 model layers representing, respectively, the Bexar, the Cow Creek, and the top of the Hammett units of the lower part of the middle Trinity aquifer. The Camp Stanley three-dimensional model includes 14 fault structures that generally trend northeast-southwest. The top of the Hammett hydrostratigraphic unit was used to propagate and validate all fault structures and to confirm most of the drill-hole data. Differences between modeled and previously mapped surface geology reflect interpretation of fault relations at depth, fault relations to hydrostratigraphic contacts, and surface digital elevation model simplification to fit the scale of the model. In addition, changes were made based on recently obtained drill-hole data and field reconnaissance done during the construction of the model. The three-dimensional modeling process revealed previously undetected horst and graben structures in the northeastern and southern parts of the study area. This is atypical, as most faults in the area are en echelon faults that step down southeastward to the Gulf Coast. The graben structures may increase the potential for controlling or altering local groundwater flow.
NASA Astrophysics Data System (ADS)
Zhang, W.; Jia, M. P.
2018-06-01
When an incipient fault appears in a rolling bearing, the fault feature is weak and easily submerged in the strong background noise. In this paper, wavelet total variation denoising based on kurtosis (Kurt-WATV) is studied, which can extract the incipient fault feature of the rolling bearing more effectively. The proposed algorithm contains the following main steps: a) establish a sparse diagnosis model; b) represent periodic impulses based on the redundant wavelet dictionary; c) solve the joint optimization problem by the alternating direction method of multipliers (ADMM); d) obtain the reconstructed signal, using the kurtosis value as the criterion to select optimal wavelet subbands. This paper uses the overcomplete rational-dilation wavelet transform (ORDWT) as a dictionary and adjusts the control parameters to achieve concentration in the time-frequency plane. An incipient rolling bearing fault is used as an example, and the results show the effectiveness and superiority of the proposed Kurt-WATV bearing fault diagnosis algorithm.
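Step c) can be illustrated with a compact ADMM solver for the closely related 1-D total variation denoising problem min_x 0.5||y - x||^2 + lambda*||Dx||_1; the wavelet dictionary (ORDWT) and kurtosis-based subband selection are omitted, and lambda and rho are illustrative.

```python
# 1-D total variation denoising by ADMM: x-update solves a linear system,
# z-update is soft thresholding, u is the scaled dual variable.
import numpy as np

def tv_denoise_admm(y, lam=1.0, rho=2.0, iters=200):
    n = len(y)
    D = np.diff(np.eye(n), axis=0)           # first-difference operator
    A = np.eye(n) + rho * D.T @ D            # x-update system matrix
    x, z, u = y.copy(), np.zeros(n - 1), np.zeros(n - 1)
    for _ in range(iters):
        x = np.linalg.solve(A, y + rho * D.T @ (z - u))
        Dx = D @ x
        z = np.sign(Dx + u) * np.maximum(np.abs(Dx + u) - lam / rho, 0.0)
        u += Dx - z                          # scaled dual update
    return x

rng = np.random.default_rng(0)
clean = np.repeat([0.0, 2.0, -1.0], 100)     # piecewise-constant test signal
noisy = clean + 0.5 * rng.normal(size=clean.size)
print("residual RMS:", np.sqrt(np.mean((tv_denoise_admm(noisy) - clean) ** 2)))
```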
NASA Astrophysics Data System (ADS)
Anderson, M.; Bennett, R.; Matti, J.
2004-12-01
Existing geodetic, geomorphic, and geologic studies yield apparently conflicting estimates of fault displacement rates over the last 1.5 m.y. in the greater San Andreas fault (SAF) system of southern California. Do these differences reflect biases in one or more of the inference methods, or is fault displacement really temporally variable? Arguments have been presented for both cases. We investigate the plausibility of variable-rate fault models by combining basin deposit provenance, fault trenching, seismicity, gravity, and magnetic data sets from the San Bernardino basin. These data allow us to trace the path and broad timing of strike-slip fault displacements in buried basement rocks, which in turn allows us to test whether variable-rate fault models fit the displacement path and rate data through the basin. The San Bernardino basin lies between the San Jacinto fault (SJF) and the SAF. Isostatic gravity signatures show a 2 km deep graben centered directly over the modern strand of the SJF, whereas the basin is shallow and asymmetric next to the SAF. This observation indicates that stresses necessary to create the basin have been centered on the SJF for most of the basin's history. Linear magnetic anomalies, used as geologic markers, are offset ~25 km across the northernmost strands of the SJF, which matches offset estimates south of the basin. These offset anomalies indicate that the SJF and SAF are discrete fault systems that do not directly interact south of the San Gabriel Mountains; therefore spatial slip variability combined with sparse sampling cannot explain the conflicting rate data. Furthermore, analyses of basin deposits indicate that movement on the SJF began between 1.3 and 1.5 Ma, yielding an overall average displacement rate in the range of 17 to 19 mm/yr, which is higher than some shorter-term estimates based on geodesy and geomorphology. Average displacement rates over this same time period for the San Bernardino strand of the SAF, on the other hand, are inferred to be low, consistent with some recent short-term estimates based on geodesy, but in contrast with estimates based on geomorphology. We conclude that either published estimates for the short-term SJF displacement rate do not accurately reflect the full SJF rate, or the SJF rate has decreased over time, with implications for rate changes on other faults in the region. We explore the latter explanation with models of time-variable displacement rate for the greater SAF system that satisfy all existing data.
Dynamic rupture models of earthquakes on the Bartlett Springs Fault, Northern California
Lozos, Julian C.; Harris, Ruth A.; Murray, Jessica R.; Lienkaemper, James J.
2015-01-01
The Bartlett Springs Fault (BSF), the easternmost branch of the northern San Andreas Fault system, creeps along much of its length. Geodetic data for the BSF are sparse, and surface creep rates are generally poorly constrained. The two existing geodetic slip rate inversions resolve at least one locked patch within the creeping zones. We use the 3-D finite element code FaultMod to conduct dynamic rupture models based on both geodetic inversions, in order to determine the ability of rupture to propagate into the creeping regions, as well as to assess possible magnitudes for BSF ruptures. For both sets of models, we find that the distribution of aseismic creep limits the extent of coseismic rupture, due to the contrast in frictional properties between the locked and creeping regions.
Earthquake and volcano clustering via stress transfer at Yucca Mountain, Nevada
Parsons, T.; Thompson, G.A.; Cogbill, A.H.
2006-01-01
The proposed national high-level nuclear waste repository at Yucca Mountain is close to Quaternary cinder cones and faults with Quaternary slip. Volcano eruption and earthquake frequencies are low, with indications of spatial and temporal clustering, making probabilistic assessments difficult. In an effort to identify the most likely intrusion sites, we based a three-dimensional finite-element model on the expectation that faulting and basalt intrusions are sensitive to the magnitude and orientation of the least principal stress in extensional terranes. We found that in the absence of fault slip, variation in overburden pressure caused a stress state that preferentially favored intrusions at Crater Flat. However, when we allowed central Yucca Mountain faults to slip in the model, we found that magmatic clustering was not favored at Crater Flat or in the central Yucca Mountain block. Instead, we calculated that the stress field was most encouraging to intrusions near fault terminations, consistent with the location of the most recent volcanism at Yucca Mountain, the Lathrop Wells cone. We found this linked fault and magmatic system to be mutually reinforcing in the model in that Lathrop Wells feeder dike inflation favored renewed fault slip.
Tidal Fluctuations in a Deep Fault Extending Under the Santa Barbara Channel, California
NASA Astrophysics Data System (ADS)
Garven, G.; Stone, J.; Boles, J. R.
2013-12-01
Faults are known to strongly affect deep groundwater flow, and exert a profound control on petroleum accumulation, migration, and natural seafloor seepage from coastal reservoirs within the young sedimentary basins of southern California. In this paper we focus on major fault structure permeability and compressibility in the Santa Barbara Basin, where unique submarine and subsurface instrumentation provide the hydraulic characterization of faults in a structurally complex system. Subsurface geologic logs, geophysical logs, fluid P-T-X data, seafloor seep discharge patterns, fault mineralization petrology, isotopic data, fluid inclusions, and structural models help characterize the hydrogeological nature of faults in this seismically active and young geologic terrain. Unique submarine gas flow data from a natural submarine seep area of the Santa Barbara Channel help constrain fault permeability k ~ 30 millidarcys for large-scale upward migration of methane-bearing formation fluids along one of the major fault zones. At another offshore site near Platform Holly, pressure-transducer time-series data from a 1.5 km deep exploration well in the South Ellwood Field demonstrate a strong ocean tidal component, due to vertical fault connectivity to the seafloor. Analytical models from classic hydrologic papers by Jacob-Ferris-Bredehoeft-van der Kamp-Wang can be used to extract large-scale fault permeability and compressibility parameters, based on tidal signal amplitude attenuation and phase shift at depth. For the South Ellwood Fault, we estimate k ~ 38 millidarcys (hydraulic conductivity K ~ 3.6E-07 m/s) and specific storage coefficient Ss ~ 5.5E-08 m^-1. The tidal-derived hydraulic properties also suggest a low effective porosity for the fault zone, n ~ 1 to 3%. Results of forward modeling with 2-D finite element models illustrate significant lateral propagation of the tidal signal into the highly permeable Monterey Formation. The results have important practical implications for fault characterization, petroleum migration, structural diagenesis, and carbon sequestration.
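For context, the Ferris-type solution underlying this kind of analysis can be written as follows (a sketch of the classic one-dimensional confined-aquifer response to a harmonic boundary, not the authors' exact formulation), where h0 is the tidal amplitude, S the storativity, T the transmissivity, and tau the tidal period; matching the observed amplitude attenuation and phase lag at depth x gives two equations for S and T, and hence permeability and compressibility:

```latex
h(x,t) = h_0 \, e^{-x\sqrt{\pi S/(\tau T)}} \,
         \sin\!\left( \frac{2\pi t}{\tau} - x\sqrt{\frac{\pi S}{\tau T}} \right)
```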
DOE Office of Scientific and Technical Information (OSTI.GOV)
Pashin, J.C.; Raymond, D.E.; Rindsberg, A.K.
1997-08-01
Gilbertown Field is the oldest oil field in Alabama and produces oil from chalk of the Upper Cretaceous Selma Group and from sandstone of the Eutaw Formation along the southern margin of the Gilbertown fault system. Most of the field has been in primary recovery since establishment, but production has declined to marginally economic levels. This investigation applies advanced geologic concepts designed to aid implementation of improved recovery programs. The Gilbertown fault system is detached at the base of Jurassic salt. The fault system began forming as a half graben and evolved into a full graben by the Late Cretaceous. Conventional trapping mechanisms are effective in Eutaw sandstone, whereas oil in Selma chalk is trapped in faults and fault-related fractures. Burial modeling establishes that the subsidence history of the Gilbertown area is typical of extensional basins and includes a major component of sediment loading and compaction. Surface mapping and fracture analysis indicate that faults offset strata as young as Miocene and that joints may be related to regional uplift postdating fault movement. Preliminary balanced structural models of the Gilbertown fault system indicate that synsedimentary growth factors need to be incorporated into the basic equations of area balance to model strain and predict fractures in Selma and Eutaw reservoirs.
Research on Fault Rate Prediction Method of T/R Component
NASA Astrophysics Data System (ADS)
Hou, Xiaodong; Yang, Jiangping; Bi, Zengjun; Zhang, Yu
2017-07-01
The T/R component is an important part of a large phased array radar antenna; because of the large number of components and their high fault rate, fault rate prediction for T/R components is of great significance. Aiming at the problems of the traditional grey model GM(1,1) in practical operation, a discrete grey model is established based on the original model, an optimization factor is introduced to optimize the background value, and a linear form of the prediction model is added, yielding an improved discrete grey model with linear regression. Finally, an example is simulated and compared with other models. The results show that the proposed method has higher accuracy, a simple solution procedure, and a wider scope of application.
Fault Diagnosis from Raw Sensor Data Using Deep Neural Networks Considering Temporal Coherence.
Zhang, Ran; Peng, Zhen; Wu, Lifeng; Yao, Beibei; Guan, Yong
2017-03-09
Intelligent condition monitoring and fault diagnosis by analyzing sensor data can assure the safety of machinery. Conventional fault diagnosis and classification methods usually implement pretreatments to decrease noise and extract some time-domain or frequency-domain features from raw time series sensor data. Then, some classifiers are utilized to make a diagnosis. However, these conventional fault diagnosis approaches depend on expertise for feature selection, and they do not consider the temporal coherence of time series data. This paper proposes a fault diagnosis model based on Deep Neural Networks (DNN). The model can directly recognize raw time series sensor data without feature selection and signal processing. It also takes advantage of the temporal coherence of the data. Firstly, raw time series training data collected by sensors are used to train the DNN until the cost function of the DNN reaches its minimal value; secondly, test data are used to test the classification accuracy of the DNN on local time series data. Finally, fault diagnosis considering temporal coherence with former time series data is implemented. Experimental results show that the classification accuracy for bearing faults can reach 100%. The proposed fault diagnosis approach is effective in recognizing the type of bearing faults.
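One simple way to exploit temporal coherence at diagnosis time, shown below as a hedged sketch, is to smooth the per-window predictions with a majority vote over the current and preceding windows; the vote length and the toy label stream are illustrative, and the paper's own scheme may differ.

```python
# Majority-vote smoothing of per-window classifier outputs over a short
# history, so that isolated misclassifications are voted away.
from collections import Counter

def temporally_coherent(labels, history=5):
    smoothed = []
    for t in range(len(labels)):
        window = labels[max(0, t - history + 1): t + 1]
        smoothed.append(Counter(window).most_common(1)[0][0])
    return smoothed

raw_preds = ["ok", "ok", "inner", "ok", "inner", "inner", "inner", "ok"]
print(temporally_coherent(raw_preds))   # isolated flips are voted away
```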
Method and system for fault accommodation of machines
NASA Technical Reports Server (NTRS)
Goebel, Kai Frank (Inventor); Subbu, Rajesh Venkat (Inventor); Rausch, Randal Thomas (Inventor); Frederick, Dean Kimball (Inventor)
2011-01-01
A method for multi-objective fault accommodation using predictive modeling is disclosed. The method includes using a simulated machine that simulates a faulted actual machine, and using a simulated controller that simulates an actual controller. A multi-objective optimization process is performed, based on specified control settings for the simulated controller and specified operational scenarios for the simulated machine controlled by the simulated controller, to generate a Pareto frontier-based solution space relating performance of the simulated machine to settings of the simulated controller, including adjustment to the operational scenarios to represent a fault condition of the simulated machine. Control settings of the actual controller are adjusted, represented by the simulated controller, for controlling the actual machine, represented by the simulated machine, in response to a fault condition of the actual machine, based on the Pareto frontier-based solution space, to maximize desirable operational conditions and minimize undesirable operational conditions while operating the actual machine in a region of the solution space defined by the Pareto frontier.
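The Pareto frontier extraction at the core of the method can be sketched as a non-dominance filter over simulated (performance, risk) pairs; the points below are invented.

```python
# Extract the Pareto frontier from simulated operating points, maximizing
# performance (column 0) while minimizing wear/risk (column 1).
import numpy as np

points = np.array([[0.9, 0.7], [0.8, 0.3], [0.6, 0.2],
                   [0.7, 0.5], [0.95, 0.9], [0.5, 0.1]])

def pareto_front(pts):
    front = []
    for i, p in enumerate(pts):
        dominated = any((q[0] >= p[0]) and (q[1] <= p[1]) and
                        ((q[0] > p[0]) or (q[1] < p[1])) for q in pts)
        if not dominated:
            front.append(i)
    return front

print("non-dominated settings:", pareto_front(points))
```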
Observer-Based Adaptive Fault-Tolerant Tracking Control of Nonlinear Nonstrict-Feedback Systems.
Wu, Chengwei; Liu, Jianxing; Xiong, Yongyang; Wu, Ligang
2017-06-28
This paper studies an output-based adaptive fault-tolerant control problem for nonlinear systems in nonstrict-feedback form. Neural networks are utilized to identify the unknown nonlinear characteristics in the system. An observer and a general fault model are constructed to estimate the unavailable states and describe the fault, respectively. Adaptive parameters are constructed to overcome the difficulties in the design process for nonstrict-feedback systems. Meanwhile, the dynamic surface control technique is introduced to avoid the problem of 'explosion of complexity'. Furthermore, based on the adaptive backstepping control method, an output-based adaptive neural tracking control strategy is developed for the considered system against actuator faults, which can ensure that all the signals in the resulting closed-loop system are bounded and that the system output signal can be regulated to follow the response of the given reference signal with a small error. Finally, simulation results are provided to validate the effectiveness of the control strategy proposed in this paper.
Robust dead reckoning system for mobile robots based on particle filter and raw range scan.
Duan, Zhuohua; Cai, Zixing; Min, Huaqing
2014-09-04
Robust dead reckoning is a complicated problem for wheeled mobile robots (WMRs) that may become faulty, for example through the sticking of sensors or the slippage of wheels, because the discrete fault models and the continuous states have to be estimated simultaneously to reach a reliable fault diagnosis and accurate dead reckoning. Particle filters are one of the most promising approaches to handle hybrid system estimation problems, and they have also been widely used in many WMR applications, such as pose tracking, SLAM, video tracking, and fault identification. In this paper, the readings of a laser range finder, which may also be corrupted by noise, are used to reach accurate dead reckoning. The main contribution is a systematic method to implement fault diagnosis and dead reckoning concurrently in a particle filter framework. Firstly, the perception model of a laser range finder is given, where the raw scan may be faulty. Secondly, the kinematics of the normal model and different fault models for WMRs are given. Thirdly, the particle filter for fault diagnosis and dead reckoning is discussed. Finally, experiments and analyses are reported to show the accuracy and efficiency of the presented method.
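A compact sketch of the hybrid estimation idea follows: each particle carries a discrete fault mode plus a continuous state, modes switch with small probability, and weights come from a range-style measurement. The one-dimensional kinematics and probabilities are invented and far simpler than the paper's WMR models.

```python
# Particle filter over a hybrid state: discrete fault mode (0 = normal,
# 1 = left wheel slips) plus a 1-D position, with mode switching and
# importance resampling driven by a noisy range-like measurement.
import numpy as np

rng = np.random.default_rng(0)
N = 1000
modes = np.zeros(N, dtype=int)
pos = np.zeros(N)

def propagate(pos, modes, v=1.0, dt=0.1):
    gain = np.where(modes == 0, 1.0, 0.5)       # slipping halves the advance
    return pos + gain * v * dt + rng.normal(0, 0.01, len(pos))

true_pos, true_mode = 0.0, 0
for step in range(100):
    if step == 50:
        true_mode = 1                            # fault occurs mid-run
    true_pos += (1.0 if true_mode == 0 else 0.5) * 0.1
    z = true_pos + rng.normal(0, 0.05)           # noisy range-like reading

    switch = rng.random(N) < 0.02                # occasional mode jumps
    modes = np.where(switch, 1 - modes, modes)
    pos = propagate(pos, modes)
    w = np.exp(-0.5 * ((z - pos) / 0.05) ** 2) + 1e-12
    idx = rng.choice(N, N, p=w / w.sum())        # importance resampling
    pos, modes = pos[idx], modes[idx]

print("P(slip fault) =", modes.mean(), " position =", pos.mean())
```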
Liu, Qiang; Chai, Tianyou; Wang, Hong; Qin, Si-Zhao Joe
2011-12-01
The continuous annealing process line (CAPL) of cold rolling is an important unit for improving the mechanical properties of steel strips in steel making. In continuous annealing processes, strip tension is an important factor indicating whether the line operates steadily. An abnormal tension profile distribution along the production line can lead to strip breaks and roll slippage. Therefore, it is essential to estimate the whole tension profile in order to prevent the occurrence of faults. However, in real annealing processes, only a limited number of strip tension sensors are installed along the machine direction. Since the effects of strip temperature, gas flow, bearing friction, strip inertia, and roll eccentricity lead to nonlinear tension dynamics, it is difficult to apply a first-principles model to estimate the tension profile distribution. In this paper, a novel data-based hybrid tension estimation and fault diagnosis method is proposed to estimate the unmeasured tension between two neighboring rolls. The main model is established by an observer-based method using a limited number of measured tensions, speeds, and currents of each roll, and the tension error compensation model is designed by applying neural-network-based principal component regression. The corresponding tension fault diagnosis method is designed using the estimated tensions. Finally, the proposed tension estimation and fault diagnosis method was applied to a real CAPL in a steel-making company, demonstrating the effectiveness of the proposed method.
Hydrostructural maps of the Death Valley regional flow system, Nevada and California
Potter, C.J.; Sweetkind, D.S.; Dickerson, R.P.; Killgore, M.L.
2002-01-01
The locations of principal faults and structural zones that may influence ground-water flow were compiled in support of a three-dimensional ground-water model for the Death Valley regional flow system (DVRFS), which covers 80,000 square km in southwestern Nevada and southeastern California. Faults include Neogene extensional and strike-slip faults and pre-Tertiary thrust faults. Emphasis was given to characteristics of faults and deformed zones that may have a high potential for influencing hydraulic conductivity. These include: (1) faulting that results in the juxtaposition of stratigraphic units with contrasting hydrologic properties, which may cause ground-water discharge and other perturbations in the flow system; (2) special physical characteristics of the fault zones, such as brecciation and fracturing, that may cause specific parts of the zone to act either as conduits or as barriers to fluid flow; (3) the presence of a variety of lithologies whose physical and deformational characteristics may serve to impede or enhance flow in fault zones; (4) orientation of a fault with respect to the present-day stress field, possibly influencing hydraulic conductivity along the fault zone; and (5) faults that have been active in late Pleistocene or Holocene time and areas of contemporary seismicity, which may be associated with enhanced permeabilities. The faults shown on maps A and B are largely from Workman and others (in press), and fit one or more of the following criteria: (1) faults that are more than 10 km in map length; (2) faults with more than 500 m of displacement; and (3) faults in sets that define a significant structural fabric that characterizes a particular domain of the DVRFS. The following fault types are shown: Neogene normal, Neogene strike-slip, Neogene low-angle normal, pre-Tertiary thrust, and structural boundaries of Miocene calderas. We have highlighted faults that have late Pleistocene to Holocene displacement (Piety, 1996). Areas of thick Neogene basin-fill deposits (thicknesses 1-2 km, 2-3 km, and >3 km) are shown on map A, based on gravity anomalies and depth-to-basement modeling by Blakely and others (1999). We have interpreted the positions of faults in the subsurface, generally following the interpretations of Blakely and others (1999). Where geophysical constraints are not present, the faults beneath late Tertiary and Quaternary cover have been extended based on geologic reasoning. Nearly all of these concealed faults are shown with continuous solid lines on maps A and B, in order to provide continuous structures for incorporation into the hydrogeologic framework model (HFM). Map A also shows the potentiometric surface, regional springs (25-35 degrees Celsius, D'Agnese and others, 1997), and cold springs (Turner and others, 1996).
NASA Astrophysics Data System (ADS)
Argyropoulou, Evangelia
2015-04-01
The current study focused on the seafloor morphology of the North Aegean Basin in Greece, through Object Based Image Analysis (OBIA) using a Digital Elevation Model. The goal was the automatic extraction of morphologic and morphotectonic features, culminating in fault surface extraction. An OBIA approach was developed based on the bathymetric data, and the features extracted on morphological criteria were compared with the corresponding landforms derived through tectonic analysis. A digital elevation model of 150 m spatial resolution was used. First, slope, profile curvature, and percentile were extracted from this bathymetry grid. The OBIA approach was developed within the eCognition environment. Four segmentation levels were created, with "level 4" as the target: at this level, the final classes of geomorphological features were classified as discontinuities, fault-like features and fault surfaces. At the previous levels, additional landforms were also classified, such as the continental platform and the continental slope. The results of the developed approach were evaluated in two ways. First, classification stability measures were computed within eCognition. Then, the results were compared qualitatively and quantitatively with a reference tectonic map that had been created manually from the analysis of seismic profiles. The comparison was satisfactory, supporting the validity of the developed OBIA approach.
DEPEND: A simulation-based environment for system level dependability analysis
NASA Technical Reports Server (NTRS)
Goswami, Kumar; Iyer, Ravishankar K.
1992-01-01
The design and evaluation of highly reliable computer systems is a complex issue. Designers mostly develop such systems based on prior knowledge and experience, and occasionally from analytical evaluations of simplified designs. A simulation-based environment called DEPEND, which is especially geared to the design and evaluation of fault-tolerant architectures, is presented. DEPEND is unique in that it exploits the properties of object-oriented programming to provide a flexible framework with which a user can rapidly model and evaluate various fault-tolerant systems. The key features of the DEPEND environment are described, and its capabilities are illustrated with a detailed analysis of a real design. In particular, DEPEND is used to simulate the Unix-based Tandem Integrity fault-tolerant system and to evaluate how well it handles near-coincident errors caused by correlated and latent faults. Design factors that affect how the system handles near-coincident errors, such as memory scrubbing, re-integration policies, and workload-dependent repair times, are also evaluated, along with the method used by DEPEND to simulate error latency and the time-acceleration technique that provides enormous simulation speed-up. Unlike other simulation-based dependability studies, the use of these approaches and the accuracy of the simulation model are validated by comparing the simulation results with measurements obtained from fault injection experiments conducted on a production Tandem Integrity machine.
Model-Based Assurance Case+ (MBAC+): Tutorial on Modeling Radiation Hardness Assurance Activities
NASA Technical Reports Server (NTRS)
Austin, Rebekah; Label, Ken A.; Sampson, Mike J.; Evans, John; Witulski, Art; Sierawski, Brian; Karsai, Gabor; Mahadevan, Nag; Schrimpf, Ron; Reed, Robert A.
2017-01-01
This presentation will cover why modeling is useful for radiation hardness assurance cases, and also provide information on Model-Based Assurance Case+ (MBAC+), NASA's Reliability Maintainability Template, and Fault Propagation Modeling.
Assessment on the influence of resistive superconducting fault current limiter in VSC-HVDC system
NASA Astrophysics Data System (ADS)
Lee, Jong-Geon; Khan, Umer Amir; Hwang, Jae-Sang; Seong, Jae-Kyu; Shin, Woo-Ju; Park, Byung-Bae; Lee, Bang-Wook
2014-09-01
Owing to lower risks of commutation failures, harmonic generation and reactive power consumption, the Voltage Source Converter (VSC) based HVDC system is regarded as the optimum HVDC solution for the future power grid. However, the absence of suitable fault protection devices for HVDC systems hinders efficient VSC-HVDC power grid design. In order to enhance the reliability of the VSC-HVDC power grid against fault currents, the application of resistive Superconducting Fault Current Limiters (SFCLs) can be considered. SFCLs can also be applied to VSC-HVDC systems with integrated AC power systems in order to enhance the transient response and robustness of the system. In this paper, in order to evaluate the role of SFCLs in VSC-HVDC systems and to determine their suitable positions in VSC-HVDC power systems integrated with AC power systems, a simulation model based on the Korea Jeju-Haenam HVDC power system was designed in Matlab Simulink/SimPowerSystems. The model comprises a VSC-HVDC system connected to an AC microgrid. Using this model, the feasible locations of resistive SFCLs were evaluated for DC line-to-line, DC line-to-ground and three-phase AC faults. The simulation model proved effective for evaluating the positive effects of resistive SFCLs in suppressing fault currents in VSC-HVDC systems as well as in the integrated AC systems. Finally, the optimum locations of SFCLs in VSC-HVDC transmission systems were suggested based on the simulation results.
NASA Astrophysics Data System (ADS)
Lin, Y. K.; Ke, M. C.; Ke, S. S.
2016-12-01
A fault is commonly considered active if it has moved one or more times in the last 10,000 years and is likely to generate another earthquake sometime in the future. The relationship between fault reactivation and surface deformation has been of concern since the 1999 Chi-Chi earthquake (M=7.2). Investigations of well-known disastrous earthquakes in recent years indicate that surface deformation is controlled by the 3D geometric shape of the fault. Because surface deformation can severely damage critical infrastructure, buildings, roads, and power, water and gas lines, it is very important to perform pre-disaster risk assessment using a 3D active fault model in order to reduce the economic losses, injuries and deaths caused by large earthquakes. The approaches to building a 3D active fault model can be categorized as (1) field investigation, (2) profile digitization and (3) 3D modeling. In this research, we first tracked the location of the fault scarp in the field, then combined balanced seismic profiles and historical earthquake data to build the subsurface fault plane model using the SKUA-GOCAD program. Finally, we compared the results from a trishear model (written by Richard W. Allmendinger, 2012) and the PFC-3D program (Itasca) and obtained the calculated extent of the deformation area. From analysis of the surface deformation area produced by the Hsin-Chu Fault, we conclude that the damage zone approaches 68 286 m, with a magnitude of 6.43 and an offset of 0.6 m; on this basis we estimate the population casualties and building damage for an M=6.43 earthquake in the Hsin-Chu area, Taiwan. In the future, to apply these results accurately to earthquake disaster prevention, we need to further consider groundwater effects and the soil-structure interaction induced by faulting.
NASA Astrophysics Data System (ADS)
Nakano, M.; Kumagai, H.; Toda, S.; Ando, R.; Yamashina, T.; Inoue, H.; Sunarjo
2010-04-01
On 2007 March 6, an earthquake doublet occurred along the Sumatran fault, Indonesia. The epicentres were located near Padang Panjang, central Sumatra, Indonesia. The first earthquake, with a moment magnitude (Mw) of 6.4, occurred at 03:49 UTC and was followed two hours later (05:49 UTC) by an earthquake of similar size (Mw = 6.3). We studied the earthquake doublet by a waveform inversion analysis using data from a broadband seismograph network in Indonesia (JISNET). The focal mechanisms of the two earthquakes indicate almost identical right-lateral strike-slip faults, consistent with the geometry of the Sumatran fault. Both earthquakes nucleated below the northern end of Lake Singkarak, which is in a pull-apart basin between the Sumani and Sianok segments of the Sumatran fault system, but the earthquakes ruptured different fault segments. The first earthquake occurred along the southern Sumani segment and its rupture propagated southeastward, whereas the second one ruptured the northern Sianok segment northwestward. Along these fault segments, earthquake doublets, in which the two adjacent fault segments rupture one after the other, have occurred repeatedly. We investigated the state of stress at a segment boundary of a fault system based on the Coulomb stress changes. The stress on faults increases during interseismic periods and is released by faulting. At a segment boundary, on the other hand, the stress increases both interseismically and coseismically, and may not be released unless new fractures are created. Accordingly, ruptures may tend to initiate at a pull-apart basin. When an earthquake occurs on one of the fault segments, the stress increases coseismically around the basin. The stress changes caused by that earthquake may trigger a rupture on the other segment after a short time interval. We also examined the mechanism of the delayed rupture based on a theory of a fluid-saturated poroelastic medium and dynamic rupture simulations incorporating a rheological velocity hardening effect. These models of the delayed rupture can qualitatively explain the observations, but further studies, especially based on the rheological effect, are required for quantitative studies.
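For readers unfamiliar with the Coulomb stress criterion invoked above, the sketch below computes the standard change in Coulomb failure stress on a receiver fault. The sign convention, effective friction coefficient, and input stress changes are illustrative assumptions, not values from this study.

```python
# Delta CFS = delta_tau + mu_eff * delta_sigma_n, with unclamping
# (normal stress reduction) counted positive; a positive result means
# the receiver fault is brought closer to failure.
def coulomb_stress_change(d_shear_mpa, d_unclamping_mpa, mu_eff=0.4):
    """Change in Coulomb failure stress (MPa) on a receiver fault."""
    return d_shear_mpa + mu_eff * d_unclamping_mpa

# e.g. 0.3 MPa shear increase and 0.2 MPa unclamping on the adjacent segment
print(coulomb_stress_change(0.3, 0.2))   # 0.38 MPa: loaded toward failure
```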
Fault-related fold styles and progressions in fold-thrust belts: Insights from sandbox modeling
NASA Astrophysics Data System (ADS)
Yan, Dan-Ping; Xu, Yan-Bo; Dong, Zhou-Bin; Qiu, Liang; Zhang, Sen; Wells, Michael
2016-03-01
Fault-related folds of variable structural styles and assemblages commonly coexist in orogenic belts with competent-incompetent interlayered sequences. Despite their commonality, the kinematic evolution of these structural styles and assemblages is often loosely constrained because multiple solutions exist for their structural progression during tectonic restoration. We use a sandbox modeling instrument with a particle image velocimetry monitor to test four sandbox models with multilayer competent-incompetent materials. Test results reveal that decollement folds initiate along selected incompetent layers with decreasing velocity difference and constant vorticity difference between the hanging wall and footwall of the initial fault tips. The decollement folds are progressively converted to fault-propagation folds and fault-bend folds through the development of fault ramps breaking across competent layers, followed by propagation into fault flats within an upper incompetent layer. Thick-skinned thrusting is produced by initiating a decollement fault within the metamorphic basement. Progressive thrusting and uplift of the thick-skinned thrust trigger initiation of the uppermost incompetent decollement, with formation of a decollement fold and subsequent conversion to fault-propagation and fault-bend folds, which combine to form an imbricate thrust system. Breakouts at the base of the early-formed fault ramps along the lowest incompetent layers, which may correspond to basement-cover contacts, dome the uppermost decollement and imbricate thrusts to form passive roof duplexes and constitute the thin-skinned thrust belt. Structural styles and assemblages in each tectonic stage are similar to those in representative orogenic belts, including the South China, Southern Appalachian, and Alpine orogenic belts.
Test pattern generation for ILA sequential circuits
NASA Technical Reports Server (NTRS)
Feng, YU; Frenzel, James F.; Maki, Gary K.
1993-01-01
An efficient method of generating test patterns for sequential machines implemented using one-dimensional, unilateral, iterative logic arrays (ILA's) of BTS pass transistor networks is presented. Based on a transistor level fault model, the method affords a unique opportunity for real-time fault detection with improved fault coverage. The resulting test sets are shown to be equivalent to those obtained using conventional gate level models, thus eliminating the need for additional test patterns. The proposed method advances the simplicity and ease of the test pattern generation for a special class of sequential circuitry.
Aircraft applications of fault detection and isolation techniques
NASA Astrophysics Data System (ADS)
Marcos Esteban, Andres
In this thesis the problems of fault detection & isolation and fault tolerant systems are studied from the perspective of LTI frequency-domain, model-based techniques. Emphasis is placed on the applicability of these LTI techniques to nonlinear models, especially to aerospace systems. Two applications of Hinfinity LTI fault diagnosis are given using an open-loop (no controller) design approach: one for the longitudinal motion of a Boeing 747-100/200 aircraft, the other for a turbofan jet engine. An algorithm formalizing a robust identification approach based on model validation ideas is also given and applied to the previous jet engine. A general linear fractional transformation formulation is given in terms of the Youla and Dual Youla parameterizations for the integrated (control and diagnosis filter) approach. This formulation provides better insight into the trade-off between the control and the diagnosis objectives. It also provides the basic groundwork towards the development of nested schemes for the integrated approach. These nested structures allow iterative improvements on the control/filter Youla parameters based on successive identification of the system uncertainty (as given by the Dual Youla parameter). The thesis concludes with an application of Hinfinity LTI techniques to the integrated design for the longitudinal motion of the previous Boeing 747-100/200 model.
Toward a Model-Based Approach for Flight System Fault Protection
NASA Technical Reports Server (NTRS)
Day, John; Meakin, Peter; Murray, Alex
2012-01-01
The approach uses SysML/UML to describe the physical structure of the system; this part of the model would be shared with other teams (FS Systems Engineering, Planning & Execution, V&V, Operations, etc.) in an integrated model-based engineering environment. It uses the UML Profile mechanism, defining Stereotypes to precisely express the concepts of the fault protection (FP) domain; this extends the UML/SysML languages to contain the FP concepts. UML/SysML, together with this profile, is then used to capture FP concepts and relationships in the model and to generate typical FP engineering products (the FMECA, Fault Tree, MRD, and V&V Matrices).
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ismullah M, Muh. Fawzy; Lantu; Aswad, Sabrianto
Indonesia is the meeting zone of three of the world's main plates: the Eurasian Plate, the Pacific Plate, and the Indo-Australian Plate. Therefore, Indonesia has a high degree of seismicity, and Sulawesi is one of the regions with the highest seismicity levels. Earthquake centres lie in fault zones, so earthquake data provide a tectonic visualization of a given place. The purpose of this research is to identify the tectonic model of Sulawesi by using earthquake data from 1993 to 2012. The data used in this research consist of the origin time, the epicentre coordinates, the depth, the magnitude and the fault parameters (strike, dip and slip). The results show that many active structures are responsible for earthquakes in Sulawesi: the Walannae Fault, Lawanopo Fault, Matano Fault, Palu-Koro Fault, Batui Fault and the Moluccas Sea Double Subduction. The focal mechanisms also show that the Walannae Fault, Batui Fault and Moluccas Sea Double Subduction are reverse faults, while the Lawanopo Fault, Matano Fault and Palu-Koro Fault are strike-slip faults.
NASA Astrophysics Data System (ADS)
Hiemer, S.; Woessner, J.; Basili, R.; Danciu, L.; Giardini, D.; Wiemer, S.
2014-08-01
We present a time-independent gridded earthquake rate forecast for the European region including Turkey. The spatial component of our model is based on kernel density estimation techniques, which we applied to both past earthquake locations and fault moment release on mapped crustal faults and subduction zone interfaces with assigned slip rates. Our forecast relies on the assumptions that the locations of past seismicity are a good guide to future seismicity, and that future large-magnitude events are more likely to occur in the vicinity of known faults. We show that the optimal weighted sum of the corresponding two spatial densities depends on the magnitude range considered. The kernel bandwidths and density weighting function are optimized using retrospective likelihood-based forecast experiments. We computed earthquake activity rates (a- and b-values) of the truncated Gutenberg-Richter distribution separately for crustal and subduction seismicity based on a maximum likelihood approach that considers the spatial and temporal completeness history of the catalogue. The final annual rate of our forecast is purely driven by the maximum likelihood fit of activity rates to the catalogue data, whereas its spatial component incorporates contributions from both earthquake and fault moment-rate densities. Our model constitutes one branch of the earthquake source model logic tree of the 2013 European seismic hazard model released by the EU-FP7 project `Seismic HAzard haRmonization in Europe' (SHARE) and contributes to the assessment of epistemic uncertainties in earthquake activity rates. We performed retrospective and pseudo-prospective likelihood consistency tests to underline the reliability of our model and of SHARE's area source model (ASM), using the testing algorithms applied in the Collaboratory for the Study of Earthquake Predictability (CSEP). We comparatively tested our model's forecasting skill against the ASM and found a statistically significant better performance for testing periods of 10-20 yr. The testing results suggest that our model is a viable candidate model for long-term forecasting on timescales of years to decades for the European region.
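The spatial component described above can be sketched in a few lines: a magnitude-dependent weighted mixture of a kernel density over past epicentres and one over points sampled along mapped faults, combined with truncated Gutenberg-Richter rates. The catalogues, weights, and G-R parameters below are toy values, not the SHARE model's.

```python
import numpy as np
from scipy.stats import gaussian_kde

rng = np.random.default_rng(2)
epicenters = rng.uniform(0, 10, size=(2, 300))      # toy past event locations
fault_pts = np.vstack([np.linspace(2, 8, 100),       # points sampled along a
                       np.linspace(3, 7, 100)])      # mapped fault trace
fault_pts += rng.normal(0, 0.1, fault_pts.shape)     # jitter: avoid a
                                                     # degenerate covariance
kde_seis = gaussian_kde(epicenters)
kde_fault = gaussian_kde(fault_pts)

def spatial_density(xy, w_fault):
    """Weighted mixture; w_fault would be optimized per magnitude range."""
    return (1 - w_fault) * kde_seis(xy) + w_fault * kde_fault(xy)

def gr_annual_rate(m, a, b, m_max):
    """Truncated Gutenberg-Richter: annual rate of events >= m, zero above m_max."""
    if m >= m_max:
        return 0.0
    return 10 ** (a - b * m) - 10 ** (a - b * m_max)

grid_point = np.array([[5.0], [5.0]])
print(spatial_density(grid_point, w_fault=0.8))      # density near the fault
print(gr_annual_rate(6.0, a=4.5, b=1.0, m_max=8.0))  # annual rate of M>=6
```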
Caprock Integrity during Hydrocarbon Production and CO2 Injection in the Goldeneye Reservoir
NASA Astrophysics Data System (ADS)
Salimzadeh, Saeed; Paluszny, Adriana; Zimmerman, Robert
2016-04-01
Carbon Capture and Storage (CCS) is a key technology for addressing climate change and maintaining security of energy supplies, while potentially offering important economic benefits. UK offshore, depleted hydrocarbon reservoirs have the potential capacity to store significant quantities of carbon dioxide produced during power generation from fossil fuels. The Goldeneye depleted gas condensate field, located offshore in the UK North Sea at a depth of ~2600 m, is a candidate for the storage of at least 10 million tons of CO2. In this research, a fully coupled, full-scale model (50×20×8 km), based on the Goldeneye reservoir, is built and used for hydrocarbon production and CO2 injection simulations. The model accounts for fluid flow, heat transfer, and deformation of the fractured reservoir. Flow through fractures is defined as two-dimensional laminar flow within the three-dimensional poroelastic medium. The local thermal non-equilibrium between the injected CO2 and the host reservoir has been considered with convective (conduction and advection) heat transfer. The numerical model has been developed using the standard finite element method with Galerkin spatial discretisation and finite difference temporal discretisation. The geomechanical model has been implemented in the object-oriented Imperial College Geomechanics Toolkit, in close interaction with the Complex Systems Modelling Platform (CSMP), and validated against several benchmark examples. Fifteen major faults are mapped from the Goldeneye field into the model. Modal stress intensity factors, for the three modes of fracture opening during the hydrocarbon production and CO2 injection phases, are computed at the tips of the faults by computing the I-Integral over a virtual disk. Contact stresses (normal and shear) on the fault surfaces are iteratively computed using a gap-based augmented Lagrangian-Uzawa method. Results show fault activation during the production phase that may affect the fault's hydraulic conductivity and its connection to the reservoir rocks. The direction of growth is downward during production and is expected to be upward during injection. Elevated fluid pressures inside faults during CO2 injection may further facilitate fault activation by reducing normal effective stresses. Activated faults can act as permeable conduits and potentially jeopardise caprock integrity for CO2 storage purposes.
Fault Analysis and Detection in Microgrids with High PV Penetration
DOE Office of Scientific and Technical Information (OSTI.GOV)
El Khatib, Mohamed; Hernandez Alvidrez, Javier; Ellis, Abraham
In this report we focus on analyzing the behaviour of current-controlled PV inverters under faults in order to develop fault detection schemes for microgrids with high PV penetration. An inverter model suitable for steady-state fault studies is presented, and the impact of PV inverters on two protection elements is analyzed. The studied protection elements are the superimposed-quantities-based directional element and the negative sequence directional element. Additionally, several non-overcurrent fault detection schemes are discussed in this report for microgrids with high PV penetration. A detailed time-domain simulation study is presented to assess the performance of the presented fault detection schemes under different microgrid modes of operation.
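As background to the negative-sequence directional element mentioned above, this sketch implements the symmetrical-component transform on which such elements are built; the phasor inputs are toy values, and a real relay would derive them from sampled waveforms.

```python
import numpy as np

a = np.exp(2j * np.pi / 3)   # 120-degree rotation operator

def sequence_components(va, vb, vc):
    """Return zero-, positive-, and negative-sequence phasors."""
    v0 = (va + vb + vc) / 3
    v1 = (va + a * vb + a**2 * vc) / 3
    v2 = (va + a**2 * vb + a * vc) / 3
    return v0, v1, v2

# balanced set -> negative sequence ~ 0; unbalance (e.g. a fault) raises |V2|
va, vb, vc = 1.0, a**2, a                 # balanced positive-sequence set
print(abs(sequence_components(va, vb, vc)[2]))        # ~0
vb_fault = 0.4 * a**2                     # depressed phase-B voltage in a fault
print(abs(sequence_components(va, vb_fault, vc)[2]))  # clearly nonzero
```

A directional element then compares the phase of the negative-sequence voltage and current to decide whether the fault lies in front of or behind the relay.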
Strike-slip faulting in the Inner California Borderlands, offshore Southern California.
NASA Astrophysics Data System (ADS)
Bormann, J. M.; Kent, G. M.; Driscoll, N. W.; Harding, A. J.; Sahakian, V. J.; Holmes, J. J.; Klotsko, S.; Kell, A. M.; Wesnousky, S. G.
2015-12-01
In the Inner California Borderlands (ICB), offshore of Southern California, modern dextral strike-slip faulting overprints a prominent system of basins and ridges formed during plate boundary reorganization 30-15 Ma. Geodetic data indicate faults in the ICB accommodate 6-8 mm/yr of Pacific-North American plate boundary deformation; however, the hazard posed by the ICB faults is poorly understood due to unknown fault geometry and loosely constrained slip rates. We present observations from high-resolution and reprocessed legacy 2D multichannel seismic (MCS) reflection datasets and multibeam bathymetry to constrain the modern fault architecture and tectonic evolution of the ICB. We use a sequence stratigraphy approach to identify discrete episodes of deformation in the MCS data and present the results of our mapping in a regional fault model that distinguishes active faults from relict structures. Significant differences exist between our model of modern ICB deformation and existing models. From east to west, the major active faults are the Newport-Inglewood/Rose Canyon, Palos Verdes, San Diego Trough, and San Clemente fault zones. Localized deformation on the continental slope along the San Mateo, San Onofre, and Carlsbad trends results from geometrical complexities in the dextral fault system. Undeformed early to mid-Pleistocene age sediments onlap and overlie deformation associated with the northern Coronado Bank fault (CBF) and the breakaway zone of the purported Oceanside Blind Thrust. Therefore, we interpret the northern CBF to be inactive, and slip rate estimates based on linkage with the Holocene active Palos Verdes fault are unwarranted. In the western ICB, the San Diego Trough fault (SDTF) and San Clemente fault have robust linear geomorphic expression, which suggests that these faults may accommodate a significant portion of modern ICB slip in a westward temporal migration of slip. The SDTF offsets young sediments between the US/Mexico border and the eastern margin of Avalon Knoll, where the fault is spatially coincident and potentially linked with the San Pedro Basin fault (SPBF). Kinematic linkage between the SDTF and the SPBF increases the potential rupture length for earthquakes on either fault and may allow events nucleating on the SDTF to propagate much closer to the LA Basin.
Monitoring Wind Turbine Loading Using Power Converter Signals
NASA Astrophysics Data System (ADS)
Rieg, C. A.; Smith, C. J.; Crabtree, C. J.
2016-09-01
The ability to detect faults and predict loads on a wind turbine drivetrain's mechanical components cost-effectively is critical to making the cost of wind energy competitive. In order to investigate whether this is possible using the readily available power converter current signals, an existing permanent magnet synchronous generator based wind energy conversion system computer model was modified to include a grid-side converter (GSC) for an improved converter model and a gearbox. The GSC maintains a constant DC link voltage via vector control. The gearbox was modelled as a 3-mass model to allow faults to be included. Gusts and gearbox faults were introduced to investigate the ability of the machine side converter (MSC) current (Iq) to detect and quantify loads on the mechanical components. In this model, gearbox faults were not detectable in the Iq signal due to shaft stiffness and damping interaction. However, a model that predicts the load change on mechanical wind turbine components using Iq was developed and verified using synthetic and real wind data.
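A minimal sketch of a three-mass torsional drivetrain of the kind described (rotor, gearbox, generator), with a unit gear ratio and invented inertias, stiffnesses, and damping; the step in aerodynamic torque stands in for a gust.

```python
import numpy as np
from scipy.integrate import solve_ivp

J = np.array([100.0, 5.0, 2.0])   # inertias: rotor, gearbox, generator (kg m^2)
K = np.array([5000.0, 3000.0])    # shaft stiffnesses between the masses (Nm/rad)
C = np.array([10.0, 5.0])         # shaft damping between the masses (Nms/rad)

def drivetrain(t, y, t_aero, t_gen):
    th, om = y[:3], y[3:]                                   # angles and speeds
    tw1 = K[0] * (th[0] - th[1]) + C[0] * (om[0] - om[1])   # shaft 1 torque
    tw2 = K[1] * (th[1] - th[2]) + C[1] * (om[1] - om[2])   # shaft 2 torque
    dom = np.array([(t_aero - tw1) / J[0],
                    (tw1 - tw2) / J[1],
                    (tw2 - t_gen) / J[2]])
    return np.concatenate([om, dom])

# a gust modelled as a step in aerodynamic torque; generator torque constant
sol = solve_ivp(drivetrain, (0, 5), np.zeros(6), args=(1200.0, 1150.0),
                max_step=0.001)
print("final generator speed (rad/s):", sol.y[5, -1])
```

The shaft torques tw1 and tw2 are the quantities a fault (e.g. a stiffness change) would perturb, and the generator-side dynamics are what a converter current such as Iq can observe.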
Improving fault image by determination of optimum seismic survey parameters using ray-based modeling
NASA Astrophysics Data System (ADS)
Saffarzadeh, Sadegh; Javaherian, Abdolrahim; Hasani, Hossein; Talebi, Mohammad Ali
2018-06-01
In complex structures such as faults, salt domes and reefs, specifying the survey parameters is more challenging and critical owing to the complicated wave-field behavior involved in such structures. In the petroleum industry, detecting faults has become crucial for assessing reservoir potential, since faults can act as traps for hydrocarbons. In this regard, seismic survey modeling is employed to construct a model close to the real structure and obtain realistic synthetic seismic data. Seismic modeling software, the velocity model and parameters pre-determined by conventional methods enable a seismic survey designer to run a shot-by-shot virtual survey operation. A reliable velocity model of the structures can be constructed by integrating 2D seismic data, geological reports and well information. The effects of various survey designs can be investigated through the analysis of illumination maps and flower plots, and seismic processing of the synthetic data output can characterize the target image under different survey parameters. Therefore, seismic modeling is one of the most economical ways to establish and test the optimum acquisition parameters for obtaining the best image when dealing with complex geological structures. The primary objective of this study is to design a proper 3D seismic survey orientation for imaging fault zone structures through ray-tracing seismic modeling. The results prove that a seismic survey designer can enhance the image of fault planes in a seismic section by utilizing the proposed modeling and processing approach.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Cheung, Howard; Braun, James E.
2015-12-31
This report describes models of building faults created for OpenStudio to support the ongoing development of fault detection and diagnostic (FDD) algorithms at the National Renewable Energy Laboratory. Building faults are operating abnormalities that degrade building performance, such as using more energy than normal operation, failing to maintain building temperatures according to the thermostat set points, etc. Models of building faults in OpenStudio can be used to estimate fault impacts on building performance and to develop and evaluate FDD algorithms. The aim of the project is to develop fault models of typical heating, ventilating and air conditioning (HVAC) equipment in the United States, and the fault models in this report are grouped as control faults, sensor faults, packaged and split air conditioner faults, water-cooled chiller faults, and other uncategorized faults. The control fault models simulate impacts of inappropriate thermostat control schemes such as an incorrect thermostat set point in unoccupied hours and manual changes of thermostat set point due to extreme outside temperature. Sensor fault models focus on the modeling of sensor biases including economizer relative humidity sensor bias, supply air temperature sensor bias, and water circuit temperature sensor bias. Packaged and split air conditioner fault models simulate refrigerant undercharging, condenser fouling, condenser fan motor efficiency degradation, non-condensable entrainment in refrigerant, and liquid line restriction. Other fault models that are uncategorized include duct fouling, excessive infiltration into the building, and blower and pump motor degradation.
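To illustrate what a sensor-bias fault model does, independently of OpenStudio, the toy simulation below shows how a biased zone-temperature sensor makes a thermostat hold the zone away from its set point. All coefficients are invented and unrelated to the actual OpenStudio measures.

```python
setpoint, bias = 22.0, 1.5        # deg C; bias models a faulty sensor
T_out, UA, Q_max, Cap = 5.0, 0.3, 5.0, 10.0   # toy thermal parameters
T_zone = 20.0
for step in range(200):
    T_sensed = T_zone + bias                  # the sensor fault
    heating = Q_max if T_sensed < setpoint else 0.0   # on/off thermostat
    dT = (heating - UA * (T_zone - T_out)) / Cap
    T_zone += dT * 0.1
# the zone settles around 20.5 C, i.e. the bias below the intended set point
print("zone temperature after 20 s:", round(T_zone, 2))
```

Running the same loop with and without the bias quantifies the fault impact (here, under-heating), which is exactly the comparison the OpenStudio fault models enable at whole-building scale.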
Advanced reliability modeling of fault-tolerant computer-based systems
NASA Technical Reports Server (NTRS)
Bavuso, S. J.
1982-01-01
Two methodologies for the reliability assessment of fault tolerant digital computer based systems are discussed. The computer-aided reliability estimation 3 (CARE 3) and gate logic software simulation (GLOSS) are assessment technologies that were developed to mitigate a serious weakness in the design and evaluation process of ultrareliable digital systems. The weak link is based on the unavailability of a sufficiently powerful modeling technique for comparing the stochastic attributes of one system against others. Some of the more interesting attributes are reliability, system survival, safety, and mission success.
Chen, Yingyi; Zhen, Zhumi; Yu, Huihui; Xu, Jing
2017-01-01
In the Internet of Things (IoT), equipment used for aquaculture is often deployed in outdoor ponds located in remote areas. Faults occur frequently in these tough environments, and the on-site staff generally lack professional knowledge and pay limited attention to these systems. Once faults happen, expert personnel must carry out maintenance outdoors. Therefore, this study presents an intelligent method for fault diagnosis based on fault tree analysis and a fuzzy neural network. In the proposed method, first, the fault tree presents a logical structure of fault symptoms and faults. Second, rules extracted from the fault trees avoid duplication and redundancy. Third, the fuzzy neural network is applied to train the relationship mapping between fault symptoms and faults. In the aquaculture IoT, one fault can cause various fault symptoms, and one symptom can be caused by a variety of faults. Four fault relationships are obtained. Results show that one symptom-to-one fault, two symptoms-to-two faults, and two symptoms-to-one fault relationships can be rapidly diagnosed with high precision, while one symptom-to-two faults patterns perform less well and merit further research. This model implements diagnosis for most kinds of faults in the aquaculture IoT. PMID:28098822
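A minimal stand-in for the symptom-to-fault mapping stage (a plain feedforward network rather than the paper's fuzzy neural network, with an invented symptom and fault encoding):

```python
import numpy as np
from sklearn.neural_network import MLPClassifier

# rows: binary symptom vectors (e.g. no data uplink, sensor reading frozen,
# supply voltage low); labels: fault classes extracted from a fault tree.
# Both the symptoms and the classes are hypothetical.
X = np.array([[1, 0, 0], [0, 1, 0], [0, 0, 1],
              [1, 1, 0], [1, 0, 1], [0, 1, 1]])
y = ["comm_fault", "sensor_stuck", "power_fault",
     "comm_and_sensor", "comm_and_power", "sensor_and_power"]

clf = MLPClassifier(hidden_layer_sizes=(8,), max_iter=5000, random_state=0)
clf.fit(X, y)
# two simultaneous symptoms should recover the two-fault class on this toy set
print(clf.predict([[0, 1, 1]]))
```

In the paper's setup, fuzzy membership values would replace the crisp 0/1 symptoms, letting partially observed symptoms still contribute to the diagnosis.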
Reliability analysis of the solar array based on Fault Tree Analysis
NASA Astrophysics Data System (ADS)
Jianing, Wu; Shaoze, Yan
2011-07-01
The solar array is an important device used in spacecraft, and it influences the quality of in-orbit operation and even the success of launches. This paper analyzes the reliability of the mechanical system and identifies the most critical subsystem of the solar array. The fault tree analysis (FTA) model is established according to the operating process of the mechanical system of the DFH-3 satellite; the logical expression of the top event is obtained by Boolean algebra, and the reliability of the solar array is calculated. The analysis shows that the hinges are the most critical links between the solar arrays. By analyzing the structural importance (SI) of the hinge's FTA model, some fatal causes, including faults of the seal, insufficient torque of the locking spring, temperature in space, and friction force, can be identified. Damage is the initial stage of a fault, so limiting damage is significant for preventing faults. Furthermore, recommendations for improving reliability through damage limitation are discussed, which can be used in the redesign of the solar array and in reliability growth planning.
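The quantification step can be sketched as follows: given the minimal cut sets implied by the top event's Boolean expression and the basic-event probabilities, the top-event probability follows from inclusion-exclusion over independent events. The events and numbers below are illustrative, not the DFH-3 data.

```python
from itertools import combinations

# hypothetical basic-event probabilities and minimal cut sets
p = {"hinge_seal": 1e-3, "spring_torque": 5e-4, "thermal": 2e-4, "friction": 8e-4}
cut_sets = [{"hinge_seal"}, {"spring_torque", "friction"}, {"thermal"}]

def top_event_probability(cut_sets, p):
    """Inclusion-exclusion over unions of cut sets, assuming independence."""
    total = 0.0
    for k in range(1, len(cut_sets) + 1):
        for combo in combinations(cut_sets, k):
            events = set().union(*combo)
            prob = 1.0
            for e in events:
                prob *= p[e]
            total += (-1) ** (k + 1) * prob
    return total

print("P(top event) =", top_event_probability(cut_sets, p))
print("reliability  =", 1 - top_event_probability(cut_sets, p))
```

Structural importance can then be approximated by recomputing the top-event probability with each basic event forced to occur and ranking the resulting increases.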
Rupture Dynamics and Seismic Radiation on Rough Faults for Simulation-Based PSHA
NASA Astrophysics Data System (ADS)
Mai, P. M.; Galis, M.; Thingbaijam, K. K. S.; Vyas, J. C.; Dunham, E. M.
2017-12-01
Simulation-based ground-motion predictions may augment PSHA studies in data-poor regions or provide additional shaking estimates, including seismic waveforms, for critical facilities. Validation and calibration of such simulation approaches against observations and GMPEs is important for engineering applications, while seismologists push to include the precise physics of the earthquake rupture process and seismic wave propagation in the 3D heterogeneous Earth. Geological faults comprise both large-scale segmentation and small-scale roughness that determine the dynamics of the earthquake rupture process and its radiated seismic wavefield. We investigate how different parameterizations of fractal fault roughness affect the rupture evolution and the resulting near-fault ground motions. Rupture incoherence induced by fault roughness generates a realistic ω-2 decay for high-frequency displacement amplitude spectra. Waveform characteristics and GMPE-based comparisons corroborate that these rough-fault rupture simulations generate realistic synthetic seismograms for subsequent engineering application. Since dynamic rupture simulations are computationally expensive, we develop kinematic approximations that emulate the observed dynamics. Simplifying the rough-fault geometry, we find that perturbations in local moment tensor orientation are important, while perturbations in local source location are not. Thus, a planar fault can be assumed if the local strike, dip, and rake are maintained. The dynamic rake angle variations are anti-correlated with the local dip angles. Based on a dynamically consistent Yoffe source-time function, we show that the seismic wavefield of the approximated kinematic rupture reproduces well the seismic radiation of the full dynamic source process. Our findings provide an innovative pseudo-dynamic source characterization that captures fault roughness effects on rupture dynamics. Including the correlations between kinematic source parameters, we present a new pseudo-dynamic rupture modeling approach for computing broadband ground-motion time histories for simulation-based PSHA.
Earthquake Clustering on Normal Faults: Insight from Rate-and-State Friction Models
NASA Astrophysics Data System (ADS)
Biemiller, J.; Lavier, L. L.; Wallace, L.
2016-12-01
Temporal variations in slip rate on normal faults have been recognized in Hawaii and the Basin and Range. The recurrence intervals of these slip transients range from 2 years on the flanks of Kilauea, Hawaii to 10 kyr timescale earthquake clustering on the Wasatch Fault in the eastern Basin and Range. In addition to these longer recurrence transients in the Basin and Range, recent GPS results there also suggest elevated deformation rate events with recurrence intervals of 2-4 years. These observations suggest that some active normal fault systems are dominated by slip behaviors that fall between the end-members of steady aseismic creep and periodic, purely elastic, seismic-cycle deformation. Recent studies propose that 200 year to 50 kyr timescale supercycles may control the magnitude, timing, and frequency of seismic-cycle earthquakes in subduction zones, where aseismic slip transients are known to play an important role in total deformation. Seismic cycle deformation of normal faults may be similarly influenced by its timing within long-period supercycles. We present numerical models (based on rate-and-state friction) of normal faults such as the Wasatch Fault showing that realistic rate-and-state parameter distributions along an extensional fault zone can give rise to earthquake clusters separated by 500 yr - 5 kyr periods of aseismic slip transients on some portions of the fault. The recurrence intervals of events within each earthquake cluster range from 200 to 400 years. Our results support the importance of stress and strain history as controls on a normal fault's present and future slip behavior and on the characteristics of its current seismic cycle. These models suggest that long- to medium-term fault slip history may influence the temporal distribution, recurrence interval, and earthquake magnitudes for a given normal fault segment.
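The building block of such models is the quasi-dynamic rate-and-state spring-slider; the sketch below integrates one with the aging law and a radiation-damping term. All parameters are illustrative, and with a < b and a spring softer than the critical stiffness the slider produces stick-slip events separated by long interseismic intervals, the raw ingredient of the clustering behavior described above.

```python
import numpy as np
from scipy.integrate import solve_ivp

a, b, dc = 0.010, 0.015, 1e-3   # rate-state parameters; a < b -> unstable
sigma, k = 50e6, 5e7            # normal stress (Pa), spring stiffness (Pa/m)
vpl, eta = 1e-9, 5e6            # load-point velocity (m/s), damping (Pa s/m)

def rhs(t, y):
    v, theta = np.exp(y[0]), y[1]          # integrate ln(v) for stability
    dtheta = 1.0 - v * theta / dc          # aging law
    # quasi-dynamic balance:
    # k*(vpl - v) = sigma*(a*dlnv/dt + b*dtheta/dt/theta) + eta*dv/dt
    dlnv = (k * (vpl - v) / sigma - b * dtheta / theta) / (a + eta * v / sigma)
    return [dlnv, dtheta]

y0 = [np.log(vpl), 0.9 * dc / vpl]         # slightly off steady state
sol = solve_ivp(rhs, (0, 4e8), y0, method="LSODA", rtol=1e-7, atol=1e-9,
                max_step=1e5)
v = np.exp(sol.y[0])
print("peak slip speed (m/s):", v.max())   # spikes mark the simulated events
```

Spatial variations of a - b along a modeled fault, as in the studies cited above, are what allow seismic patches and aseismically creeping sections to interact and cluster.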
On-board fault management for autonomous spacecraft
NASA Technical Reports Server (NTRS)
Fesq, Lorraine M.; Stephan, Amy; Doyle, Susan C.; Martin, Eric; Sellers, Suzanne
1991-01-01
The dynamic nature of the Cargo Transfer Vehicle's (CTV) mission and the high level of autonomy required mandate a complete fault management system capable of operating under uncertain conditions. Such a fault management system must take into account the current mission phase and the environment (including the target vehicle), as well as the CTV's state of health. This level of capability is beyond the scope of current on-board fault management systems. This presentation will discuss work in progress at TRW to apply artificial intelligence to the problem of on-board fault management. The goal of this work is to develop fault management systems that can meet the needs of spacecraft that have long-range autonomy requirements. We have implemented a model-based approach to fault detection and isolation that does not require explicit characterization of failures prior to launch. It is thus able to detect failures that were not considered in the failure modes and effects analysis. We have applied this technique to several different subsystems and tested our approach against both simulations and an electrical power system hardware testbed. We present findings from simulation and hardware tests which demonstrate the ability of our model-based system to detect and isolate failures, and describe our work in porting the Ada version of this system to a flight-qualified processor. We also discuss current research aimed at expanding our system to monitor the entire spacecraft.
NASA Astrophysics Data System (ADS)
Mirkamali, M. S.; Keshavarz FK, N.; Bakhtiari, M. R.
2013-02-01
Faults, as main pathways for fluids, play a critical role in creating regions of high porosity and permeability, in cutting cap rock and in the migration of hydrocarbons into the reservoir. Therefore, accurate identification of fault zones is very important for maximizing production from petroleum traps. Image processing and modern visualization techniques provide better mapping of objects of interest. In this study, the application of fault mapping to the identification of fault zones within the Mishan and Aghajari formations above the Guri base unconformity surface in the eastern part of the Persian Gulf is investigated. Seismic single- and multi-trace attribute analyses are employed separately to determine faults in a vertical section, but different kinds of geological objects cannot be identified using individual attributes alone. A mapping model is utilized to improve the identification of the faults, giving more accurate results. This method is based on combining all individual relevant attributes using a neural network to create combined attributes, which gives an optimal view of the object of interest. First, a set of relevant attributes was calculated on the vertical section. Then, at interpreted positions, example training locations were manually selected for the fault and non-fault classes by an interpreter. A neural network was trained on combinations of the attributes extracted at the example training locations to generate an optimized fault cube. Finally, the fault and non-fault probability cubes were estimated by applying the trained neural network to the entire data set. The fault probability cube was obtained with higher mapping accuracy and greater contrast, and with fewer disturbances in comparison with individual attributes. The results of this study can support better understanding of the data, providing fault zone mapping with reliable results.
A fault-tolerant intelligent robotic control system
NASA Technical Reports Server (NTRS)
Marzwell, Neville I.; Tso, Kam Sing
1993-01-01
This paper describes the concept, design, and features of a fault-tolerant intelligent robotic control system being developed for space and commercial applications that require high dependability. The comprehensive strategy integrates system level hardware/software fault tolerance with task level handling of uncertainties and unexpected events for robotic control. The underlying architecture for system level fault tolerance is the distributed recovery block which protects against application software, system software, hardware, and network failures. Task level fault tolerance provisions are implemented in a knowledge-based system which utilizes advanced automation techniques such as rule-based and model-based reasoning to monitor, diagnose, and recover from unexpected events. The two level design provides tolerance of two or more faults occurring serially at any level of command, control, sensing, or actuation. The potential benefits of such a fault tolerant robotic control system include: (1) a minimized potential for damage to humans, the work site, and the robot itself; (2) continuous operation with a minimum of uncommanded motion in the presence of failures; and (3) more reliable autonomous operation providing increased efficiency in the execution of robotic tasks and decreased demand on human operators for controlling and monitoring the robotic servicing routines.
Robust In-Flight Sensor Fault Diagnostics for Aircraft Engine Based on Sliding Mode Observers
Chang, Xiaodong; Huang, Jinquan; Lu, Feng
2017-01-01
For a sensor fault diagnostic system of aircraft engines, the health performance degradation is an inevitable interference that cannot be neglected. To address this issue, this paper investigates an integrated on-line sensor fault diagnostic scheme for a commercial aircraft engine based on a sliding mode observer (SMO). In this approach, one sliding mode observer is designed for engine health performance tracking, and another for sensor fault reconstruction. Both observers are employed in in-flight applications. The results of the former SMO are analyzed for post-flight updating the baseline model of the latter. This idea is practical and feasible since the updating process does not require the algorithm to be regulated or redesigned, so that ground-based intervention is avoided, and the update process is implemented in an economical and efficient way. With this setup, the robustness of the proposed scheme to the health degradation is much enhanced and the latter SMO is able to fulfill sensor fault reconstruction over the course of the engine life. The proposed sensor fault diagnostic system is applied to a nonlinear simulation of a commercial aircraft engine, and its effectiveness is evaluated in several fault scenarios. PMID:28398255
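A toy version of the reconstruction mechanism (a first-order plant and a smoothed switching term, not the paper's design): once the observer reaches the sliding surface, its equivalent output injection tracks the additive sensor fault.

```python
import numpy as np

dt, T = 1e-3, 4.0
n = int(T / dt)
rho = 5.0                  # SMO gain; must dominate the fault magnitude
x, x_hat = 0.0, 0.0
recon = np.zeros(n)
for k in range(n):
    t = k * dt
    u = 1.0
    f = 2.0 if t > 2.0 else 0.0        # additive sensor bias fault at t = 2 s
    x += (-x + u) * dt                 # true plant: x' = -x + u
    y = x + f                          # faulty measurement
    e = y - x_hat
    nu = rho * np.tanh(e / 0.01)       # smoothed switching (sliding) term
    x_hat += (-x_hat + u + nu) * dt    # sliding mode observer
    recon[k] = nu                      # equivalent injection ~ fault estimate
print("fault estimate near t = 3 s:", recon[int(3.0 / dt)])   # close to 2.0
```

On the sliding surface x_hat = y = x + f, so the balance of the observer dynamics forces nu to equal f: the switching term itself becomes the fault reconstruction, which is the property the paper exploits for sensor fault diagnosis.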
NASA Astrophysics Data System (ADS)
De Cristofaro, J. L.; Polet, J.
2017-12-01
The Hilton Creek Fault (HCF) is a range-bounding extensional fault that forms the eastern escarpment of California's Sierra Nevada mountain range, near the town of Mammoth Lakes. The fault is well mapped along its main trace to the south of the Long Valley Caldera (LVC), but the location and nature of its northern terminus are poorly constrained. The fault terminates as a series of left-stepping splays within the LVC, an area of active volcanism that most notably erupted 760 ka and currently experiences continuous geothermal activity and sporadic earthquake swarms. The timing of the most recent motion on these fault splays is debated, as is the threat posed by this section of the Hilton Creek Fault. The Third Uniform California Earthquake Rupture Forecast (UCERF3) model depicts the HCF as a single strand projecting up to 12 km into the LVC. However, Bailey (1989) and Hill and Montgomery-Brown (2015) have argued against this model, suggesting that extensional faulting within the caldera has been accommodated by the ongoing volcanic uplift and thus the intracaldera section of the HCF has not experienced motion since 760 ka. We intend to map the intracaldera fault splays and model their subsurface characteristics to better assess their rupture history and potential. This will be accomplished using high-resolution topography and subsurface geophysical methods, including ground-based magnetics. Preliminary work was performed using high-precision Nikon Nivo 5.C total stations to generate elevation profiles and a backpack-mounted GEM GS-19 proton precession magnetometer. The initial results reveal a correlation between magnetic anomalies and topography. East-west topographic profiles show terrace-like steps, sub-meter in height, which correlate with changes in the magnetic data. Continued study of the magnetic data using Oasis Montaj 3D modeling software is planned. Additionally, we intend to prepare a high-resolution terrain model using structure-from-motion techniques applied to imagery acquired by an unmanned aerial vehicle, with ground control points measured by real-time kinematic GPS receivers. This terrain model will be combined with the subsurface geophysical data to form a comprehensive model of the subsurface.
NASA Astrophysics Data System (ADS)
Aldiss, Don; Haslam, Richard
2013-04-01
In parts of London, faulting introduces lateral heterogeneity to the local ground conditions, especially where construction works intercept the Palaeogene Lambeth Group. This brings difficulties to the compilation of a ground model that is fully consistent with the ground investigation data, and so to the design and construction of engineering works. However, because bedrock in the London area is rather uniform at outcrop, and is widely covered by Quaternary deposits, few faults are shown on the geological maps of the area. This paper discusses a successful resolution of this problem at a site in east central London, where tunnels for a new underground railway station are planned. A 3D geological model was used to provide an understanding of the local geological structure, in faulted Lambeth Group strata, that had not been possible by other commonly-used methods. This model includes seven previously unrecognised faults, with downthrows ranging from about 1 m to about 12 m. The model was constructed in the GSI3D geological modelling software using about 145 borehole records, including many legacy records, in an area of 850 m by 500 m. The basis of a GSI3D 3D geological model is a network of 2D cross-sections drawn by a geologist, generally connecting borehole positions (where the borehole records define the level of the geological units that are present), and outcrop and subcrop lines for those units (where shown by a geological map). When the lines tracing the base of each geological unit within the intersecting cross-sections are complete and mutually consistent, the software is used to generate TIN surfaces between those lines, so creating a 3D geological model. Even where a geological model is constructed as if no faults were present, changes in apparent dip between two data points within a single cross-section can indicate that a fault is present in that segment of the cross-section. If displacements of similar size with the same polarity are found in a series of adjacent cross-sections, the presence of a fault can be substantiated. If it is assumed that the fault is planar and vertical, then the pairs of constraining data points in each cross-section form a two-dimensional envelope within which the surface trace of the fault must lie. Generally, the broader the area of the model, the longer the envelope defined by the pairs of boreholes is, resulting in better constraint of the fault zone width and azimuth. Repetition or omission of the local stratigraphy in the constraining boreholes can demonstrate reverse or normal dip-slip motion. Even if this is not possible, borehole intercepts at the base of the youngest bedrock unit or at the top of the oldest bedrock unit can constrain the minimum angle of dip of the fault plane. Assessment of the maximum angle of dip requires intrusive investigation. This work is distributed under the Creative Commons Attribution 3.0 Unported License together with an NERC copyright. This license does not conflict with the regulations of the Crown Copyright.
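The fault-detection logic described above can be caricatured in a few lines: compare the level of a unit's base between adjacent boreholes along a cross-section and flag pairs whose offset exceeds what an assumed regional dip allows. The borehole positions, levels, and dip allowance below are invented.

```python
# (distance along section in m, level of the unit's base in m OD) per borehole
boreholes = [(0.0, -10.0), (50.0, -11.0), (100.0, -25.0), (150.0, -26.5)]
regional_dip = 0.03   # assumed maximum stratigraphic dip, m per m

for (x1, z1), (x2, z2) in zip(boreholes, boreholes[1:]):
    allowance = regional_dip * (x2 - x1)   # offset explicable by dip alone
    offset = abs(z2 - z1)
    if offset > allowance:
        print(f"possible fault between x={x1} m and x={x2} m: "
              f"offset {offset:.1f} m exceeds dip allowance {allowance:.1f} m")
```

In practice, as the abstract notes, a single anomalous pair is only suggestive; consistent displacements of the same polarity across several adjacent cross-sections are what substantiate a fault and constrain the envelope containing its surface trace.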
Subduction Initiation under Unfavorable Conditions and New Fault Formation
NASA Astrophysics Data System (ADS)
Mao, X.; Gurnis, M.; May, D.
2017-12-01
How subduction initiates with unfavorably dipping lithospheric heterogeneities is an important and rarely studied topic. We build a geodynamic model starting with a vertical weak zone for the Puysegur incipient subduction zone (PISZ). A true free surface is tracked in pTatin3D, based on the Arbitrary Lagrangian Eulerian (ALE) finite element method, and is used to follow the dynamic mantle-surface interaction and topographic evolution. A simplified surface process, based on linear topography diffusion, is implemented. Density and free water content for different phase assemblages are obtained from precalculated 4D (temperature, pressure, rock type and total water content) phase maps computed with Perplex. Darcy's law is used to migrate free water, and a linear water weakening is applied to the mantle material. A new visco-elastic formulation, the Elastic Viscous Stress Splitting (EVSS) method, is also included. Our predictions fit the morphology of the Puysegur Trench and Ridge and the deformation history of the overriding plate. We show that a new thrust fault forms and evolves into a smooth subduction interface, and that the preexisting weak zone becomes a vertical fault inboard of the thrust fault during subduction initiation, which explains the two-fault system at the PISZ. Our model suggests that the PISZ may not yet be self-sustaining. We propose that the Snares Trough is caused by plate coupling differences between shallower and deeper parts, that the tectonic sliver between the two faults experiences strong rotation, and that low-density materials accumulate beneath the Snares Trough. Extended models show that with favorably dipping heterogeneities, no new fault forms, and subduction initiates with smaller resisting forces.
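The linear topography diffusion used as the surface process is a standard scheme. As an illustration only (this is not the pTatin3D implementation, and the grid, diffusivity, and boundary conditions are assumed), a one-dimensional explicit finite-difference version looks like this:

```python
import numpy as np

# Minimal sketch: linear topographic diffusion dh/dt = kappa * d2h/dx2,
# solved with an explicit FTCS scheme. kappa and the grid are illustrative.

def diffuse_topography(h, kappa, dx, dt, nsteps):
    h = h.copy()
    for _ in range(nsteps):
        lap = (np.roll(h, -1) - 2.0 * h + np.roll(h, 1)) / dx**2
        lap[0] = lap[-1] = 0.0            # fixed-elevation boundaries
        h += kappa * dt * lap
    return h

x = np.linspace(0.0, 50e3, 501)           # 50 km profile, dx = 100 m
h0 = np.where(x < 25e3, 0.0, 500.0)       # 500 m step, e.g., a trench flank
dx, kappa = 100.0, 1.0                    # kappa in m^2/yr (assumed)
dt = 0.2 * dx**2 / kappa                  # satisfies FTCS stability dt <= dx^2/(2*kappa)
h = diffuse_topography(h0, kappa, dx, dt, nsteps=5000)
```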
Study of fault tolerant software technology for dynamic systems
NASA Technical Reports Server (NTRS)
Caglayan, A. K.; Zacharias, G. L.
1985-01-01
The major aim of this study is to investigate the feasibility of using systems-based failure detection, isolation, and compensation (FDIC) techniques in building fault-tolerant software and extending them, whenever possible, to the domain of software fault tolerance. First, it is shown that systems-based FDIC methods can be extended to develop software error detection techniques by using system models for software modules. In particular, it is demonstrated that systems-based FDIC techniques can yield consistency checks that are easier to implement than acceptance tests based on software specifications. Next, it is shown that systems-based failure compensation techniques can be generalized to the domain of software fault tolerance in developing software error recovery procedures. Finally, the feasibility of using fault-tolerant software in flight software is investigated. In particular, possible system and version instabilities and the functional performance degradation that may occur in N-Version programming applications to flight software are illustrated. The study closes with a comparative analysis of N-Version and recovery block techniques in the context of generic blocks in flight software.
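To make the contrast between acceptance tests and recovery concrete, here is a minimal sketch of the classic recovery-block pattern with a systems-style consistency check as the acceptance test; the routines and the bound used are hypothetical, not taken from the study.

```python
# Illustrative recovery block: a primary routine runs first; if its output
# fails an acceptance test (here a simple system-model consistency check),
# an alternate version is tried. All names are hypothetical.

def acceptance_test(x_prev, x_new, max_step=10.0):
    """Consistency check from a system model: the state of the physical
    process cannot jump by more than `max_step` per cycle."""
    return abs(x_new - x_prev) <= max_step

def primary_version(x_prev, u):
    return x_prev + 100.0 * u        # buggy gain, may violate the check

def alternate_version(x_prev, u):
    return x_prev + 1.0 * u          # simpler, more conservative routine

def recovery_block(x_prev, u):
    for version in (primary_version, alternate_version):
        x_new = version(x_prev, u)
        if acceptance_test(x_prev, x_new):
            return x_new
    raise RuntimeError("all versions failed the acceptance test")

print(recovery_block(x_prev=5.0, u=0.5))  # primary jumps by 50 -> rejected; alternate used
```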
A Structural Model Decomposition Framework for Systems Health Management
NASA Technical Reports Server (NTRS)
Roychoudhury, Indranil; Daigle, Matthew J.; Bregon, Anibal; Pulido, Belarmino
2013-01-01
Systems health management (SHM) is an important set of technologies aimed at increasing system safety and reliability by detecting, isolating, and identifying faults; and predicting when the system reaches end of life (EOL), so that appropriate fault mitigation and recovery actions can be taken. Model-based SHM approaches typically make use of global, monolithic system models for online analysis, which results in a loss of scalability and efficiency for large-scale systems. Improvement in scalability and efficiency can be achieved by decomposing the system model into smaller local submodels and operating on these submodels instead. In this paper, the global system model is analyzed offline and structurally decomposed into local submodels. We define a common model decomposition framework for extracting submodels from the global model. This framework is then used to develop algorithms for solving model decomposition problems for the design of three separate SHM technologies, namely, estimation (which is useful for fault detection and identification), fault isolation, and EOL prediction. We solve these model decomposition problems using a three-tank system as a case study.
NASA Astrophysics Data System (ADS)
Holden, C.; Kaneko, Y.; D'Anastasio, E.; Benites, R.; Fry, B.; Hamling, I. J.
2017-11-01
The 2016 Kaikōura (New Zealand) earthquake generated large ground motions and resulted in multiple onshore and offshore fault ruptures, a profusion of triggered landslides, and a regional tsunami. Here we examine the rupture evolution using two kinematic modeling techniques based on analysis of local strong-motion and high-rate GPS data. Our kinematic models capture a complex pattern of slowly (Vr < 2 km/s) propagating rupture from south to north, with over half of the moment release occurring in the northern source region, mostly on the Kekerengu fault, 60 s after the origin time. Both models indicate rupture reactivation on the Kekerengu fault, with a time separation of 11 s between the start of the original failure and the start of the subsequent one. We further conclude that most near-source waveforms can be explained by slip on the crustal faults, with little (<8%) or no contribution from the subduction interface.
The Design and Semi-Physical Simulation Test of Fault-Tolerant Controller for Aero Engine
NASA Astrophysics Data System (ADS)
Liu, Yuan; Zhang, Xin; Zhang, Tianhong
2017-11-01
A new fault-tolerant control method for aero engines is proposed, which can accurately diagnose sensor faults using Kalman filter banks and reconstruct the signal using a real-time on-board adaptive model that combines a simplified real-time model with an improved Kalman filter. To verify the feasibility of the proposed method, a semi-physical simulation experiment has been carried out. Besides the real I/O interfaces, controller hardware and the virtual plant model, the semi-physical simulation system also contains a real fuel system. Compared with hardware-in-the-loop (HIL) simulation, a semi-physical simulation system has a higher degree of confidence. To meet the needs of semi-physical simulation, a rapid prototyping controller with fault-tolerant control ability based on the NI CompactRIO platform was designed and verified on the semi-physical simulation test platform. The results show that the controller can control the aero engine safely and reliably, with little influence on controller performance in the event of a sensor fault.
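The residual-based sensor FDD logic described above can be sketched in a few lines. The following toy example (illustrative dynamics and threshold; not the engine model or filter bank of the paper) uses a scalar Kalman filter as the analytical channel and flags samples whose residual exceeds a tolerance, falling back on the model prediction once a fault is declared:

```python
import numpy as np

# Hedged sketch of residual-based sensor fault detection with a scalar
# Kalman filter. All model values and thresholds are assumptions.

def kalman_fdd(z, a=0.98, q=1e-4, r=1e-2, threshold=0.3):
    x, p = z[0], 1.0
    faults = []
    for k, zk in enumerate(z):
        x, p = a * x, a * a * p + q          # predict with the nominal model
        residual = zk - x
        if abs(residual) > threshold:        # fault logic on the residual
            faults.append(k)
            continue                         # skip update; keep model estimate
        s = p + r
        g = p / s                            # Kalman gain
        x, p = x + g * residual, (1.0 - g) * p
    return faults

t = np.arange(200)
z = np.exp(-0.002 * t)                       # healthy sensor signal
z[120:] += 0.8                               # injected step fault
print(kalman_fdd(z))                         # flags samples from ~120 onward
```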
Tectonic history of the northern Nabitah fault zone, Arabian Shield, Kingdom of Saudi Arabia
Quick, J.E.; Bosch, Paul S.
1990-01-01
Based on the presence of similar lithologies, similar structure, and an analogous tectonic setting, the Mother Lode District in California is reviewed in this report as a model for gold occurrences near the Nabitah fault zone.
NASA Astrophysics Data System (ADS)
Habibi, Hamed; Rahimi Nohooji, Hamed; Howard, Ian
2017-09-01
Power maximization has always been a practical consideration in wind turbines. The question of how to achieve optimal power capture, especially when the system dynamics are nonlinear and the actuators are subject to unknown faults, is significant. This paper studies the control methodology for variable-speed variable-pitch wind turbines, including the effects of uncertain nonlinear dynamics, system fault uncertainties, and unknown external disturbances. The nonlinear model of the wind turbine is presented, and the problem of maximizing extracted energy is formulated by designing the optimal desired states. With the known system, a model-based nonlinear controller is designed; then, to handle uncertainties, the unknown nonlinearities of the wind turbine are estimated by utilizing radial basis function neural networks. The adaptive neural fault-tolerant control is designed passively to be robust to model uncertainties, disturbances (including wind speed variations and model noise), and completely unknown actuator faults in the generator torque and pitch actuators. The Lyapunov direct method is employed to prove that the closed-loop system is uniformly bounded. Simulation studies are performed to verify the effectiveness of the proposed method.
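As a rough illustration of the neural-network piece, the sketch below approximates an unknown scalar nonlinearity with Gaussian radial basis functions and a gradient-type adaptive law with sigma-modification for robustness; the centers, widths, gains, and target function are all assumptions, not the paper's turbine model.

```python
import numpy as np

# Minimal RBF approximation sketch: phi(x) are Gaussian basis functions,
# and the weight estimate follows w_dot = gamma*phi(x)*e - sigma*gamma*w.

centers = np.linspace(-2.0, 2.0, 9)
width = 0.5

def phi(x):
    return np.exp(-(x - centers) ** 2 / (2.0 * width ** 2))

def adapt(w, x, error, gamma=2.0, sigma=0.01, dt=1e-3):
    """One Euler step of the sigma-modified adaptive law."""
    return w + dt * gamma * (phi(x) * error - sigma * w)

# Toy run: learn f(x) = sin(2x) along a trajectory x(t) = sin(t).
f = lambda x: np.sin(2.0 * x)
w = np.zeros_like(centers)
for k in range(200000):
    x = np.sin(1e-3 * k)
    error = f(x) - phi(x) @ w            # approximation error drives adaptation
    w = adapt(w, x, error)
print(phi(0.7) @ w, f(0.7))              # estimate vs. true value
```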
Haul truck tire dynamics due to tire condition
NASA Astrophysics Data System (ADS)
Vaghar Anzabi, R.; Nobes, D. S.; Lipsett, M. G.
2012-05-01
Pneumatic tires are costly components on large off-road haul trucks used in surface mining operations. Tires are prone to damage during operation, and these events can lead to injuries to personnel, loss of equipment, and reduced productivity. Damage rates have significant variability, due to operating conditions and a range of tire fault modes. Currently, monitoring of tire condition is done by physical inspection, and the mean time between inspections is often longer than the mean time between incipient failure and functional failure of the tire. Options for new condition monitoring methods include off-board thermal imaging and camera-based optical methods for detecting abnormal deformation and surface features, as well as on-board sensors to detect tire faults during vehicle operation. Physics-based modeling of tire dynamics can provide a good understanding of tire behavior and give insight into observability requirements for improved monitoring systems. This paper describes a model to simulate the dynamics of haul truck tires when a fault is present, to determine the effects of physical parameter changes that relate to faults. To simulate the dynamics, a lumped-mass 'quarter-vehicle' model has been used to determine the response of the system to a road profile when a failure changes the original properties of the tire. The result is a model of tire vertical displacement that can be used to detect a fault, which will be tested in the field under time-varying conditions.
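A minimal version of such a lumped-mass quarter-vehicle model is sketched below; the masses, stiffnesses, road profile, and the way the fault is emulated (a reduced tire stiffness) are illustrative assumptions rather than the authors' parameters. A semi-implicit Euler update keeps the lightly damped tire mode stable.

```python
import numpy as np

# Illustrative quarter-vehicle model: sprung mass m_s on a suspension
# (k_s, c_s) over an unsprung mass m_u on a tire modelled as a spring k_t.
# A tire fault is emulated by reducing k_t.

def simulate(kt, t_end=10.0, dt=1e-4):
    ms, mu = 4500.0, 800.0            # kg (haul-truck scale, assumed)
    ks, cs = 2.0e6, 4.0e4             # N/m, N*s/m (assumed)
    zs, zu, vs, vu = 0.0, 0.0, 0.0, 0.0
    out = []
    for i in range(int(t_end / dt)):
        zr = 0.05 * np.sin(2 * np.pi * 1.5 * i * dt)   # road profile, m
        f_susp = ks * (zu - zs) + cs * (vu - vs)
        f_tire = kt * (zr - zu)
        acc_s, acc_u = f_susp / ms, (f_tire - f_susp) / mu
        vs, vu = vs + acc_s * dt, vu + acc_u * dt      # semi-implicit Euler:
        zs, zu = zs + vs * dt, zu + vu * dt            # positions use new velocities
        out.append(zs)
    return np.array(out)

healthy = simulate(kt=5.0e6)
faulty = simulate(kt=2.5e6)           # e.g., under-inflated or damaged tire
print(healthy.std(), faulty.std())    # the fault changes the vertical response
```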
Intelligent Fault Diagnosis of HVCB with Feature Space Optimization-Based Random Forest
Ma, Suliang; Wu, Jianwen; Wang, Yuhao; Jia, Bowen; Jiang, Yuan
2018-01-01
Mechanical faults of high-voltage circuit breakers (HVCBs) inevitably occur during long-term operation, so extracting fault features and identifying the fault type have become key issues for ensuring the security and reliability of power supply. Based on wavelet packet decomposition technology and the random forest algorithm, an effective identification system was developed in this paper. First, compared with the incomplete description given by Shannon entropy, the wavelet packet time-frequency energy rate (WTFER) was adopted as the input vector for the classifier model in the feature selection procedure. Then, a random forest classifier was used to diagnose the HVCB fault, assess the importance of the feature variables and optimize the feature space. Finally, the approach was verified on actual HVCB vibration signals covering six typical fault classes. The comparative experimental results show that the classification accuracy of the proposed method reached 93.33% with the original feature space and up to 95.56% with the optimized input feature vector. This indicates that the feature optimization procedure is successful, and the proposed diagnosis algorithm has higher efficiency and robustness than traditional methods. PMID:29659548
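The classifier-plus-feature-optimization loop can be illustrated with scikit-learn's random forest, using synthetic stand-ins for the WTFER features (the real study used vibration-derived features and six HVCB fault classes):

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

# Sketch of the feature-optimization step with synthetic data standing in
# for wavelet-packet energy-rate features.

rng = np.random.default_rng(0)
X = rng.normal(size=(600, 16))               # 16 WTFER-like features
y = rng.integers(0, 6, size=600)             # 6 fault classes
X[:, 3] += y                                 # make a few features informative
X[:, 7] -= 0.5 * y

rf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X, y)

# Optimize the feature space: keep features whose importance is above the
# average importance, then retrain on the reduced input vector.
keep = rf.feature_importances_ > rf.feature_importances_.mean()
rf_opt = RandomForestClassifier(n_estimators=200, random_state=0).fit(X[:, keep], y)
print(keep.sum(), "features kept of", X.shape[1])
```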
Vibration modelling and verifications for whole aero-engine
NASA Astrophysics Data System (ADS)
Chen, G.
2015-08-01
In this study, a new rotor-ball-bearing-casing coupling dynamic model for a practical aero-engine is established. In the coupling system, the rotor and casing systems are modelled using the finite element method, the support systems are modelled as lumped parameter models, nonlinear factors of the ball bearings and faults are included, and four types of support and connection models are defined to model the complex rotor-support-casing coupling system of the aero-engine. A new numerical integration method that combines the Newmark-β method and the improved Newmark-β method (Zhai method) is used to obtain the system responses. Finally, the new model is verified in three ways: (1) a modal experiment on a rotor-ball-bearing rig, (2) a modal experiment on a rotor-ball-bearing-casing rig, and (3) fault simulations for the vibration of a certain type of missile turbofan aero-engine. The results show that the proposed model can not only simulate the natural vibration characteristics of the whole aero-engine but also effectively perform nonlinear dynamic simulations of a whole aero-engine with faults.
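For reference, one step of the Newmark-β family (here the average-acceleration variant, β = 1/4, γ = 1/2) can be written as below; this hedged sketch does not reproduce the improved Zhai scheme or the paper's nonlinear bearing forces.

```python
import numpy as np

# One Newmark-beta step for M*a + C*v + K*u = f (effective-stiffness form).

def newmark_step(M, C, K, u, v, a, f_next, dt, beta=0.25, gamma=0.5):
    K_eff = K + gamma / (beta * dt) * C + M / (beta * dt ** 2)
    rhs = (f_next
           + M @ (u / (beta * dt ** 2) + v / (beta * dt) + (0.5 / beta - 1.0) * a)
           + C @ (gamma / (beta * dt) * u + (gamma / beta - 1.0) * v
                  + dt * (0.5 * gamma / beta - 1.0) * a))
    u_next = np.linalg.solve(K_eff, rhs)
    a_next = (u_next - u) / (beta * dt ** 2) - v / (beta * dt) - (0.5 / beta - 1.0) * a
    v_next = v + dt * ((1.0 - gamma) * a + gamma * a_next)
    return u_next, v_next, a_next

# Illustrative check: one undamped DOF in free vibration from u = 1.
M, C, K = np.array([[1.0]]), np.zeros((1, 1)), np.array([[1.0]])
u, v, a = np.array([1.0]), np.array([0.0]), np.array([-1.0])
for _ in range(100):
    u, v, a = newmark_step(M, C, K, u, v, a, np.zeros(1), dt=0.1)
print(u)   # approximately cos(10), up to the scheme's period error
```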
Measurement and analysis of operating system fault tolerance
NASA Technical Reports Server (NTRS)
Lee, I.; Tang, D.; Iyer, R. K.
1992-01-01
This paper demonstrates a methodology to model and evaluate the fault tolerance characteristics of operational software. The methodology is illustrated through case studies on three different operating systems: the Tandem GUARDIAN fault-tolerant system, the VAX/VMS distributed system, and the IBM/MVS system. Measurements are made on these systems for substantial periods to collect software error and recovery data. In addition to investigating basic dependability characteristics such as major software problems and error distributions, we develop two levels of models to describe error and recovery processes inside an operating system and on multiple instances of an operating system running in a distributed environment. Based on the models, reward analysis is conducted to evaluate the loss of service due to software errors and the effect of the fault-tolerance techniques implemented in the systems. Software error correlation in multicomputer systems is also investigated.
Sensor placement for diagnosability in space-borne systems - A model-based reasoning approach
NASA Technical Reports Server (NTRS)
Chien, Steve; Doyle, Richard; Rouquette, Nicolas
1992-01-01
This paper presents an approach to evaluating sensor placements on the basis of how well they are able to discriminate between a given fault and normal operating modes and/or other fault modes. In this approach, a model of the system in both normal operation and fault modes is used to evaluate possible sensor placements against three criteria. Discriminability measures how much of a divergence in expected sensor readings the two system modes can be expected to produce. Accuracy measures confidence in the particular model predictions. Timeliness measures how long after the fault occurrence the expected divergence will take place. These three metrics can then be used to form a recommendation for a sensor placement. This paper describes how these measures can be computed and illustrates these methods with a brief example.
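One simple way to operationalize the three criteria, assuming simulated traces of a candidate sensor are available for the normal and fault modes, is sketched below; the 3-sigma detectability rule and all signals are illustrative choices, not the paper's definitions.

```python
import numpy as np

# Illustrative scores for one candidate placement: y_* are model-predicted
# readings at that location; sigma is the model-prediction uncertainty there.

def placement_scores(y_nominal, y_fault, sigma, dt, fault_time=0.0):
    divergence = np.abs(y_fault - y_nominal)
    discriminability = divergence.max()          # size of expected divergence
    accuracy = discriminability / sigma          # divergence vs. model confidence
    detectable = np.nonzero(divergence > 3.0 * sigma)[0]
    timeliness = detectable[0] * dt - fault_time if detectable.size else np.inf
    return discriminability, accuracy, timeliness

t = np.arange(0.0, 10.0, 0.1)
nominal = np.ones_like(t)
fault = 1.0 + 0.05 * t                           # slow drift after the fault
print(placement_scores(nominal, fault, sigma=0.04, dt=0.1))
```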
Wastewater injection and slip triggering: Results from a 3D coupled reservoir/rate-and-state model
NASA Astrophysics Data System (ADS)
Babazadeh, M.; Olson, J. E.; Schultz, R.
2017-12-01
Seismicity induced by fluid injection is controlled by parameters related to injection conditions, reservoir properties, and fault frictional behavior. We present results from a combined model that brings together injection physics, reservoir dynamics, and fault physics to better explain the primary controls on induced seismicity. We created a 3D fluid flow simulator using the embedded discrete fracture technique and then coupled it with a 3D displacement discontinuity model that uses rate-and-state friction to model slip events. The model is composed of three layers: the top seal, the injection reservoir, and the basement. Permeability is anisotropic (vertical versus horizontal) and, along with porosity, varies by layer. Injection control can be either rate or pressure. Fault properties include size, 2D permeability, and frictional properties. Several suites of simulations were run to evaluate the relative importance of each of the factors from all three parameter groups. We find that the injection parameters interact with the reservoir parameters in the context of the fault physics, and these relations change for different reservoir and fault characteristics, leading to the need to examine the injection parameters only within the context of a particular faulted reservoir. For a reservoir with no-flow boundaries, low permeability (5 md), and a fault with high fault-parallel permeability and critical stress, injection rate exerts the strongest control on the magnitude and frequency of earthquakes. However, for a higher-permeability reservoir (80 md), injection volume becomes the more important factor. Fault permeability structure is a key factor in inducing earthquakes in basement rocks below the injection reservoir. The initial failure state of the fault, which is challenging to assess, can have a big effect on the size and timing of events. For a fault 2 MPa below the critical state, we were able to induce a slip event, but it occurred late in the injection history and was limited to a subset of the fault extent. A case starting at critical stress resulted in a rupture that propagated throughout the entire physical extent of the fault and generated a larger-magnitude earthquake. This physics-based model can contribute to assessing the risk associated with injection activities and to providing guidelines for hazard mitigation.
Transfer zones in listric normal fault systems
NASA Astrophysics Data System (ADS)
Bose, Shamik
Listric normal faults are common in passive margin settings where sedimentary units are detached above weaker lithological units, such as evaporites, or are driven by basal structural and stratigraphic discontinuities. The geometries and styles of faulting vary with the type of detachment and form landward- and basinward-dipping fault systems. Complex transfer zones therefore develop along the terminations of adjacent faults, where deformation is accommodated by secondary faults, often below seismic resolution. The rollover geometry and secondary faults within the hanging wall of the major faults also vary with the style of faulting and contribute to the complexity of the transfer zones. This study seeks to understand the controlling factors for the formation of the different styles of listric normal faults and the different transfer zones formed within them, using analog clay experimental models. Detailed analyses with respect to fault orientation, density and connectivity have been performed on the experiments in order to gather insights on the structural controls and the resulting geometries. A new high-resolution 3D laser scanning technology has been introduced to scan the surfaces of the clay experiments for accurate measurements and 3D visualizations. Numerous examples from the Gulf of Mexico have been included to demonstrate and geometrically compare the observations in experiments and real structures. A salt-cored convergent transfer zone from the South Timbalier Block 54, offshore Louisiana, has been analyzed in detail to understand the evolutionary history of the region, which helps in deciphering the kinematic growth of similar structures in the Gulf of Mexico. The dissertation is divided into three chapters, written in a journal article format, that deal with three different aspects of understanding listric normal fault systems and the transfer zones so formed. The first chapter involves clay experimental models to understand the fault patterns in divergent and convergent transfer zones. Flat base plate setups have been used to build different configurations that lead to approaching, normal-offset and overlapping fault geometries. The results have been analyzed with respect to fault orientation, density, connectivity and 3D geometry from photographs taken from the three free surfaces and laser scans of the top surface of the clay cake, respectively. The second chapter looks into the 3D structural analysis of the South Timbalier Block 54, offshore Louisiana in the Gulf of Mexico, with the help of a 3D seismic dataset and associated well tops and velocity data donated by ExxonMobil Corporation. This study involves seismic interpretation techniques, velocity modeling, cross-section restoration of a series of seismic lines and 3D subsurface modeling using depth-converted seismic horizons, well tops and balanced cross sections. The third chapter deals with the clay experiments of listric normal fault systems and tries to understand the controls on the geometries of fault systems with and without a ductile substrate. Sloping flat base plate setups have been used, with silicone fluid underlying the clay cake as an analog for salt. The experimental configurations have been varied with respect to three factors, viz. the direction of slope with respect to extension, the termination of the silicone polymer with respect to the basal discontinuities, and the overlap of the base plates. The analyses for the experiments have again been performed from photographs and 3D laser scans of the clay surface.
Online Sensor Fault Detection Based on an Improved Strong Tracking Filter
Wang, Lijuan; Wu, Lifeng; Guan, Yong; Wang, Guohui
2015-01-01
We propose a method for online sensor fault detection that is based on an improved strong tracking cubature Kalman filter (STCKF). The cubature rule is used to estimate states, improving estimation accuracy in the nonlinear case. A residual is the difference between the estimated value and the measured value, and is regarded as a signal that carries fault information. A threshold is set at a reasonable level and compared with the residuals to determine whether or not the sensor is faulty. The proposed method requires only a nominal plant model and uses the STCKF to estimate the original state vector. The effectiveness of the algorithm is verified by simulation on a drum-boiler model. PMID:25690553
NASA Technical Reports Server (NTRS)
Bole, Brian; Goebel, Kai; Vachtsevanos, George
2012-01-01
This paper introduces a novel Markov process formulation of stochastic fault growth modeling, in order to facilitate the development and analysis of prognostics-based control adaptation. A metric representing the relative deviation between the nominal output of a system and the net output that is actually enacted by an implemented prognostics-based control routine is used to define the action space of the formulated Markov process. The state space of the Markov process is defined in terms of an abstracted metric representing the relative health remaining in each of the system's components. The proposed formulation of component fault dynamics conveniently relates feasible system output performance modifications to predictions of future component health deterioration.
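A toy version of such an abstracted health state space is sketched below, purely to illustrate the idea that a control action (here, a hypothetical output derating) reshapes the transition probabilities and hence the expected time to end of life; the probabilities are invented for illustration.

```python
import numpy as np

# Toy Markov chain over discretized component health levels 0..4
# (4 = healthy, 0 = failed); the action trades output for degradation rate.

rng = np.random.default_rng(1)

def step(health, derating):
    """One Markov transition: larger output derating -> slower degradation."""
    p_degrade = 0.10 * (1.0 - derating)       # action reshapes the transition
    if health > 0 and rng.random() < p_degrade:
        health -= 1
    return health

for derating in (0.0, 0.5):
    lifetimes = []
    for _ in range(2000):
        h, k = 4, 0
        while h > 0:
            h, k = step(h, derating), k + 1
        lifetimes.append(k)
    print(derating, np.mean(lifetimes))       # expected cycles to EOL
```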
Effect of fault roughness on aftershock distribution and post co-seismic strain accumulation.
NASA Astrophysics Data System (ADS)
Aslam, K.; Daub, E. G.
2017-12-01
We perform physics-based simulations of earthquake rupture propagation on geometrically complex strike-slip faults. We consider many different realizations of the fault roughness and obtain heterogeneous stress fields by performing dynamic rupture simulations of large earthquakes. We calculate the Coulomb failure function (CFF) for all these realizations so that we can quantify zones of stress increase and stress shadow surrounding the main fault, and compare our results to seismic catalogs. For this comparison, we use relocated earthquake catalogs from Northern and Southern California. We specify the range of fault roughness parameters based on past observational studies: the Hurst exponent (H) varies from 0.5 to 1, and the RMS height-to-wavelength ratio (the RMS deviation of a fault profile from planarity) takes values between 10^-3 and 10^-2. For any realization of fault roughness, the probability density function (PDF) values relative to the mean CFF change show a wide spread near the fault, and this spread squeezes into a narrow band with distance from the fault. For the lower RMS ratio (10^-3), we see larger zones of stress change near the hypocenter, while for the higher RMS ratio (10^-2), the alternating zones of stress increase and decrease surrounding the fault have comparable lengths. We also couple short-term dynamic rupture simulation with long-term tectonic modelling, by passing the stress output from one of the dynamic rupture simulations (for a single realization of fault roughness) to a long-term tectonic model (LTM) as the initial condition and then running the LTM over the duration of a seismic cycle. This short-term and long-term coupling enables us to understand how heterogeneous stresses due to fault geometry influence the dynamics of strain accumulation in the post-seismic and inter-seismic phases of the seismic cycle.
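Self-affine rough fault profiles of the kind parameterized above (Hurst exponent H and RMS deviation-to-length ratio) are commonly generated by spectral synthesis. A hedged one-dimensional sketch, assuming a power spectrum P(k) ~ k^-(1+2H) with random phases, is:

```python
import numpy as np

# Spectral synthesis of a self-affine profile: amplitude |FFT| ~ sqrt(P(k)),
# random phases, then rescaled to the target RMS deviation-to-length ratio.

def rough_profile(n, length, hurst, rms_ratio, seed=0):
    rng = np.random.default_rng(seed)
    k = np.fft.rfftfreq(n, d=length / n)
    amp = np.zeros_like(k)
    amp[1:] = k[1:] ** -(0.5 + hurst)                 # sqrt of P(k) ~ k^-(1+2H)
    phase = rng.uniform(0.0, 2.0 * np.pi, size=k.size)
    h = np.fft.irfft(amp * np.exp(1j * phase), n=n)
    h *= rms_ratio * length / h.std()                 # enforce RMS/length ratio
    return h

profile = rough_profile(n=4096, length=80e3, hurst=0.7, rms_ratio=1e-2)
print(profile.std() / 80e3)                           # ~1e-2 by construction
```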
An architecture for object-oriented intelligent control of power systems in space
NASA Technical Reports Server (NTRS)
Holmquist, Sven G.; Jayaram, Prakash; Jansen, Ben H.
1993-01-01
A control system for autonomous distribution and control of electrical power during space missions is being developed. This system should free the astronauts from localizing faults and reconfiguring loads if problems with the power distribution and generation components occur. The control system uses an object-oriented simulation model of the power system and first-principle knowledge to detect, identify, and isolate faults. Each power system component is represented as a separate object with knowledge of its normal behavior. The reasoning process takes place at three different levels of abstraction: the Physical Component Model (PCM) level, the Electrical Equivalent Model (EEM) level, and the Functional System Model (FSM) level, with the PCM the lowest level of abstraction and the FSM the highest. At the EEM level the power system components are reasoned about as their electrical equivalents, e.g., a resistive load is thought of as a resistor. At the PCM level, however, detailed knowledge about the component's specific characteristics is taken into account. The FSM level models the system at the subsystem level, a level appropriate for reconfiguration and scheduling. The control system operates in two modes, a reactive and a proactive mode, simultaneously. In the reactive mode the control system receives measurement data from the power system and compares these values with values determined through simulation to detect the existence of a fault. The nature of the fault is then identified through a model-based reasoning process using mainly the EEM. Compound component models are constructed at the EEM level and used in the fault identification process. In the proactive mode the reasoning takes place at the PCM level. Individual components determine their future health status using a physical model and measured historical data. If changes in the health status appear imminent, the component warns the control system about its impending failure. The fault isolation process uses the FSM level as its reasoning base.
NASA Astrophysics Data System (ADS)
Chartier, Thomas; Scotti, Oona; Lyon-Caen, Hélène; Boiselet, Aurélien
2017-10-01
Modeling the seismic potential of active faults is a fundamental step of probabilistic seismic hazard assessment (PSHA). An accurate estimation of the rate of earthquakes on the faults is necessary in order to obtain the probability of exceedance of a given ground motion. Most PSHA studies consider faults as independent structures and neglect the possibility of multiple faults or fault segments rupturing simultaneously (fault-to-fault, FtF, ruptures). The Uniform California Earthquake Rupture Forecast version 3 (UCERF-3) model takes this possibility into account by considering a system-level approach rather than an individual-fault-level approach, using geological, seismological and geodetic information to invert the earthquake rates. In many places of the world, seismological and geodetic information along fault networks is not well constrained. There is therefore a need for a methodology relying on geological information alone to compute earthquake rates of the faults in the network. In the proposed methodology, a simple distance criterion is used to define FtF ruptures, and single-fault or FtF ruptures are treated as an aleatory uncertainty, similarly to UCERF-3. Rates of earthquakes on faults are then computed following two constraints: the magnitude-frequency distribution (MFD) of earthquakes in the fault system as a whole must follow an a priori chosen shape, and the rate of earthquakes on each fault is determined by the specific slip rate of each segment depending on the possible FtF ruptures. The modeled earthquake rates are then compared to the available independent data (geodetic, seismological and paleoseismological) in order to weight the different hypotheses explored in a logic tree. The methodology is tested on the western Corinth rift (WCR), Greece, where recent advances have been made in understanding the geological slip rates of the complex network of normal faults which accommodate the ~15 mm/yr north-south extension. Modeling results show that geological, seismological and paleoseismological rates of earthquakes cannot be reconciled with single-fault-rupture scenarios alone and require hypothesizing a large spectrum of possible FtF rupture sets. In order to fit the imposed regional Gutenberg-Richter (GR) MFD target, some of the slip along certain faults needs to be accommodated either by interseismic creep or by post-seismic processes. Furthermore, the computed individual-fault MFDs differ depending on the position of each fault in the system and the possible FtF ruptures associated with the fault. Finally, a comparison of modeled earthquake rupture rates with those deduced from the regional and local earthquake catalog statistics and local paleoseismological data indicates a better fit with the FtF rupture set constructed with a 5 km rather than a 3 km distance criterion, suggesting a high connectivity of faults in the WCR fault system.
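Two of the ingredients above, the target GR MFD and the per-fault moment budget implied by a geological slip rate, are simple to state in code. The sketch below shows only these pieces (the a and b values, fault geometry, and rigidity are illustrative; the actual method inverts rates over all single-fault and FtF ruptures):

```python
import numpy as np

# Gutenberg-Richter target and slip-rate moment budget, illustrative values.

def gr_incremental_rates(mags, a=4.0, b=1.0, dm=0.1):
    """Incremental annual rates for bins centred on `mags` (log10 N = a - b*M)."""
    return 10.0 ** (a - b * (mags - dm / 2)) - 10.0 ** (a - b * (mags + dm / 2))

def moment_rate_budget(slip_rate_m_per_yr, area_m2, mu=3.0e10):
    """Annual seismic moment available on a fault (N*m/yr): mu * A * s_dot."""
    return mu * area_m2 * slip_rate_m_per_yr

mags = np.arange(6.0, 7.6, 0.1)
rates = gr_incremental_rates(mags)
moments = 10.0 ** (1.5 * mags + 9.1)            # standard Mw -> M0 (N*m)
# The inversion balances these two quantities across the rupture set:
print((rates * moments).sum())                  # moment rate demanded by the MFD
print(moment_rate_budget(5e-3, 40e3 * 15e3))    # budget of a 40 km x 15 km fault
```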
López, Dina L.; Smith, Leslie; Storey, Michael L.; Nielson, Dennis L.
1994-01-01
The hydrothermal systems of the Basin and Range Province are often located at or near major range-bounding normal faults. The flow of fluid and energy at these faults is affected by the advective transfer of heat and fluid from and to the adjacent mountain ranges and valleys. This paper addresses the effect of the exchange of fluid and energy between the country rock, the valley fill sediments, and the fault zone on the fluid and heat flow regimes at the fault plane. For comparative purposes, the conditions simulated are patterned on Leach Hot Springs in southern Grass Valley, Nevada. Our simulations indicated that convection can exist at the fault plane even when the fault is exchanging significant heat and fluid with the surrounding country rock and valley fill sediments. The temperature at the base of the fault decreased with increasing permeability of the country rock. Higher groundwater discharge from the fault and lower temperatures at the base of the fault are favored by high country rock permeabilities and fault transmissivities. Preliminary results suggest that basal temperatures and flow rates for Leach Hot Springs cannot be simulated with a fault 3 km deep and an average regional heat flow of 150 mW/m2, because the resulting basal temperature and mass discharge rates are too low. A fault permeable to greater depths or a higher regional heat flow may be indicated for these springs.
The Mentawai forearc sliver off Sumatra: A model for a strike-slip duplex at a regional scale
NASA Astrophysics Data System (ADS)
Berglar, Kai; Gaedicke, Christoph; Ladage, Stefan; Thöle, Hauke
2017-07-01
At the Sumatran oblique convergent margin, the Mentawai Fault and Sumatran Fault zones accommodate most of the trench-parallel component of strain. These faults bound the Mentawai forearc sliver, which extends from the Sunda Strait to the Nicobar Islands. Based on multi-channel reflection seismic data, swath bathymetry and high-resolution sub-bottom profiling, we identified a set of wrench faults obliquely connecting the two major fault zones. These wrench faults separate at least four horses of a regional strike-slip duplex forming the forearc sliver. Each horse comprises an individual basin of the forearc with a differing subsidence and sedimentary history. Duplex formation started in the Mid/Late Miocene southwest of the Sunda Strait. The initiation of new horses propagated northwards along the Sumatran margin over 2000 km until the Early Pliocene. These results directly link strike-slip tectonics to forearc evolution and may serve as a model for basin evolution in other oblique subduction settings.
NASA Astrophysics Data System (ADS)
van Wijk, J.; Axen, G.; Abera, R.
2017-11-01
We present a model for the origin, crustal architecture, and evolution of pull-apart basins. The model is based on results of three-dimensional upper crustal elastic models of deformation, field observations, and fault theory, and is generally applicable to basin-scale features, but predicts some intra-basin structural features. Geometric differences between pull-apart basins are inherited from the initial geometry of the strike-slip fault step-over, which results from the forming phase of the strike-slip fault system. As strike-slip motion accumulates, pull-apart basins are stationary with respect to underlying basement, and the fault tips propagate beyond the rift basin, increasing the distance between the fault tips and pull-apart basin center. Because uplift is concentrated near the fault tips, the sediment source areas may rejuvenate and migrate over time. Rift flank uplift results from compression along the flank of the basin. With increasing strike-slip movement the basins deepen and lengthen. Field studies predict that pull-apart basins become extinct when an active basin-crossing fault forms; this is the most likely fate of pull-apart basins, because basin-bounding strike-slip systems tend to straighten and connect as they evolve. The models show that larger length-to-width ratios with overlapping faults are least likely to form basin-crossing faults, and pull-apart basins with this geometry are thus most likely to progress to continental rupture. In the Gulf of California, larger length-to-width ratios are found in the southern Gulf, which is the region where continental breakup occurred rapidly. The initial geometry in the northern Gulf of California and Salton Trough at 6 Ma may have been one of widely-spaced master strike-slip faults (lower length-to-width ratios), which our models suggest inhibits continental breakup and favors straightening of the strike-slip system by formation of basin-crossing faults within the step-over, as began 1.2 Ma when the San Jacinto and Elsinore - Cerro Prieto fault systems formed.
NASA Astrophysics Data System (ADS)
Zheng, Ao; Wang, Mingfeng; Yu, Xiangwei; Zhang, Wenbo
2018-03-01
On 2016 November 13, an Mw 7.8 earthquake occurred in the northeast of the South Island of New Zealand near Kaikoura. The earthquake caused severe damage and had a great impact on the local environment and society. Considering the tectonic environment and defined active faults, the field investigation and geodetic evidence reveal that at least 12 fault sections ruptured in the earthquake, and the focal mechanism is one of the most complicated in historical earthquakes. On account of the complexity of the source rupture, we propose a multisegment fault model based on the distribution of surface ruptures and active tectonics. We derive the source rupture process of the earthquake using the kinematic waveform inversion method with the multisegment fault model from strong-motion data of 21 stations (0.05-0.35 Hz). The inversion result suggests that the rupture initiated in the epicentral area near the Humps fault and then propagated northeastward along several faults, up to the offshore Needles fault. The Mw 7.8 event was a mixture of right-lateral strike and reverse slip, and the maximum slip is approximately 19 m. The synthetic waveforms reproduce the characteristics of the observed ones well. In addition, we synthesize the coseismic offset distribution of the ruptured region from the slip on the uppermost subfaults of the fault model, which is roughly consistent with the surface breaks observed in the field survey.
Enhancing the LVRT Capability of PMSG-Based Wind Turbines Based on R-SFCL
NASA Astrophysics Data System (ADS)
Xu, Lin; Lin, Ruixing; Ding, Lijie; Huang, Chunjun
2018-03-01
A novel low-voltage ride-through (LVRT) scheme for PMSG-based wind turbines, based on the Resistor Superconducting Fault Current Limiter (R-SFCL), is proposed in this paper. The LVRT scheme mainly consists of an R-SFCL placed in series between the transformer and the Grid Side Converter (GSC), and its basic modelling is discussed in detail. The proposed LVRT scheme is implemented to interact with a PMSG model in PSCAD/EMTDC under a three-phase short-circuit fault condition, which shows that the proposed scheme based on the R-SFCL can improve the transient performance and LVRT capability, strengthening the grid connection of wind turbines.
NASA Astrophysics Data System (ADS)
Xing, Yan; Kulatilake, P. H. S. W.; Sandbak, L. A.
2018-02-01
The stability of the rock mass around the tunnels in an underground mine was investigated using the distinct element method. A three-dimensional model was developed based on the available geological, geotechnical, and mine construction information. It incorporates a complex lithological system, persistent and non-persistent faults, and a complex tunnel system including backfilled tunnels. A strain-softening constitutive model was applied for the rock masses. The rock mass properties were estimated using the Hoek-Brown equations based on the intact rock properties and the RMR values. The fault material behavior was modeled using the continuously yielding joint model. Sequential construction and rock supporting procedures were simulated in the order in which they progressed in the mine. Stress analyses were performed to study the effect of the horizontal in situ stresses and the variability of rock mass properties on tunnel stability, and to evaluate the effectiveness of rock supports. The rock mass behavior was assessed using the stresses, failure zones, and deformations around the tunnels, and the fault shear displacement vectors. The safety of rock supports was quantified using the bond shear and bolt tensile failures. Results show that the major fault and weak interlayer have distinct influences on the displacements and stresses around the tunnels. Comparison between the numerical modeling results and the field measurements indicated that the cases with average rock mass properties and K0 values between 0.5 and 1.25 provide satisfactory agreement with the field measurements.
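The Hoek-Brown estimation step referred to above follows well-known closed-form expressions (the 2002 edition); a small sketch, with illustrative inputs rather than the mine's data, is:

```python
import math

# Generalized Hoek-Brown (2002 edition): rock mass parameters from the
# intact-rock constant m_i and a rock mass rating (GSI, often derived
# from RMR), with disturbance factor D.

def hoek_brown_params(gsi, mi, D=0.0):
    mb = mi * math.exp((gsi - 100.0) / (28.0 - 14.0 * D))
    s = math.exp((gsi - 100.0) / (9.0 - 3.0 * D))
    a = 0.5 + (math.exp(-gsi / 15.0) - math.exp(-20.0 / 3.0)) / 6.0
    return mb, s, a

def hb_strength(sigma3, sigma_ci, gsi, mi, D=0.0):
    """Major principal stress at failure: s1 = s3 + sci*(mb*s3/sci + s)^a."""
    mb, s, a = hoek_brown_params(gsi, mi, D)
    return sigma3 + sigma_ci * (mb * sigma3 / sigma_ci + s) ** a

print(hb_strength(sigma3=5.0, sigma_ci=80.0, gsi=55, mi=10))  # MPa, illustrative
```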
NASA Astrophysics Data System (ADS)
Jiang, Zhongshan; Yuan, Linguo; Huang, Dingfa; Yang, Zhongrong; Chen, Weifeng
2017-12-01
We reconstruct two types of fault models associated with the 2008 Mw 7.9 Wenchuan earthquake: one is a listric fault connecting to a shallowing sub-horizontal detachment below ∼20 km depth (fault model one, FM1), and the other is a group of more steeply dipping planes extending to the Moho at ∼60 km depth (fault model two, FM2). Through comparative analysis of the coseismic inversion results, we confirm that the coseismic models are insensitive to these two fault geometries. We therefore turn our attention to the postseismic deformation obtained from GPS observations, which can not only impose effective constraints on the fault geometry but also, more importantly, provide valuable insights into the postseismic afterslip. FM1 performs outstandingly in the near, mid and far field, whether or not the viscoelastic influence is considered. FM2 performs more poorly, especially in the data-model consistency in the near field, which mainly results from the trade-off against the sharp contrast of the postseismic deformation on the two sides of the Longmen Shan fault zone. Accordingly, we propose a listric fault connecting to a shallowing sub-horizontal detachment as the optimal fault geometry for the Wenchuan earthquake. Based on the inferred optimal fault geometry, we analyse two characteristic postseismic deformation phenomena that differ from the coseismic patterns: (1) the postseismic opposite deformation between the Beichuan fault (BCF) and Pengguan fault (PGF), and (2) the slightly left-lateral strike-slip motions in the southwestern Longmen Shan range. The former is attributed to the local left-lateral strike-slip and normal dip-slip components on the shallow BCF. The latter places constraints on the afterslip on the southwestern BCF and reproduces three afterslip concentration areas with slightly left-lateral strike-slip motions. The decrease in Coulomb Failure Stress (CFS) of ∼0.322 kPa at the hypocentre of the Lushan earthquake, derived from the afterslip with the viscoelastic influence removed, indicates that the postseismic left-lateral strike-slip and normal dip-slip motions may have a mitigative effect on the fault loading in the southwestern Longmen Shan range. Nevertheless, it is much smaller than the total increase in CFS (∼8.368 kPa) derived from the coseismic and viscoelastic deformations.
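For orientation, the CFS change used here follows the standard form ΔCFS = Δτ + μ′Δσn (normal stress positive in the unclamping sense, μ′ the effective friction coefficient). The inputs in the sketch below are invented to reproduce the order of magnitude quoted above, not values from the paper:

```python
# Minimal Coulomb Failure Stress change: dCFS = d_tau + mu' * d_sigma_n.

def delta_cfs(d_shear_kpa, d_normal_kpa, mu_eff=0.4):
    """d_shear: shear stress change in the slip direction (kPa);
    d_normal: normal stress change, positive = unclamping (kPa)."""
    return d_shear_kpa + mu_eff * d_normal_kpa

# Illustrative inputs only, chosen to land near the quoted ~-0.322 kPa:
print(delta_cfs(-0.25, -0.18))
```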
NASA Astrophysics Data System (ADS)
Heinlein, S. N.
2013-12-01
Remote sensing data sets are widely used for evaluating surface manifestations of active tectonics. This study utilizes ASTER GDEM and Landsat ETM+ data sets with Google Earth images draped over terrain models. It evaluates (1) the surface geomorphology of the study area with these data sets and (2) the morphology of the Kumroch Fault, using diffusion modeling to estimate a constant diffusivity (κ) and to estimate slip rates from ground profiles measured across fault scarps by Kozhurin et al. (2006). Models of the evolution of fault scarp morphology provide the time elapsed since slip initiated on a fault's surface and may therefore provide more accurate estimates of slip rate than dividing scarp offset by the age of the ruptured surface. Scarp profiles collected by Kozhurin et al. (2006), formed by several events distributed through time, were evaluated using a constant slip rate (CSR) solution, which yields a value A/κ (half the slip rate divided by the diffusivity). The time elapsed since slip initiated on the fault is determined by establishing a value for κ and measuring the total scarp offset. CSR nonlinear modeling estimates of κ range from 8 m2/ka to 14 m2/ka on the Kumroch Fault, indicating slip rates of 0.6-1.0 mm/yr since 3.4-3.7 ka. This method provides a quick and inexpensive way to gather data for a regional tectonic study and to establish estimated rates of tectonic activity. Analyses of the remote sensing data are providing new insight into the role of active tectonics within the region. Fault scarp diffusion rates were calibrated using the diffusion models of Mattson and Bruhn (2001) and DuRoss and Bruhn (2004) and the trench profiles of the Kumroch Fault from Kozhurin et al. (2006, 2008), Kozhurin (2007), and Pinegina et al. (2012); entries marked (-) indicate that no data could be determined.
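The underlying diffusion idea can be illustrated with the single-event scarp solution (simpler than the constant-slip-rate solution actually used in the study): a scarp of half-offset a evolving under dh/dt = κ d²h/dx² has an error-function profile whose midslope gradient decays as κt grows, so measuring that gradient dates the scarp once κ is fixed. All numbers below are illustrative, chosen within the ranges quoted above.

```python
import math

# Single-event scarp diffusion: h(x, t) = a * erf(x / (2*sqrt(kappa*t))).

def scarp_profile(x_m, half_offset_m, kappa_m2_per_ka, t_ka):
    return half_offset_m * math.erf(x_m / (2.0 * math.sqrt(kappa_m2_per_ka * t_ka)))

def elapsed_time_ka(half_offset_m, midslope_grad, kappa_m2_per_ka):
    """Midslope gradient of the erf profile is a / sqrt(pi*kappa*t);
    invert it for the elapsed time t."""
    return half_offset_m ** 2 / (math.pi * kappa_m2_per_ka * midslope_grad ** 2)

# Illustrative: kappa = 11 m2/ka, 4 m total offset, observed gradient 0.18.
print(elapsed_time_ka(half_offset_m=2.0, midslope_grad=0.18, kappa_m2_per_ka=11.0))
# ~3.6 ka, consistent with the 3.4-3.7 ka range quoted above
```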
What Can We Learn from a Simple Physics-Based Earthquake Simulator?
NASA Astrophysics Data System (ADS)
Artale Harris, Pietro; Marzocchi, Warner; Melini, Daniele
2018-03-01
Physics-based earthquake simulators are becoming a popular tool to investigate the earthquake occurrence process. So far, the development of earthquake simulators has commonly been led by the approach "the more physics, the better". However, this approach may hamper the comprehension of the outcomes of the simulator; within complex models, it may be difficult to understand which physical parameters are the most relevant to the features of the seismic catalog in which we are interested. For this reason, here we take the opposite approach and analyze the behavior of a purposely simple earthquake simulator applied to a set of California faults. The idea is that a simple simulator may be more informative than a complex one for some specific scientific objectives, because it is more understandable. Our earthquake simulator has three main components: the first is a realistic tectonic setting, i.e., a fault data set of California; the second is the application of quantitative laws for earthquake generation on each single fault; and the last is fault interaction modeling through the Coulomb Failure Function. The analysis of this simple simulator shows that: (1) short-term clustering can be reproduced by a set of faults with an almost periodic behavior, which interact according to a Coulomb failure function model; (2) a long-term behavior showing supercycles of the seismic activity exists only in a markedly deterministic framework, and quickly disappears when a small degree of stochasticity is introduced in the recurrence of earthquakes on a fault; (3) faults that are strongly coupled in terms of the Coulomb failure function model are synchronized in time only in a markedly deterministic framework and, as before, such synchronization disappears when a small degree of stochasticity is introduced. Overall, the results show that even in a simple and perfectly known earthquake occurrence world, introducing a small degree of stochasticity may blur most of the deterministic time features, such as the long-term trend and the synchronization among nearby coupled faults.
A Unified Nonlinear Adaptive Approach for Detection and Isolation of Engine Faults
NASA Technical Reports Server (NTRS)
Tang, Liang; DeCastro, Jonathan A.; Zhang, Xiaodong; Farfan-Ramos, Luis; Simon, Donald L.
2010-01-01
A challenging problem in aircraft engine health management (EHM) system development is to detect and isolate faults in system components (i.e., compressor, turbine), actuators, and sensors. Existing nonlinear EHM methods often deal with component faults, actuator faults, and sensor faults separately, which may potentially lead to incorrect diagnostic decisions and unnecessary maintenance. Therefore, it would be ideal to address sensor faults, actuator faults, and component faults under one unified framework. This paper presents a systematic and unified nonlinear adaptive framework for detecting and isolating sensor faults, actuator faults, and component faults for aircraft engines. The fault detection and isolation (FDI) architecture consists of a parallel bank of nonlinear adaptive estimators. Adaptive thresholds are appropriately designed such that, in the presence of a particular fault, all components of the residual generated by the adaptive estimator corresponding to the actual fault type remain below their thresholds. If the faults are sufficiently different, then at least one component of the residual generated by each remaining adaptive estimator should exceed its threshold. Therefore, based on the specific response of the residuals, sensor faults, actuator faults, and component faults can be isolated. The effectiveness of the approach was evaluated using the NASA C-MAPSS turbofan engine model, and simulation results are presented.
Do mesoscale faults in a young fold belt indicate regional or local stress?
NASA Astrophysics Data System (ADS)
Kokado, Akihiro; Yamaji, Atsushi; Sato, Katsushi
2017-04-01
The results of paleostress analyses of mesoscale faults are usually thought of as evidence of a regional stress. On the other hand, recent advances in trishear modeling have enabled us to predict the deformation field around fault-propagation folds without the difficulty of assuming paleo-mechanical properties of rocks and sediments. We combined the analysis of observed mesoscale faults and trishear modeling to understand the significance of regional and local stresses for the formation of mesoscale faults. To this end, we conducted 2D trishear inverse modeling with a curved thrust fault to predict the subsurface structure and strain field of an anticline, which has a more or less horizontal axis and shows a map-scale plane strain perpendicular to the axis, in the active fold belt of the Niigata region, central Japan. The anticline is thought to have been formed by fault-propagation folding under WNW-ESE regional compression. Based on the attitudes of strata and the positions of key tephra beds in Lower Pleistocene soft sediments cropping out at the surface, we obtained (1) a fault-propagation fold with the fault tip at a depth of ca. 4 km as the optimal subsurface structure, and (2) the temporal variation of the deformation field during folding. We assumed that mesoscale faults were activated along the direction of maximum shear strain on the fault surfaces to test whether the fault-slip data collected at the surface were consistent with the deformation in some stage(s) of folding. The Wallace-Bott hypothesis was used to estimate the consistency of faults with the regional stress. As a result, the folding and the regional stress explained 27 and 33 of the 45 observed faults, respectively, with 11 faults consistent with both. Both the folding and the regional stress were inconsistent with the remaining 17 faults, which could be explained by transfer faulting and/or the gravitational spreading of the growing anticline. The lesson we learnt from this work is that we should pay attention not only to regional but also to local stresses when interpreting the results of paleostress analysis in the shallow levels of young orogenic belts.
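The Wallace-Bott check itself is compact: the predicted slip direction is the resolved shear traction of the stress tensor on the fault plane, and the misfit is its angle to the observed striation. The sketch below uses an invented stress tensor and fault orientation purely for illustration:

```python
import numpy as np

# Wallace-Bott style consistency check: predicted slip direction is the
# direction of the shear component of the traction on the fault plane.

def shear_direction(stress, normal):
    n = normal / np.linalg.norm(normal)
    traction = stress @ n
    shear = traction - (traction @ n) * n      # remove the normal component
    return shear / np.linalg.norm(shear)

def misfit_deg(stress, normal, striation):
    s_pred = shear_direction(stress, normal)
    s_obs = striation / np.linalg.norm(striation)
    return np.degrees(np.arccos(np.clip(s_pred @ s_obs, -1.0, 1.0)))

# Illustrative compression with principal axes along the coordinate axes
# (compression negative); fault normal and striation are invented.
sigma = np.diag([-3.0, -1.0, -2.0])
n = np.array([1.0, 1.0, 1.0])                  # fault plane normal
obs = np.array([-1.0, 0.8, 0.2])               # observed in-plane slickenline
print(misfit_deg(sigma, n, obs))               # ~11 degrees misfit
```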
NASA Astrophysics Data System (ADS)
Zielke, Olaf; Arrowsmith, Ramon
2010-05-01
Slip-rates along individual faults may differ as a function of measurement time scale; short-term slip-rates may be higher than the long-term rate and vice versa. For example, vertical slip-rates along the Wasatch Fault, Utah are 1.7+/-0.5 mm/yr since 6 ka, <0.6 mm/yr since 130 ka, and 0.5-0.7 mm/yr since 10 Ma (Friedrich et al., 2003). Following conventional earthquake recurrence models like the characteristic earthquake model, this observation implies that the driving strain accumulation rates may have changed over the respective time scales as well. While potential explanations for such slip-rate variations may be found, for example, in the reorganization of plate tectonic motion or mantle flow dynamics, causing changes in the crustal velocity field over long spatial wavelengths, no single geophysical explanation exists. Temporal changes in earthquake rate (i.e., event clustering) due to elastic interactions within a complex fault system may present an alternative explanation that requires neither variations in strain accumulation rate nor changes in fault constitutive behavior for frictional sliding. In the presented study, we explore this scenario and investigate how fault geometric complexity, fault segmentation and fault (segment) interaction affect the seismic behavior and slip-rate along individual faults, while keeping tectonic stressing-rate and frictional behavior constant in time. For that, we used FIMozFric--a physics-based numerical earthquake simulator based on Okada's (1992) formulations for internal displacements and strains due to shear and tensile faults in a half-space. Faults are divided into a large number of equal-sized fault patches which communicate via elastic interaction, allowing implementation of geometrically complex, non-planar faults. Each patch is assigned a static and a dynamic friction coefficient. The difference between those values is a function of depth, corresponding to the temperature dependence of velocity weakening that is observed in laboratory friction experiments and expressed in the [a-b] term of Rate-State-Friction (RSF) theory. Patches in the seismic zone are incrementally loaded during the interseismic phase. An earthquake initiates if the shear stress along at least one (seismic) patch exceeds its static frictional strength, and may grow in size due to elastic interaction with other fault patches (static stress transfer). Aside from investigating slip-rate variations due to the elastic interactions within a fault system, we want to show how such modeling results can be useful in exploring the physics underlying the patterns that paleoseismology sees, and that simulation and observation can be merged, with both making important contributions. Using FIMozFric, we generated synthetic seismic records for a large number of fault geometries and structural scenarios to investigate along-fault slip accumulation patterns and the variability of slip at a point. Our simulations show that fault geometric complexity and the accompanying fault interactions and multi-fault ruptures may cause temporal deviations from the average fault slip-rate; in other words, phases of earthquake clustering or relative quiescence. Slip-rates along faults within an interacting fault system may change even when the loading function (stressing rate) remains constant, and the magnitude of slip-rate change is suggested to be proportional to the magnitude of fault interaction. Thus, spatially isolated and structurally mature faults are expected to experience smaller slip-rate changes than strongly interacting, less mature faults. The magnitude of slip-rate change may serve as a proxy for the magnitude of fault interaction and vice versa.
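A toy analogue of this class of patch-based simulators (not FIMozFric itself; the interaction kernel, strengths, and loading rate are invented, and each patch slips at most once per event) can illustrate how static stress transfer lets ruptures cascade across patches:

```python
import numpy as np

# Toy patch simulator: N patches loaded at a constant stressing rate; a patch
# exceeding its static strength drops to its dynamic level, and the released
# stress is redistributed through a distance-decaying elastic kernel.

rng = np.random.default_rng(2)
n = 200
static = 1.0 + 0.05 * rng.random(n)            # static strength per patch
dynamic = 0.6 * static                         # dynamic (post-failure) level
idx = np.arange(n)
G = 0.2 / (1.0 + np.abs(idx[:, None] - idx[None, :])) ** 1.5
np.fill_diagonal(G, 0.0)                       # no self-interaction

stress = static * rng.random(n)
catalog = []
for step in range(20000):
    stress += 1e-3                             # tectonic loading
    slipped = np.zeros(n, dtype=bool)
    while np.any(over := (stress > static) & ~slipped):
        drop = stress[over] - dynamic[over]
        stress[over] = dynamic[over]
        stress += G[:, over] @ drop            # static stress transfer
        slipped |= over
    if slipped.any():
        catalog.append((step, int(slipped.sum())))
print(len(catalog), max(size for _, size in catalog))  # events and largest rupture
```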
Fault Detection and Diagnosis for Gas Turbines Based on a Kernelized Information Entropy Model
Wang, Weiying; Xu, Zhiqiang; Tang, Rui; Li, Shuying; Wu, Wei
2014-01-01
Gas turbines are among the most important devices in power engineering and have been widely used in power generation, airplanes, naval ships, and oil drilling platforms. However, in most cases they are monitored without personnel on duty. It is therefore highly desirable to develop techniques and systems to remotely monitor their conditions and analyze their faults. In this work, we introduce a remote system for online condition monitoring and fault diagnosis of gas turbines on offshore oil well drilling platforms based on a kernelized information entropy model. Shannon information entropy is generalized for measuring the uniformity of exhaust temperatures, which reflect the overall state of the gas path of the gas turbine. In addition, we extend the entropy to compute the information quantity of features in kernel spaces, which helps to select informative features for a given recognition task. Finally, we introduce an information entropy based decision tree algorithm to extract rules from fault samples. Experiments on real-world data show the effectiveness of the proposed algorithms. PMID:25258726
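The uniformity measure is easy to sketch: treating the (positive) exhaust temperatures as a normalized distribution, Shannon entropy is maximal when they are uniform and drops when a gas-path anomaly introduces a hot or cold spot. The temperatures below are illustrative:

```python
import numpy as np

# Shannon entropy of the normalized exhaust-gas temperature distribution.

def exhaust_entropy(temps_k):
    p = np.asarray(temps_k, dtype=float)
    p /= p.sum()                         # normalize to a distribution
    return -(p * np.log(p)).sum()

healthy = [850, 848, 852, 851, 849, 850, 853, 847]   # K, illustrative
faulty = [850, 848, 852, 700, 849, 850, 853, 847]    # cold spot in the gas path
print(exhaust_entropy(healthy), np.log(8))           # close to the maximum ln(8)
print(exhaust_entropy(faulty))                       # lower -> nonuniform
```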
He, Kun; Yang, Zhijun; Bai, Yun; Long, Jianyu; Li, Chuan
2018-01-01
Health condition is a vital factor affecting printing quality for a 3D printer. In this work, an attitude monitoring approach is proposed to diagnose faults of the delta 3D printer using support vector machines (SVM). An attitude sensor was mounted on the moving platform of the printer to monitor its 3-axial attitude angle, angular velocity, vibratory acceleration, and magnetic field intensity. The attitude data of the working printer were collected under different conditions involving 12 fault types and a normal condition. The collected data were analyzed to diagnose the health condition. To this end, a combination of binary classifications, one-against-one with least-squares SVM, was adopted for fault diagnosis modelling using all channels of attitude monitoring data in the experiment. For comparison, each single channel of the attitude monitoring data was also employed for model training and testing. In addition, a back-propagation neural network (BPNN) was applied to diagnose faults using the same data. The best fault diagnosis accuracy (94.44%) was obtained when all channels of the attitude monitoring data were used with SVM modelling. The results indicate that attitude monitoring with SVM is an effective method for the fault diagnosis of delta 3D printers. PMID:29690641
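A minimal sketch of the one-against-one multi-class SVM step described above. The paper uses a least-squares SVM; scikit-learn's standard `SVC` with `decision_function_shape="ovo"` is a readily available stand-in, and the synthetic "attitude features" and class structure are invented for the demo.

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(1)
n_conditions = 13                              # 12 fault types + 1 normal condition
X = rng.normal(size=(n_conditions * 40, 12))   # stand-in attitude features
y = np.repeat(np.arange(n_conditions), 40)
X += y[:, None] * 0.5                          # make classes separable for the demo

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
clf = SVC(kernel="rbf", decision_function_shape="ovo").fit(X_tr, y_tr)
print(f"test accuracy: {clf.score(X_te, y_te):.2%}")
```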
A Deep Learning Approach for Fault Diagnosis of Induction Motors in Manufacturing
NASA Astrophysics Data System (ADS)
Shao, Si-Yu; Sun, Wen-Jun; Yan, Ru-Qiang; Wang, Peng; Gao, Robert X.
2017-11-01
Extracting features from original signals is a key procedure for traditional fault diagnosis of induction motors, as it directly influences the performance of fault recognition. However, high-quality features require expert knowledge and human intervention. In this paper, a deep learning approach based on deep belief networks (DBN) is developed to learn features from the frequency distribution of vibration signals with the purpose of characterizing the working status of induction motors. It combines the feature extraction procedure with the classification task to achieve automated and intelligent fault diagnosis. The DBN model is built by stacking multiple units of restricted Boltzmann machines (RBM) and is trained using a layer-by-layer pre-training algorithm. Compared with traditional diagnostic approaches where feature extraction is needed, the presented approach has the ability to learn hierarchical representations, which are suitable for fault classification, directly from the frequency distribution of the measurement data. The structure of the DBN model is investigated, as the scale and depth of the DBN architecture directly affect its classification performance. An experimental study conducted on a machine fault simulator verifies the effectiveness of the deep learning approach for fault diagnosis of induction motors. This research proposes an intelligent diagnosis method for induction motors which utilizes a deep learning model to automatically learn features from sensor data and realize working status recognition.
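A minimal sketch of DBN-style layer-by-layer pre-training by stacking RBMs, using scikit-learn's `BernoulliRBM` with a logistic classifier on top. A full DBN would add supervised fine-tuning of all layers; here the inputs are assumed scaled to [0,1] (e.g., a normalized frequency spectrum), and the data and labels are invented placeholders.

```python
import numpy as np
from sklearn.neural_network import BernoulliRBM
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import Pipeline

rng = np.random.default_rng(0)
X = rng.uniform(0, 1, size=(300, 64))             # stand-in normalized spectra
y = (X[:, :8].mean(axis=1) > 0.5).astype(int)     # stand-in health labels

dbn = Pipeline([
    ("rbm1", BernoulliRBM(n_components=32, learning_rate=0.05, n_iter=20)),
    ("rbm2", BernoulliRBM(n_components=16, learning_rate=0.05, n_iter=20)),
    ("clf", LogisticRegression(max_iter=1000)),
])
dbn.fit(X, y)    # each RBM is trained greedily in turn as the pipeline fits
print(f"training accuracy: {dbn.score(X, y):.2%}")
```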
Effects of Channel Modification on Detection and Dating of Fault Scarps
NASA Astrophysics Data System (ADS)
Sare, R.; Hilley, G. E.
2016-12-01
Template matching of scarp-like features could potentially generate morphologic age estimates for individual scarps over entire regions, but data noise and scarp modification limit the detection of fault scarps by this method. Template functions based on diffusion in the cross-scarp direction may fail to accurately date scarps near channel boundaries. Where channels reduce scarp amplitudes, or where cross-scarp noise is significant, signal-to-noise ratios decrease and the scarp may be poorly resolved. In this contribution, we explore the bias in morphologic age of a complex scarp produced by systematic changes in fault scarp curvature. For example, fault scarps may be modified by encroaching channel banks and mass failure, lateral diffusion of material into a channel, or undercutting parallel to the base of a scarp. We quantify such biases on morphologic age estimates using a block offset model subject to two-dimensional linear diffusion. We carry out a synthetic study of the effects of two-dimensional transport on morphologic age calculated using a profile model, and compare these results to a well-studied and constrained site along the San Andreas Fault at Wallace Creek, CA. This study serves as a first step towards defining regions of high confidence in template matching results based on scarp length, channel geometry, and near-scarp topography.
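For context, the sketch below shows the standard one-dimensional diffusion scarp model that underlies morphologic dating: a vertical offset smoothed by linear diffusion follows an error-function profile, so the profile shape constrains the morphologic age kappa*t. The study above works with a two-dimensional version; the values here are illustrative assumptions.

```python
import numpy as np
from scipy.special import erf

def scarp_profile(x, a, kt):
    """Elevation across a diffusing scarp of half-offset a; kt is the morphologic age (m^2)."""
    return a * erf(x / (2.0 * np.sqrt(kt)))

x = np.linspace(-30, 30, 201)            # meters, cross-scarp distance
young = scarp_profile(x, a=1.0, kt=5.0)  # sharp, recently formed scarp
old = scarp_profile(x, a=1.0, kt=100.0)  # gentle, degraded scarp
print(f"max slope young: {np.max(np.gradient(young, x)):.3f}")
print(f"max slope old:   {np.max(np.gradient(old, x)):.3f}")
```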
An empirically based steady state friction law and implications for fault stability
NASA Astrophysics Data System (ADS)
Spagnuolo, E.; Nielsen, S.; Violay, M.; Di Toro, G.
2016-04-01
Empirically based rate-and-state friction laws (RSFLs) have been proposed to model the dependence of friction forces on slip and time. The relevance of the RSFL for earthquake mechanics is that a few constitutive parameters define critical conditions for fault stability (i.e., critical stiffness and frictional fault behavior). However, the RSFLs were determined from experiments conducted at subseismic slip rates (V < 1 cm/s), and their extrapolation to earthquake deformation conditions (V > 0.1 m/s) remains questionable on the basis of the experimental evidence of (1) large dynamic weakening and (2) activation of particular fault lubrication processes at seismic slip rates. Here we propose a modified RSFL (MFL) based on the review of a large published and unpublished data set of rock friction experiments performed with different testing machines. The MFL, valid at steady-state conditions from subseismic to seismic slip rates (0.1 µm/s < V < 3 m/s), describes the initiation of a substantial velocity weakening in the 1-20 cm/s range, resulting in a critical stiffness increase that creates a peak of potential instability in that velocity regime. The MFL leads to a new definition of fault frictional stability with implications for slip event styles and relevance for models of seismic rupture nucleation, propagation, and arrest.
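For reference, the sketch below evaluates the classical steady-state rate-and-state friction law that the proposed MFL modifies: mu_ss(V) = mu_0 + (a - b) ln(V/V_0), where a - b < 0 gives velocity weakening. The parameter values are illustrative assumptions; the MFL itself adds a pronounced weakening step in the 1-20 cm/s range that this classical form lacks.

```python
import numpy as np

def mu_steady_state(V, mu0=0.6, a=0.010, b=0.015, V0=1e-6):
    """Classical steady-state friction coefficient at slip rate V (m/s)."""
    return mu0 + (a - b) * np.log(V / V0)

for V in [1e-6, 1e-4, 1e-2, 1.0]:        # subseismic to seismic slip rates
    print(f"V = {V:.0e} m/s -> mu_ss = {mu_steady_state(V):.3f}")
```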
NASA Technical Reports Server (NTRS)
Jammu, V. B.; Danai, K.; Lewicki, D. G.
1998-01-01
This paper presents the experimental evaluation of the Structure-Based Connectionist Network (SBCN) fault diagnostic system introduced in the preceding article. For this, vibration data from two different helicopter gearboxes, OH-58A and S-61, are used. A salient feature of SBCN is its reliance on knowledge of the gearbox structure and the type of features obtained from processed vibration signals as a substitute for training. To formulate this knowledge, approximate vibration transfer models are developed for the two gearboxes and utilized to derive the connection weights representing the influence of component faults on vibration features. The validity of the structural influences is evaluated by comparing them with those obtained from experimental RMS values. These influences are also evaluated by comparing them with the weights of a connectionist network trained through supervised learning. The results indicate general agreement between the modeled and experimentally obtained influences. The vibration data from the two gearboxes are also used to evaluate the performance of SBCN in fault diagnosis. The diagnostic results indicate that the SBCN is effective in detecting the presence of faults and isolating them within gearbox subsystems based on structural influences, but its performance is not as good in isolating faulty components, mainly due to a lack of appropriate vibration features.
NASA Technical Reports Server (NTRS)
Litt, Jonathan; Kurtkaya, Mehmet; Duyar, Ahmet
1994-01-01
This paper presents an application of a fault detection and diagnosis scheme for the sensor faults of a helicopter engine. The scheme utilizes a model-based approach with real time identification and hypothesis testing which can provide early detection, isolation, and diagnosis of failures. It is an integral part of a proposed intelligent control system with health monitoring capabilities. The intelligent control system will allow for accommodation of faults, reduce maintenance cost, and increase system availability. The scheme compares the measured outputs of the engine with the expected outputs of an engine whose sensor suite is functioning normally. If the differences between the real and expected outputs exceed threshold values, a fault is detected. The isolation of sensor failures is accomplished through a fault parameter isolation technique where parameters which model the faulty process are calculated on-line with a real-time multivariable parameter estimation algorithm. The fault parameters and their patterns can then be analyzed for diagnostic and accommodation purposes. The scheme is applied to the detection and diagnosis of sensor faults of a T700 turboshaft engine. Sensor failures are induced in a T700 nonlinear performance simulation and data obtained are used with the scheme to detect, isolate, and estimate the magnitude of the faults.
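A minimal sketch of the residual-threshold detection logic described above: measured sensor outputs are compared with model-predicted outputs, and a fault is flagged when a residual exceeds its tolerance. Sensor names, values, and thresholds are invented assumptions, not the T700 scheme's actual figures.

```python
import numpy as np

def detect_sensor_faults(measured, predicted, thresholds):
    """Return indices of sensors whose residual exceeds its threshold."""
    residuals = np.abs(np.asarray(measured) - np.asarray(predicted))
    return np.flatnonzero(residuals > np.asarray(thresholds)), residuals

measured   = [102.0, 48.7, 815.0]        # e.g., speed, torque, temperature readings
predicted  = [101.5, 49.0, 862.0]        # engine-model expectation
thresholds = [2.0, 1.5, 25.0]            # assumed per-sensor tolerances

faulty, residuals = detect_sensor_faults(measured, predicted, thresholds)
print(f"residuals: {residuals}, faulty sensor indices: {faulty}")
```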
3D Dynamic Rupture Simulations along Dipping Faults, with a focus on the Wasatch Fault Zone, Utah
NASA Astrophysics Data System (ADS)
Withers, K.; Moschetti, M. P.
2017-12-01
We study dynamic rupture and ground motion from dip-slip faults in regions that have high seismic hazard, such as the Wasatch fault zone, Utah. Previous numerical simulations have modeled deterministic ground motion along segments of this fault in the heavily populated regions near Salt Lake City but were restricted to low frequencies (up to 1 Hz). We seek to better understand the rupture process and assess broadband ground motions and variability from the Wasatch Fault Zone by extending deterministic ground motion prediction to higher frequencies (up to 5 Hz). We perform simulations along a dipping normal fault (40 x 20 km along strike and width, respectively) with characteristics derived from geologic observations to generate a suite of ruptures > Mw 6.5. This approach utilizes dynamic simulations (fully physics-based models, where the initial stress drop and friction law are imposed) using a summation-by-parts (SBP) method. The simulations include rough-fault topography following a self-similar fractal distribution (over length scales from 100 m to the size of the fault) in addition to off-fault plasticity. Energy losses from heat and other mechanisms, modeled as anelastic attenuation, are also included, as well as free-surface topography, which can significantly affect ground motion patterns. We compare the effects that material structure and both rate-and-state and slip-weakening friction laws have on rupture propagation. The simulations show reduced slip and moment release in the near surface with the inclusion of plasticity, agreeing better with observations of shallow slip deficit. Long-wavelength fault geometry imparts a non-uniform stress distribution along both dip and strike, influencing the preferred rupture direction and hypocenter location, which is potentially important for seismic hazard estimation.
Bonilla, M.G.; Mark, R.K.; Lienkaemper, J.J.
1984-01-01
In order to refine correlations of surface-wave magnitude, fault rupture length at the ground surface, and fault displacement at the surface by including the uncertainties in these variables, the existing data were critically reviewed and a new data base was compiled. Earthquake magnitudes were redetermined as necessary to make them as consistent as possible with the Gutenberg methods and results, which necessarily make up much of the data base. Measurement errors were estimated for the three variables for 58 moderate to large shallow-focus earthquakes. Regression analyses were then made utilizing the estimated measurement errors. The regression analysis demonstrates that the relations among the variables magnitude, length, and displacement are stochastic in nature. The stochastic variance, introduced in part by incomplete surface expression of seismogenic faulting, variation in shear modulus, and regional factors, dominates the estimated measurement errors. Thus, it is appropriate to use ordinary least squares for the regression models, rather than regression models based upon an underlying deterministic relation with the variance resulting from measurement errors. Significant differences exist in correlations of certain combinations of length, displacement, and magnitude when events are qrouped by fault type or by region, including attenuation regions delineated by Evernden and others. Subdivision of the data results in too few data for some fault types and regions, and for these only regressions using all of the data as a group are reported. Estimates of the magnitude and the standard deviation of the magnitude of a prehistoric or future earthquake associated with a fault can be made by correlating M with the logarithms of rupture length, fault displacement, or the product of length and displacement. Fault rupture area could be reliably estimated for about 20 of the events in the data set. Regression of MS on rupture area did not result in a marked improvement over regressions that did not involve rupture area. Because no subduction-zone earthquakes are included in this study, the reported results do not apply to such zones.
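A minimal sketch of the kind of ordinary least-squares regression discussed above, in the commonly used form Ms = a + b log10(L) with L the surface rupture length in km. The data points below are invented placeholders, not the paper's compiled data base.

```python
import numpy as np

L = np.array([12.0, 30.0, 45.0, 80.0, 150.0, 300.0])   # rupture length (km)
Ms = np.array([6.1, 6.6, 6.9, 7.1, 7.5, 7.9])          # surface-wave magnitude

b, a = np.polyfit(np.log10(L), Ms, deg=1)               # OLS fit: slope b, intercept a
print(f"Ms = {a:.2f} + {b:.2f} * log10(L)")

# Point estimate for a fault with a 60 km surface rupture:
print(f"predicted Ms for L = 60 km: {a + b * np.log10(60):.2f}")
```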
Strength reduction factors for seismic analyses of buildings exposed to near-fault ground motions
NASA Astrophysics Data System (ADS)
Qu, Honglue; Zhang, Jianjing; Zhao, J. X.
2011-06-01
To estimate near-fault inelastic response spectra, the accuracy of six existing strength reduction factors (R) proposed by different investigators was evaluated by using a suite of near-fault earthquake records with directivity-induced pulses. In the evaluation, the force-deformation relationship is modelled by elastic-perfectly plastic, bilinear, and stiffness-degrading models, and two site conditions, rock and soil, are considered. The R-value ratio (the ratio of the R value obtained from the existing R-expressions, or R-µ-T relationships, to that from inelastic analyses) is used as a measurement parameter. Results show that the R-expressions proposed by Ordaz & Perez-Rocha are the most suitable for near-fault ground motions, followed by the Newmark & Hall and the Berrill et al. relationships. Based on an analysis using the near-fault ground motion dataset, new expressions for R that consider the effects of site conditions are presented and verified.
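For illustration, the sketch below evaluates one of the R-µ-T relationships mentioned above in its classic Newmark & Hall form: R = 1 at very short periods, R = sqrt(2µ - 1) in the acceleration-sensitive range, and R = µ at long periods. The period limits used here are simplified assumptions, not the exact spectral-region boundaries of the original relationship.

```python
import numpy as np

def R_newmark_hall(mu, T):
    """Strength reduction factor for ductility mu at period T (seconds)."""
    if T < 0.03:
        return 1.0                       # very short period: no reduction
    if T < 0.5:
        return np.sqrt(2.0 * mu - 1.0)   # equal-energy range (assumed limits)
    return mu                            # equal-displacement range

for T in [0.02, 0.2, 1.0]:
    print(f"T = {T:.2f} s, mu = 4 -> R = {R_newmark_hall(4, T):.2f}")
```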
Selection of test paths for solder joint intermittent connection faults under DC stimulus
NASA Astrophysics Data System (ADS)
Huakang, Li; Kehong, Lv; Jing, Qiu; Guanjun, Liu; Bailiang, Chen
2018-06-01
The selection of test paths for solder joint intermittent connection faults under direct-current stimulus is examined in this paper. According to the physical structure of the circuit, a network model is established first, in which network nodes represent test nodes and each path edge records the number of intermittent connection faults along that path. Then, selection criteria for test paths based on a node degree index are proposed so that the solder joint intermittent connection faults are covered using fewer test paths. Finally, three circuits are selected to verify the method. To test whether an intermittent fault is covered by the test paths, the intermittent fault is simulated by a switch. The results show that the proposed method can detect solder joint intermittent connection faults using fewer test paths. Additionally, the number of detection steps is greatly reduced without compromising fault coverage.
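The covering idea can be sketched as follows: pick as few test paths as possible so that every candidate intermittent-fault location lies on some selected path. A plain greedy set-cover heuristic is used here as a stand-in for the paper's node-degree-based criteria; the path and fault labels are invented.

```python
def greedy_path_cover(paths, faults):
    """paths: {name: set of fault locations covered}; returns selected path names."""
    uncovered, selected = set(faults), []
    while uncovered:
        best = max(paths, key=lambda p: len(paths[p] & uncovered))
        if not paths[best] & uncovered:
            raise ValueError("some faults are not covered by any path")
        selected.append(best)
        uncovered -= paths[best]
    return selected

paths = {"P1": {"f1", "f2"}, "P2": {"f2", "f3", "f4"}, "P3": {"f4", "f5"}}
print(greedy_path_cover(paths, {"f1", "f2", "f3", "f4", "f5"}))  # ['P2', 'P1', 'P3']
```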
NASA Astrophysics Data System (ADS)
Kettle, L. M.; Mora, P.; Weatherley, D.; Gross, L.; Xing, H.
2006-12-01
Simulations using the Finite Element method are widely used in many engineering applications and for the solution of partial differential equations (PDEs). Computational models based on the solution of PDEs play a key role in earth systems simulations. We present numerical modelling of crustal fault systems where the dynamic elastic wave equation is solved using the Finite Element method. This is achieved using a high-level computational modelling language, escript, available as open source software from ACcESS (Australian Computational Earth Systems Simulator), the University of Queensland. Escript is an advanced geophysical simulation software package developed at ACcESS which includes parallel equation solvers, data visualisation and data analysis software. The escript library was used to develop a flexible Finite Element model which reliably simulates the mechanism of faulting and the physics of earthquakes. Both 2D and 3D elastodynamic models are being developed to study the dynamics of crustal fault systems. Our final goal is to build a flexible model which can be applied to any fault system with user-defined geometry and input parameters. To study the physics of earthquake processes, two different time scales must be modelled: firstly, the quasi-static loading phase, which gradually increases stress in the system (~100 years), and secondly, the dynamic rupture process, which rapidly redistributes stress in the system (~100 s). We will discuss the solution of the time-dependent elastic wave equation for an arbitrary fault system using escript. This involves prescribing the correct initial stress distribution in the system to simulate the quasi-static loading of faults to failure; determining a suitable frictional constitutive law which accurately reproduces the dynamics of the stick/slip instability at the faults; and using a robust time integration scheme. These dynamic models generate data and information that can be used for earthquake forecasting.
NASA Astrophysics Data System (ADS)
Martin-Banda, Raquel; Insua-Arevalo, Juan Miguel; Garcia-Mayordomo, Julian
2017-04-01
Many studies have dealt with the calculation of fault-propagation fold growth rates considering a variety of kinematic models, from limb rotation to hinge migration models. In most cases, the different geometrical and numerical growth models are based on horizontal pre-growth strata architecture and a constant known slip rate. Here, we present the estimation of the vertical slip rate of the NE Segment of the Carrascoy Fault (SE Iberian Peninsula) from the geometrical modeling of a progressive unconformity developed on alluvial fan sediments with a high depositional slope. The NE Segment of the Carrascoy Fault is a left-lateral strike-slip fault with a reverse component belonging to the Eastern Betic Shear Zone, a major structure that accommodates most of the convergence between the Iberian and Nubian tectonic plates in Southern Spain. The proximity of this major fault to the city of Murcia underscores the importance of carrying out paleoseismological studies in order to determine the Quaternary slip rate of the fault, a key geological parameter for seismic hazard calculations. This segment is formed by a narrow fault zone that abruptly articulates the northern edge of the Carrascoy Range with the Guadalentin Depression through high-slope, short alluvial fans of Upper-Middle Pleistocene age. An outcrop in a quarry at the foot of this front reveals a progressive unconformity developed on these alluvial fan deposits, showing the important reverse component of the fault. The architecture of this unconformity is marked by well-developed calcretes on top of some of the alluvial deposits. We have determined the age of several of these calcretes by the Uranium-series disequilibrium dating method. The results obtained are consistent with recently published studies on the SW segment of the Carrascoy Fault, which, together with offset canals observed at a few locations, suggest a net slip rate close to 1 m/ka.
Semi-Markov adjunction to the Computer-Aided Markov Evaluator (CAME)
NASA Technical Reports Server (NTRS)
Rosch, Gene; Hutchins, Monica A.; Leong, Frank J.; Babcock, Philip S., IV
1988-01-01
The rule-based Computer-Aided Markov Evaluator (CAME) program was expanded in its ability to incorporate the effect of fault-handling processes into the construction of a reliability model. The fault-handling processes are modeled as semi-Markov events, and CAME constructs an appropriate semi-Markov model. To solve the model, the program outputs it in a form which can be directly solved with the Semi-Markov Unreliability Range Evaluator (SURE) program. As a means of evaluating the alterations made to the CAME program, the program is used to model the reliability of portions of the Integrated Airframe/Propulsion Control System Architecture (IAPSA 2) reference configuration. The reliability predictions are compared with a previous analysis. The results bear out the feasibility of utilizing CAME to generate appropriate semi-Markov models of fault-handling processes.
Drew, L.J.
2003-01-01
A tectonic model useful in estimating the occurrence of undiscovered porphyry copper and polymetallic vein systems has been developed. This model is based on the manner in which magmatic and hydrothermal fluids flow and are trapped in fault systems as far-field stress is released in tectonic strain features above subducting plates (e.g., strike-slip fault systems). The structural traps include preferred locations for stock emplacement and tensional-shear fault meshes within the step-overs that localize porphyry- and vein-style deposits. The application of the model is illustrated for the porphyry copper and polymetallic vein deposits in the Central Slovakian Volcanic Field, Slovakia; the Mátra Mountains, Hungary; and the Apuseni Mountains, Romania.
Fault management for the Space Station Freedom control center
NASA Technical Reports Server (NTRS)
Clark, Colin; Jowers, Steven; Mcnenny, Robert; Culbert, Chris; Kirby, Sarah; Lauritsen, Janet
1992-01-01
This paper describes model-based reasoning fault isolation in complex systems using automated digraph analysis. It discusses the use of the digraph representation as the paradigm for modeling physical systems and a method for executing these failure models to provide real-time failure analysis. It also discusses the generality, ease of development and maintenance, complexity management, and susceptibility to verification and validation of digraph failure models. It specifically describes how a NASA-developed digraph evaluation tool and an automated process working with that tool can identify failures in a monitored system when supplied with one or more fault indications. This approach is well suited to commercial applications of real-time failure analysis in complex systems because it is both powerful and cost effective.
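A generic illustration of digraph-based failure-source isolation: model failure propagation as a directed graph and, given observed fault indications, intersect the upstream (ancestor) sets of the indicating nodes to obtain candidate root causes. This is only the basic idea, not the NASA digraph tool's actual algorithm; the component names are invented.

```python
from collections import defaultdict

edges = [("pump", "pressure_sensor"), ("pump", "flow_sensor"),
         ("valve", "flow_sensor"), ("power_bus", "pump"), ("power_bus", "valve")]

parents = defaultdict(set)
for src, dst in edges:
    parents[dst].add(src)

def ancestors(node):
    """All upstream nodes that can propagate a failure to `node`."""
    seen, stack = set(), [node]
    while stack:
        for p in parents[stack.pop()]:
            if p not in seen:
                seen.add(p)
                stack.append(p)
    return seen

indications = ["pressure_sensor", "flow_sensor"]
candidates = set.intersection(*(ancestors(i) | {i} for i in indications))
print(candidates)   # {'pump', 'power_bus'}: sources consistent with both alarms
```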
MgB2-based superconductors for fault current limiters
NASA Astrophysics Data System (ADS)
Sokolovsky, V.; Prikhna, T.; Meerovich, V.; Eisterer, M.; Goldacker, W.; Kozyrev, A.; Weber, H. W.; Shapovalov, A.; Sverdun, V.; Moshchil, V.
2017-02-01
A promising solution to the fault current problem in power systems is the application of fast-operating nonlinear superconducting fault current limiters (SFCLs) with the capability of rapidly increasing their impedance, thus limiting high fault currents. We report the results of experiments with models of inductive (transformer-type) SFCLs based on ring-shaped bulk MgB2 prepared under high quasi-hydrostatic pressure (2 GPa) and by a hot pressing technique (30 MPa). It was shown that the SFCLs meet the main requirements for fault current limiters: they possess low impedance in the nominal regime of the protected circuit and can rapidly increase their impedance, limiting both the transient and the steady-state fault currents. The study of the quenching currents of the MgB2 rings (the SFCL activation current) and of AC losses in the rings shows that the quenching current density and the critical current density determined from AC losses can be 10-20 times less than the critical current determined from magnetization experiments.
Overview of Threats and Failure Models for Safety-Relevant Computer-Based Systems
NASA Technical Reports Server (NTRS)
Torres-Pomales, Wilfredo
2015-01-01
This document presents a high-level overview of the threats to safety-relevant computer-based systems, including (1) a description of the introduction and activation of physical and logical faults; (2) the propagation of their effects; and (3) function-level and component-level error and failure mode models. These models can be used in the definition of fault hypotheses (i.e., assumptions) for threat-risk mitigation strategies. This document is a contribution to a guide currently under development that is intended to provide a general technical foundation for designers and evaluators of safety-relevant systems.
NASA Astrophysics Data System (ADS)
Titus, Sarah J.
The San Andreas fault system is a transpressional plate boundary characterized by sub-parallel dextral strike-slip faults separating internally deformed crustal blocks in central California. Both geodetic and geologic tools were used to understand the short- and long-term partitioning of deformation in both the crust and the lithospheric mantle across the plate boundary system. GPS data indicate that the short-term discrete deformation rate is ~28 mm/yr for the central creeping segment of the San Andreas fault and increases to 33 mm/yr at +/-35 km from the fault. This gradient in deformation rates is interpreted to reflect elastic locking of the creeping segment at depth, distributed off-fault deformation, or some combination of these two mechanisms. These short-term fault-parallel deformation rates are slower than the expected geologic slip rate and the relative plate motion rate. Structural analysis of folds and transpressional kinematic modeling were used to quantify long-term distributed deformation adjacent to the Rinconada fault. Folding accommodates approximately 5 km of wrench deformation, which translates to a deformation rate of ~1 mm/yr since the start of the Pliocene. Integration with discrete offset on the Rinconada fault indicates that this portion of the San Andreas fault system is approximately 80% strike-slip partitioned. This kinematic fold model can be applied to the entire San Andreas fault system and may explain some of the across-fault gradient in deformation rates recorded by the geodetic data. Petrologic examination of mantle xenoliths from the Coyote Lake basalt near the Calaveras fault was used to link crustal plate boundary deformation at the surface with models for the accommodation of deformation in the lithospheric mantle. Seismic anisotropy calculations based on xenolith petrofabrics suggest that an anisotropic mantle layer thickness of 35-85 km is required to explain the observed shear wave splitting delay times in central California. The available data are most consistent with models for a broad zone of distributed deformation in the lithospheric mantle.
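The across-fault velocity gradient described above is often interpreted with the standard elastic screw-dislocation (Savage-Burford) model: a fault slipping at rate s below locking depth D predicts a fault-parallel velocity v(x) = (s/pi) arctan(x/D) relative to the fault trace. The sketch below uses illustrative parameter values, not a fit to the actual GPS data.

```python
import numpy as np

def interseismic_velocity(x_km, slip_rate=33.0, locking_depth_km=10.0):
    """Fault-parallel velocity (mm/yr) at distance x from the fault (km)."""
    return (slip_rate / np.pi) * np.arctan(x_km / locking_depth_km)

for x in [0.0, 5.0, 35.0]:
    v = interseismic_velocity(x) - interseismic_velocity(-x)
    print(f"velocity difference across +/-{x:.0f} km: {v:.1f} mm/yr")
```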
Development of Final A-Fault Rupture Models for WGCEP/ NSHMP Earthquake Rate Model 2
Field, Edward H.; Weldon, Ray J.; Parsons, Thomas; Wills, Chris J.; Dawson, Timothy E.; Stein, Ross S.; Petersen, Mark D.
2008-01-01
This appendix discusses how we compute the magnitude and rate of earthquake ruptures for the seven Type-A faults (Elsinore, Garlock, San Jacinto, S. San Andreas, N. San Andreas, Hayward-Rodgers Creek, and Calaveras) in the WGCEP/NSHMP Earthquake Rate Model 2 (referred to as ERM 2 hereafter). By definition, Type-A faults are those that have relatively abundant paleoseismic information (e.g., mean recurrence-interval estimates). The first section below discusses segmentation-based models, where ruptures are assumed to be confined to one or more identifiable segments. The second section discusses an un-segmented-model option, the third section discusses results and implications, and we end with a discussion of possible future improvements. General background information can be found in the main report.
NASA Astrophysics Data System (ADS)
Zhang, R.; Borgia, A.; Daley, T. M.; Oldenburg, C. M.; Jung, Y.; Lee, K. J.; Doughty, C.; Altundas, B.; Chugunov, N.; Ramakrishnan, T. S.
2017-12-01
Subsurface permeable faults and fracture networks play a critical role for enhanced geothermal systems (EGS) by providing conduits for fluid flow. Characterization of the permeable flow paths before and after stimulation is necessary to evaluate and optimize energy extraction. To provide insight into the feasibility of using CO2 as a contrast agent to enhance fault characterization by seismic methods, we model seismic monitoring of supercritical CO2 (scCO2) injected into a fault. During the CO2 injection, the original brine is replaced by scCO2, which leads to variations in geophysical properties of the formation. To explore the technical feasibility of the approach, we present modeling results for different time-lapse seismic methods including surface seismic, vertical seismic profiling (VSP), and a cross-well survey. We simulate the injection and production of CO2 into a normal fault in a system based on the Brady's geothermal field and model pressure and saturation variations in the fault zone using TOUGH2-ECO2N. The simulation results provide changing fluid properties during the injection, such as saturation and salinity changes, which allow us to estimate corresponding changes in seismic properties of the fault and the formation. We model the response of the system to active seismic monitoring in time-lapse mode using an anisotropic finite difference method with modifications for fracture compliance. Results to date show that even narrow fault and fracture zones filled with CO2 can be better detected using the VSP and cross-well survey geometry, while it would be difficult to image the CO2 plume by using surface seismic methods.
Time-dependent earthquake probabilities
Gomberg, J.; Belardinelli, M.E.; Cocco, M.; Reasenberg, P.
2005-01-01
We have attempted to provide a careful examination of a class of approaches for estimating the conditional probability of failure of a single large earthquake, particularly approaches that account for static stress perturbations to tectonic loading, as in the approaches of Stein et al. (1997) and Hardebeck (2004). We have recast these in a framework based on a simple, generalized rate-change formulation and applied it to the two approaches to show how they relate to one another. We also have attempted to show the connection between models of seismicity rate changes applied to (1) populations of independent faults, as in background and aftershock seismicity, and (2) changes in estimates of the conditional probability of failure of a single fault, where the notion of failure rate corresponds to successive failures of different members of a population of faults. The latter application requires specification of a probability distribution (probability density function, or PDF) that describes a population of potential recurrence times. This PDF may reflect our imperfect knowledge of when past earthquakes have occurred on a fault (epistemic uncertainty), the true natural variability in failure times, or some combination of both. We suggest two end-member conceptual single-fault models that may explain natural variability in recurrence times and suggest how they might be distinguished observationally. When viewed deterministically, these single-fault patch models differ significantly in their physical attributes, and when faults are immature, they differ in their responses to stress perturbations. Estimates of conditional failure probabilities effectively integrate over a range of possible deterministic fault models, usually with ranges that correspond to mature faults. Thus conditional failure probability estimates usually should not differ significantly for these models. Copyright 2005 by the American Geophysical Union.
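A minimal sketch of the conditional-probability calculation the discussion refers to: given a PDF f(T) of recurrence times with CDF F and elapsed time t since the last earthquake, P(failure in [t, t+dt] | no failure by t) = (F(t+dt) - F(t)) / (1 - F(t)). A lognormal recurrence distribution is used here purely as an example; the mean recurrence and aperiodicity values are assumptions.

```python
import numpy as np
from scipy.stats import lognorm

mean_T, cov = 200.0, 0.5                     # mean recurrence (yr), aperiodicity
sigma = np.sqrt(np.log(1.0 + cov**2))        # lognormal shape parameter
scale = mean_T / np.sqrt(1.0 + cov**2)       # lognormal scale (median)
dist = lognorm(s=sigma, scale=scale)

def conditional_probability(t_elapsed, dt):
    """P(earthquake in next dt years | quiet for t_elapsed years)."""
    F = dist.cdf
    return (F(t_elapsed + dt) - F(t_elapsed)) / (1.0 - F(t_elapsed))

for t in [50.0, 150.0, 250.0]:
    print(f"elapsed {t:.0f} yr -> 30-yr probability: "
          f"{conditional_probability(t, 30.0):.3f}")
```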
Study of a unified hardware and software fault-tolerant architecture
NASA Technical Reports Server (NTRS)
Lala, Jaynarayan; Alger, Linda; Friend, Steven; Greeley, Gregory; Sacco, Stephen; Adams, Stuart
1989-01-01
A unified architectural concept, called the Fault Tolerant Processor Attached Processor (FTP-AP), that can tolerate hardware as well as software faults is proposed for applications requiring ultrareliable computation capability. An emulation of the FTP-AP architecture, consisting of a breadboard Motorola 68010-based quadruply redundant Fault Tolerant Processor, four VAX 750s as attached processors, and four versions of a transport aircraft yaw damper control law, is used as a testbed in the AIRLAB to examine a number of critical issues. Solutions of several basic problems associated with N-Version software are proposed and implemented on the testbed. This includes a confidence voter to resolve coincident errors in N-Version software. A reliability model of N-Version software that is based upon the recent understanding of software failure mechanisms is also developed. The basic FTP-AP architectural concept appears suitable for hosting N-Version application software while at the same time tolerating hardware failures. Architectural enhancements for greater efficiency, software reliability modeling, and N-Version issues that merit further research are identified.
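A minimal sketch of majority voting over N-version outputs, with a tolerance so that near-agreeing floating-point results (e.g., redundant control-law commands) still form a majority. The confidence-voter logic in the paper is more elaborate; this shows only the basic idea, with assumed names and tolerances.

```python
def vote(outputs, tol=1e-3):
    """Return a majority value among N redundant outputs, or None if no majority."""
    for candidate in outputs:
        agreeing = [o for o in outputs if abs(o - candidate) <= tol]
        if len(agreeing) > len(outputs) / 2:
            return sum(agreeing) / len(agreeing)   # average of the majority
    return None                                    # no majority: flag a failure

print(vote([0.5021, 0.5019, 0.5020, 0.9743]))  # one faulty version is outvoted
print(vote([0.1, 0.4, 0.7, 0.9]))              # total disagreement -> None
```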
DOE Office of Scientific and Technical Information (OSTI.GOV)
Siler, Drew L; Faulds, James E; Mayhew, Brett
2013-04-16
Geothermal systems in the Great Basin, USA, are controlled by a variety of fault intersection and fault interaction areas. Understanding the specific geometry of the structures most conducive to broad-scale geothermal circulation is crucial both to the mitigation of the costs of geothermal exploration (especially drilling) and to the identification of geothermal systems that have no surface expression (blind systems). 3-dimensional geologic modeling is a tool that can elucidate the specific stratigraphic intervals and structural geometries that host geothermal reservoirs. Astor Pass, NV, USA lies just beyond the northern extent of the dextral Pyramid Lake fault zone near the boundary between two distinct structural domains, the Walker Lane and the Basin and Range, and exhibits characteristics of each setting. Both northwest-striking, left-stepping dextral faults of the Walker Lane and kinematically linked northerly striking normal faults associated with the Basin and Range are present. Previous studies at Astor Pass identified a blind geothermal system controlled by the intersection of west-northwest and north-northwest striking dextral-normal faults. Wells drilled into the southwestern quadrant of the fault intersection yielded 94°C fluids, with geothermometers suggesting a maximum reservoir temperature of 130°C. A 3-dimensional model was constructed based on detailed geologic maps and cross-sections, 2-dimensional seismic data, and petrologic analysis of the cuttings from three wells in order to further constrain the structural setting. The model reveals the specific geometry of the fault interaction area at a level of detail beyond what geologic maps and cross-sections can provide.
Ground-motion signature of dynamic ruptures on rough faults
NASA Astrophysics Data System (ADS)
Mai, P. Martin; Galis, Martin; Thingbaijam, Kiran K. S.; Vyas, Jagdish C.
2016-04-01
Natural earthquakes occur on faults characterized by large-scale segmentation and small-scale roughness. This multi-scale geometrical complexity controls the dynamic rupture process, and hence strongly affects the radiated seismic waves and near-field shaking. For a fault system with given segmentation, the question arises what the conditions are for producing large-magnitude multi-segment ruptures, as opposed to smaller single-segment events. Similarly, for variable degrees of roughness, ruptures may be arrested prematurely or may break the entire fault. In addition, fault roughness induces rupture incoherence that determines the level of high-frequency radiation. Using HPC-enabled dynamic-rupture simulations, we generate physically self-consistent rough-fault earthquake scenarios (M~6.8) and their associated near-source seismic radiation. Because these computations are too expensive to be conducted routinely for simulation-based seismic hazard assessment, we strive to develop an effective pseudo-dynamic source characterization that produces (almost) the same ground-motion characteristics. Therefore, we examine how variable degrees of fault roughness affect rupture properties and the seismic wavefield, and develop a planar-fault kinematic source representation that emulates the observed dynamic behaviour. We propose an effective workflow for improved pseudo-dynamic source modelling that incorporates rough-fault effects and the associated high-frequency radiation in broadband ground-motion computation for simulation-based seismic hazard assessment.
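A minimal sketch of generating the kind of self-similar (fractal) fault-roughness profile mentioned above by spectral synthesis: random phases combined with a power-law amplitude spectrum. The Hurst exponent and amplitude-to-wavelength ratio are assumed values typical of rough-fault studies, not the scenario parameters used in the paper.

```python
import numpy as np

def rough_profile(n=1024, dx=25.0, hurst=0.8, rms_to_length=1e-2, seed=0):
    """1-D self-affine roughness profile (m), sampled every dx meters."""
    rng = np.random.default_rng(seed)
    k = np.fft.rfftfreq(n, d=dx)
    amp = np.zeros_like(k)
    amp[1:] = k[1:] ** (-(0.5 + hurst))          # power-law spectral falloff
    phase = rng.uniform(0, 2 * np.pi, len(k))
    h = np.fft.irfft(amp * np.exp(1j * phase), n=n)
    return h * (rms_to_length * n * dx) / np.std(h)  # set rms relative to length

h = rough_profile()
print(f"profile length {1024 * 25 / 1000:.1f} km, rms roughness {np.std(h):.1f} m")
```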
GIA induced intraplate seismicity in northern Central Europe
NASA Astrophysics Data System (ADS)
Brandes, Christian; Steffen, Holger; Steffen, Rebekka; Wu, Patrick
2015-04-01
Though northern Central Europe is regarded as a low-seismicity area (Leydecker and Kopera, 1999), several historic earthquakes with intensities of up to VII affected the area in the last 1200 years (Leydecker, 2011). The trigger for these seismic events has not been sufficiently investigated yet. Based on the combination of historic earthquake epicentres with the most recent fault maps, we show that the historic seismicity concentrated at major reverse faults. There is no evidence for significant historic earthquakes along normal faults in northern Central Europe. The spatial and temporal distribution of earthquakes (clusters that shift from time to time) implies that northern Central Europe behaves like a typical intraplate tectonic region, as demonstrated for other intraplate settings (Liu et al., 2011). We utilized Finite Element models that describe the process of glacial isostatic adjustment (GIA) to analyse the fault behaviour. We use the change in Coulomb Failure Stress (dCFS) to represent the minimum stress required to reach faulting. A negative dCFS value indicates that the fault is stable, while a positive value means that GIA stress is potentially available to induce faulting or cause fault instability or failure unless temporarily released by an earthquake. The results imply that many faults in Central Europe are postglacial faults, though they developed outside the glaciated area. This is supported by the characteristics of the dCFS graphs, which indicate the likelihood that an earthquake is related to GIA. Almost all graphs show a change from negative to positive values during the deglaciation phase. This observation sheds new light on the distribution of post-glacial faults in general. Based on field data and the numerical simulations, we developed the first consistent model that can explain the occurrence of deglaciation seismicity and more recent historic earthquakes in northern Central Europe. Based on our model, the historic seismicity in northern Central Europe can be regarded as a kind of aftershock sequence of the GIA-induced seismicity. References: Leydecker, G. and Kopera, J.R. Seismological hazard assessment for a site in Northern Germany, an area of low seismicity. Engineering Geology 52, 293-304 (1999). Leydecker, G. Erdbebenkatalog für die Bundesrepublik Deutschland mit Randgebieten für die Jahre 800-2008. Geologisches Jahrbuch Reihe E, 198 pp. (2011). Liu, M., Stein, S. and Wang, H. 2000 years of migrating earthquakes in north China: How earthquakes in midcontinents differ from those at plate boundaries. Lithosphere 3, 128-132 (2011).
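For reference, the Coulomb failure stress change used in analyses like the one above is commonly written dCFS = d_tau + mu' * d_sigma_n, where d_tau is the shear stress change in the slip direction, d_sigma_n the normal stress change (positive for unclamping), and mu' an effective friction coefficient. The sketch below evaluates this relation with illustrative values only.

```python
def dCFS(d_shear_MPa, d_normal_MPa, mu_eff=0.4):
    """Coulomb failure stress change (MPa); positive values promote failure."""
    return d_shear_MPa + mu_eff * d_normal_MPa

print(dCFS(0.15, -0.10))   # shear loading despite clamping: +0.11 MPa -> promoted
print(dCFS(-0.05, -0.20))  # shear unloading and clamping: -0.13 MPa -> stable
```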
NASA Astrophysics Data System (ADS)
Wang, J.; Xu, C.; Furlong, K.; Zhong, B.; Xiao, Z.; Yi, L.; Chen, T.
2017-12-01
Although Coulomb stress changes induced by earthquake events have been used to quantify stress transfers and to retrospectively explain stress triggering among earthquake sequences, realistic, reliable prospective earthquake forecasting remains scarce. To generate a robust Coulomb stress map for earthquake forecasting, uncertainties in Coulomb stress changes associated with the source fault, the receiver fault, the friction coefficient, and Skempton's coefficient need to be exhaustively considered. In this paper, we specifically explore the uncertainty in slip models of the source fault of the 2011 Mw 9.0 Tohoku-oki earthquake as a case study. This earthquake was chosen because of its wealth of finite-fault slip models. Based on those slip models, we compute the coseismic Coulomb stress changes induced by this mainshock. Our results indicate that nearby Coulomb stress changes for each slip model can be quite different, both for the Coulomb stress map at a given depth and on the Pacific subducting slab. The triggering rates for three months of aftershocks of the mainshock, with and without considering the uncertainty in slip models, differ significantly, decreasing from 70% to 18%. Reliable Coulomb stress changes in the three seismogenic zones of Nankai, Tonankai and Tokai are insignificant, approximately only 0.04 bar. By contrast, the portions of the Pacific subducting slab at a depth of 80 km and beneath Tokyo received a positive Coulomb stress change of approximately 0.2 bar. The standard errors of the seismicity rate and earthquake probability based on the Coulomb rate-and-state model (CRS) decay much faster with elapsed time in stress-triggering zones than in stress shadows, meaning that the uncertainties in Coulomb stress changes in stress-triggering zones would not drastically affect assessments of the seismicity rate and earthquake probability based on the CRS in the intermediate to long term.
Reset Tree-Based Optical Fault Detection
Lee, Dong-Geon; Choi, Dooho; Seo, Jungtaek; Kim, Howon
2013-01-01
In this paper, we present a new reset tree-based scheme to protect cryptographic hardware against optical fault injection attacks. As one of the most powerful invasive attacks on cryptographic hardware, optical fault attacks cause semiconductors to misbehave by injecting high-energy light into a decapped integrated circuit. The contaminated result from the affected chip is then used to reveal secret information, such as a key, from the cryptographic hardware. Since the advent of such attacks, various countermeasures have been proposed. Although most of these countermeasures are strong, there is still the possibility of attack. In this paper, we present a novel optical fault detection scheme that utilizes the buffers on a circuit's reset signal tree as a fault detection sensor. To evaluate our proposal, we model radiation-induced currents in circuit components and perform a SPICE simulation. The proposed scheme is expected to be used as a supplemental security tool. PMID:23698267
The 2016 central Italy earthquake sequence: surface effects, fault model and triggering scenarios
NASA Astrophysics Data System (ADS)
Chatzipetros, Alexandros; Pavlides, Spyros; Papathanassiou, George; Sboras, Sotiris; Valkaniotis, Sotiris; Georgiadis, George
2017-04-01
The results of fieldwork performed during the 2016 earthquake sequence around the karstic basins of Norcia and La Piana di Castelluccio, at an altitude of 1400 m, on Monte Vettore (altitude 2476 m) and Vettoretto, as well as the three mapped seismogenic faults, striking NNW-SSE, are presented in this paper. Surface co-seismic ruptures were observed along the Vettore and Vettoretto segment of the fault for several kilometres (~7 km) in the August earthquakes at high altitudes, and were re-activated and expanded northwards during the October earthquakes. Coseismic ruptures and the neotectonic Mt. Vettore fault zone were modelled in detail using images acquired from specifically planned UAV (drone) flights. Ruptures, typically with displacements of up to 20 cm, were observed after the August event both in the scree and weathered mantle (eluvium) and in the bedrock, consisting mainly of fragmented carbonate rocks with small tectonic surfaces. These fractures expanded, and new ones formed during the October events, typically with displacements of up to 50 cm, although locally higher displacements of up to almost 2 m were observed. Hundreds of rock falls and landslides were mapped through satellite imagery, using pre- and post-earthquake Sentinel 2A images. Several of them were also verified in the field. Based on field mapping results and seismological information, the causative faults were modelled. The model consists of five seismogenic sources, each associated with a strong event in the sequence. The visualisation of the seismogenic sources follows INGV's DISS standards for the Individual Seismogenic Sources (ISS) layer, while the strike, dip and rake of the seismic sources are obtained from selected focal mechanisms. Based on this model, the ground deformation pattern was inferred using Okada's dislocation solution formulae, which shows that the maximum calculated vertical displacement is 0.53 m. This is in good agreement with the statistical analysis of the observed surface rupture displacement. Stress transfer analysis was also performed on the five modelled seismogenic sources, using seismologically defined parameters. The resulting stress transfer pattern, based on the sequence of events, shows that the causative fault of each event was influenced by loading from the previous ones.
NASA Astrophysics Data System (ADS)
Cheng, Li-Wei; Lee, Jian-Cheng; Hu, Jyr-Ching; Chen, Horng-Yue
2009-03-01
The Chengkung earthquake with ML = 6.6 occurred in eastern Taiwan at 12:38 local time on December 10th, 2003. Based on the mainshock relocation and aftershock distribution, the Chengkung earthquake occurred along the previously recognized N20°E-trending Chihshang fault. This event did not cause loss of life, but significant cracks developed at the ground surface and damaged some buildings. After the 1951 Taitung earthquake, no earthquake larger than ML 6 occurred in this region until the Chengkung earthquake. As a result, the Chengkung earthquake is a good opportunity to study the seismogenic structure of the Chihshang fault. The coseismic displacements recorded by GPS show a fan-shaped distribution with a maximum displacement of about 30 cm near the epicenter. The aftershocks of the Chengkung earthquake reveal an apparent linear distribution, which helps us to construct a clear fault geometry for the Chihshang fault. In this study, we employ a half-space angular elastic dislocation model with GPS observations to determine the slip distribution and seismological behavior of the Chengkung earthquake on the Chihshang fault. The elastic half-space dislocation model reveals that the Chengkung earthquake is a thrust event with a minor left-lateral strike-slip component. The maximum coseismic slip is located around a depth of 20 km and reaches 1.1 m. The slip gradually decreases to less than 10 cm near the surface part of the Chihshang fault. The seismogenic fault plane, constructed by the delineation of the aftershocks, demonstrates that the Chihshang fault is a high-angle fault. However, the fault plane changes to a flat plane at a depth of 20 km. In addition, a significant part of the measured deformation across the surface fault zone for this earthquake can be attributed to postseismic creep. The postseismic elastic dislocation model shows that most afterslip is distributed in the upper level of the Chihshang fault, and that the afterslip consists of both dip-slip and left-lateral components. The model results show that the Chihshang fault may have been partially locked or damped near the surface during coseismic slip. After the mainshock, the strain that had accumulated near the surface was released by postseismic creep, resulting in significant postseismic deformation.
NASA Astrophysics Data System (ADS)
Bonanno, Emanuele; Bonini, Lorenzo; Basili, Roberto; Toscani, Giovanni; Seno, Silvio
2016-04-01
Fault-related folding kinematic models are widely used to explain how crustal shortening is accommodated. These models, however, include simplifications such as the assumption of a constant fault growth rate. This rate is sometimes not constant even in isotropic materials, and is even more variable in naturally anisotropic geological systems, which means that these simplifications could lead to incorrect interpretations. In this study, we use analogue models to evaluate how thin mechanical discontinuities, such as bedding or thin weak layers, influence the propagation of reverse faults and related folds. The experiments are performed with two different settings to simulate initially blind master faults dipping at 30° and 45°. The 30° dip represents one of the Andersonian conjugate faults, and a 45° dip is very frequent in positive reactivation of normal faults. The experimental apparatus consists of a clay layer placed above two plates: one plate, the footwall, is fixed; the other, the hanging wall, is mobile. Motor-controlled sliding of the hanging wall plate along an inclined plane reproduces the reverse fault movement. We ran thirty-six experiments: eighteen with a dip of 30° and eighteen with a dip of 45°. For each dip-angle setting, we initially ran isotropic experiments that serve as a reference. We then ran the other experiments with one or two discontinuities (horizontal precuts made in the clay layer). We monitored the experiments by collecting side photographs every 1.0 mm of displacement of the master fault. These images were analyzed with the PIVlab software, a tool based on the Digital Image Correlation method. With the "displacement field analysis" (one of the PIVlab tools) we evaluated the variation of the trishear zone shape and how the master-fault tip and newly formed faults propagate into the clay medium. With the "strain distribution analysis", we observed the amount of on-fault and off-fault deformation with respect to the faulting pattern and evolution. Secondly, using the MOVE software, we extracted the positions of fault tips and folds every 5 mm of displacement on the master fault. Analyzing these positions in all of the experiments, we found that the growth rate of the faults and the related fold shape vary depending on the number of discontinuities in the clay medium. Other results can be summarized as follows: 1) the fault growth rate is not constant, but varies especially while the new faults interact with the precuts; 2) the new faults tend to crosscut the discontinuities when the angle between them is approximately 90°; 3) the trishear zone changes its shape during the experiments, especially when the main fault interacts with the discontinuities.
A Hybrid Generalized Hidden Markov Model-Based Condition Monitoring Approach for Rolling Bearings
Liu, Jie; Hu, Youmin; Wu, Bo; Wang, Yan; Xie, Fengyun
2017-01-01
The operating condition of rolling bearings affects productivity and quality in rotating machine processes. Developing an effective rolling bearing condition monitoring approach is critical to accurately identify the operating condition. In this paper, a hybrid generalized hidden Markov model-based condition monitoring approach for rolling bearings is proposed, where interval-valued features are used to efficiently recognize and classify machine states in the machine process. In the proposed method, vibration signals are decomposed into multiple modes with variational mode decomposition (VMD). Parameters of the VMD, in the form of generalized intervals, provide a concise representation of aleatory and epistemic uncertainty and improve the robustness of identification. The multi-scale permutation entropy method is applied to extract state features from the decomposed signals in different operating conditions. Traditional principal component analysis is adopted to reduce feature size and computational cost. With the extracted feature information, the generalized hidden Markov model, based on generalized interval probability, is used to recognize and classify the fault types and fault severity levels. Finally, the experimental results show that the proposed method is effective at recognizing and classifying the fault types and fault severity levels of rolling bearings. This monitoring method is also efficient enough to quantify the two uncertainty components. PMID:28524088
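A minimal sketch of the (single-scale) permutation entropy used as a state feature above: count ordinal patterns of length m in the signal and take the Shannon entropy of their distribution, normalized by log(m!). Multi-scale permutation entropy repeats this on coarse-grained versions of the signal; the parameters below are common illustrative choices, not the paper's settings.

```python
import math
from collections import Counter
import numpy as np

def permutation_entropy(x, m=3, delay=1):
    """Normalized permutation entropy of a 1-D signal (0 = regular, 1 = random)."""
    patterns = [tuple(np.argsort(x[i:i + m * delay:delay]))
                for i in range(len(x) - (m - 1) * delay)]
    counts = np.array(list(Counter(patterns).values()), dtype=float)
    p = counts / counts.sum()
    return float(-np.sum(p * np.log(p)) / math.log(math.factorial(m)))

t = np.linspace(0, 10 * np.pi, 500)
print(f"sine:  {permutation_entropy(np.sin(t)):.3f}")   # low: regular signal
print(f"noise: {permutation_entropy(np.random.default_rng(0).normal(size=500)):.3f}")
```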
NASA Astrophysics Data System (ADS)
Charalambakis, E.; Hauber, E.; Knapmeyer, M.; Grott, M.; Gwinner, K.
2007-08-01
For Earth, data sets and models have shown that for a fault loaded by a constant remote stress, the maximum displacement on the fault is linearly related to its length by d = gamma · l [1]. The scaling and structure are self-similar through time [1]. The displacement-length relationship can provide useful information about the tectonic regime. We intend to use it to estimate the seismic moment released during the formation of Martian fault systems and to improve the seismicity model [2]. Only few data sets have been measured for extraterrestrial faults; one reason is the limited number of reliable topographic data sets. We used high-resolution Digital Elevation Models (DEMs) [3] derived from HRSC image data taken from Mars Express orbit 1437. This orbit covers an area in the Acheron Fossae region, a rift-like graben system north of Olympus Mons with a "banana"-shaped topography [4]. It has a fault trend which runs approximately WNW-ESE. With an interactive IDL-based software tool [5] we measured the fault length and the vertical offset for 34 faults. We evaluated the height profiles by plotting the fault lengths l vs. their observed maximum displacement (dmax-model). Additionally, we computed the maximum displacement of an elliptical fault scarp whose plane has the same area as in the observed case (elliptical model). The integration over the entire fault length necessary for the computation of the area suppresses the "noise" introduced by local topographic effects like erosion or cratering. We should also mention that fault planes dipping 60° are usually assumed for Mars [e.g., 6], and even shallower dips have been found for normal fault planes [7]. The dip angle is used to compute displacement from vertical offset via d = h/sin(α), where h is the observed topographic step height and α is the fault dip angle. If fault dip angles of 30° are considered instead, the displacement differs by about 40% from that obtained for dip angles of 60°. Depending on the data quality, especially the lighting conditions in the region, different errors can arise in determining the various values. Based on our experience, we estimate that the error in measuring the length of a fault is smaller than 10% and that the measurement error of the offset is smaller than 5%. Furthermore, the horizontal resolution of the HRSC images is 12.5 m/pixel or 25 m/pixel, and that of the DEM derived from HRSC images is 50 m/pixel because of re-sampling. That means that image resolution does not introduce a significant error at fault lengths in the kilometer range. For the case of Mars it is known that linkage is an essential process in the growth of fault populations [8]. We obtained the d/l values from selected examples of faults that were connected via a relay ramp. The error of ignoring an existing fault linkage is 20% to 50% if the elliptical fault model is used and 30% to 50% if only the dmax value is used to determine d/l. This shows an advantage of the elliptical model. The error increases if more faults are linked, because the underestimation of the relevant length gets worse the longer the linked system is. We obtained a value of gamma = d/l of about 2 · 10⁻² for the elliptical model and a value of approximately 2.7 · 10⁻² for the dmax-model. The data show a relatively large scatter, but they can be compared to data from terrestrial faults (d/l ≈ 1 · 10⁻² to 5 · 10⁻²; [9] and references therein). In a first inspection of the Acheron Fossae region in orbit 1437 we could confirm our first observations [10].
If we consider fault linkage, the d/l values shift towards lower ratios, since linkage means that d remains essentially constant while l increases significantly. We will continue to measure other faults and obtain values for linked faults and relay ramps. References: [1] Cowie, P. A. and Scholz, C. H. (1992) JSG, 14, 1133-1148. [2] Knapmeyer, M. et al. (2006) JGR, 111, E11006. [3] Neukum, G. et al. (2004) ESA SP-1240, 17-35. [4] Kronberg, P. et al. (2007) J. Geophys. Res., 112, E04005, doi:10.1029/2006JE002780. [5] Hauber, E. et al. (2007) LPSC, XXXVIII, abstract 1338. [6] Wilkins, S. J. et al. (2002) GRL, 29, 1884, doi:10.1029/2002GL015391. [7] Fueten, F. et al. (2007) LPSC, XXXVIII, abstract 1388. [8] Schultz, R. A. (2000) Tectonophysics, 316, 169-193. [9] Schultz, R. A. et al. (2006) JSG, 28, 2182-2193. [10] Hauber, E. et al. (2007) 7th Mars Conference, submitted.
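A minimal numeric sketch of the dip correction and scaling ratio discussed above (the step height and fault length are illustrative placeholders, not measurements from the study):

```python
import numpy as np

# Convert an observed vertical scarp height h into fault-plane
# displacement d = h / sin(alpha), then form gamma = d / l.
def displacement_from_offset(h_m, dip_deg):
    return h_m / np.sin(np.radians(dip_deg))

h = 250.0       # observed topographic step height, m (assumed)
l = 12_000.0    # measured fault length, m (assumed)
for dip in (60.0, 30.0):
    d = displacement_from_offset(h, dip)
    print(f"dip {dip:.0f} deg: d = {d:.1f} m, gamma = {d / l:.2e}")
```

For fixed h, the 30-degree case yields a displacement about 73% larger than the 60-degree case, which matches the ~40% difference quoted above when expressed relative to the larger value.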
NO-FAULT COMPENSATION FOR MEDICAL INJURIES: TRENDS AND CHALLENGES.
Kassim, Puteri Nemie
2014-12-01
As an alternative to the tort or fault-based system, a no-fault compensation system has been viewed as having the potential to overcome problems inherent in the tort system by providing fair, speedy and adequate compensation for medically injured victims. Proponents of the suggested no-fault compensation system have argued that this system is more efficient in terms of time and money, as well as in making much clearer the circumstances in which compensation is paid. However, the arguments against no-fault compensation systems mainly concern funding difficulties, accountability and deterrence, particularly once fault is taken out of the equation. Nonetheless, the no-fault compensation system has been successfully implemented in various countries but, at the same time, rejected in some others as not being implementable. In the present trend, the no-fault system seems to fit the needs of society by offering greater access to justice for medically injured victims and providing a clearer "road map" towards obtaining suitable redress. This paper aims to provide readers with an overview of the characteristics of the no-fault compensation system and some examples of countries that have implemented it. The study employs qualitative research (content analysis). Given the many problems and hurdles posed by the tort or fault-based system, it is questionable whether it can efficiently play its role as a mechanism that affords fair and adequate compensation for victims of medical injuries. However, while a comprehensive no-fault compensation system offers a tempting alternative to the tort or fault-based system, importing such a change into our local scenario requires a great deal of consideration. There are major differences, mainly in terms of social standing, size of population, political ideology and financial commitment, between Malaysia and countries that have successfully implemented no-fault systems. Nevertheless, implementing a no-fault compensation system in Malaysia is not entirely impossible. A custom-made no-fault model tailored to the local scenario can be promising, provided that thorough research is undertaken to assess the viability of a no-fault system in Malaysia, address the inherent problems and, consequently, design a workable system.
Geometry of Thrust Faults Beneath Amenthes Rupes, Mars
NASA Technical Reports Server (NTRS)
Vidal, A.; Mueller, K. M.; Golombek, M. P.
2005-01-01
Amenthes Rupes is a 380 km-long lobate fault scarp located in the eastern hemisphere of Mars near the dichotomy boundary. The scarp is marked by about 1 km of vertical separation across a northeast-dipping thrust fault (top to the SW) and offsets heavily cratered terrain of Late Noachian age, the visible portion of which was in place by 3.92 Ga and the buried portion in place between 4.08 and 4.27 Ga. The timing of scarp formation is difficult to closely constrain. Previous geologic mapping shows that near the northern end of Amenthes Rupes, Hesperian-age basalts terminate at the scarp, suggesting that fault slip predated the emplacement of these flows at 3.69 to 3.9 Ga. Maxwell and McGill also suggest that faulting ceased before the final emplacement of the Late Hesperian lavas on Isidis Planitia. The trend of the faults at Amenthes, like many thrust faults at the dichotomy boundary, parallels the boundary itself. Schultz and Watters used a dislocation modeling program to match surface topography and vertical offset of the scarp at Amenthes Rupes, varying the dip and depth of faulting and assuming a slip of 1.5 km on the fault. They modeled faulting below Amenthes Rupes as having a dip of between 25 and 30 degrees and a depth of 25 to 35 km, based on the best match to topography. Assuming a 25 degree dip and surface measurements of vertical offset of between 0.3 and 1.2 km, Watters later estimated the maximum displacement on the Amenthes Rupes fault to be 2.90 km. However, these studies did not determine the geometry of the thrust using quantitative constraints that included shortening estimates. Amenthes Rupes deforms large preexisting impact craters. We use these craters to constrain shortening across the scarp and combine this with vertical separation to infer fault geometry. Fault dip was also estimated using measurements of scarp morphology. Measurements were based on 460 m (1/128 degree per pixel) digital elevation data from the Mars Orbiter Laser Altimeter (MOLA), an instrument on the Mars Global Surveyor (MGS) satellite.
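For a planar fault, the shortening-based constraint described above reduces to simple trigonometry; a hedged sketch with illustrative numbers (only the ~1 km vertical separation comes from the abstract):

```python
import math

# Planar-thrust geometry: horizontal shortening s (constrained by deformed
# craters) and vertical separation h give dip = atan(h/s) and slip along
# the fault plane sqrt(s^2 + h^2). s is an assumed value for illustration.
s = 2.2e3   # horizontal shortening, m (assumed)
h = 1.0e3   # vertical separation, m (from the abstract)
dip = math.degrees(math.atan2(h, s))
slip = math.hypot(s, h)
print(f"dip ~ {dip:.1f} deg, slip ~ {slip / 1e3:.2f} km")
```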
The 2013, Mw 7.7 Balochistan earthquake, energetic strike-slip reactivation of a thrust fault
NASA Astrophysics Data System (ADS)
Avouac, Jean-Philippe; Ayoub, Francois; Wei, Shengji; Ampuero, Jean-Paul; Meng, Lingsen; Leprince, Sebastien; Jolivet, Romain; Duputel, Zacharie; Helmberger, Don
2014-04-01
We analyse the Mw 7.7 Balochistan earthquake of 09/24/2013 based on ground surface deformation measured from sub-pixel correlation of Landsat-8 images, combined with back-projection and finite source modeling of teleseismic waveforms. The earthquake nucleated south of the Chaman strike-slip fault and propagated southwestward along the Hoshab fault at the front of the Kech Band. The rupture was mostly unilateral, propagated at 3 km/s on average, and produced a 200 km surface fault trace with purely strike-slip displacement peaking at 10 m and averaging around 6 m. The finite source model shows that slip was maximum near the surface. Although the Hoshab fault dips 45° to the north, in accordance with its origin as a thrust fault within the Makran accretionary prism, slip was nearly purely strike-slip during this earthquake. Large seismic slip on such a non-optimally oriented fault was possibly enhanced by the influence of the free surface on dynamic stresses or by particular properties of the fault zone allowing for strong dynamic weakening. Strike-slip faulting on a thrust fault within the eastern Makran is interpreted as due to eastward extrusion of the accretionary prism as it bulges out over the Indian plate. Portions of the Makran megathrust, some thrust faults in the Kirthar range, and strike-slip faults within the Chaman fault system have been brought closer to failure by this earthquake. Aftershocks cluster within the Chaman fault system north of the epicenter, opposite to the direction of rupture propagation. By contrast, few aftershocks were detected in the area of maximum moment release. In this example, aftershocks cannot be used to infer earthquake characteristics.
NASA Astrophysics Data System (ADS)
Yang, Z.; Juanes, R.
2015-12-01
The geomechanical processes associated with subsurface fluid injection/extraction are of central importance for many industrial operations related to energy and water resources. However, the mechanisms controlling the stability and slip motion of a preexisting geologic fault remain poorly understood and are critical for the assessment of seismic risk. In this work, we develop a coupled hydro-geomechanical model to investigate the effect of injection-induced pressure perturbation on the slip behavior of a sealing fault. The model couples single-phase flow in the pores with mechanics of the solid phase. Granular packs (see example in Fig. 1a) are numerically generated in which the grains can be either bonded or not, depending on the degree of cementation. A pore network is extracted for each granular pack, with pore body volumes and pore throat conductivities calculated rigorously based on the geometry of the local pore space. The pore fluid pressure is solved via an explicit scheme, taking into account the deformation of the solid matrix. The mechanics part of the model is solved using the discrete element method (DEM). We first test the validity of the model against the classical one-dimensional consolidation problem, for which an analytical solution exists. We then demonstrate the ability of the coupled model to reproduce rock deformation behavior measured in triaxial laboratory tests under the influence of pore pressure. We proceed to study fault stability in the presence of a pressure discontinuity across an impermeable fault, which is implemented as a plane whose intersected pore throats are deactivated, obstructing fluid flow (Fig. 1b, c). We focus on the onset of shear failure along preexisting faults. We discuss the fault stability criterion in light of the numerical results obtained from the DEM simulations coupled with pore fluid flow. The implications for how faults should be treated in a large-scale continuum model are also presented.
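The 1-D consolidation benchmark the authors use for validation has the classical Terzaghi series solution; a self-contained sketch (all parameter values are illustrative):

```python
import numpy as np

# Terzaghi's solution for excess pore pressure u(z, t) in a layer drained
# at the top: u = sum over odd modes of (2*p0/M) sin(M z/H) exp(-M^2 cv t / H^2),
# with M = pi*(2k+1)/2. z is measured from the drained surface.
def terzaghi_u(z, t, H, cv, p0, n_terms=200):
    u = np.zeros_like(z, dtype=float)
    for k in range(n_terms):
        M = np.pi * (2 * k + 1) / 2.0
        u += (2.0 * p0 / M) * np.sin(M * z / H) * np.exp(-M**2 * cv * t / H**2)
    return u

z = np.linspace(0.0, 1.0, 11)   # depth below drained boundary, m
print(terzaghi_u(z, t=1.0e4, H=1.0, cv=1.0e-6, p0=100.0e3))
```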
Measurement of fault latency in a digital avionic miniprocessor
NASA Technical Reports Server (NTRS)
Mcgough, J. G.; Swern, F. L.
1981-01-01
The results of fault injection experiments utilizing a gate-level emulation of the central processor unit of the Bendix BDX-930 digital computer are presented. The failure detection coverage of comparison-monitoring and of a typical avionics CPU self-test program was determined. The specific tasks and experiments included: (1) injecting randomly selected gate-level and pin-level faults and emulating six software programs, using comparison-monitoring to detect the faults; (2) based upon the derived empirical data, developing and validating a model of fault latency that forecasts a software program's detection ability; (3) given a typical avionics self-test program, injecting randomly selected faults at both the gate level and pin level and determining the proportion of faults detected; (4) determining why faults were undetected; (5) recommending how the emulation can be extended to multiprocessor systems such as SIFT; and (6) determining the proportion of faults detected by a uniprocessor BIT (built-in test) irrespective of self-test.
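The coverage figures produced by such experiments are sample proportions; a minimal sketch of the estimate with a normal-approximation confidence interval, using hypothetical counts rather than the BDX-930 results:

```python
import math

# Detection coverage = detected / injected, with a 95% normal-approximation
# confidence interval. Counts below are hypothetical.
detected, injected = 912, 1000
c = detected / injected
half = 1.96 * math.sqrt(c * (1 - c) / injected)
print(f"coverage = {c:.3f} +/- {half:.3f} (95% CI)")
```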
NASA Astrophysics Data System (ADS)
Bergh, Steffen; Sylvester, Arthur; Damte, Alula; Indrevær, Kjetil
2014-05-01
The San Andreas fault in southern California records only a few large-magnitude earthquakes in historic time, and recent activity is confined primarily to irregular and discontinuous strike-slip and thrust fault strands at shallow depths of ~5-20 km. Despite this, slip along the San Andreas fault is calculated at c. 35 mm/yr, based on c. 160 km of total right-lateral displacement for the southern segment of the fault in the last c. 8 Ma. Field observations also reveal complex fault strands and multiple events of deformation. The presently diffuse high-magnitude crustal movements may be explained by the deformation being largely distributed along more gently dipping reverse faults in fold-thrust belts, in contrast to regions to the north where deformation is less partitioned and localized to narrow strike-slip fault zones. In the Mecca Hills of the Salton trough, transpressional deformation of an uplifted segment of the San Andreas fault in the last ca. 4.0 Myr is expressed by very complex fault-oblique and fault-parallel (en echelon) folding, zones of uplift (fold-thrust belts), basement-involved reverse and strike-slip faults, and accompanying multiple and pervasive cataclasis and conjugate fracturing of Miocene to Pleistocene sedimentary strata. Our structural analysis of the Mecca Hills addresses the kinematic nature of the San Andreas fault and the mechanisms of uplift and strain-stress distribution along bent fault strands. The San Andreas fault and subsidiary faults define a wide spectrum of kinematic styles, from steep localized strike-slip faults, to moderately dipping faults related to oblique en echelon folds, to gently dipping faults distributed in fold-thrust belt domains. Therefore, the San Andreas fault is not a through-going, steep strike-slip crustal structure, which is commonly the basis for crustal modeling and earthquake rupture models. The fault trace was steep initially, but was later deformed and modified in multiple phases by oblique en echelon folding, renewed strike-slip movements, and contractile fold-thrust belt structures. Notably, the strike-slip movements on the San Andreas fault were transferred outward into the surrounding rocks as oblique-reverse faults to link up with the subsidiary Skeleton Canyon fault in the Mecca Hills. Instead of a classic flower structure model for this transpressional uplift, the San Andreas fault strands were segmented into domains that record: (i) early strike-slip motion; (ii) later oblique shortening with distributed deformation (en echelon fold domains); followed by (iii) localized fault-parallel deformation (strike-slip); and (iv) superposed out-of-sequence faulting and fault-normal, partitioned deformation (fold-thrust belt domains). These results bear on the question of whether spatial and temporal fold-fault branching and migration patterns evolving along non-vertical strike-slip fault segments can play a role in the localization of earthquakes along the San Andreas fault.
NASA Astrophysics Data System (ADS)
Altintas, Ali Can
The goal of this project is to combine gravity measurements with geologic observations to better understand the "Big Bend" of the San Andreas Fault (SAF) and its role in producing hydrocarbon-bearing structures in the southern Central Valley of California. The SAF is the main plate boundary structure between the Pacific and North American plates and accommodates ≈35 mm/yr of dextral motion. The SAF can be divided into three main parts: the northern, central and southern segments. The boundary between the central and southern segments is the "Big Bend", which is characterized by an ≈30° eastward bend. This fault curvature led to the creation of a series of roughly east-west thrust faults and the transverse mountain ranges. Four high-resolution gravity transects were conducted across locations on either side of the bend, and a total of 166 new gravity measurements were collected. Previous studies suggest a significantly inclined dip angle for the San Andreas Fault in the Big Bend area; our models, however, indicate that the San Andreas Fault is near vertical there. The gravity cross-section models also suggest that flower structures occur on either side of the bend. These structures are dominated by sedimentary rocks in the north and igneous rocks in the south. The two northern transects in the Carrizo plains have an ≈-70 mGal Bouguer anomaly. The SAF has a strike of ≈315° near these transects. The northern transects are characterized by multiple fault strands which cut marine and terrestrial Miocene sedimentary rocks as well as Quaternary alluvial valley deposits. These fault strands are characterized by ≈6 mGal short-wavelength variations in the Bouguer gravity anomaly, which correspond to low-density fault gouge and fault splays that juxtapose rocks of varying densities. The southern transects cross a part of the SAF with a strike of 285°, have a Bouguer anomaly of ≈-50 mGal, and are characterized by a broad 15 mGal high. At this location the rocks on either side of the fault are Proterozoic to Cretaceous metamorphic and/or plutonic rocks. Previous work based on geologic mapping hypothesized the existence of a shallow, low-angle Abel Mountain Thrust in which crystalline rocks were thrust over Miocene sedimentary rocks near Apache Saddle. However, the gravity models indicate the crystalline rocks are vertically extensive and form a positive flower structure bounded by high-angle faults. Based on the thickness of the fault-adjacent sedimentary cover, the gravity models also suggest a minimum exhumation of 5-6 km for crystalline rocks in the south. Assuming exhumation began with the switch from the transtensional San Gabriel Fault to the transpressional San Andreas Fault at approximately 5 Ma, this indicates exhumation rates of 1 km/Ma. Overall, the broad gravity highs observed along the southern transects are due to uplift of basement rocks in this area.
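As a back-of-envelope check on the short-wavelength gouge anomalies, the Bouguer slab formula delta_g = 2*pi*G*drho*h relates anomaly amplitude to slab thickness; a sketch with an assumed gouge density contrast:

```python
import math

# Thickness of a low-density slab needed to produce a ~6 mGal low.
# The density contrast is an assumption, not a value from the study.
G = 6.674e-11          # gravitational constant, m^3 kg^-1 s^-2
drho = -300.0          # gouge density contrast, kg/m^3 (assumed)
dg = -6.0e-5           # -6 mGal expressed in m/s^2
h = dg / (2 * math.pi * G * drho)
print(f"equivalent slab thickness ~ {h:.0f} m")
```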
NASA Astrophysics Data System (ADS)
Dalguer, Luis A.; Fukushima, Yoshimitsu; Irikura, Kojiro; Wu, Changjiang
2017-09-01
Inspired by the first workshop on Best Practices in Physics-Based Fault Rupture Models for Seismic Hazard Assessment of Nuclear Installations (BestPSHANI), conducted by the International Atomic Energy Agency (IAEA) on 18-20 November 2015 in Vienna (http://www-pub.iaea.org/iaeameetings/50896/BestPSHANI), this PAGEOPH topical volume collects several extended articles from this workshop as well as several new contributions. A total of 17 papers have been selected, on topics ranging from the seismological aspects of earthquake cycle simulations for source-scaling evaluation, seismic source characterization, source inversion and ground motion modeling (based on finite fault rupture using dynamic, kinematic, stochastic and empirical Green's function approaches) to the engineering application of simulated ground motion for the analysis of the seismic response of structures. These contributions include applications to real earthquakes and descriptions of current practice for assessing seismic hazard in terms of nuclear safety in low-seismicity areas, as well as proposals for physics-based hazard assessment for critical structures near large earthquakes. Collectively, the papers of this volume highlight the usefulness of physics-based models for evaluating and understanding the physical causes of observed and empirical data, as well as for predicting ground motion beyond the range of recorded data. Particular importance is given to the validation and verification of the models by comparing synthetic results with observed data and empirical models.
NASA Astrophysics Data System (ADS)
Parra, J.; Vicuña, Cristián Molina
2017-08-01
Planetary gearboxes are important components of many industrial applications. Vibration analysis can increase their lifetime and prevent expensive repairs and safety concerns. However, an effective analysis is only possible if the vibration features of planetary gearboxes are properly understood. In this paper, models are used to study the frequency content of planetary gearbox vibrations under non-fault and different fault conditions. Two different models are considered: a phenomenological model, which is an analytical-mathematical formulation based on observation, and a lumped-parameter model, which is based on the solution of the equations of motion of the system. Results of the two models are not directly comparable, because the phenomenological model provides the vibration along a fixed radial direction, as measured by a vibration sensor mounted on the outer part of the ring gear, whereas the lumped-parameter model provides the vibrations in a rotating reference frame fixed to the carrier. To overcome this situation, a function to decompose the lumped-parameter model solutions to a fixed reference frame is presented. Finally, comparisons of results from both model perspectives and experimental measurements are presented.
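The frame decomposition mentioned above must map carrier-frame components onto the fixed sensor direction; one plausible minimal form is a rotation projection by the carrier angle, sketched here with synthetic signals (the paper's actual decomposition function may be more elaborate):

```python
import numpy as np

# Project carrier-frame radial (a_r) and tangential (a_t) accelerations
# onto a fixed radial measurement direction using the carrier angle
# theta_c(t). All signals and frequencies below are synthetic placeholders.
fs, f_carrier, f_mesh = 10_000.0, 5.0, 300.0
t = np.arange(0.0, 1.0, 1.0 / fs)
a_r = np.sin(2 * np.pi * f_mesh * t)            # rotating-frame radial
a_t = 0.3 * np.cos(2 * np.pi * f_mesh * t)      # rotating-frame tangential
theta_c = 2 * np.pi * f_carrier * t             # carrier rotation angle
a_fixed = a_r * np.cos(theta_c) - a_t * np.sin(theta_c)
print(f"fixed-frame RMS: {np.sqrt(np.mean(a_fixed**2)):.3f}")
```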
Operational Models of Infrastructure Resilience
2015-01-01
Research of influence of open-winding faults on properties of brushless permanent magnets motor
NASA Astrophysics Data System (ADS)
Bogusz, Piotr; Korkosz, Mariusz; Powrózek, Adam; Prokop, Jan; Wygonik, Piotr
2017-12-01
The paper presents an analysis of the influence of selected fault states on the properties of a brushless DC motor with permanent magnets. The subject of study was a BLDC motor designed by the authors for an unmanned aerial vehicle hybrid drive. Each phase of the discussed 3-phase motor comprises four parallel branches. After an open-winding fault in one or a few parallel branches, operation of the motor can continue. Waveforms of currents, voltages and electromagnetic torque were determined for the discussed fault states based on the developed mathematical and simulation models. Laboratory test results concerning the influence of open-winding faults in parallel branches on the properties of the BLDC motor are presented.
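A toy illustration of why operation can continue after an open-branch fault: with n parallel branches per phase, opening k branches only scales the effective phase resistance (values are illustrative, not the motor's parameters):

```python
# Effective phase resistance with some of the n parallel branches open.
def phase_resistance(r_branch, n_total=4, n_open=0):
    n_active = n_total - n_open
    if n_active <= 0:
        raise ValueError("phase fully open")
    return r_branch / n_active

for k in range(4):
    print(f"{k} branch(es) open -> {phase_resistance(0.8, 4, k):.3f} ohm")
```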
An Efficient Model-based Diagnosis Engine for Hybrid Systems Using Structural Model Decomposition
NASA Technical Reports Server (NTRS)
Bregon, Anibal; Narasimhan, Sriram; Roychoudhury, Indranil; Daigle, Matthew; Pulido, Belarmino
2013-01-01
Complex hybrid systems are present in a wide range of engineering applications, such as mechanical systems, electrical circuits, and embedded computation systems. The behavior of these systems is made up of continuous and discrete event dynamics, which increases the difficulty of accurate and timely online fault diagnosis. The Hybrid Diagnosis Engine (HyDE) offers flexibility to the diagnosis application designer to choose the modeling paradigm and the reasoning algorithms. The HyDE architecture supports the use of multiple modeling paradigms at the component and system level. However, HyDE faces some performance problems in terms of complexity and time. Our focus in this paper is on developing efficient model-based methodologies for online fault diagnosis in complex hybrid systems. To do this, we propose a diagnosis framework in which structural model decomposition is integrated within the HyDE diagnosis framework to reduce the computational complexity associated with the fault diagnosis of hybrid systems. As a case study, we apply our approach to a diagnostic testbed, the Advanced Diagnostics and Prognostics Testbed (ADAPT), using real data.
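A schematic (not the HyDE or ADAPT API) of what structural model decomposition buys for diagnosis: each minimal submodel computes one residual from a subset of variables, so the set of fired residuals narrows the fault candidates:

```python
# Hypothetical submodels: each uses a subset of inputs/sensors and is
# sensitive to a subset of faults. Names are invented for illustration.
submodels = {
    "r_tank":  {"inputs": {"u_pump", "y_level"}, "faults": {"tank_leak"}},
    "r_valve": {"inputs": {"u_valve", "y_flow"}, "faults": {"valve_stuck"}},
}

def isolate(fired):
    """Return fault candidates consistent with the set of fired residuals."""
    candidates = set()
    for name in fired:
        candidates |= submodels[name]["faults"]
    return candidates

print(isolate({"r_valve"}))  # -> {'valve_stuck'}
```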
NASA Astrophysics Data System (ADS)
Zhang, X.; Sagiya, T.
2015-12-01
The earth's crust can be divided into the brittle upper crust and the ductile lower crust based on the deformation mechanism. Observations show that heterogeneities in the lower crust are associated with fault zones. One of the candidate mechanisms for strain concentration is shear heating in the lower crust, which has been considered by theoretical studies for interplate faults [e.g. Thatcher & England 1998, Takeuchi & Fialko 2012]. On the other hand, almost no studies have been done for intraplate faults, which are generally much less mature than interplate faults and are characterized by their finite lengths and slow displacement rates. To understand the structural characteristics of the lower crust and its temporal evolution on a geological time scale, we conduct a 2-D numerical experiment on an intraplate strike-slip fault. The lower crust is modeled as a 20 km thick viscous layer overlain by a rigid upper crust that has a steady relative motion across a vertical strike-slip fault. Strain rate in the lower crust is assumed to be the sum of dislocation creep and diffusion creep components, each of which follows an experimental flow law. The geothermal gradient is assumed to be 25 K/km. We have tested different total velocities in the model: for the intraplate fault, the total velocity is less than 1 mm/yr, and for comparison we use 30 mm/yr for the interplate case. Results show that at low slip rates, dislocation creep dominates in the shear zone near the intraplate fault's deeper extension, while diffusion creep dominates outside the shear zone. This differs from the case of interplate faults, where dislocation creep dominates the whole region. Because of the power-law effect of dislocation creep, the effective viscosity in the shear zone under intraplate faults is much higher than that under interplate faults; the shear zone under an intraplate fault therefore has a much higher viscosity and lower shear stress than that under an interplate fault. The viscosity contrast between the inside and outside of the shear zone is smaller in the intraplate situation than in the interplate one, and a smaller viscosity contrast results in a wider shear zone.
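A sketch of the composite rheology described above, with the total strain rate as the sum of dislocation creep (stress exponent n > 1) and diffusion creep (n = 1); prefactors and activation energies are placeholders chosen to give crust-like magnitudes, not the study's calibrated values:

```python
import numpy as np

R = 8.314  # gas constant, J mol^-1 K^-1

def strain_rate(sigma, T, A_dis=1e-7, n=3.5, Q_dis=4.8e5,
                A_dif=0.1, Q_dif=3.0e5):
    # Placeholder flow-law constants (units absorbed into the prefactors).
    e_dis = A_dis * sigma**n * np.exp(-Q_dis / (R * T))  # dislocation creep
    e_dif = A_dif * sigma * np.exp(-Q_dif / (R * T))     # diffusion creep
    return e_dis + e_dif

sigma = 10e6                   # shear stress, Pa
T = 273.15 + 25.0 * 20.0       # 25 K/km geotherm evaluated at 20 km depth
eta = sigma / (2.0 * strain_rate(sigma, T))
print(f"effective viscosity ~ {eta:.2e} Pa s")
```

At this low stress the linear diffusion term dominates, echoing the abstract's point that the creep mechanism partitions with slip rate.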
NASA Astrophysics Data System (ADS)
Scuderi, M. M.; Collettini, C.; Marone, C.
2017-11-01
It is widely recognized that the significant increase of M > 3.0 earthquakes in Western Canada and the Central United States is related to underground fluid injection. Following injection, fluid overpressure lubricates the fault and reduces the effective normal stress that holds the fault in place, promoting slip. Although this basic physical mechanism for earthquake triggering and fault slip is well understood, many open questions related to induced seismicity remain. Models of earthquake nucleation based on rate- and state-dependent friction predict that fluid overpressure should stabilize fault slip rather than trigger earthquakes. To address this controversy, we conducted laboratory creep experiments to monitor fault slip evolution at constant shear stress while the effective normal stress was systematically reduced via increasing fluid pressure. We sheared layers of carbonate-bearing fault gouge in a double direct shear configuration within a true-triaxial pressure vessel. We show that fault slip evolution is controlled by the stress state acting on the fault and that fluid pressurization can trigger dynamic instability even in cases of rate-strengthening friction, which should favor aseismic creep. During fluid pressurization, when shear and effective normal stresses reach the failure condition, accelerated creep occurs in association with fault dilation; further pressurization leads to an exponential acceleration with fault compaction and slip localization. Our work indicates that fault weakening induced by fluid pressurization can overcome rate-strengthening friction, resulting in fast acceleration and earthquake slip. It points to modifications of the standard model for earthquake nucleation to account for the effect of fluid overpressure and to accurately predict the seismic risk associated with fluid injection.
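The failure bookkeeping behind these experiments can be summarized in a few lines: hold shear stress constant, ramp pore pressure, and check the Coulomb condition on the effective normal stress (all values illustrative):

```python
# Coulomb check at constant shear stress tau while pore pressure p rises:
# the fault fails when tau >= mu * (sigma_n - p).
mu, tau, sigma_n = 0.6, 9.0e6, 20.0e6   # friction, Pa, Pa (illustrative)
for p in (0.0, 2.0e6, 4.0e6, 6.0e6):
    strength = mu * (sigma_n - p)
    state = "slips" if tau >= strength else "holds"
    print(f"p = {p/1e6:4.1f} MPa: strength = {strength/1e6:5.2f} MPa -> {state}")
```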
Coupled multiphase flow and geomechanics analysis of the 2011 Lorca earthquake
NASA Astrophysics Data System (ADS)
Jha, B.; Hager, B. H.; Juanes, R.; Bechor, N.
2013-12-01
We present a new approach for modeling coupled multiphase flow and geomechanics of faulted reservoirs. We couple a flow simulator with a mechanics simulator using the unconditionally stable fixed-stress sequential solution scheme [Kim et al, 2011]. We model faults as surfaces of discontinuity using interface elements [Aagaard et al, 2008]. This allows us to model stick-slip behavior on the fault surface for dynamically evolving fault strength. We employ a rigorous formulation of nonlinear multiphase geomechanics [Coussy, 1995], which is based on the increment in mass of fluid phases instead of the traditional, and less accurate, scheme based on the change in porosity. Our nonlinear formulation is capable of handling strong capillarity and large changes in saturation in the reservoir. To account for the effect of surface stresses along fluid-fluid interfaces, we use the equivalent pore pressure in the definition of the multiphase effective stress [Coussy et al, 1998; Kim et al, 2013]. We use our simulation tool to study the 2011 Lorca earthquake [Gonzalez et al, 2012], which has received much attention because of its potential anthropogenic triggering (long-term groundwater withdrawal leading to slip along the regional Alhama de Murcia fault). Our coupled fluid flow and geomechanics approach to model fault slip allowed us to take a fresh look at this seismic event, which to date has only been analyzed using simple elastic dislocation models and point source solutions. Using a three-dimensional model of the Lorca region, we simulate the groundwater withdrawal and subsequent unloading of the basin over the period of interest (1960-2010). We find that groundwater withdrawal leads to unloading of the crust and changes in the stress across the impermeable fault plane. Our analysis suggests that the combination of these two factors played a critical role in inducing the fault slip that ultimately led to the Lorca earthquake. Aagaard, B. T., M. G. Knepley, and C. A. Williams (2013), Journal of Geophysical Research, Solid Earth, 118, 3059-3079 Coussy, O. (1995), Mechanics of Porous Continua, John Wiley and Sons, England. Coussy, O., R. Eymard, and T. Lassabatere (1998), J. Eng. Mech., 124(6), 658-557. Kim, J., H. A. Tchelepi, and R. Juanes (2011), Comput. Methods Appl. Mech. Eng., 200, 1591-1606. Gonzalez, P. J., K. F. Tiampo, M. Palano, F. Cannavo, and J. Fernandez (2012), Nature Geoscience.
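A toy scalar stand-in for the fixed-stress sequential scheme of Kim et al. (2011) cited above: alternate a flow update at frozen mean stress with a mechanics update at the new pressure, iterating to self-consistency (the coupling coefficient and update rules are illustrative, not from the paper):

```python
# Fixed-stress-style sequential iteration on a scalar caricature:
# "flow" sets pressure from the current stress, "mechanics" sets stress
# from the new pressure; for |b| < 1 the alternation converges.
def fixed_stress_iterate(p0=0.0, s0=0.0, b=0.3, tol=1e-10, max_iter=100):
    p, s = p0, s0
    for i in range(max_iter):
        p_new = 1.0 + b * s          # flow solve at frozen mean stress s
        s_new = -b * p_new           # mechanics solve with updated pressure
        if abs(p_new - p) < tol:
            return p_new, s_new, i
        p, s = p_new, s_new
    return p, s, max_iter

print(fixed_stress_iterate())        # converges to p = 1/(1 + b^2)
```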
NASA Astrophysics Data System (ADS)
Ortega, R.; Gutierrez, E.; Carciumaru, D. D.; Huesca-Perez, E.
2017-12-01
We present a method to compute the conditional and unconditional probability density function (PDF) of the finite fault distance distribution (FFDD). Two cases are described: lines and areas. The case of lines has a simple analytical solution, while in the case of areas, the geometrical probability of a fault based on the strike, dip, and fault segment vertices is obtained using the projection of spheres onto a piecewise rectangular surface. The cumulative distribution is computed by measuring the projection of a sphere of radius r in an effective area, using an algorithm that estimates the area of a circle within a rectangle. In addition, we introduce a finite fault distance metric: the distance at which the maximum stress release occurs within the fault plane, generating the peak ground motion. The appropriate ground motion prediction equations (GMPE) can then be applied for PSHA. The conditional probability of distance given magnitude is also presented, using different scaling laws. A simple model with the centroid fixed at the geometrical mean is discussed; in this model, hazard is reduced at the edges because the effective size is reduced. There is currently a trend toward using extended-source distances in PSHA; however, it is not possible to separate the fault geometry from the GMPE. With this new approach, it is possible to add fault rupture models that separate geometrical and propagation effects.
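Where the analytical geometry becomes unwieldy, the same distributions can be checked by Monte Carlo; a sketch that samples a dipping rectangular fault plane and summarizes distances to a site (fault dimensions, dip, and site location are illustrative):

```python
import numpy as np

# Sample points uniformly on a dipping rectangular fault plane and
# compute the 3-D distance to a fixed site.
rng = np.random.default_rng(0)
L, W, dip = 40.0, 15.0, np.radians(45.0)   # length, width (km), dip
n = 100_000
x = rng.uniform(0, L, n)                   # along strike, km
w = rng.uniform(0, W, n)                   # down dip, km
y = w * np.cos(dip)                        # horizontal projection
z = w * np.sin(dip)                        # depth
site = np.array([20.0, 30.0, 0.0])         # site coordinates, km (assumed)
r = np.sqrt((x - site[0])**2 + (y - site[1])**2 + (z - site[2])**2)
print(f"median {np.median(r):.1f} km, 5th pct {np.percentile(r, 5):.1f} km")
```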
NASA Astrophysics Data System (ADS)
Mishra, C.; Samantaray, A. K.; Chakraborty, G.
2016-09-01
Vibration analysis for diagnosis of faults in rolling element bearings is complicated when the rotor speed is variable or slow. In the former case, the time intervals between the fault-induced impact responses in the vibration signal are non-uniform and the signal strength is variable. In the latter case, the fault-induced impact response is weak and generally gets buried in the noise, i.e., noise dominates the signal. This article proposes a diagnosis scheme based on a combination of several signal processing techniques. The proposed scheme first represents the vibration signal in terms of uniformly resampled angular position of the rotor shaft, using interpolated instantaneous angular position measurements. Thereafter, intrinsic mode functions (IMFs) are generated through empirical mode decomposition (EMD) of the resampled vibration signal, followed by thresholding of the IMFs and signal reconstruction to de-noise the signal, and envelope order tracking to diagnose the faults. Data for validating the proposed diagnosis scheme are initially generated from a multi-body simulation model of a rolling element bearing, developed using the bond graph approach. This bond graph model includes the ball and cage dynamics, localized fault geometry, contact mechanics, rotor unbalance, and friction and slip effects. The diagnosis scheme is finally validated with experiments performed with the help of a machine fault simulator (MFS) system. Some fault scenarios that could not be recreated experimentally are then generated through simulations and analyzed with the developed diagnosis scheme.
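Two steps of the proposed scheme, angular resampling and envelope order analysis, can be sketched on a synthetic variable-speed signal (the assumed fault order of 3.2 and all signal parameters are illustrative):

```python
import numpy as np
from scipy.interpolate import interp1d
from scipy.signal import hilbert

# Synthetic signal: a high-order carrier amplitude-modulated at a fault
# order of 3.2, on a shaft accelerating from 5 to 9 rev/s.
fs = 20_000.0
t = np.arange(0, 2.0, 1 / fs)
theta = 2 * np.pi * (5.0 * t + 1.0 * t**2)          # shaft angle, rad
x = (1 + 0.8 * np.cos(3.2 * theta)) * np.sin(50 * theta)

theta_u = np.linspace(theta[0], theta[-1], t.size)  # uniform angle grid
x_ang = interp1d(theta, x)(theta_u)                 # angular resampling

env = np.abs(hilbert(x_ang)) - 1.0                  # envelope, DC removed
spec = np.abs(np.fft.rfft(env * np.hanning(env.size)))
d_rev = (theta_u[1] - theta_u[0]) / (2 * np.pi)     # spacing in revolutions
orders = np.fft.rfftfreq(env.size, d=d_rev)         # cycles per revolution
print(f"dominant envelope order ~ {orders[1:][spec[1:].argmax()]:.2f}")
```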
NASA Astrophysics Data System (ADS)
Kaneko, Y.; Francois-Holden, C.; Hamling, I. J.; D'Anastasio, E.; Fry, B.
2017-12-01
The 2016 M7.8 Kaikōura (New Zealand) earthquake generated ground motions over 1g across a 200-km long region, resulted in multiple onshore and offshore fault ruptures, a profusion of triggered landslides, and a regional tsunami. Here we examine the rupture evolution during the Kaikōura earthquake using multiple kinematic modelling methods based on local strong-motion and high-rate GPS data. Our kinematic models, constrained by near-source data, capture in detail a complex pattern of slowly (Vr < 2 km/s) propagating rupture from south to north, with over half of the moment release occurring in the northern source region, mostly on the Kekerengu fault, 60 seconds after the origin time. Interestingly, both models indicate rupture re-activation on the Kekerengu fault with a time separation of 11 seconds. We further conclude that most near-source waveforms can be explained by slip on the crustal faults, with little (<8%) or no contribution from the subduction interface.
Interplay of plate convergence and arc migration in the central Mediterranean (Sicily and Calabria)
NASA Astrophysics Data System (ADS)
Nijholt, Nicolai; Govers, Rob; Wortel, Rinus
2016-04-01
Key components in the current geodynamic setting of the central Mediterranean are continuous, slow Africa-Eurasia plate convergence (~5 mm/yr) and arc migration. This combination encompasses roll-back, tearing and detachment of slabs, and leads to back-arc opening and orogeny. Since ~30 Ma the Apennines-Calabrian and Gibraltar subduction zones have shaped the western-central Mediterranean region. Lithospheric tearing near slab edges and the accompanying surface expressions (STEP faults) are key in explaining surface dynamics as observed in geologic, geophysical and geodetic data. In the central Mediterranean, both the narrow Calabrian subduction zone and the Sicily-Tyrrhenian offshore thrust front show convergence, with a transfer (shear) zone connecting the distinct SW edge of the former with the less distinct eastern limit of the latter (similar, albeit on a smaller scale, to the situation in New Zealand with oppositely verging subduction zones and the Alpine fault as the transfer shear zone). The ~NNW-SSE oriented transfer zone (Aeolian-Sisifo-Tindari(-Ionian) fault system) shows transtensive-to-strike-slip motion. Recent seismicity, geological data and GPS vectors in the central Mediterranean indicate that the region can be subdivided into several distinct domains, both on- and offshore, delineated by deformation zones and faults. However, there is discussion about the (relative) importance of some of these faults on the lithospheric scale. We focus on finding the best-fitting assembly of faults for the transfer zone connecting subduction beneath Calabria and convergence north of Sicily in the Sicily-Tyrrhenian offshore thrust front. This includes determining whether the Alfeo-Etna fault, the Malta Escarpment and/or the Ionian fault, which have all been suggested to represent the STEP fault of the Calabrian subduction zone, are key in describing the observed deformation patterns. We first focus on the present day. We use geodynamic models to reproduce observed GPS velocities in the Sicily-Calabria region. In these models, we combine far-field velocity boundary conditions, GPE-related body forces, and slab pull/trench suction at the subduction contacts. The location and nature of model faults are based on geological and seismicity observations, and as these faults do not fully enclose blocks, our models require both fault slip and distributed strain. We vary fault friction in the models. Extrapolating the (short-term) model results to geological time scales, we are able to make a first-order assessment of the regional strain and block rotations resulting from the interplay of arc migration and plate convergence during the evolution of this complex region.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kim, Woohyun; Braun, J.
Refrigerant mass flow rate is an important measurement for monitoring equipment performance and enabling fault detection and diagnostics. However, a traditional mass flow meter is expensive to purchase and install. A virtual refrigerant mass flow sensor (VRMF) uses a mathematical model to estimate flow rate using low-cost measurements and can potentially be implemented at low cost. This study evaluates three VRMFs for estimating refrigerant mass flow rate. The first model uses a compressor map that relates refrigerant flow rate to measurements of inlet and outlet pressure and inlet temperature. The second model uses an energy-balance method on the compressor that uses a compressor map for power consumption, which is relatively independent of compressor faults that influence mass flow rate. The third model is developed using an empirical correlation for an electronic expansion valve (EEV) based on an orifice equation. The three VRMFs are shown to work well in estimating refrigerant mass flow rate for various systems under fault-free conditions, with less than 5% RMS error. Each of the three mass flow rate estimates can be utilized to diagnose and track the following faults, respectively: 1) loss of compressor performance, 2) fouled condenser or evaporator filter, and 3) faulty expansion device. For example, a compressor refrigerant flow map model only provides an accurate estimate when the compressor operates normally. When a compressor is not delivering the expected flow due to a leaky suction or discharge valve or another internal fault, the energy-balance or EEV model can provide accurate flow estimates. In this paper, the flow differences provide an indication of loss of compressor performance and can be used for fault detection and diagnostics.
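A sketch of the first VRMF idea: a compressor map regressed from manufacturer data that predicts mass flow from suction and discharge conditions. The 10-coefficient polynomial form is a common industry convention (e.g., AHRI 540); the coefficients below are placeholders, not values from this study:

```python
# Compressor-map VRMF: cubic polynomial in suction (Ts) and discharge (Td)
# saturation temperatures. Coefficients c are hypothetical placeholders.
def map_mass_flow(c, Ts, Td):
    """Mass flow (kg/s) from suction/discharge saturation temps (degC)."""
    return (c[0] + c[1]*Ts + c[2]*Td + c[3]*Ts**2 + c[4]*Ts*Td + c[5]*Td**2
            + c[6]*Ts**3 + c[7]*Td*Ts**2 + c[8]*Ts*Td**2 + c[9]*Td**3)

c = [0.05, 1e-3, -5e-4, 2e-5, -1e-5, 8e-6, 1e-7, -1e-7, 5e-8, -9e-8]
print(f"{map_mass_flow(c, Ts=5.0, Td=45.0):.4f} kg/s (illustrative)")
```

Comparing this map estimate against the energy-balance and EEV estimates is what exposes a compressor that is no longer delivering its expected flow.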
NASA Astrophysics Data System (ADS)
Kordilla, J.; Terrell, A. N.; Veltri, M.; Sauter, M.; Schmidt, S.
2017-12-01
In this study we model saturated and unsaturated flow in the karstified Weendespring catchment, located within the Leinetal graben in Goettingen, Germany. We employ the finite-element COMSOL Multiphysics software to model variably saturated flow using the Richards equation with a van Genuchten type parameterization. As part of the graben structure, the Weende spring catchment is intersected by seven fault zones along the main flow path of the 7400 m cross section of the catchment. As the Weende spring is part of the drinking water supply of Goettingen, it is particularly important to understand the vulnerability of the catchment and the effect of fault zones on rapid transport of contaminants. Nitrate signals have been observed at the spring only a few days after the application of fertilizers within the catchment, at a distance of approximately 2 km. As the underlying layers are known to be highly impermeable, fault zones within the area are likely to create rapid flow paths to the water table and the spring. The model conceptualizes the catchment as containing three hydrogeological limestone units with varying degrees of karstification: the lower Muschelkalk limestone as a highly conductive layer, the middle Muschelkalk as an aquitard, and the upper Muschelkalk as another conductive layer. The fault zones are parameterized based on a combination of field data from quarries, remote sensing, and literature data. Each fault zone is modeled considering the fault core as well as the surrounding damage zone, with separate, specific hydraulic properties. The 2D conceptual model was implemented in COMSOL to study unsaturated flow at the catchment scale using van Genuchten parameters. The study demonstrates the importance of fault zones for preferential flow within the catchment and their effect on the spatial distribution of vulnerability.
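The van Genuchten parameterization named above, in the Mualem form commonly used with the Richards equation; parameter values here are generic, not the catchment's calibrated ones:

```python
import numpy as np

# Effective saturation Se, water content theta, and Mualem relative
# permeability kr as functions of pressure head h (h < 0 unsaturated).
def van_genuchten(h, alpha=1.0, n=2.0, theta_r=0.05, theta_s=0.40):
    m = 1.0 - 1.0 / n
    Se = np.where(h < 0, (1.0 + (alpha * np.abs(h))**n) ** (-m), 1.0)
    theta = theta_r + (theta_s - theta_r) * Se
    kr = np.sqrt(Se) * (1.0 - (1.0 - Se**(1.0 / m))**m) ** 2
    return theta, kr

theta, kr = van_genuchten(np.array([-0.1, -1.0, -10.0]))  # heads in m
print(theta, kr)
```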
Coupled Multi-physics analysis of Caprock Integrity and Fault Reactivation during CO2 Sequestration*
NASA Astrophysics Data System (ADS)
Newell, P.; Martinez, M. J.; Bishop, J.
2012-12-01
Structural/stratigraphic trapping beneath a low-permeability caprock layer is the primary trapping mechanism for long-term subsurface sequestration of CO2. Pre-existing fracture networks, injection-induced fractures, and faults are of concern for possible CO2 leakage both during and after injection. In this work we model the effects of both caprock jointing and a fault on caprock sealing integrity during various injection scenarios. The modeling effort uses a three-dimensional, finite-element based, coupled multiphase flow and geomechanics simulator. The joints within the caprock are idealized as equally spaced and parallel. Both the mechanical and flow behavior of the joint network are treated within an effective continuum formulation. The mechanical behavior of the joint network is linear elastic in shear and nonlinear elastic in the normal direction. The flow behavior of the joint network is treated using the classical cubic law relating flow rate and aperture, and is then upscaled to obtain an effective permeability. The fault is modeled as a finite-thickness layer with multiple joint sets. The joint sets within the fault region are modeled following the same mechanical and flow formulation as the joints within the caprock. Various injection schedules as well as fault and caprock jointing configurations within a prototypical sequestration site have been investigated. The resulting leakage rates through the caprock and fault are compared to those assuming intact material. The predicted leakage rates are a strong nonlinear function of the injection rate. *This material is based upon work supported as part of the Center for Frontiers of Subsurface Energy Security, an Energy Frontier Research Center funded by the U.S. Department of Energy, Office of Science, Office of Basic Energy Sciences under Award Number DE-SC0001114. Sandia is a multi-program laboratory operated by Sandia Corporation, a Lockheed Martin Company, for the United States Department of Energy's National Nuclear Security Administration under Contract DE-AC04-94AL85000.
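The cubic-law upscaling described above has a compact closed form for parallel joints: a fracture of aperture a transmits in proportion to a^3/12 per unit width, so joints spaced s apart give an effective continuum permeability of a^3/(12 s) in the joint-parallel direction:

```python
# Effective continuum permeability of a set of parallel joints.
def joint_permeability(aperture_m, spacing_m):
    return aperture_m**3 / (12.0 * spacing_m)

# A 0.1 mm aperture joint every metre (illustrative values):
print(f"k_eff = {joint_permeability(1e-4, 1.0):.2e} m^2")
```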
Learning from physics-based earthquake simulators: a minimal approach
NASA Astrophysics Data System (ADS)
Artale Harris, Pietro; Marzocchi, Warner; Melini, Daniele
2017-04-01
Physics-based earthquake simulators aim to generate synthetic seismic catalogs of arbitrary length, accounting for fault interaction, elastic rebound, realistic fault networks, and some simple earthquake nucleation process like rate-and-state friction. Through comparison of synthetic and real catalogs, seismologists can gain insight into the earthquake occurrence process. Moreover, earthquake simulators can be used to infer some aspects of the statistical behavior of earthquakes within the simulated region, by analyzing timescales not accessible through observations. The development of earthquake simulators is commonly led by the approach "the more physics, the better", pushing seismologists towards ever more Earth-like simulators. However, despite its immediate attractiveness, we argue that this kind of approach makes it more and more difficult to understand which physical parameters are really relevant to describing the features of the seismic catalog in which we are interested. For this reason, here we take the opposite, minimal approach and analyze the behavior of a purposely simple earthquake simulator applied to a set of California faults. The idea is that a simple model may be more informative than a complex one for some specific scientific objectives, because it is more understandable. The model has three main components: the first is a realistic tectonic setting, i.e., a fault dataset of California; the other two are quantitative laws for earthquake generation on each single fault, and the Coulomb Failure Function for modeling fault interaction. The final goal of this work is twofold. On one hand, we aim to identify the minimum set of physical ingredients that can satisfactorily reproduce the features of the real seismic catalog, such as short-term seismic clustering, and to investigate the hypothetical long-term behavior and fault synchronization. On the other hand, we want to investigate the limits of predictability of the model itself.
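The interaction ingredient named above, the Coulomb Failure Function, is itself one line; a sketch with a tension-positive normal stress convention and illustrative stress changes:

```python
# Change in Coulomb failure function on a receiver fault:
# dCFF = d_tau + mu * d_sigma_n (tension-positive normal stress, so a
# positive d_sigma_n unclamps the fault). Inputs are illustrative.
def delta_cff(d_tau, d_sigma_n, mu=0.6):
    return d_tau + mu * d_sigma_n

# A rupture adding 0.2 MPa of shear stress and 0.1 MPa of unclamping:
print(f"dCFF = {delta_cff(0.2e6, 0.1e6) / 1e6:+.2f} MPa")
```

A positive dCFF brings the receiver fault closer to failure, which is how the simulator lets one event advance or delay its neighbors.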
NASA Astrophysics Data System (ADS)
Maggi, Matteo; Cianfarra, Paola; Salvini, Francesco
2013-04-01
Faults have a (brittle) deformation zone that can be described as comprising two distinctive zones: an internal fault core (FC) and an external fault damage zone (FDZ). The FC is characterized by grinding processes that comminute the rock grains to a final grain-size distribution in which smaller grains prevail over larger ones, represented by high fractal dimensions (up to 3.4). The FDZ, on the other hand, is characterized by a network of fracture sets with characteristic attitudes (i.e. Riedel cleavages). This deformation pattern has important consequences for rock permeability: the FC often represents a hydraulic barrier, while the FDZ, with its connected fractures, represents a zone of higher permeability. Observation of faults reveals that the dimensions and characteristics of the FC and FDZ vary both in intensity and extent along them. One of the controlling factors in FC and FDZ development is the fault plane geometry. Through changes in its attitude, the fault plane geometry locally alters the stress components produced by the fault kinematics, and its combination with the bulk boundary conditions (regional stress field, fluid pressure, rock rheology) is responsible for the development of zones of higher and lower fracture intensity with variable extension along the fault planes. Furthermore, the displacement along faults produces a cumulative deformation pattern that varies through time. Modeling of the fault evolution through time (4D modeling) is therefore required to fully describe the fracturing and therefore the permeability. In this presentation we show a methodology developed to predict the distribution of fracture intensity by integrating seismic data and numerical modeling. Fault geometry is carefully reconstructed by interpolating stick lines from interpreted seismic sections converted to depth. The modeling is based on a mixed numerical/analytical method. The fault surface is discretized into cells with their geometric and rheological characteristics. For each cell, the acting stress and strength are computed by analytical laws (Coulomb failure). The total brittle deformation for each cell is then computed by cumulating the brittle failure values along the path of each cell belonging to one side onto the facing one. The brittle failure value is provided by the DF function, that is, the difference between the computed shear stress and the strength of the cell at each step along its path, computed with the in-house-developed Frap software. The widths of the FC and the FDZ are computed as a function of the DF distribution and displacement around the fault. This methodology has been successfully applied to model the brittle deformation pattern of the Vignanotica normal fault (Gargano, Southern Italy), where fracture intensity is expressed by the dimensionless H/S ratio, representing the ratio between the dimension and the spacing of homologous fracture sets (i.e., groups of parallel fractures that can be ascribed to the same event/stage/stress field).
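A minimal sketch of the per-cell DF bookkeeping described above, writing the Coulomb strength as cohesion plus friction times normal stress (the cohesion term and all values are illustrative assumptions, not the Frap implementation):

```python
# DF = resolved shear stress minus Coulomb strength for one fault cell;
# positive DF marks a cell accumulating brittle damage along its path.
def df(shear, sigma_n, cohesion=5e6, mu=0.6):
    strength = cohesion + mu * sigma_n
    return shear - strength

print(f"DF = {df(shear=25e6, sigma_n=30e6) / 1e6:+.1f} MPa")
```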
The Hills are Alive: Dynamic Ridges and Valleys in a Strike-Slip Environment
NASA Astrophysics Data System (ADS)
Duvall, A. R.; Tucker, G. E.
2014-12-01
Strike-slip fault zones have long been known for characteristic landforms such as offset and deflected rivers, linear strike-parallel valleys, and shutter ridges. Despite their common presence, questions remain about the mechanics of how these landforms arise or how their form varies as a function of slip rate, geomorphic process, or material properties. We know even less about what happens far from the fault, in drainage basin headwaters, as a result of strike-slip motion. Here we explore the effects of horizontal fault slip rate, bedrock erodibility, and hillslope diffusivity on river catchments that drain across an active strike-slip fault using the CHILD landscape evolution model. Model calculations demonstrate that lateral fault motion induces a permanent state of landscape disequilibrium brought about by fault offset-generated river lengthening alternating with abrupt shortening due to stream capture. This cycle of shifting drainage patterns and base level change continues until fault motion ceases thus creating a perpetual state of transience unique to strike-slip systems. Our models also make the surprising prediction that, in some cases, hillslope ridges oriented perpendicular to the fault migrate laterally in conjunction with fault motion. Ridge migration happens when slip rate is slow enough and/or diffusion and river incision are fast enough that the hillslopes can respond to the disequilibrium brought about by strike-slip motion. In models with faster slip rates, stronger rocks or less-diffusive hillslopes, ridge mobility is limited or arrested despite the fact that the process of river lengthening and capture continues. Fast-slip cases also develop prominent steep fault-facing hillslope facets proximal to the fault valley and along-strike topographic profiles with reduced local relief between ridges and valleys. Our results demonstrate the dynamic nature of strike-slip landscapes that vary systematically with a ratio of bedrock erodibility (K) and hillslope diffusivity (D) to the rate of horizontal advection of topography (v). These results also reveal a potential set of recognizable geomorphic signatures within strike-slip systems that should be looked to as indicators of fault activity and/or material properties.
Toward a Model-Based Approach to Flight System Fault Protection
NASA Technical Reports Server (NTRS)
Day, John; Murray, Alex; Meakin, Peter
2012-01-01
Fault Protection (FP) is a distinct and separate systems engineering sub-discipline that is concerned with the off-nominal behavior of a system. Flight system fault protection is an important part of the overall flight system systems engineering effort, with its own products and processes. As with other aspects of systems engineering, the FP domain is highly amenable to expression and management in models. However, while there are standards and guidelines for performing FP-related analyses, there are no standards or guidelines for formally relating the FP analyses to each other or to the system hardware and software design. As a result, the material generated for these analyses effectively creates separate models that are only loosely related to the system being designed. Developing approaches that enable modeling of FP concerns in the same model as the system hardware and software design enables the establishment of formal relationships, with great potential for improving the efficiency, correctness, and verification of the implementation of flight system FP. This paper begins with an overview of the FP domain, and then continues with a presentation of a SysML/UML model of the FP domain and the particular analyses that it contains, by way of showing a potential model-based approach to flight system fault protection, and an exposition of the use of the FP models in flight software (FSW) engineering. The analyses are small examples, inspired by current real-project examples of FP analyses.
Pulverization provides a mechanism for the nucleation of earthquakes at low stress on strong faults
Felzer, Karen R.
2014-01-01
An earthquake occurs when rock that has been deformed under stress rebounds elastically along a fault plane (Gilbert, 1884; Reid, 1911), radiating seismic waves through the surrounding earth. Rupture along the entire fault surface does not spontaneously occur at the same time, however. Rather, the rupture starts in one tiny area, the rupture nucleation zone, and spreads sequentially along the fault. Like a row of dominoes, one bit of rebounding fault triggers the next. This triggering is understood to occur because of the large dynamic stresses at the tip of an active seismic rupture. The importance of these crack tip stresses is a central question in earthquake physics. The crack tip stresses are minimally important, for example, in the time-predictable earthquake model (Shimazaki and Nakata, 1980), which holds that prior to rupture, stresses are comparable to fault strength in many locations on the future rupture plane, with bits of variation. The stress/strength ratio is highest at some point, which is where the earthquake nucleates. This model does not require any special conditions or processes at the nucleation site; the whole fault is essentially ready for rupture at the same time. The fault tip stresses ensure that the rupture occurs as a single rapid earthquake, but the fact that fault tip stresses are high is not particularly relevant, since the stress at most points does not need to be raised by much. Under this model it should technically be possible to forecast earthquakes based on the stress-renewal concept, i.e., estimates of when the fault as a whole will reach the critical stress level, a practice used in official hazard mapping (Field, 2008). This model also indicates that physical precursors may be present and detectable, since stresses are unusually high over a significant area before a large earthquake.
NASA Astrophysics Data System (ADS)
Ward, L. A.; Smith-Konter, B. R.; Higa, J. T.; Xu, X.; Tong, X.; Sandwell, D. T.
2017-12-01
After over a decade of operation, the EarthScope (GAGE) Facility has now accumulated a wealth of GPS and InSAR data, that when successfully integrated, make it possible to image the entire San Andreas Fault System (SAFS) with unprecedented spatial coverage and resolution. Resulting surface velocity and deformation time series products provide critical boundary conditions needed for improving our understanding of how faults are loaded across a broad range of temporal and spatial scales. Moreover, our understanding of how earthquake cycle deformation is influenced by fault zone strength and crust/mantle rheology is still developing. To further study these processes, we construct a new 4D earthquake cycle model of the SAFS representing the time-dependent 3D velocity field associated with interseismic strain accumulation, co-seismic slip, and postseismic viscoelastic relaxation. This high-resolution California statewide model, spanning the Cerro Prieto fault to the south to the Maacama fault to the north, is constructed on a 500 m spaced grid and comprises variable slip and locking depths along 42 major fault segments. Secular deep slip is prescribed from the base of the locked zone to the base of the elastic plate while episodic shallow slip is prescribed from the historical earthquake record and geologic recurrence intervals. Locking depths and slip rates for all 42 fault segments are constrained by the newest GAGE Facility geodetic observations; 3169 horizontal GPS velocity measurements, combined with over 53,000 line-of-sight (LOS) InSAR velocity observations from Sentinel-1A, are used in a weighted least-squares inversion. To assess slip rate and locking depth sensitivity of a heterogeneous rheology model, we also implement variations in crustal rigidity throughout the plate boundary, assuming a coarse representation of shear modulus variability ranging from 20-40 GPa throughout the (low rigidity) Salton Trough and Basin and Range and the (high rigidity) Central Valley and ocean lithosphere.
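The weighted least-squares step described above has the standard normal-equations form m = (AᵀWA)⁻¹AᵀWd; a sketch with synthetic data in which GPS-like rows carry smaller uncertainties than InSAR-like rows:

```python
import numpy as np

# Weighted least squares combining two observation types with different
# noise levels. The design matrix and parameters are synthetic stand-ins
# for the slip-rate / locking-depth inversion described above.
rng = np.random.default_rng(1)
m_true = np.array([3.0, -1.5])
A = rng.normal(size=(200, 2))                    # mixed GPS + LOS rows
sig = np.where(np.arange(200) < 50, 0.5, 2.0)    # GPS rows more precise
d = A @ m_true + rng.normal(scale=sig)
W = np.diag(1.0 / sig**2)                        # inverse-variance weights
m_hat = np.linalg.solve(A.T @ W @ A, A.T @ W @ d)
print(m_hat)                                     # close to m_true
```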
Fault compaction and overpressured faults: results from a 3-D model of a ductile fault zone
NASA Astrophysics Data System (ADS)
Fitzenz, D. D.; Miller, S. A.
2003-10-01
A model of a ductile fault zone is incorporated into a forward 3-D earthquake model to better constrain fault-zone hydraulics. The conceptual framework of the model fault zone was chosen such that two distinct parts are recognized. The fault core, characterized by a relatively low permeability, is composed of a coseismic fault surface embedded in a visco-elastic volume that can creep and compact. The fault core is surrounded by, and mostly sealed from, a high-permeability damaged zone. The model fault properties correspond explicitly to those of the coseismic fault core. Porosity and pore pressure evolve to account for the viscous compaction of the fault core, while stresses evolve in response to the applied tectonic loading and to shear creep of the fault itself. A small diffusive leakage is allowed in and out of the fault zone. Coseismically, porosity is created to account for frictional dilatancy. We show that, in the case of a 3-D fault model with no in-plane flow and constant fluid compressibility, pore pressures do not drop to hydrostatic levels after a seismic rupture, leading to an overpressured, weak fault. Since pore pressure plays a key role in fault behaviour, we investigate coseismic changes in hydraulic properties. In the full 3-D model, pore pressures vary instantaneously by the poroelastic effect during the propagation of the rupture. Once the stress state stabilizes, pore pressures are incrementally redistributed in the failed patch. We show that the significant effect of pressure-dependent fluid compressibility in the no in-plane flow case becomes a secondary effect when the other spatial dimensions are considered, because in-plane flow with a near-lithostatically pressured neighbourhood equilibrates at a pressure much higher than hydrostatic levels, forming persistent high-pressure fluid compartments. If the observed faults are not all overpressured and weak, other mechanisms, not included in this model, must be at work in nature, and these need to be investigated. Significant leakage perpendicular to the fault strike (in the case of a young fault), or cracks hydraulically linking the fault core to the damaged zone (for a mature fault), are probable mechanisms for keeping faults strong and might play a significant role in modulating fault pore pressures. Therefore, fault-normal hydraulic properties of fault zones should be a future focus of field and numerical experiments.
Thermal Expert System (TEXSYS): Systems autonomy demonstration project, volume 2. Results
NASA Technical Reports Server (NTRS)
Glass, B. J. (Editor)
1992-01-01
The Systems Autonomy Demonstration Project (SADP) produced a knowledge-based real-time control system for control and fault detection, isolation, and recovery (FDIR) of a prototype two-phase Space Station Freedom external active thermal control system (EATCS). The Thermal Expert System (TEXSYS) was demonstrated in recent tests to be capable of reliable fault anticipation and detection, as well as ordinary control of the thermal bus. Performance requirements were addressed by adopting a hierarchical symbolic control approach, layering model-based expert system software on a conventional, numerical data acquisition and control system. The model-based reasoning capabilities of TEXSYS were shown to be advantageous over typical rule-based expert systems, particularly for detection of unforeseen faults and sensor failures. Volume 1 gives a project overview and testing highlights. Volume 2 provides detail on the EATCS testbed, test operations, and online test results. Appendix A is a test archive, while Appendix B is a compendium of design and user manuals for the TEXSYS software.
Thermal Expert System (TEXSYS): Systems autonomy demonstration project, volume 1. Overview
NASA Technical Reports Server (NTRS)
Glass, B. J. (Editor)
1992-01-01
The Systems Autonomy Demonstration Project (SADP) produced a knowledge-based real-time control system for control and fault detection, isolation, and recovery (FDIR) of a prototype two-phase Space Station Freedom external active thermal control system (EATCS). The Thermal Expert System (TEXSYS) was demonstrated in recent tests to be capable of reliable fault anticipation and detection, as well as ordinary control of the thermal bus. Performance requirements were addressed by adopting a hierarchical symbolic control approach, layering model-based expert system software on a conventional, numerical data acquisition and control system. The model-based reasoning capabilities of TEXSYS were shown to be advantageous over typical rule-based expert systems, particularly for detection of unforeseen faults and sensor failures. Volume 1 gives a project overview and testing highlights. Volume 2 provides detail on the EATCS testbed, test operations, and online test results. Appendix A is a test archive, while Appendix B is a compendium of design and user manuals for the TEXSYS software.
Decision tree and PCA-based fault diagnosis of rotating machinery
NASA Astrophysics Data System (ADS)
Sun, Weixiang; Chen, Jin; Li, Jiaqing
2007-04-01
After analysing the flaws of conventional fault diagnosis methods, data mining technology is introduced to the fault diagnosis field, and a new method based on the C4.5 decision tree and principal component analysis (PCA) is proposed. In this method, PCA is used to reduce the feature set after data collection, preprocessing, and feature extraction. Then, C4.5 is trained on the samples to generate a decision tree model encoding diagnosis knowledge. Finally, the tree model is used to perform the diagnosis. To validate the proposed method, six kinds of running states (normal without any defect, unbalance, rotor radial rub, oil whirl, shaft crack, and a simultaneous state of unbalance and radial rub) are simulated on a Bently Rotor Kit RK4 to compare the C4.5 and PCA-based method against a back-propagation neural network (BPNN). The results show that the C4.5 and PCA-based diagnosis method has higher accuracy and needs less training time than the BPNN.
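The PCA-then-tree pipeline can be sketched in a few lines. Below is a minimal scikit-learn illustration in which CART (DecisionTreeClassifier) stands in for C4.5, and the feature matrix X and state labels y are random placeholders for the extracted vibration features.

```python
# Sketch of the PCA-then-decision-tree pipeline described above.
# CART stands in for C4.5; data shapes are placeholders.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.tree import DecisionTreeClassifier
from sklearn.pipeline import make_pipeline
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
X = rng.normal(size=(600, 20))        # 600 samples, 20 extracted features
y = rng.integers(0, 6, size=600)      # 6 machine running states

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
model = make_pipeline(PCA(n_components=8),           # feature reduction
                      DecisionTreeClassifier(criterion="entropy"))
model.fit(X_tr, y_tr)
print("test accuracy:", model.score(X_te, y_te))
```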
Distributed Cooperation Solution Method of Complex System Based on MAS
NASA Astrophysics Data System (ADS)
Weijin, Jiang; Yuhui, Xu
To adapt reconfigurable fault diagnosis models to dynamic environments and to fully meet the needs of solving complex-system tasks, this paper introduces multi-agent technology and related techniques to complicated fault diagnosis, and an integrated intelligent control system is studied. Based on the structure of diagnostic decision-making and hierarchical modeling, and on a multi-layer decomposition strategy for the diagnosis task, a multi-agent synchronous diagnosis federation integrating different knowledge representation modes and inference mechanisms is presented; the functions of the management agent, diagnosis agent, and decision agent are analyzed; the organization and evolution of agents in the system are proposed; and the corresponding conflict resolution algorithm is given. A layered structure of abstract agents with public attributes is built, and the system architecture is realized on a MAS distributed layered blackboard. A real-world application shows that the proposed control structure successfully solves the fault diagnosis problem of a complex plant and offers particular advantages in distributed domains.
NASA Astrophysics Data System (ADS)
Polun, S. G.; Stockman, M. B.; Hickcox, K.; Horrell, D.; Tesfaye, S.; Gomez, F. G.
2015-12-01
As the only subaerial exposure of a ridge-ridge-ridge triple junction, the Afar region of Ethiopia and Djibouti offers a rare opportunity to assess strain partitioning within this type of triple junction. Here, the plate boundaries do not link discretely; rather, the East African rift meets the Red Sea and Gulf of Aden rifts in a zone of diffuse normal faulting characterized by a lack of magmatic activity, referred to as the central Afar. An initial assessment of Late Quaternary strain partitioning is based on faulted landforms in the Dobe-Hanle graben system in Ethiopia and Djibouti. These two extensional basins are connected by an imbricated accommodation zone. Several fault scarps occur within terraces formed during the last highstand of Lake Dobe, around 5 ka; these provide a means of calibrating a numerical model of fault scarp degradation. Additional timing constraints will be provided by pending exposure ages. The spreading rates of the two grabens are equivalent; however, in the Dobe graben, extension is partitioned 2:1 between the northern, south-dipping faults and the southern, north-dipping fault. Extension in the Hanle graben is primarily focused on the north-dipping Hanle fault. On the north margin of the Dobe graben, the boundary fault bifurcates, and the basin-bordering fault displays a significantly higher modeled uplift rate than the more distal fault, suggesting a basinward propagation of faulting. On the southern Dobe fault, surveyed fault scarps have ages ranging from 30 to 5 ka with uplift rates of 0.71, 0.47, and 0.68 mm/yr, suggesting no secular variation in slip rates from the late Pleistocene through the Holocene. These rates are converted into horizontal stretching estimates, which are compared with regional strain estimated from velocities of relatively sparse GPS data.
Optimal design and use of retry in fault tolerant real-time computer systems
NASA Technical Reports Server (NTRS)
Lee, Y. H.; Shin, K. G.
1983-01-01
A new method for determining an optimal retry policy and for using retry in fault characterization is presented. An optimal retry policy for a given fault characteristic, which determines the maximum allowable retry durations that minimize the total task completion time, is derived. Combined fault characterization and retry decision, in which the characteristics of the fault are estimated simultaneously with the determination of the optimal retry policy, is then carried out. Two solution approaches are developed, one based on point estimation and the other on Bayes sequential decision; maximum likelihood estimators are used for the first approach, and backward induction for testing hypotheses in the second. Numerical examples are presented in which all durations associated with faults have monotone hazard functions, e.g., exponential, Weibull, and gamma distributions; these are standard distributions commonly used for fault modeling and analysis.
Software reliability through fault-avoidance and fault-tolerance
NASA Technical Reports Server (NTRS)
Vouk, Mladen A.; Mcallister, David F.
1992-01-01
Accomplishments in the following research areas are summarized: structure based testing, reliability growth, and design testability with risk evaluation; reliability growth models and software risk management; and evaluation of consensus voting, consensus recovery block, and acceptance voting. Four papers generated during the reporting period are included as appendices.
A Structural Model Decomposition Framework for Hybrid Systems Diagnosis
NASA Technical Reports Server (NTRS)
Daigle, Matthew; Bregon, Anibal; Roychoudhury, Indranil
2015-01-01
Nowadays, a large number of practical systems in aerospace and industrial environments are best represented as hybrid systems that consist of discrete modes of behavior, each defined by a set of continuous dynamics. These hybrid dynamics make the on-line fault diagnosis task very challenging. In this work, we present a new modeling and diagnosis framework for hybrid systems. Models are composed from sets of user-defined components using a compositional modeling approach. Submodels for residual generation are then generated for a given mode, and reconfigured efficiently when the mode changes. Efficient reconfiguration is established by exploiting causality information within the hybrid system models. The submodels can then be used for fault diagnosis based on residual generation and analysis. We demonstrate the efficient causality reassignment, submodel reconfiguration, and residual generation for fault diagnosis using an electrical circuit case study.
Plate tectonics and crustal deformation around the Japanese Islands
NASA Technical Reports Server (NTRS)
Hashimoto, Manabu; Jackson, David D.
1993-01-01
We analyze over a century of geodetic data to study crustal deformation and plate motion around the Japanese Islands, using the block-fault model for crustal deformation developed by Matsu'ura et al. (1986). We model the area including the Japanese Islands with 19 crustal blocks and 104 faults based on the distribution of active faults and seismicity. Geodetic data are used to obtain block motions and average slip rates of faults. This geodetic model predicts that the Pacific plate moves N69 +/- 2 deg W at about 80 +/- 3 mm/yr relative to the Eurasian plate, a rate much lower than that predicted by geologic models. Substantial aseismic slip occurs on the subduction boundaries. The block containing the Izu Peninsula may be separated from the rigid part of the Philippine Sea plate. The faults on the coast of the Japan Sea and the western part of the Median Tectonic Line have slip rates exceeding 4 mm/yr, while the Fossa Magna does not play an important role in the tectonics of central Japan. The geodetic model requires the division of northeastern Japan, contrary to the hypothesis that northeastern Japan is a part of the North American plate. Owing to rapid convergence, the seismic risk in the Nankai trough may be larger than that of the Tokai gap.
NASA Astrophysics Data System (ADS)
Gomez, F.; Jaafar, R.; Abdallah, C.; Karam, G.
2012-12-01
The Lebanese Restraining Bend (LRB) is a ~200-km-long bend in the central part of the Dead Sea Fault system (DSFS). As with other large restraining bends, this part of the transform is characterized by a more complicated structure than other parts. Additionally, results from recent GPS studies have documented slower velocities north of the LRB than are observed along the southern DSFS. In an effort to understand how strain is transferred through the LRB, this study analyzes improved GPS velocities within the central DSFS based on new data and additional stations. Despite relatively modest rates of seismicity, the DSFS has a historically documented record of producing large and devastating earthquakes. Hence, geodetic measurements of crustal deformation may provide key constraints on processes of strain accumulation that may not be evident in instrumentally recorded seismicity. Within the LRB, the transform splays into two prominent strike-slip faults: the through-going Yammouneh fault and the Serghaya fault. The latter appears to terminate in the Anti-Lebanon Mountains. Additionally, some oblique plate motion is accommodated by thrusting along the coast of Lebanon. This study used observations from survey-mode GPS sites, as well as continuous GPS stations in the region. In total, 22 GPS survey sites were measured in Lebanon between 2002 and 2010, along with GPS data from the adjacent area. Elastic models are used for initial assessment of fault slip rates. Incorporating two major strike-slip faults, as well as an offshore thrust fault, this modeling suggests left-lateral slip rates of 3.8 mm/yr and 1.1 mm/yr for the Yammouneh and Serghaya faults, respectively. The GPS survey network has sufficient density for analyzing velocity gradients in an effort to quantify tectonic strains and rotations. The velocity gradients suggest that differential rotations play a role in accommodating some plate motion.
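A common form of the elastic model used for such initial slip-rate assessments is the classic half-space (Savage-Burford) screw dislocation, v(x) = (s/pi)*arctan(x/D). The sketch below is illustrative only; the slip rate and locking depth values are assumptions, not the authors' model setup.

```python
# Hedged sketch of the arctan interseismic velocity profile across a
# locked strike-slip fault: v(x) = (s/pi) * arctan(x/D), with slip rate
# s and locking depth D. Parameter values are illustrative assumptions.
import numpy as np

def interseismic_velocity(x_km, slip_rate_mm_yr, locking_depth_km):
    """Fault-parallel velocity at distance x from the fault trace."""
    return (slip_rate_mm_yr / np.pi) * np.arctan(x_km / locking_depth_km)

x = np.linspace(-100, 100, 9)               # km from the fault trace
print(interseismic_velocity(x, 3.8, 10.0))  # Yammouneh-like toy example
```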
NASA Astrophysics Data System (ADS)
Allison, K. L.; Dunham, E. M.
2017-12-01
We simulate earthquake cycles on a 2D strike-slip fault, modeling both rate-and-state fault friction and an off-fault nonlinear power-law rheology. The power-law rheology involves an effective viscosity that is a function of temperature and stress, and therefore varies both spatially and temporally. All phases of the earthquake cycle are simulated, allowing the model to spontaneously generate earthquakes, and to capture frictional afterslip and postseismic and interseismic viscous flow. We investigate the interaction between fault slip and bulk viscous flow, using experimentally based flow laws for quartz-diorite in the crust and olivine in the mantle, representative of the Mojave Desert region in Southern California. We first consider a suite of three linear geotherms which are constant in time, with dT/dz = 20, 25, and 30 K/km. Though the simulations produce very different deformation styles in the lower crust, ranging from significant interseismic fault creep to purely bulk viscous flow, they have almost identical earthquake recurrence intervals, nucleation depths, and down-dip coseismic slip limits. This indicates that bulk viscous flow and interseismic fault creep load the brittle crust similarly. The simulations also predict unrealistically high stresses in the upper crust, resulting from the fact that the lower crust and upper mantle are relatively weak far from the fault, and from the relatively small role that basal tractions on the base of the crust play in the force balance of the lithosphere. We also find that for the warmest model, the effective viscosity varies by an order of magnitude in the interseismic period, whereas for the cooler models it remains roughly constant. Because the rheology is highly sensitive to changes in temperature, in addition to the simulations with constant temperature, we also consider the effect of heat generation. We capture both frictional heat generation and off-fault viscous shear heating, allowing these in turn to alter the effective viscosity. The resulting temperature changes may reduce the width of the shear zone in the lower crust and upper mantle, and reduce the effective viscosity.
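The temperature and stress dependence of the effective viscosity follows from the standard power-law flow law, eps_dot = A*sigma^n*exp(-Q/(R*T)), so eta_eff = sigma/(2*eps_dot). The sketch below uses placeholder flow-law constants, not the quartz-diorite or olivine values of the study.

```python
# Minimal sketch of power-law effective viscosity. Constants A, n, Q
# below are placeholders chosen only to give geologically plausible
# magnitudes.
import numpy as np

R = 8.314  # J/(mol K), gas constant

def effective_viscosity(sigma_pa, T_kelvin, A, n, Q):
    eps_dot = A * sigma_pa**n * np.exp(-Q / (R * T_kelvin))  # strain rate
    return sigma_pa / (2.0 * eps_dot)

# Illustrative values only: sigma = 10 MPa at 600 K vs 800 K.
for T in (600.0, 800.0):
    eta = effective_viscosity(1e7, T, A=1e-20, n=3.0, Q=1.5e5)
    print(f"T = {T:.0f} K  ->  eta_eff ~ {eta:.2e} Pa s")
```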
An empirically based steady state friction law and implications for fault stability
Nielsen, S.; Violay, M.; Di Toro, G.
2016-01-01
Empirically based rate-and-state friction laws (RSFLs) have been proposed to model the dependence of friction forces on slip and time. The relevance of the RSFL for earthquake mechanics is that a few constitutive parameters define critical conditions for fault stability (i.e., critical stiffness and frictional fault behavior). However, the RSFLs were determined from experiments conducted at subseismic slip rates (V < 1 cm/s), and their extrapolation to earthquake deformation conditions (V > 0.1 m/s) remains questionable on the basis of the experimental evidence of (1) large dynamic weakening and (2) activation of particular fault lubrication processes at seismic slip rates. Here we propose a modified RSFL (MFL) based on the review of a large published and unpublished data set of rock friction experiments performed with different testing machines. The MFL, valid at steady state conditions from subseismic to seismic slip rates (0.1 µm/s < V < 3 m/s), describes the initiation of a substantial velocity weakening in the 1-20 cm/s range resulting in a critical stiffness increase that creates a peak of potential instability in that velocity regime. The MFL leads to a new definition of fault frictional stability with implications for slip event styles and relevance for models of seismic rupture nucleation, propagation, and arrest. PMID:27667875
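For context, the standard steady-state relation the MFL modifies can be written mu_ss(V) = mu0 + (a - b)*ln(V/V0), with critical stiffness k_c = (b - a)*sigma_n/D_c for velocity-weakening (b > a) faults. The sketch below uses illustrative parameter values and is not the authors' modified law.

```python
# Sketch of classical steady-state rate-and-state friction and the
# associated critical stiffness. Parameter values are illustrative
# assumptions, not the MFL proposed in the paper.
import numpy as np

def mu_steady_state(V, mu0=0.6, a=0.010, b=0.015, V0=1e-6):
    """Steady-state friction coefficient at slip rate V (m/s)."""
    return mu0 + (a - b) * np.log(V / V0)

def critical_stiffness(sigma_n, a=0.010, b=0.015, D_c=1e-5):
    """Stiffness below which a spring-slider system becomes unstable."""
    return (b - a) * sigma_n / D_c

V = np.logspace(-7, 0, 8)        # slip rates, m/s
print(mu_steady_state(V))
print(critical_stiffness(50e6))  # Pa/m at 50 MPa normal stress
```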
Bonilla, Manuel G.; Mark, Robert K.; Lienkaemper, James J.
1984-01-01
In order to refine correlations of surface-wave magnitude, fault rupture length at the ground surface, and fault displacement at the surface by including the uncertainties in these variables, the existing data were critically reviewed and a new data base was compiled. Earthquake magnitudes were redetermined as necessary to make them as consistent as possible with the Gutenberg methods and results, which make up much of the data base. Measurement errors were estimated for the three variables for 58 moderate to large shallow-focus earthquakes. Regression analyses were then made utilizing the estimated measurement errors. The regression analysis demonstrates that the relations among the variables magnitude, length, and displacement are stochastic in nature. The stochastic variance, introduced in part by incomplete surface expression of seismogenic faulting, variation in shear modulus, and regional factors, dominates the estimated measurement errors. Thus, it is appropriate to use ordinary least squares for the regression models, rather than regression models based upon an underlying deterministic relation in which the variance results primarily from measurement errors. Significant differences exist in correlations of certain combinations of length, displacement, and magnitude when events are grouped by fault type or by region, including attenuation regions delineated by Evernden and others. Estimates of the magnitude and the standard deviation of the magnitude of a prehistoric or future earthquake associated with a fault can be made by correlating Ms with the logarithms of rupture length, fault displacement, or the product of length and displacement. Fault rupture area could be reliably estimated for about 20 of the events in the data set. Regression of Ms on rupture area did not result in a marked improvement over regressions that did not involve rupture area. Because no subduction-zone earthquakes are included in this study, the reported results do not apply to such zones.
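The kind of regression described, Ms against the logarithm of rupture length, reduces to a simple ordinary-least-squares fit. The (length, magnitude) pairs below are labeled placeholders standing in for the 58-event data base; they illustrate the fit only.

```python
# Minimal OLS sketch of Ms = a + b*log10(L). Data are fabricated
# placeholders for illustration, not the study's compilation.
import numpy as np

L_km = np.array([10.0, 25.0, 40.0, 80.0, 150.0, 300.0])
Ms   = np.array([6.0, 6.5, 6.7, 7.1, 7.4, 7.9])

X = np.column_stack([np.ones_like(L_km), np.log10(L_km)])
coef, *_ = np.linalg.lstsq(X, Ms, rcond=None)
a, b = coef
print(f"Ms ~ {a:.2f} + {b:.2f} * log10(L_km)")
```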
NASA Astrophysics Data System (ADS)
Wilson, J.; Wetmore, P. H.; Malservisi, R.; Ferwerda, B. P.; Teran, O.
2012-12-01
We use recently collected slip vector and total offset data from the Agua Blanca fault (ABF) to constrain a pixel translation digital elevation model (DEM) to reconstruct the slip history of this fault. This model was constructed using a Perl script that reads a DEM file (Easting, Northing, Elevation) and a configuration file with coordinates that define the boundary of each fault segment. A pixel translation vector is defined as a magnitude of lateral offset in an azimuthal direction. The program translates pixels north of the fault and prints their pre-faulting position to a new DEM file that can be gridded and displayed. This analysis, where multiple DEMs are created with different translation vectors, allows us to identify areas of transtension or transpression while seeing the topographic expression in these areas. The benefit of this technique, in contrast to a simple block model, is that the DEM gives us a valuable graphic which can be used to pose new research questions. We have found that many topographic features correlate across the fault, i.e., valleys and ridges, which likely have implications for the age of the ABF and long-term landscape evolution rates, and potentially provide confirmation of total slip assessments. The ABF of northern Baja California, Mexico is an active, dextral strike-slip fault that transfers Pacific-North American plate boundary strain out of the Gulf of California and around the "Big Bend" of the San Andreas Fault. Total displacement on the ABF in the central and eastern parts of the fault is 10 +/- 2 km based on offset Early Cretaceous features such as terrane boundaries and intrusive bodies (plutons and dike swarms). Where the fault bifurcates to the west, the northern strand (northern Agua Blanca fault or NABF) is constrained to 7 +/- 1 km. We have not yet identified piercing points on the southern strand, the Santo Tomas fault (STF), but displacement is inferred to be ~4 km assuming that the sum of slip on the NABF and STF is approximately equal to that to the east. The ABF has varying kinematics along strike due to changes in trend of the fault with respect to the nearly east-trending displacement vector of the Ensenada Block to the north of the fault relative to a stable Baja Microplate to the south. These kinematics include nearly pure strike slip in the central portion of the ABF where the fault trends nearly E-W, and minor components of normal dip-slip motion on the NABF and eastern sections of the fault where the trends become more northerly. A pixel translation vector parallel to the trend of the ABF in the central segment (290 deg, 10.5 km) produces kinematics consistent with those described above. The block between the NABF and STF has a pixel translation vector parallel to the STF (291 deg, 3.5 km). We find these vectors are consistent with the kinematic variability of the fault system and realign several major drainages and ridges across the fault. This suggests these features formed prior to faulting, and they yield preferred values of offset: 10.5 km on the ABF, 7 km on the NABF, and 3.5 km on the STF. This model is consistent with the kinematic model proposed by Hamilton (1971) in which the ABF is a transform fault, linking extensional regions of Valle San Felipe and the Continental Borderlands.
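The pixel-translation idea can be sketched compactly. The original tool is a Perl script; the Python version below is an illustrative reimplementation under stated assumptions: the north-of-fault mask is a placeholder for the per-segment boundary test, and the sign convention (translating back along the azimuth to restore pre-faulting positions) is an assumption.

```python
# Hedged sketch: shift each DEM point north of the fault back along an
# azimuth by a slip magnitude to recover its pre-faulting position.
import numpy as np

def restore_pixels(easting, northing, elevation, north_of_fault,
                   azimuth_deg, slip_km):
    """Translate points north of the fault by -slip along the azimuth."""
    az = np.radians(azimuth_deg)
    dx = slip_km * 1000.0 * np.sin(az)  # east component, m
    dy = slip_km * 1000.0 * np.cos(az)  # north component, m
    e = np.where(north_of_fault, easting - dx, easting)
    n = np.where(north_of_fault, northing - dy, northing)
    return e, n, elevation

# Toy usage: restore 10.5 km of slip along azimuth 290 deg.
E = np.array([5.000e5, 5.001e5]); N = np.array([3.5000e6, 3.5001e6])
Z = np.array([120.0, 135.0]); mask = np.array([True, False])
print(restore_pixels(E, N, Z, mask, 290.0, 10.5))
```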
Model authoring system for fail safe analysis
NASA Technical Reports Server (NTRS)
Sikora, Scott E.
1990-01-01
The Model Authoring System is a prototype software application for generating fault tree analyses and failure mode and effects analyses for circuit designs. Utilizing established artificial intelligence and expert system techniques, the circuits are modeled as a frame-based knowledge base in an expert system shell, which allows the use of object oriented programming and an inference engine. The behavior of the circuit is then captured through IF-THEN rules, which then are searched to generate either a graphical fault tree analysis or failure modes and effects analysis. Sophisticated authoring techniques allow the circuit to be easily modeled, permit its behavior to be quickly defined, and provide abstraction features to deal with complexity.
A new fault diagnosis algorithm for AUV cooperative localization system
NASA Astrophysics Data System (ADS)
Shi, Hongyang; Miao, Zhiyong; Zhang, Yi
2017-10-01
Cooperative localization of multiple AUVs is a new kind of underwater positioning technology that not only improves positioning accuracy but also has many advantages a single AUV does not have. It is necessary to detect and isolate faults to increase the reliability and availability of the AUV cooperative localization system. In this paper, the Extended Multiple Model Adaptive Cubature Kalman Filter (EMMACKF) method is presented to detect faults. Sensor failures are simulated based on off-line experimental data. Experimental results show that faulty apparatus can be diagnosed effectively using the proposed method. Compared with the Multiple Model Adaptive Extended Kalman Filter and the Multi-Model Adaptive Unscented Kalman Filter, both accuracy and timeliness have been improved to some extent.
NASA Astrophysics Data System (ADS)
Pinar, Ali; Coskun, Zeynep; Mert, Aydin; Kalafat, Dogan
2015-04-01
The general consensus based on historical earthquake data points out that the last major moment release on the Prince's Islands fault was in 1766, which in turn signals increased seismic risk for the Istanbul metropolitan area, considering that most of the 20 mm/yr GPS-derived slip rate for the region is accommodated by that fault segment. The orientation of the Prince's Islands fault segment overlaps with the NW-SE direction of the maximum principal stress axis derived from the focal mechanism solutions of the large and moderate-sized earthquakes of the Marmara region. As such, the NW-SE-trending fault segment transfers the motion between the two E-W-trending branches of the North Anatolian fault zone: one extending from the Gulf of Izmit towards the Çınarcık basin, and the other extending between offshore Bakırköy and Silivri. The basic relation between the orientation of the maximum and minimum principal stress axes, the shear and normal stresses, and the orientation of a fault provides a clue to the strength of a fault, i.e., its frictional coefficient. Here, the angle between the fault normal and the maximum compressive stress axis is a key parameter, and a fault-normal orientation of the maximum compressive stress might be a necessary and sufficient condition for a creeping event. That relation also implies that when the trend of the sigma-1 axis is close to the strike of the fault, the shear stress acting on the fault plane approaches zero. On the other hand, the ratio between the shear and normal stresses acting on a fault plane is proportional to the frictional coefficient of the fault. Accordingly, the geometry between the Prince's Islands fault segment and the maximum principal stress axis matches a weak fault model. In the frame of the presentation we analyze seismological data acquired in the Marmara region and interpret the results in conjunction with the above-mentioned weak fault model.
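The stress relations invoked here are the standard 2D Mohr relations: for a fault whose normal makes angle theta with the sigma-1 axis, sigma_n = (s1+s3)/2 + (s1-s3)/2*cos(2*theta) and tau = (s1-s3)/2*sin(2*theta), so tau vanishes when sigma-1 is fault-normal or fault-parallel. The values below are illustrative, not Marmara stress estimates.

```python
# Sketch of resolved normal and shear stress on a fault plane from the
# principal stresses; values are illustrative placeholders.
import numpy as np

def fault_plane_stresses(s1, s3, theta_deg):
    """theta is the angle between the fault normal and the sigma-1 axis."""
    theta = np.radians(theta_deg)
    mean, dev = (s1 + s3) / 2.0, (s1 - s3) / 2.0
    sigma_n = mean + dev * np.cos(2.0 * theta)
    tau = dev * np.sin(2.0 * theta)
    return sigma_n, tau

for theta in (10.0, 30.0, 60.0, 85.0):
    sn, t = fault_plane_stresses(100.0, 40.0, theta)  # MPa
    print(f"theta={theta:4.1f} deg  sigma_n={sn:6.1f}  "
          f"tau={t:5.1f}  tau/sigma_n={t/sn:.2f}")
```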
LiDAR-Assisted identification of an active fault near Truckee, California
Hunter, L.E.; Howle, J.F.; Rose, R.S.; Bawden, G.W.
2011-01-01
We use high-resolution (1.5-2.4 points/m2) bare-earth airborne Light Detection and Ranging (LiDAR) imagery to identify, map, constrain, and visualize fault-related geomorphology in densely vegetated terrain surrounding Martis Creek Dam near Truckee, California. Bare-earth LiDAR imagery reveals a previously unrecognized and apparently youthful right-lateral strike-slip fault that exhibits laterally continuous tectonic geomorphic features over a 35-km-long zone. If these interpretations are correct, the fault, herein named the Polaris fault, may represent a significant seismic hazard to the greater Truckee-Lake Tahoe and Reno-Carson City regions. Three-dimensional modeling of an offset late Quaternary terrace riser indicates a minimum tectonic slip rate of 0.4 ± 0.1 mm/yr. Mapped fault patterns are fairly typical of regional patterns elsewhere in the northern Walker Lane and are in strong coherence with moderate-magnitude historical seismicity of the immediate area, as well as the current regional stress regime. Based on a range of surface-rupture lengths and depths to the base of the seismogenic zone, we estimate a maximum earthquake magnitude (M) for the Polaris fault of between 6.4 and 6.9.
Fault Diagnosis in HVAC Chillers
NASA Technical Reports Server (NTRS)
Choi, Kihoon; Namuru, Setu M.; Azam, Mohammad S.; Luo, Jianhui; Pattipati, Krishna R.; Patterson-Hine, Ann
2005-01-01
Modern buildings are being equipped with increasingly sophisticated power and control systems with substantial capabilities for monitoring and controlling the amenities. Operational problems associated with heating, ventilation, and air-conditioning (HVAC) systems plague many commercial buildings, often the result of degraded equipment, failed sensors, improper installation, poor maintenance, and improperly implemented controls. Most existing HVAC fault-diagnostic schemes are based on analytical models and knowledge bases. These schemes are adequate for generic systems. However, real-world systems significantly differ from the generic ones and necessitate modifications of the models and/or customization of the standard knowledge bases, which can be labor intensive. Data-driven techniques for fault detection and isolation (FDI) have a close relationship with pattern recognition, wherein one seeks to categorize the input-output data into normal or faulty classes. Owing to its simplicity and adaptability, customization of a data-driven FDI approach does not require in-depth knowledge of the HVAC system. It enables building system operators to improve energy efficiency and maintain the desired comfort level at a reduced cost. In this article, we consider a data-driven approach for FDI of chillers in HVAC systems. To diagnose the faults of interest in the chiller, we employ multiway dynamic principal component analysis (MPCA), multiway partial least squares (MPLS), and support vector machines (SVMs). The simulation of a chiller under various fault conditions is conducted using a standard chiller simulator from the American Society of Heating, Refrigerating, and Air-conditioning Engineers (ASHRAE). We validated our FDI scheme using experimental data obtained from different types of chiller faults.
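As a flavor of the data-driven idea, a generic PCA monitoring scheme (not the article's multiway MPCA/MPLS/SVM pipeline) fits PCA to healthy operating data and flags samples whose Hotelling T^2 statistic exceeds a training-set threshold. Data shapes below are placeholders.

```python
# Illustrative PCA-based fault detection: train on normal data, alarm
# when Hotelling's T^2 of new samples exceeds an empirical limit.
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(1)
X_normal = rng.normal(size=(500, 12))        # healthy chiller data
pca = PCA(n_components=4).fit(X_normal)

def t2(X):
    scores = pca.transform(X)
    return np.sum(scores**2 / pca.explained_variance_, axis=1)

threshold = np.percentile(t2(X_normal), 99)  # empirical 99% limit
X_new = rng.normal(loc=0.8, size=(5, 12))    # shifted (faulty) samples
print(t2(X_new) > threshold)                 # True -> fault alarm
```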
Modelling Fault Zone Evolution: Implications for fluid flow.
NASA Astrophysics Data System (ADS)
Moir, H.; Lunn, R. J.; Shipton, Z. K.
2009-04-01
Flow simulation models are of major interest to many industries, including hydrocarbon production, nuclear waste disposal, carbon dioxide sequestration, and mining. One of the major uncertainties in these models is in predicting the permeability of faults, principally in the detailed structure of the fault zone. Studying the detailed structure of a fault zone is difficult because of the inaccessible nature of sub-surface faults and also because of their highly complex nature: fault zones show a high degree of spatial and temporal heterogeneity, i.e., the properties of a fault change along strike and with time. It is well understood that faults influence fluid flow characteristics. They may act as a conduit or a barrier, or even as both, by blocking flow across the fault while promoting flow along it. Controls on fault hydraulic properties include cementation, stress field orientation, fault zone components, and fault zone geometry. Within brittle rocks, such as granite, fracture networks are limited but provide the dominant pathway for flow within this rock type. Research at the EU's Soultz-sous-Forêts Hot Dry Rock test site [Evans et al., 2005] showed that 95% of flow into the borehole was associated with a single fault zone at 3490 m depth, and that 10 open fractures account for the majority of flow within the zone. These data underline the critical role of faults in deep flow systems and the importance of achieving a predictive understanding of fault hydraulic properties. To improve estimates of fault zone permeability, it is important to understand the underlying hydro-mechanical processes of fault zone formation. In this research, we explore the spatial and temporal evolution of fault zones in brittle rock through development and application of a 2D hydro-mechanical finite element model, MOPEDZ. The authors have previously presented numerical simulations of the development of fault linkage structures from two or three pre-existing joints, the results of which compare well to features observed in mapped exposures. For these simple simulations from a small number of pre-existing joints, the fault zone evolves in a predictable way, with fault linkage governed by three key factors: the ratio of σ1 (maximum compressive stress) to σ3 (minimum compressive stress), the original geometry of the pre-existing structures (contractional vs. dilational geometries), and the orientation of the principal stress direction (σ1) relative to the pre-existing structures. In this paper we present numerical simulations of the temporal and spatial evolution of fault linkage structures from many pre-existing joints. The initial locations, sizes, and orientations of these joints are based on field observations of cooling joints in granite from the Sierra Nevada. We show that the constantly evolving geometry and local stress field perturbations contribute significantly to fault zone evolution. The locations and orientations of linkage structures previously predicted by the simple simulations are consistent with the predicted geometries in the more complex fault zones; however, the exact location at which individual structures form is not easily predicted. Markedly different fault zone geometries are predicted when the pre-existing joints are rotated with respect to the maximum compressive stress. In particular, fault surfaces range from evolving smooth linear structures to producing complex 'stepped' fault zone geometries. These geometries have a significant effect on simulations of along-fault and across-fault flow.
The study of active tectonic based on hyperspectral remote sensing
NASA Astrophysics Data System (ADS)
Cui, J.; Zhang, S.; Zhang, J.; Shen, X.; Ding, R.; Xu, S.
2017-12-01
As one of the latest technical methods, hyperspectral remote sensing technology has been widely used in every branch of the geosciences. However, its use in the study of active structures remains largely unexplored. Hyperspectral remote sensing, with its high spectral resolution, continuous spectra, continuous spatial coverage, low cost, etc., has great potential in the areas of stratum division and fault identification. Blind fault identification in plains and invisible fault discrimination in loess strata are two pressing problems in current active fault research. Thus, the study of active faults based on hyperspectral technology has great theoretical significance and practical value. Magnetic susceptibility (MS) records can reflect the rhythmic alternation of the formation. Previous studies have shown that MS correlates with spectral features. In this study, the Emaokou section, located to the northwest of the town of Huairen in Shanxi Province, was chosen for the study of invisible faults. We collected data from the Emaokou section, including spectral data, hyperspectral images, and MS data. MS models based on spectral features were established and applied to the UHD185 image for MS mapping. The results show that the MS map corresponds well to the loess sequences; it can recognize strata that cannot be identified by the naked eye. An invisible fault has been found in this section, which is useful for paleoearthquake analysis. Faults act as conduits for the migration of terrestrial gases, so fault zones, especially structurally weak zones such as intersections or bends of faults, may have different material compositions. We took the Xiadian fault for study. Several samples across the fault were collected and measured with an ASD FieldSpec 3 spectrometer. A spectral classification method was used for spectral analysis, and we found that the spectra of the fault zone have four distinctive spectral regions (550-580 nm, 600-700 nm, 700-800 nm, and 800-900 nm) that differ from the spectra of the non-fault zone. This could help us locate the fault zone accurately; the located result corresponds well to the result of the geophysical prospecting method. The above studies show that hyperspectral remote sensing technology provides a new method for active fault study.
NASA Astrophysics Data System (ADS)
Zhuo, Yan-Qun; Ma, Jin; Guo, Yan-Shuang; Ji, Yun-Tao
In stick-slip experiments modeling the occurrence of earthquakes, the meta-instability stage (MIS) is the process that occurs between the peak differential stress and the onset of sudden stress drop. The MIS is the final stage before a fault becomes unstable. Thus, identification of the MIS can help to assess the proximity of the fault to the earthquake critical time. A series of stick-slip experiments on a simulated strike-slip fault were conducted using a biaxial servo-controlled press machine. Digital images of the sample surface were obtained via a high-speed camera and processed using a digital image correlation method for analysis of the fault displacement field. Two parameters, A and S, are defined based on fault displacement. A, the normalized length of local pre-slip areas identified by the strike-slip component of fault displacement, is the ratio of the total length of the local pre-slip areas to the length of the fault within the observed areas and quantifies the growth of local unstable areas along the fault. S, the normalized entropy of fault displacement directions, is derived from Shannon entropy and quantifies the disorder of fault displacement directions along the fault. Based on the fault displacement field of three stick-slip events under different loading rates, the experimental results show the following: (1) Both A and S can be expressed as power functions of the normalized time during the non-linearity stage and the MIS. The peak curvatures of A and S represent the onsets of the distinct increase of A and the distinct reduction of S, respectively. (2) During each stick-slip event, the fault evolves into the MIS soon after the curvatures of both A and S reach their peak values, which indicates that the MIS is a synergetic process from independent to cooperative behavior among various parts of a fault and can be approximately identified via the peak curvatures of A and S. A possible application of these experimental results to field conditions is provided. However, further validation is required via additional experiments and exercises.
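The two parameters lend themselves to a short sketch. Below, A is a pre-slip length fraction and S a Shannon entropy of binned displacement directions normalized by log(K) for K bins; the thresholding and binning choices are assumptions for illustration, not the authors' exact processing chain.

```python
# Hedged sketch of the A and S displacement-field parameters.
import numpy as np

def normalized_preslip_length(slip, threshold):
    """A: fraction of fault patches whose slip exceeds a pre-slip threshold."""
    return np.count_nonzero(np.abs(slip) > threshold) / slip.size

def normalized_direction_entropy(angles_rad, n_bins=12):
    """S: Shannon entropy of displacement directions, normalized to [0, 1]."""
    counts, _ = np.histogram(angles_rad, bins=n_bins, range=(-np.pi, np.pi))
    p = counts[counts > 0] / counts.sum()
    return -np.sum(p * np.log(p)) / np.log(n_bins)

rng = np.random.default_rng(2)
slip = rng.normal(scale=1e-6, size=200)        # m, along-fault patches
angles = rng.uniform(-np.pi, np.pi, size=200)  # displacement directions
print(normalized_preslip_length(slip, 1e-6),
      normalized_direction_entropy(angles))
```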
Diagnosis of delay-deadline failures in real time discrete event models.
Biswas, Santosh; Sarkar, Dipankar; Bhowal, Prodip; Mukhopadhyay, Siddhartha
2007-10-01
In this paper, a method for fault detection and diagnosis (FDD) of real-time systems is developed. A modeling framework termed the real-time discrete event system (RTDES) model is presented, and a mechanism for FDD of the same is developed. The use of the RTDES framework for FDD is an extension of works reported in the discrete event system (DES) literature, which are based on finite state machines (FSM). FDD of RTDES models is suited to real-time systems because of its capability of representing timing faults leading to failures in terms of erroneous delays and deadlines, which FSM-based approaches cannot address. The concept of measurement restriction of variables is introduced for RTDES, and the consequent equivalence of states and indistinguishability of transitions are characterized. Faults are modeled in terms of an unmeasurable condition variable in the state map. Diagnosability is defined and a procedure for constructing a diagnoser is provided. A checkable property of the diagnoser is shown to be a necessary and sufficient condition for diagnosability. The methodology is illustrated with an example of a hydraulic cylinder.
Improving Distributed Diagnosis Through Structural Model Decomposition
NASA Technical Reports Server (NTRS)
Bregon, Anibal; Daigle, Matthew John; Roychoudhury, Indranil; Biswas, Gautam; Koutsoukos, Xenofon; Pulido, Belarmino
2011-01-01
Complex engineering systems require efficient fault diagnosis methodologies, but centralized approaches do not scale well, and this motivates the development of distributed solutions. This work presents an event-based approach for distributed diagnosis of abrupt parametric faults in continuous systems, by using the structural model decomposition capabilities provided by Possible Conflicts. We develop a distributed diagnosis algorithm that uses residuals computed by extending Possible Conflicts to build local event-based diagnosers based on global diagnosability analysis. The proposed approach is applied to a multitank system, and results demonstrate an improvement in the design of local diagnosers. Since local diagnosers use only a subset of the residuals, and use subsystem models to compute residuals (instead of the global system model), the local diagnosers are more efficient than previously developed distributed approaches.
Ring faults and ring dikes around the Orientale basin on the Moon.
Andrews-Hanna, Jeffrey C; Head, James W; Johnson, Brandon; Keane, James T; Kiefer, Walter S; McGovern, Patrick J; Neumann, Gregory A; Wieczorek, Mark A; Zuber, Maria T
2018-08-01
The Orientale basin is the youngest and best-preserved multiring impact basin on the Moon, having experienced only modest modification by subsequent impacts and volcanism. Orientale is often treated as the type example of a multiring basin, with three prominent rings outside of the inner depression: the Inner Rook Montes, the Outer Rook Montes, and the Cordillera. Here we use gravity data from NASA's Gravity Recovery and Interior Laboratory (GRAIL) mission to reveal the subsurface structure of Orientale and its ring system. Gradients of the gravity data reveal a continuous ring dike intruded into the Outer Rook along the plane of the fault associated with the ring scarp. The volume of this ring dike is ~18 times greater than the volume of all extrusive mare deposits associated with the basin. The gravity gradient signature of the Cordillera ring indicates an offset along the fault across a shallow density interface, interpreted to be the base of the low-density ejecta blanket. Both gravity gradients and crustal thickness models indicate that the edge of the central cavity is shifted inward relative to the equivalent Inner Rook ring at the surface. Models of the deep basin structure show inflections along the crust-mantle interface at both the Outer Rook and Cordillera rings, indicating that the basin ring faults extend from the surface to at least the base of the crust. Fault dips range from 13-22° for the Cordillera fault in the northeastern quadrant, to 90° for the Outer Rook in the northwestern quadrant. The fault dips for both outer rings are lowest in the northeast, possibly due to the effects of either the direction of projectile motion or regional gradients in pre-impact crustal thickness. Similar ring dikes and ring faults are observed around the majority of lunar basins.
NASA Astrophysics Data System (ADS)
Yashvantrai Vyas, Bhargav; Maheshwari, Rudra Prakash; Das, Biswarup
2016-06-01
The application of series compensation in extra high voltage (EHV) transmission lines makes the protection task difficult for engineers, due to alterations in system parameters and measurements. The problem is amplified by the inclusion of electronically controlled compensation such as thyristor controlled series compensation (TCSC), as it produces harmonics and rapid changes in system parameters during faults, associated with TCSC control action. This paper presents a pattern-recognition-based fault type identification approach using a support vector machine. The scheme uses only a half cycle of post-fault data from the three phase currents to accomplish the task. Changes in current signal features during faults are taken as the discriminatory measure. The developed scheme is tested over a large set of fault data with variations in system and fault parameters. These fault cases were generated with PSCAD/EMTDC on a 400 kV, 300 km transmission line model. The developed algorithm has proved well suited to implementation on TCSC-compensated lines, with improved accuracy and speed.
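A minimal sketch of the pattern-recognition step follows. The feature construction (a few per-phase current statistics per sample) and the synthetic class-shifted data are placeholders standing in for the paper's half-cycle feature set and PSCAD/EMTDC simulations.

```python
# Illustrative SVM fault-type classifier on placeholder features.
import numpy as np
from sklearn.svm import SVC
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline

rng = np.random.default_rng(3)
n, n_cls = 400, 4                         # e.g., A-g, B-g, C-g, 3-phase
y = rng.integers(0, n_cls, size=n)
# 3 features per sample (stand-ins for phase a/b/c half-cycle stats),
# with class-dependent mean shifts so the classes are separable.
X = rng.normal(size=(n, 3)) + np.eye(n_cls)[:, :3][y] * 2.0

clf = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=10.0))
clf.fit(X[:300], y[:300])
print("holdout accuracy:", clf.score(X[300:], y[300:]))
```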
NASA Technical Reports Server (NTRS)
Hoppa, Mary Ann; Wilson, Larry W.
1994-01-01
There are many software reliability models which try to predict future performance of software based on data generated by the debugging process. Our research has shown that by improving the quality of the data one can greatly improve the predictions. We are working on methodologies which control some of the randomness inherent in the standard data generation processes in order to improve the accuracy of predictions. Our contribution is twofold in that we describe an experimental methodology using a data structure called the debugging graph and apply this methodology to assess the robustness of existing models. The debugging graph is used to analyze the effects of various fault recovery orders on the predictive accuracy of several well-known software reliability algorithms. We found that, along a particular debugging path in the graph, the predictive performance of different models can vary greatly. Similarly, just because a model 'fits' a given path's data well does not guarantee that the model would perform well on a different path. Further we observed bug interactions and noted their potential effects on the predictive process. We saw that not only do different faults fail at different rates, but that those rates can be affected by the particular debugging stage at which the rates are evaluated. Based on our experiment, we conjecture that the accuracy of a reliability prediction is affected by the fault recovery order as well as by fault interaction.
NASA Astrophysics Data System (ADS)
Polverino, Pierpaolo; Esposito, Angelo; Pianese, Cesare; Ludwig, Bastian; Iwanschitz, Boris; Mai, Andreas
2016-02-01
In the current energy scenario, Solid Oxide Fuel Cells (SOFCs) exhibit appealing features which make them suitable for environmentally friendly power production, especially for stationary applications. An example is represented by micro-combined heat and power (μ-CHP) generation units based on SOFC stacks, which are able to produce electric and thermal power with high efficiency and low pollutant and greenhouse gas emissions. However, the main limitations to their diffusion into the mass market are high maintenance and production costs and short lifetime. To improve these aspects, current research activity focuses on the development of robust and generalizable diagnostic techniques, aimed at detecting and isolating faults within the entire system (i.e., SOFC stack and balance of plant). Coupled with appropriate recovery strategies, diagnosis can prevent undesired system shutdowns during faulty conditions, with consequent lifetime increase and maintenance cost reduction. This paper deals with the on-line experimental validation of a model-based diagnostic algorithm applied to a pre-commercial SOFC system. The proposed algorithm exploits a Fault Signature Matrix based on a Fault Tree Analysis and improved through fault simulations. The algorithm is characterized on the considered system and validated by means of experimental induction of faulty states in controlled conditions.
NASA Astrophysics Data System (ADS)
Giletycz, Slawomir Jack; Chang, Chung-Pai; Lin, Andrew Tien-Shun; Ching, Kuo-En; Shyu, J. Bruce H.
2017-11-01
The fault systems of Taiwan have been repeatedly studied over many decades. Still, new surveys consistently bring fresh insights into their mechanisms, activity, and geological characteristics. The neotectonic map of Taiwan is under constant development. Although the most active areas manifest at the on-land boundary of the Philippine Sea Plate and Eurasia (a suture zone known as the Longitudinal Valley), and at the southwestern area of the Western Foothills, fault systems affect the entire island. The Hengchun Peninsula represents the most recently emerged part of the Taiwan orogen. This narrow 20-25 km peninsula appears relatively aseismic; however, at its western flank the peninsula manifests tectonic activity along the Hengchun Fault. In this study, we surveyed the tectonic characteristics of the Hengchun Fault. Based on fieldwork, four years of monitoring fault displacement in conjunction with levelling data, core analysis, UAV surveys, and mapping, we have re-evaluated the fault mechanisms as well as the geological formations of the hanging wall and footwall. We surveyed features that allowed us to modify the existing model of the fault in two ways: 1) correcting the location of the fault line in the southern area of the peninsula by moving it westwards about 800 m; 2) defining the lithostratigraphy of the hanging wall and footwall of the fault. A bathymetric map of the southern area of the Hengchun Peninsula, obtained from the Atomic Energy Council, extends the fault trace offshore to the south and distinctively matches our proposed fault line. These insights, coupled with crust-scale tomographic data from across the Manila accretionary system, form the basis of our opinion that the Hengchun Fault may play a major role in the tectonic evolution of the southern part of the Taiwan orogen.
"The Big One" in Taipei: Numerical Simulation Study of the Sanchiao Fault Earthquake Scenarios
NASA Astrophysics Data System (ADS)
Wang, Y.; Lee, S.; Ng, S.
2012-12-01
The Sanchiao fault is a western boundary fault of the Taipei basin, located in northern Taiwan close to the densely populated Taipei metropolitan area. According to the report of the Central Geological Survey, the terrestrial portion of the Sanchiao fault can be divided into north and south segments. The south segment is about 13 km long and the north segment about 21 km. A recent study demonstrated that about 40 km of the fault trace extends into the marine area offshore of northern Taiwan. Combining the marine and terrestrial parts, the total fault length of the Sanchiao fault could be nearly 70 kilometers. Based on the recipe proposed by Irikura and Miyake (2010), we estimate that the Sanchiao fault has the potential to produce an earthquake with moment magnitude larger than Mw 7.2. The total area of fault rupture is about 1323 km2, the asperity covers 22% of the total fault plane, and the slips on the asperity and background areas are 2.8 m and 1.6 m, respectively. Using a characteristic source model based on these assumptions, 3D spectral-element simulations indicate that peak ground acceleration (PGA) is significantly stronger along the surface fault rupture. Basin effects play an important role when waves propagate in the Taipei basin, amplifying the seismic waves and prolonging the shaking for a very long time. It is worth noting that when the rupture starts from the southern tip of the fault, i.e., when the hypocenter is located in the basin, the impact of a Sanchiao fault earthquake on the Taipei metropolitan area will be the most serious. The strong shaking can cover the entire Taipei city and even extend across the basin to the eastern-most part of northern Taiwan.
Modeling Sensor Reliability in Fault Diagnosis Based on Evidence Theory
Yuan, Kaijuan; Xiao, Fuyuan; Fei, Liguo; Kang, Bingyi; Deng, Yong
2016-01-01
Sensor data fusion plays an important role in fault diagnosis. Dempster-Shafer (D-S) evidence theory is widely used in fault diagnosis, since it is efficient at combining evidence from different sensors. However, in situations where the evidence highly conflicts, it may produce counterintuitive results. To address this issue, a new method is proposed in this paper. Not only the static sensor reliability, but also the dynamic sensor reliability, is taken into consideration. The evidence distance function and the belief entropy are combined to obtain the dynamic reliability of each sensor report. A weighted averaging method is adopted to modify the conflicting evidence by assigning different weights to evidence according to sensor reliability. The proposed method has better performance in conflict management and fault diagnosis because the information volume of each sensor report is taken into consideration. An application in fault diagnosis based on sensor fusion is illustrated to show the efficiency of the proposed method. The results show that the proposed method improves the accuracy of fault diagnosis from 81.19% to 89.48% compared to existing methods. PMID:26797611
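For reference, the core operation that such weighted-averaging schemes modify is Dempster's rule of combination. The implementation below is a minimal, generic version; the two mass functions are illustrative sensor reports, not the paper's data.

```python
# Minimal Dempster's rule of combination for two bodies of evidence
# over the same frame of discernment. Mass functions are dicts mapping
# frozenset hypotheses to masses.
from itertools import product

def dempster_combine(m1, m2):
    combined, conflict = {}, 0.0
    for (A, a), (B, b) in product(m1.items(), m2.items()):
        inter = A & B
        if inter:
            combined[inter] = combined.get(inter, 0.0) + a * b
        else:
            conflict += a * b        # mass assigned to the empty set
    k = 1.0 - conflict               # normalization factor
    return {A: v / k for A, v in combined.items()}, conflict

F1, F2 = frozenset({"F1"}), frozenset({"F2"})
m1 = {F1: 0.8, F2: 0.1, F1 | F2: 0.1}   # sensor 1 report (illustrative)
m2 = {F1: 0.6, F2: 0.3, F1 | F2: 0.1}   # sensor 2 report (illustrative)
print(dempster_combine(m1, m2))
```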
Toushmalani, Reza
2013-01-01
The purpose of this study was to compare the performance of two methods for gravity inversion of a fault. The first method, particle swarm optimization (PSO), is a heuristic global optimization algorithm based on swarm intelligence; it derives from research on the movement behavior of bird flocks and fish schools. The second method, the Levenberg-Marquardt algorithm (LM), is an approximation to Newton's method, also used for training artificial neural networks (ANNs). In this paper, we first discuss the gravity field of a fault, then describe the PSO and LM algorithms and present their application to solving the inverse problem of a fault. Most importantly, the parameters for the algorithms are given for the individual tests. The inverse solutions reveal that the fault model parameters agree quite well with the known results. Better agreement was found between the predicted model anomaly and the observed gravity anomaly with the PSO method than with the LM method.
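A bare-bones PSO of the kind compared here is sketched below against a generic misfit function. The gravity forward model is left as a placeholder (a simple quadratic test misfit), and the swarm constants are common textbook choices, not the paper's settings.

```python
# Minimal particle swarm optimizer; the misfit is a stand-in for a
# gravity forward-model residual. All constants are illustrative.
import numpy as np

def pso(misfit, bounds, n_particles=30, n_iter=200, w=0.7, c1=1.5, c2=1.5):
    rng = np.random.default_rng(4)
    lo, hi = bounds
    x = rng.uniform(lo, hi, size=(n_particles, lo.size))
    v = np.zeros_like(x)
    pbest, pbest_val = x.copy(), np.apply_along_axis(misfit, 1, x)
    gbest = pbest[np.argmin(pbest_val)]
    for _ in range(n_iter):
        r1, r2 = rng.random(x.shape), rng.random(x.shape)
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)
        x = np.clip(x + v, lo, hi)
        val = np.apply_along_axis(misfit, 1, x)
        improved = val < pbest_val
        pbest[improved], pbest_val[improved] = x[improved], val[improved]
        gbest = pbest[np.argmin(pbest_val)]
    return gbest, pbest_val.min()

# Placeholder misfit: recover fault parameters (depth, offset) = (3, 1.5).
target = np.array([3.0, 1.5])
misfit = lambda m: np.sum((m - target) ** 2)
print(pso(misfit, (np.array([0.0, 0.0]), np.array([10.0, 5.0]))))
```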
NASA Astrophysics Data System (ADS)
Lienkaemper, James J.; Williams, Patrick L.
1999-07-01
WGCEP90 estimated the Hayward fault to have a high probability (0.45 in 30 yr) of producing a future M7 Bay Area earthquake. This was based on a generic recurrence time and an unverified segmentation model, because there were few direct observations for the southern fault and none for the northern Hayward fault. To better constrain recurrence and segmentation of the northern Hayward fault, we trenched in north Oakland. Unexpectedly, we observed evidence of surface rupture probably from the M7 1868 earthquake. This extends the limit of that surface rupture 13 km north of the segmentation boundary used in the WGCEP90 model and forces serious re-evaluation of the current two-segment paradigm. Although we found that major prehistoric ruptures have occurred here, we could not radiocarbon date them. However, the last major prehistoric event appears correlative with a recently recognized event 13 km to the north dated AD 1640-1776.
Dynamic rupture models of subduction zone earthquakes with off-fault plasticity
NASA Astrophysics Data System (ADS)
Wollherr, S.; van Zelst, I.; Gabriel, A. A.; van Dinther, Y.; Madden, E. H.; Ulrich, T.
2017-12-01
Modeling tsunami genesis based on purely elastic seafloor displacement typically underpredicts tsunami sizes. Dynamic rupture simulations allow us to analyse whether plastic energy dissipation is a missing rheological component, by capturing the complex interplay of the rupture front, emitted seismic waves, and the free surface in the accretionary prism. Strike-slip models with off-fault plasticity suggest decreasing rupture speed and extensive plastic yielding mainly at shallow depths. For simplified subduction geometries, inelastic deformation on the verge of Coulomb failure may enhance vertical displacement, which in turn favors the generation of large tsunamis (Ma, 2012). However, constraining appropriate initial conditions in terms of fault geometry, initial fault stress, and strength remains challenging. Here, we present dynamic rupture models of subduction zones constrained by long-term seismo-thermo-mechanical (STM) modeling without any a priori assumption of regions of failure. The STM model provides self-consistent slab geometries, as well as stress and strength initial conditions which evolve in response to tectonic stresses, temperature, gravity, plasticity, and pressure (van Dinther et al. 2013). Coseismic slip and coupled seismic wave propagation are modelled using the software package SeisSol (www.seissol.org), suited for complex fault zone structures and topography/bathymetry. SeisSol allows for local time-stepping, which drastically reduces the time-to-solution (Uphoff et al., 2017). This is particularly important in large-scale scenarios resolving small-scale features, such as the shallow angle between the megathrust fault and the free surface. Our dynamic rupture model uses a Drucker-Prager plastic yield criterion and accounts for thermal pressurization around the fault, mimicking the effect of pore pressure changes due to frictional heating. We first analyze the influence of this rheology on rupture dynamics and tsunamigenic properties, i.e., seafloor displacement, in 2D. Finally, we use the same rheology in a large-scale 3D scenario of the 2004 Sumatra earthquake to shed light on the source process that caused the subsequent devastating tsunami.
NASA Astrophysics Data System (ADS)
Johnson, B.; Zhurina, E. N.
2001-12-01
We are developing and assessing field testing and analysis methodologies for quantitative characterization of aquifer heterogeneities using data measured in an array of multilevel monitoring wells (MLW) during pumping and recovery well tests. We have developed a unique field laboratory to determine the permeability field in a 20 m by 40 m by 70 m volume in the fault-partitioned, siliciclastic Hickory aquifer system in central Texas. The site incorporates both stratigraphic variations and a normal fault system that partially offsets the aquifer and impedes cross-fault flow. We constructed a high-resolution geologic model of the site based upon 1050 m of core and a suite of geophysical logs from eleven closely spaced (3-10 m), continuously cored boreholes to depths of 125 m. Westbay multilevel monitoring systems installed in eight holes provide 94 hydraulically isolated measurement zones and 25 injection zones. A good geologic model is critical to proper installation of the MLW. Packers are positioned at all significant fault piercements and at selected, laterally extensive, clay-rich strata; packers in adjacent MLW bracket selected hydrostratigraphic intervals. Pump tests utilized two uncased, fully penetrating irrigation wells that straddle the fault system in close proximity (7 to 65 m) to the MLW. Pumping and recovery transient pressure histories were measured in 85 zones using pressure transducers with a resolution of 55 Pa (0.008 psi). The hydraulic response is that of an anisotropic, unconfined aquifer. The transient pressure histories vary significantly from zone to zone in a single MLW as well as between adjacent MLW. Derivative plots are especially useful for differentiating details of pressure histories. Based on the geologic model, the derivative curve of a zone reflects its absolute vertical position, vertical stratigraphic position, and proximity to either a fault or a significant stratigraphic heterogeneity. Additional forward modeling is needed to assist qualitative interpretation of response curves, and prior geologic knowledge appears critical. Quantitative interpretation of the transient pressure histories requires a numerical aquifer response model coupled with a geophysical inversion algorithm.
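Since derivative plots carry much of the diagnostic weight here, a short sketch of the standard log-time pressure derivative follows; the synthetic drawdown curve is illustrative, not the site's data.

```python
import numpy as np

def pressure_derivative(t, dp):
    """dp/dln(t) via central differences in log time (interior points)."""
    lnt = np.log(t)
    deriv = np.empty_like(dp)
    deriv[1:-1] = (dp[2:] - dp[:-2]) / (lnt[2:] - lnt[:-2])
    deriv[0], deriv[-1] = deriv[1], deriv[-2]   # pad endpoints
    return deriv

t = np.logspace(-2, 2, 200)            # elapsed time, hours
dp = 5.0 * np.log(1.0 + t / 0.05)      # synthetic drawdown, kPa
print(pressure_derivative(t, dp)[:5])  # flattens at late time for radial flow
```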
Common faults and their impacts for rooftop air conditioners
DOE Office of Scientific and Technical Information (OSTI.GOV)
Breuker, M.S.; Braun, J.E.
This paper identifies important faults and their performance impacts for rooftop air conditioners. The frequencies of occurrence and the relative costs of service for different faults were estimated through analysis of service records. Several of the important and difficult-to-diagnose refrigeration cycle faults were simulated in the laboratory, and their impacts on several performance indices were quantified through transient testing for a range of conditions and fault levels. The transient test results indicated that fault detection and diagnostics could be performed using methods that incorporate steady-state assumptions and models. Furthermore, the fault testing led to a set of generic rules for the impacts of faults on measurements that could be used for fault diagnoses. The average impacts of the faults on cooling capacity and coefficient of performance (COP) were also evaluated. Based upon the results, all of the faults are significant at the levels introduced and should be detected and diagnosed by an FDD system. The data set obtained during this work was very comprehensive and was used to design and evaluate the performance of an FDD method that will be reported in a future paper.
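The rule-based diagnosis the abstract describes can be pictured with a small sketch like the following; the feature names, residual signs, and fault rules are hypothetical stand-ins for the paper's generic rules, not values taken from it.

```python
# Hypothetical residual-sign rules: +1 means measured above the steady-state
# model prediction, -1 below, 0 within tolerance. Real FDD rule sets are
# derived from laboratory fault testing.
FAULT_RULES = {
    "low refrigerant charge": {"superheat": 1, "subcooling": -1, "capacity": -1},
    "condenser fouling":      {"superheat": 0, "subcooling": -1, "capacity": -1},
}

def diagnose(measured, predicted, tol=1.0):
    signs = {}
    for key, pred in predicted.items():
        resid = measured[key] - pred
        signs[key] = 0 if abs(resid) < tol else (1 if resid > 0 else -1)
    return [fault for fault, rule in FAULT_RULES.items() if rule == signs]

measured = {"superheat": 14.0, "subcooling": 3.0, "capacity": 80.0}
predicted = {"superheat": 10.0, "subcooling": 7.0, "capacity": 95.0}  # model
print(diagnose(measured, predicted))   # -> ['low refrigerant charge']
```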
NASA Astrophysics Data System (ADS)
Alatorre-Zamora, Miguel Angel; Campos-Enríquez, José Oscar; Fregoso-Becerra, Emilia; Quintanar-Robles, Luis; Toscano-Fletes, Roberto; Rosas-Elguera, José
2018-03-01
The Ameca tectonic depression (ATD) is located at the NE of the Jalisco Block, along the southwestern fringe of the NW-SE trending Tepic-Zacoalco Rift, in the west-central part of the Trans-Mexican Volcanic Belt (TMVB), western Mexico. To characterize its shallow crustal structure, we conducted a gravity survey based on nine N-S gravity profiles across the western half of the Ameca Valley. The Bouguer residual anomalies are characterized by a central low between two zones of positive gravity values with marked gravity gradients. These anomalies have a general NW-SE trend similar to that of the Tepic-Zacoalco Rift. Basement topography along these profiles was obtained by means of 1) Tsuboi-type inverse modeling and 2) forward modeling. Slopes dipping approximately 10° northward are modeled in the southern half, with south-tilted, down-faulted blocks of the Cretaceous granitic basement and its volcano-sedimentary cover along sub-vertical and intermediate normal faults, whereas southward-dipping slopes of almost 15° are observed in the northern half. According to the features of the obtained models, this depression corresponds to a slightly asymmetric graben. The Ameca Fault is part of the master fault system along its northern limit. The quantitative interpretation shows an approximately 500 to 1100 m thick volcano-sedimentary infill capped by alluvial products. This study has several implications concerning the limit between the Jalisco Block and the Tepic-Zacoalco Rift. The established shallow crustal structure points to the existence of a major listric fault with its detachment surface beneath the Tepic-Zacoalco Rift; the Ameca Fault is interpreted as a secondary listric fault. The models indicate the presence of granitic bodies of the Jalisco Block beneath the TMVB volcanic products of the Tepic-Zacoalco Rift, which implies that the limit between these two regional structures is not simple but involves a complex transition zone. A generic model suggests that extension-related normal faulting has operated as a mechanism in the evolution of this rift. Analysis of seismicity affecting the study area and its surroundings indicates that the inferred faults are active.
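For readers unfamiliar with Tsuboi-type inversion, the first-order arithmetic is the Bouguer slab relation, sketched below; the density contrast is an assumed illustrative value, and the result happens to fall in the same few-hundred-metre range as the infill thickness quoted above.

```python
import numpy as np

G = 6.674e-11    # gravitational constant, m^3 kg^-1 s^-2
drho = -400.0    # kg/m^3, sedimentary fill vs. granitic basement (assumed)

def slab_depth(dg_mgal):
    """Invert a residual Bouguer low for fill thickness: dg = 2*pi*G*drho*h."""
    dg = np.asarray(dg_mgal) * 1e-5           # mGal -> m/s^2
    return dg / (2.0 * np.pi * G * drho)      # thickness of low-density fill, m

print(slab_depth([-10.0, -20.0]))             # ~ [596, 1193] m
```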
Constraining earthquake source inversions with GPS data: 1. Resolution-based removal of artifacts
Page, M.T.; Custodio, S.; Archuleta, R.J.; Carlson, J.M.
2009-01-01
We present a resolution analysis of an inversion of GPS data from the 2004 Mw 6.0 Parkfield earthquake. This earthquake was recorded at thirteen 1-Hz GPS receivers, providing a truly coseismic data set that can be used to infer the static slip field. We find that the resolution of our inverted slip model is poor at depth and near the edges of the modeled fault plane that are far from GPS receivers. The spatial heterogeneity of the model resolution in the static field inversion leads to artifacts in poorly resolved areas of the fault plane. These artifacts look qualitatively similar to asperities commonly seen in the final slip models of earthquake source inversions, but in this inversion they are caused by a surplus of free parameters. The location of the artifacts depends on the station geometry and the assumed velocity structure. We demonstrate that a nonuniform gridding of model parameters on the fault can remove these artifacts from the inversion. We generate a nonuniform grid with a grid spacing that matches the local resolution length on the fault and show that it outperforms uniform grids, which either generate spurious structure in poorly resolved regions or lose recoverable information in well-resolved areas of the fault. In a synthetic test, the nonuniform grid correctly averages slip in poorly resolved areas of the fault while recovering small-scale structure near the surface. Finally, we present an inversion of the Parkfield GPS data set on the nonuniform grid and analyze the errors in the final model. Copyright 2009 by the American Geophysical Union.
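The resolution diagnostic that motivates the nonuniform grid can be sketched in a few lines: for a damped least-squares inversion, the diagonal of the model resolution matrix flags the poorly resolved slip patches. The Green's function matrix below is a random stand-in, and the damping value is arbitrary.

```python
import numpy as np

rng = np.random.default_rng(0)
n_data, n_patches = 26, 60    # e.g. 13 GPS stations x 2 components (assumed)
G = rng.normal(size=(n_data, n_patches))   # stand-in for elastic Green's fns
lam = 1.0                                  # damping (illustrative)

# m_est = (G^T G + lam*I)^-1 G^T d, so R = (G^T G + lam*I)^-1 G^T G maps
# true slip to estimated slip; diag(R) ~ 1 well resolved, ~ 0 poorly resolved.
GtG = G.T @ G
R = np.linalg.solve(GtG + lam * np.eye(n_patches), GtG)
resolution = np.diag(R)
print(resolution.min(), resolution.max())  # motivates coarser patches where low
```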
Seismic Hazard Analysis for Armenia and its Surrounding Areas
NASA Astrophysics Data System (ADS)
Klein, E.; Shen-Tu, B.; Mahdyiar, M.; Karakhanyan, A.; Pagani, M.; Weatherill, G.; Gee, R. C.
2017-12-01
The Republic of Armenia is located within the central part of a large, 800 km wide, intracontinental collision zone between the Arabian and Eurasian plates. Active deformation occurs along numerous structures in the form of faulting, folding, and volcanism distributed throughout the entire zone from the Bitlis-Zagros suture belt to the Greater Caucasus Mountains, and between the relatively rigid Black Sea and Caspian Sea blocks, with no single structure that can be claimed as predominant. In recent years, significant work has been done on mapping active faults and on compiling and reviewing historic and paleoseismological studies in the region, especially in Armenia; these recent research contributions have greatly improved our understanding of the seismogenic sources and their characteristics. In this study we performed a seismic hazard analysis for Armenia and its surrounding areas using the latest detailed geological and paleoseismological information on active faults, strain rates estimated from kinematic modeling of GPS data, and all available historic earthquake data. The seismic source model uses a combination of characteristic earthquake and gridded seismicity models to take advantage of the detailed knowledge of the known faults while acknowledging the distributed deformation and regional tectonic environment of the collision zone. In addition, the fault model considers single- and multi-segment rupture scenarios in which earthquakes can rupture any part of a multi-segment fault zone. The ground motion model uses a set of ground motion prediction equations (GMPEs) selected from a pool of GMPEs based on the assessment of each GMPE against the available strong motion data in the region. The hazard is computed with GEM's OpenQuake engine. We will present final hazard results and discuss the uncertainties associated with the various input data and their impact on the hazard at various locations.
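As a minimal illustration of the hazard arithmetic such studies end with, the standard Poisson conversion from combined annual exceedance rates to a probability of exceedance is sketched below; the rates are invented, not Armenia's.

```python
import math

# Annual rates of exceeding some ground-motion level at a site, one per
# independent source (illustrative values only).
source_rates = [2.0e-4, 5.0e-4, 1.2e-3]
total_rate = sum(source_rates)          # rates of independent sources add

t = 50.0                                # exposure period, years
p_exceed = 1.0 - math.exp(-total_rate * t)   # Poisson occurrence model
print(f"P(exceedance in {t:.0f} yr) = {p_exceed:.3f}")   # ~0.091
```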
Advanced Fault Diagnosis Methods in Molecular Networks
Habibi, Iman; Emamian, Effat S.; Abdi, Ali
2014-01-01
Analysis of the failure of cell signaling networks is an important topic in systems biology and has applications in target discovery and drug development. In this paper, some advanced methods for fault diagnosis in signaling networks are developed and then applied to a caspase network and an SHP2 network. The goal is to understand how, and to what extent, the dysfunction of molecules in a network contributes to the failure of the entire network. Network dysfunction (failure) is defined as failure to produce the expected outputs in response to the input signals. The vulnerability level of a molecule is defined as the probability of network failure when that molecule is dysfunctional. In this study, a method to calculate the vulnerability level of single molecules for different combinations of input signals is developed. Furthermore, a more complex yet biologically meaningful method for calculating multi-fault vulnerability levels is suggested, in which two or more molecules are simultaneously dysfunctional. Finally, a method is developed for fault diagnosis of networks based on a ternary logic model, which considers three activity levels for a molecule instead of the two in the previously published binary logic model, and provides equations for the vulnerabilities of molecules in a ternary framework. Multi-fault analysis shows that the pairs of molecules with high vulnerability typically include a highly vulnerable molecule identified by the single-fault analysis. The ternary fault analysis for the caspase network shows that predictions obtained using the more complex ternary model are about the same as the predictions of the simpler binary approach. This study suggests that increasing the number of activity levels increases the complexity of the model; however, the predictive power of the ternary model does not appear to increase proportionally. PMID:25290670
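A toy sketch of the single-fault vulnerability calculation follows; the three-node Boolean network and the stuck-at fault labels are hypothetical, and the real caspase and SHP2 networks are far larger.

```python
from itertools import product

def network(a, b, fault=None):
    """Tiny Boolean network: intermediate x = a AND b, output = x OR (NOT b).

    A fault forces x to a stuck value (0 or 1), modeling a dysfunctional
    molecule.
    """
    x = a and b
    if fault == "x_stuck_0": x = 0
    if fault == "x_stuck_1": x = 1
    return x or (not b)

def vulnerability(fault):
    """Fraction of input patterns for which the faulty output deviates
    from the fault-free output (uniform inputs assumed)."""
    fails = [network(a, b) != network(a, b, fault)
             for a, b in product([0, 1], repeat=2)]
    return sum(fails) / len(fails)

print(vulnerability("x_stuck_0"), vulnerability("x_stuck_1"))  # 0.25 0.25
```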
NASA Astrophysics Data System (ADS)
Seto, S.; Takahashi, T.
2017-12-01
In the 2011 Tohoku earthquake and tsunami disaster, delays in understanding the damage situation increased the human toll. To address this problem, it is important to quickly identify the severely damaged areas. Tsunami numerical modeling is useful for estimating damage, and the accuracy of the simulation depends on the tsunami source. Seto and Takahashi (2017) proposed a method to estimate a characterized tsunami source model using the limited observed data of GPS buoys. The model consists of a large slip zone (LSZ), a super-large slip zone (SLSZ), and a background rupture zone (BZ), as the Cabinet Office, Government of Japan (COGJ) reported after the Tohoku tsunami. At the beginning of this method, a rectangular fault model is assumed based on the seismic magnitude and hypocenter reported right after an earthquake. Using the fault model, tsunami propagation is simulated numerically, and the fault model is improved by repeatedly comparing the computed data with the observed data. In the comparison, the correlation coefficient and regression coefficient, calculated from the observed and computed tsunami wave profiles, are used as indexes. The repetition drives the two coefficients toward 1.0, which increases the precision of the fault model. However, a noted limitation was that the model could not represent a complicated tsunami source shape. In this study, we propose an improved model that can. COGJ (2012) assumed that the possible tsunami source region in the Nankai Trough consists of several thousand small faults, and we use these small faults to estimate the targeted tsunami source, so the model can represent a complicated source. The BZ is estimated first, and the LSZ and SLSZ are then estimated as in the previous model. The proposed model using GPS buoys was applied to a tsunami scenario in the Nankai Trough. As a result, the final locations of the LSZ and SLSZ within the BZ were estimated well.
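The two goodness-of-fit indexes that drive the iteration can be sketched as follows; the waveforms are synthetic, and the regression coefficient here is a simple least-squares slope through the origin, one plausible reading of the index described above.

```python
import numpy as np

def fit_indexes(observed, computed):
    """Correlation coefficient and regression (slope) coefficient between
    observed and computed tsunami wave profiles; both ideally near 1.0."""
    r = np.corrcoef(observed, computed)[0, 1]
    slope = (observed @ computed) / (computed @ computed)
    return r, slope

t = np.linspace(0.0, 6.0, 300)
obs = np.sin(t)                       # synthetic buoy record
comp = 0.8 * np.sin(t - 0.1)          # under-slipped, slightly delayed model
print(fit_indexes(obs, comp))         # slope > 1 suggests scaling slip up
```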
Physical and chemical controls on ore shoots - insights from 3D modeling of an orogenic gold deposit
NASA Astrophysics Data System (ADS)
Vollgger, S. A.; Tomkins, A. G.; Micklethwaite, S.; Cruden, A. R.; Wilson, C. J. L.
2016-12-01
Many ore deposits have irregular grade distributions, with localized, elongate, well-mineralized rock volumes commonly referred to as ore shoots. The chemical and physical processes that control ore shoot formation are rarely understood, although transient episodes of elevated permeability, due to faulting and fracturing associated with earthquake-aftershock sequences or earthquake swarms, are thought to be important within the brittle and brittle-ductile crust. We present data from an orogenic gold deposit in Australia where the bulk of the gold is contained in abundant fine arsenopyrite crystals associated with a fault-vein network within tight upright folds. The deposit-scale fault network is connected to a deeper network of thrust faults (tens of kilometers long). Using 3D implicit modeling of geochemical data, based on radial basis functions, gold grades and gold-arsenic element ratios were interpolated and related to major faults, vein networks, and late intrusions. Additionally, downhole bedding measurements were used to model first-order (mine-scale) fold structures. The results show that ore shoot plunges are not parallel to mine-scale or regional fold plunges, and that bedding-parallel faults related to flexural-slip folding play a pivotal role in controlling ore shoot attitudes. 3D fault slip and dilation tendency analysis indicates that fault reactivation and the formation of linking faults are associated with large volumes of high-grade ore. We suggest that slip events on the large-scale thrust network allowed mineralizing fluids to rapidly migrate over large distances and become supersaturated in elements such as gold, promoting widespread precipitation and high nucleation densities of arsenopyrite upon fluid-rock interaction at trap sites within the deposit.
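The implicit-modeling step can be approximated with an off-the-shelf RBF interpolator, as sketched below; scipy's RBFInterpolator stands in for the implicit-modeling software actually used, and the assay data are synthetic.

```python
import numpy as np
from scipy.interpolate import RBFInterpolator

rng = np.random.default_rng(1)
xyz = rng.uniform(0, 500, size=(200, 3))   # assay locations, m (synthetic)
# Synthetic Au grade: a mineralized corridor near x = 250 m plus assay noise
grade = 5.0 * np.exp(-((xyz[:, 0] - 250.0) / 80.0) ** 2) \
        + rng.normal(0.0, 0.1, 200)

# Radial-basis-function interpolation of grade onto arbitrary query points
rbf = RBFInterpolator(xyz, grade, kernel="linear", smoothing=1e-3)
query = np.array([[250.0, 100.0, 50.0]])
print(rbf(query))                          # interpolated Au grade, g/t
```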
NASA Astrophysics Data System (ADS)
Demir, Gökhan; Aytekin, Mustafa; İkizler, Sabriye Banu; Angın, Zekai
2013-04-01
The North Anatolian Fault is known as one of the most active and destructive fault zones, having produced many earthquakes of high magnitude. Along this fault zone, the morphology and the lithological features are prone to landsliding. Many earthquake-induced landslides along this fault zone have been recorded by several studies, and these landslides caused both injuries and loss of life. Therefore, a detailed landslide susceptibility assessment for this area is indispensable. In this context, this study presents a landslide susceptibility assessment for a 1445 km2 area in the Kelkit River valley, part of the North Anatolian Fault zone (Eastern Black Sea region of Turkey); the results are summarized here. For this purpose, a geographical information system (GIS) and a bivariate statistical model were used. Initially, landslide inventory maps were prepared using landslide data determined by field surveys and landslide data taken from the General Directorate of Mineral Research and Exploration. The landslide conditioning factors considered are lithology, slope gradient, slope aspect, topographical elevation, distance to streams, distance to roads, distance to faults, drainage density, and fault density. The ArcGIS package was used to manipulate and analyze all the collected data, and the logistic regression method was applied to create a landslide susceptibility map. The map was divided into five susceptibility classes: very low, low, moderate, high, and very high. The result of the analysis was verified using the inventoried landslide locations and compared with the produced probability model. For this purpose, the area under the curve (AUC) approach was applied, and based on the obtained AUC value, the landslide susceptibility map was judged satisfactory. Keywords: North Anatolian Fault Zone, Landslide susceptibility map, Geographical Information Systems, Logistic Regression Analysis.
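A compact sketch of the susceptibility workflow (fit a logistic regression, map probabilities, validate with AUC) is given below, with synthetic stand-ins for the GIS-derived conditioning factors.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(2)
# Synthetic conditioning factors per map cell, e.g. slope gradient,
# distance to faults, drainage density, elevation (all standardized).
X = rng.normal(size=(1000, 4))
# Synthetic inventory: landslide presence loosely driven by two factors
y = (X[:, 0] + 0.5 * X[:, 1] + rng.normal(size=1000) > 0.8).astype(int)

model = LogisticRegression().fit(X, y)
susceptibility = model.predict_proba(X)[:, 1]   # probability per cell;
                                                # classify into 5 bins for a map
print("AUC:", roc_auc_score(y, susceptibility))
```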
Hearn, Elizabeth H.; Koltermann, Christine; Rubinstein, Justin R.
2018-01-01
We have developed groundwater flow models to explore the possible relationship between wastewater injection and the 12 November 2014 Mw 4.8 Milan, Kansas earthquake. We calculate pore pressure increases in the uppermost crust using a suite of models in which hydraulic properties of the Arbuckle Formation and the Milan earthquake fault zone, the Milan earthquake hypocenter depth, and fault zone geometry are varied. Given pre‐earthquake injection volumes and reasonable hydrogeologic properties, significantly increasing pore pressure at the Milan hypocenter requires that most flow occur through a conductive channel (i.e., the lower Arbuckle and the fault zone) rather than a conductive 3‐D volume. For a range of reasonable lower Arbuckle and fault zone hydraulic parameters, the modeled pore pressure increase at the Milan hypocenter exceeds a minimum triggering threshold of 0.01 MPa at the time of the earthquake. Critical factors include injection into the base of the Arbuckle Formation and proximity of the injection point to a narrow fault damage zone or conductive fracture in the pre‐Cambrian basement with a hydraulic diffusivity of about 3–30 m2/s. The maximum pore pressure increase we obtain at the Milan hypocenter before the earthquake is 0.06 MPa. This suggests that the Milan earthquake occurred on a fault segment that was critically stressed prior to significant wastewater injection in the area. Given continued wastewater injection into the upper Arbuckle in the Milan region, assessment of the middle Arbuckle as a hydraulic barrier remains an important research priority.
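A back-of-envelope sketch of the channelized diffusion argument: with hydraulic diffusivity D, a step pressure at the injection point decays with distance roughly as erfc(x/sqrt(4Dt)) for 1-D diffusion. All numbers below are illustrative, not the paper's calibrated values, though D is taken from the 3-30 m2/s range quoted above.

```python
from math import erfc, sqrt

D = 10.0            # m^2/s, hydraulic diffusivity of the conductive channel
p0 = 0.05           # MPa, assumed pressure step at the channel entrance
x = 5000.0          # m, distance to the hypocenter along the channel (assumed)
t = 180 * 86400.0   # s, ~6 months of injection

p = p0 * erfc(x / sqrt(4.0 * D * t))   # 1-D diffusion of a pressure step
print(f"{p:.4f} MPa")                  # ~0.04 MPa, above a 0.01 MPa threshold
```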
DOE Office of Scientific and Technical Information (OSTI.GOV)
Sun Dongsheng; Wang Hongcai; Ma Yinsheng
In-situ stress change near a fault before and after a great earthquake is a key issue in the geosciences. In this work, based on a fault slip dislocation model of the 2008 great Wenchuan earthquake, the co-seismic stress tensor change due to the earthquake and its distribution around the Longmen Shan fault are given. Our calculated results are largely consistent with in-situ stress measurements made before and after the great Wenchuan earthquake. The quantitative assessment provides a reference for studying the mechanism of earthquakes.
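A common downstream use of such co-seismic stress tensors is the Coulomb failure stress change resolved on receiver faults, sketched below; the stress change tensor, fault orientation, and effective friction are illustrative, not the paper's values.

```python
import numpy as np

def coulomb_change(dsigma, normal, slip_dir, mu_eff=0.4):
    """dCFS = dtau + mu_eff * dsigma_n for a co-seismic stress change.

    dsigma: 3x3 stress change tensor (Pa, tension positive); normal: unit
    normal of the receiver fault; slip_dir: unit slip direction in the
    fault plane. Positive dCFS moves the fault toward failure.
    """
    traction = dsigma @ normal
    dsn = traction @ normal          # normal stress change (tension positive)
    dtau = traction @ slip_dir       # shear stress change in slip direction
    return dtau + mu_eff * dsn

dsigma = np.array([[0.10e6, 0.0, 0.0],
                   [0.0, -0.05e6, 0.0],
                   [0.0, 0.0, 0.0]])                 # illustrative, Pa
n = np.array([np.sqrt(0.5), np.sqrt(0.5), 0.0])     # vertical strike-slip fault
s = np.array([-np.sqrt(0.5), np.sqrt(0.5), 0.0])    # slip direction in plane
print(coulomb_change(dsigma, n, s) / 1e6, "MPa")    # ~ -0.065 MPa
```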
Real-time diagnostics for a reusable rocket engine
NASA Technical Reports Server (NTRS)
Guo, T. H.; Merrill, W.; Duyar, A.
1992-01-01
A hierarchical, decentralized diagnostic system is proposed for the Real-Time Diagnostic System component of the Intelligent Control System (ICS) for reusable rocket engines. The proposed diagnostic system has three layers of information processing: condition monitoring, fault mode detection, and expert system diagnostics. The condition monitoring layer is the first level of signal processing; here, important features of the sensor data are extracted. These processed data are then used by the higher-level fault mode detection layer to perform preliminary diagnosis of potential faults at the component level. Because of the closely coupled nature of the rocket engine propulsion system components, a given engine condition may trigger more than one fault mode detector, and expert knowledge is needed to resolve the conflicting reports from the various failure mode detectors. This is the function of the diagnostic expert layer, where the heuristic nature of the decision process makes an expert system approach desirable. Implementation of the real-time diagnostic system described above requires a wide spectrum of information processing capability. In the condition monitoring layer, fast data processing is often needed for feature extraction and signal conditioning, usually followed by detection logic to determine the selected faults at the component level. Three different techniques are used to attack different fault detection problems in the NASA LeRC ICS testbed simulation. The first is a neural network application for real-time sensor validation, which includes failure detection, isolation, and accommodation. The second is a model-based fault diagnosis system using on-line parameter identification. Beyond these model-based diagnostic schemes, many failure modes must still be diagnosed with heuristic expert knowledge, which is implemented using G2, a real-time expert system tool from Gensym Corp. Finally, the distributed diagnostic system requires another level of intelligence to oversee the fault mode reports generated by the component fault detectors; decision making at this level can best be done using a rule-based expert system, also implemented in G2.
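A structural sketch of the three-layer hierarchy (condition monitoring, fault-mode detection, expert arbitration) follows; the class names, thresholds, and fault labels are illustrative placeholders, not the ICS testbed's actual logic.

```python
class ConditionMonitor:
    """Layer 1: extract features from raw sensor data."""
    def features(self, raw):
        return {"mean": sum(raw) / len(raw), "peak": max(raw)}

class FaultModeDetector:
    """Layer 2: component-level check of features against a threshold."""
    def __init__(self, name, threshold):
        self.name, self.threshold = name, threshold
    def detect(self, feats):
        return self.name if feats["peak"] > self.threshold else None

class DiagnosticExpert:
    """Layer 3: arbitrate among possibly conflicting detector reports."""
    def resolve(self, reports):
        hits = [r for r in reports if r is not None]
        if not hits:
            return "nominal"
        if len(hits) == 1:
            return hits[0]
        return "conflict, expert rules needed: " + ", ".join(hits)

monitor = ConditionMonitor()
detectors = [FaultModeDetector("sensor drift", 0.9),
             FaultModeDetector("valve degradation", 1.2)]
feats = monitor.features([0.2, 0.5, 1.0])
print(DiagnosticExpert().resolve([d.detect(feats) for d in detectors]))
# -> 'sensor drift'
```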