Multiple Fault Isolation in Redundant Systems
NASA Technical Reports Server (NTRS)
Pattipati, Krishna R.; Patterson-Hine, Ann; Iverson, David
1997-01-01
Fault diagnosis in large-scale systems that are products of modern technology presents formidable challenges to manufacturers and users. This is due to the large number of failure sources in such systems and the need to quickly isolate and rectify failures with minimal downtime. In addition, for fault-tolerant systems and systems with infrequent opportunities for maintenance (e.g., the Hubble telescope, the space station), the assumption of at most a single fault in the system is unrealistic. In this project, we have developed novel block and sequential diagnostic strategies to isolate multiple faults in the shortest possible time without making the unrealistic single-fault assumption.
Functional Fault Modeling Conventions and Practices for Real-Time Fault Isolation
NASA Technical Reports Server (NTRS)
Ferrell, Bob; Lewis, Mark; Perotti, Jose; Oostdyk, Rebecca; Brown, Barbara
2010-01-01
The purpose of this paper is to present the conventions, best practices, and processes that were established based on the prototype development of a Functional Fault Model (FFM) for a Cryogenic System that would be used for real-time Fault Isolation in a Fault Detection, Isolation, and Recovery (FDIR) system. The FDIR system is envisioned to perform health management functions for both a launch vehicle and the ground systems that support the vehicle during checkout and launch countdown by using a suite of complementary software tools that alert operators to anomalies and failures in real time. The FFMs were created offline but would eventually be used by a real-time reasoner to isolate faults in a Cryogenic System. Through their development and review, a set of modeling conventions and best practices were established. The prototype FFM development also provided a pathfinder for future FFM development processes. This paper documents the rationale and considerations for robust FFMs that can easily be transitioned to a real-time operating environment.
Fault detection and isolation for complex system
NASA Astrophysics Data System (ADS)
Jing, Chan Shi; Bayuaji, Luhur; Samad, R.; Mustafa, M.; Abdullah, N. R. H.; Zain, Z. M.; Pebrianti, Dwi
2017-07-01
Fault Detection and Isolation (FDI) is a method to monitor, identify, and pinpoint the type and location of a fault in a complex multiple-input multiple-output (MIMO) non-linear system. A two-wheel robot is used as the complex system in this study. The aim of the research is to design and construct a Fault Detection and Isolation algorithm. The proposed method for fault identification uses a hybrid technique that combines a Kalman filter and an Artificial Neural Network (ANN). The Kalman filter processes the data from the system's sensors and indicates faults of the system in the sensor readings. Error prediction is based on the fault magnitude and the time of fault occurrence. Additionally, an Artificial Neural Network (ANN) is used to determine the type of fault and isolate the fault in the system.
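A minimal illustrative sketch (not from the paper): residual generation with a one-dimensional Kalman filter, flagging samples whose normalized innovation exceeds a 3-sigma threshold. The model, noise covariances, and threshold are all assumptions for illustration.

```python
import numpy as np

def kalman_residuals(z, a=1.0, h=1.0, q=1e-4, r=1e-2):
    """Return normalized innovation residuals for the scalar model
    x[k+1] = a*x[k] + w,  z[k] = h*x[k] + v."""
    x, p = 0.0, 1.0
    residuals = []
    for zk in z:
        x, p = a * x, a * p * a + q          # predict
        y = zk - h * x                       # innovation (residual)
        s = h * p * h + r                    # innovation covariance
        k = p * h / s
        x, p = x + k * y, (1 - k * h) * p    # update
        residuals.append(y / np.sqrt(s))
    return np.array(residuals)

z = np.concatenate([np.random.normal(0, 0.1, 200),
                    np.random.normal(1.5, 0.1, 50)])  # bias fault at k=200
res = kalman_residuals(z)
fault = np.abs(res) > 3.0       # flag residuals exceeding 3 sigma
print("first flagged sample:", np.argmax(fault))
```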
A fault isolation method based on the incidence matrix of an augmented system
NASA Astrophysics Data System (ADS)
Chen, Changxiong; Chen, Liping; Ding, Jianwan; Wu, Yizhong
2018-03-01
A new approach is proposed in this paper for isolating faults and quickly identifying the redundant sensors of a system. By introducing fault signals as additional state variables, an augmented system model is constructed from the original system model, the fault signals, and the sensor measurement equations. The structural properties of the augmented system model are provided in this paper. From the viewpoint of evaluating fault variables, the calculating correlations of the fault variables in the system can be found, which imply the fault isolation properties of the system. Compared with previous isolation approaches, the highlights of the new approach are that it can quickly find the faults that can be isolated using exclusive residuals and, at the same time, identify the redundant sensors in the system, which is useful for the design of a diagnosis system. The simulation of a four-tank system is reported to validate the proposed method.
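A minimal sketch of the underlying idea (an assumption, not the paper's algorithm): given a boolean fault-to-residual incidence matrix, two faults are isolable only if their residual signatures (matrix columns) differ.

```python
import numpy as np

# rows = residuals r1..r3, columns = faults f1..f4 (illustrative values)
S = np.array([[1, 1, 0, 0],
              [0, 1, 1, 0],
              [0, 0, 1, 1]], dtype=bool)

faults = ["f1", "f2", "f3", "f4"]
for i in range(len(faults)):
    for j in range(i + 1, len(faults)):
        # identical columns mean the two faults fire the same residuals
        if np.array_equal(S[:, i], S[:, j]):
            print(faults[i], "and", faults[j], "share a signature: not isolable")
```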
2017-01-01
Singular Perturbations represent an advantageous theory for dealing with systems characterized by a two-time-scale separation, such as the longitudinal dynamics of aircraft, which comprise the phugoid and short-period modes. In this work, the combination of the NonLinear Geometric Approach and Singular Perturbations leads to an innovative Fault Detection and Isolation system dedicated to the isolation of faults affecting the air data system of a general aviation aircraft. The isolation capabilities obtained by means of the approach proposed in this work allow for the solution of a fault isolation problem otherwise not solvable by means of standard geometric techniques. Extensive Monte-Carlo simulations, exploiting a high-fidelity aircraft simulator, show the effectiveness of the proposed Fault Detection and Isolation system. PMID:28946673
Onboard Nonlinear Engine Sensor and Component Fault Diagnosis and Isolation Scheme
NASA Technical Reports Server (NTRS)
Tang, Liang; DeCastro, Jonathan A.; Zhang, Xiaodong
2011-01-01
A method detects and isolates in-flight sensor, actuator, and component faults for advanced propulsion systems. In sharp contrast to many conventional methods, which deal with either sensor faults or component faults, but not both, this method considers sensor faults, actuator faults, and component faults under one systematic and unified framework. The proposed solution consists of two main components: a bank of real-time, nonlinear adaptive fault diagnostic estimators for residual generation, and a residual evaluation module that includes adaptive thresholds and a Transferable Belief Model (TBM)-based residual evaluation scheme. By employing a nonlinear adaptive learning architecture, the developed approach is capable of directly dealing with nonlinear engine models and nonlinear faults without the need for linearization. Software modules have been developed and evaluated with the NASA C-MAPSS engine model. Several typical engine-fault modes, including a subset of sensor/actuator/component faults, were tested with a mild transient operation scenario. The simulation results demonstrated that the algorithm was able to successfully detect and isolate all simulated faults as long as the fault magnitudes were larger than the minimum detectable/isolable sizes, and no misdiagnosis occurred.
ASCS online fault detection and isolation based on an improved MPCA
NASA Astrophysics Data System (ADS)
Peng, Jianxin; Liu, Haiou; Hu, Yuhui; Xi, Junqiang; Chen, Huiyan
2014-09-01
Multi-way principal component analysis (MPCA) has received considerable attention and been widely used in process monitoring. A traditional MPCA algorithm unfolds multiple batches of historical data into a two-dimensional matrix and cuts the matrix along the time axis to form subspaces. However, low efficiency of the subspaces and difficult fault isolation are common disadvantages of the principal component model. This paper presents a new subspace construction method based on a kernel density estimation function that can effectively reduce the storage required for the subspace information. The MPCA model and the knowledge base are built on the new subspace. Then, fault detection and isolation with the squared prediction error (SPE) statistic and the Hotelling T2 statistic are realized in process monitoring. When a fault occurs, fault isolation based on the SPE statistic is achieved by residual contribution analysis of the different variables. For fault isolation of a subspace based on the T2 statistic, the relationship between the statistic indicator and the state variables is constructed, and constraint conditions are presented to check the validity of the fault isolation. Then, to improve the robustness of fault isolation to unexpected disturbances, a statistical method is adopted to relate single subspaces to multiple subspaces and increase the correct-isolation rate. Finally, fault detection and isolation based on the improved MPCA is used to monitor the automatic shift control system (ASCS) to prove the correctness and effectiveness of the algorithm. The research proposes a new subspace construction method that reduces the required storage capacity and improves the robustness of the principal component model, and it relates the state variables to the fault detection indicators for fault isolation.
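A minimal sketch (an assumption; ordinary PCA rather than the paper's multi-way, kernel-density subspace construction) of how the SPE and Hotelling T2 statistics are computed for monitoring:

```python
import numpy as np

def fit_pca(X, n_pc):
    mu, sd = X.mean(0), X.std(0)
    Xs = (X - mu) / sd
    _, s, Vt = np.linalg.svd(Xs, full_matrices=False)
    P = Vt[:n_pc].T                        # loading matrix
    lam = (s[:n_pc] ** 2) / (len(X) - 1)   # variances of the retained PCs
    return mu, sd, P, lam

def monitor(x, mu, sd, P, lam):
    xs = (x - mu) / sd
    t = P.T @ xs                     # scores
    x_hat = P @ t                    # projection onto the PC subspace
    spe = np.sum((xs - x_hat) ** 2)  # squared prediction error
    t2 = np.sum(t ** 2 / lam)        # Hotelling T^2
    return spe, t2

X = np.random.randn(500, 6)          # "normal" training data (illustrative)
mu, sd, P, lam = fit_pca(X, n_pc=3)
x_fault = np.random.randn(6)
x_fault[2] += 5.0                    # inject a fault on variable 3
spe, t2 = monitor(x_fault, mu, sd, P, lam)
print(f"SPE={spe:.1f}  T2={t2:.1f}")
```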
Improving Multiple Fault Diagnosability using Possible Conflicts
NASA Technical Reports Server (NTRS)
Daigle, Matthew J.; Bregon, Anibal; Biswas, Gautam; Koutsoukos, Xenofon; Pulido, Belarmino
2012-01-01
Multiple fault diagnosis is a difficult problem for dynamic systems. Due to fault masking, compensation, and relative time of fault occurrence, multiple faults can manifest in many different ways as observable fault signature sequences. This decreases diagnosability of multiple faults, and therefore leads to a loss in effectiveness of the fault isolation step. We develop a qualitative, event-based, multiple fault isolation framework, and derive several notions of multiple fault diagnosability. We show that using Possible Conflicts, a model decomposition technique that decouples faults from residuals, we can significantly improve the diagnosability of multiple faults compared to an approach using a single global model. We demonstrate these concepts and provide results using a multi-tank system as a case study.
NASA Technical Reports Server (NTRS)
Litt, Jonathan; Kurtkaya, Mehmet; Duyar, Ahmet
1994-01-01
This paper presents an application of a fault detection and diagnosis scheme for the sensor faults of a helicopter engine. The scheme utilizes a model-based approach with real time identification and hypothesis testing which can provide early detection, isolation, and diagnosis of failures. It is an integral part of a proposed intelligent control system with health monitoring capabilities. The intelligent control system will allow for accommodation of faults, reduce maintenance cost, and increase system availability. The scheme compares the measured outputs of the engine with the expected outputs of an engine whose sensor suite is functioning normally. If the differences between the real and expected outputs exceed threshold values, a fault is detected. The isolation of sensor failures is accomplished through a fault parameter isolation technique where parameters which model the faulty process are calculated on-line with a real-time multivariable parameter estimation algorithm. The fault parameters and their patterns can then be analyzed for diagnostic and accommodation purposes. The scheme is applied to the detection and diagnosis of sensor faults of a T700 turboshaft engine. Sensor failures are induced in a T700 nonlinear performance simulation and data obtained are used with the scheme to detect, isolate, and estimate the magnitude of the faults.
Pseudo-fault signal assisted EMD for fault detection and isolation in rotating machines
NASA Astrophysics Data System (ADS)
Singh, Dheeraj Sharan; Zhao, Qing
2016-12-01
This paper presents a novel data-driven technique for the detection and isolation of faults that generate impacts in rotating equipment. The technique is built upon the principles of empirical mode decomposition (EMD), envelope analysis, and a pseudo-fault signal for fault separation. Firstly, the most dominant intrinsic mode function (IMF) is identified using EMD of the raw signal, which contains all the necessary information about the faults. The envelope of this IMF is often modulated with multiple vibration sources and noise. A second level of decomposition is performed by applying pseudo-fault-signal (PFS) assisted EMD to the envelope. A pseudo-fault signal is constructed based on the known fault characteristic frequency of the particular machine. The objective of using an external (pseudo-fault) signal is to isolate the different fault frequencies present in the envelope. The pseudo-fault signal serves dual purposes: (i) it solves the mode-mixing problem inherent in EMD, and (ii) it isolates and quantifies a particular fault frequency component. The proposed technique is suitable for real-time implementation, which has also been validated on simulated fault data and experimental data corresponding to a bearing and a gear-box set-up, respectively.
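A minimal sketch (an assumption, not the paper's PFS-assisted EMD) of the envelope-analysis step the technique builds on, using the Hilbert transform; the sampling rate and fault frequency are illustrative:

```python
import numpy as np
from scipy.signal import hilbert

fs = 12_000                                    # sampling rate (Hz), illustrative
t = np.arange(0, 1.0, 1 / fs)
carrier = np.sin(2 * np.pi * 3000 * t)         # resonance excited by each impact
impacts = (np.sin(2 * np.pi * 107 * t) > 0.9)  # impact train at ~107 Hz
x = carrier * (0.2 + impacts) + 0.05 * np.random.randn(len(t))

envelope = np.abs(hilbert(x))                  # demodulate: envelope carries the impact rate
spectrum = np.abs(np.fft.rfft(envelope - envelope.mean()))
freqs = np.fft.rfftfreq(len(envelope), 1 / fs)
# expect a strong line at the ~107 Hz fault frequency (plus harmonics)
print("dominant envelope frequency: %.1f Hz" % freqs[np.argmax(spectrum)])
```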
Effect of a Near Fault on the Seismic Response of a Base-Isolated Structure with a Soft Storey
NASA Astrophysics Data System (ADS)
Athamnia, B.; Ounis, A.; Abdeddaim, M.
2017-12-01
This study focuses on the soft-storey behavior of RC structures with lead core rubber bearing (LRB) isolation systems under near- and far-fault motions. Under near-fault ground motions, seismic isolation devices might perform poorly because of large isolator displacements caused by the large velocity and displacement pulses associated with such strong motions. In this study, four different structural models have been designed to study the effect of soft-storey behavior under near-fault and far-fault motions. The seismic analysis for isolated reinforced concrete buildings is carried out using a nonlinear time history analysis method. Inter-story drifts, absolute acceleration, displacement, base shear forces, hysteretic loops and the distribution of plastic hinges are examined as a result of the analysis. These results show that the performance of a base-isolated RC structure is more affected by increasing the height of a story under near-fault motion than under far-fault motion.
Fault Isolation Filter for Networked Control System with Event-Triggered Sampling Scheme
Li, Shanbin; Sauter, Dominique; Xu, Bugong
2011-01-01
In this paper, the sensor data is transmitted only when the absolute value of the difference between the current sensor value and the previously transmitted one is greater than a given threshold value. Based on this send-on-delta scheme, which is one of the event-triggered sampling strategies, a modified fault isolation filter for a discrete-time networked control system with multiple faults is then implemented by a particular form of the Kalman filter. The proposed fault isolation filter improves resource utilization with graceful degradation of fault estimation performance. An illustrative example is given to show the efficiency of the proposed method. PMID:22346590
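A minimal sketch of the send-on-delta rule described in the abstract (the threshold and sample values are illustrative):

```python
def send_on_delta(samples, delta):
    """Transmit a sample only when it differs from the last transmitted
    value by more than delta."""
    last_sent = None
    for k, y in enumerate(samples):
        if last_sent is None or abs(y - last_sent) > delta:
            last_sent = y
            yield k, y            # transmit (index, value)

samples = [0.00, 0.02, 0.05, 0.30, 0.31, 0.33, 0.80]
print(list(send_on_delta(samples, delta=0.1)))
# -> [(0, 0.0), (3, 0.3), (6, 0.8)]
```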
In-flight Fault Detection and Isolation in Aircraft Flight Control Systems
NASA Technical Reports Server (NTRS)
Azam, Mohammad; Pattipati, Krishna; Allanach, Jeffrey; Poll, Scott; Patterson-Hine, Ann
2005-01-01
In this paper we consider the problem of test design for real-time fault detection and isolation (FDI) in the flight control system of fixed-wing aircraft. We focus on faults that are manifested in the control surface elements (e.g., aileron, elevator, rudder, and stabilizer) of an aircraft. For demonstration purposes, we restrict our focus to faults belonging to nine basic fault classes. The diagnostic tests are performed on features extracted from fifty monitored system parameters. The proposed tests are able to uniquely isolate each of the faults at almost all severity levels. A neural network-based flight control simulator, FLTZ®, is used for the simulation of various faults in fixed-wing aircraft flight control systems for the purpose of FDI.
A PC based time domain reflectometer for space station cable fault isolation
NASA Technical Reports Server (NTRS)
Pham, Michael; McClean, Marty; Hossain, Sabbir; Vo, Peter; Kouns, Ken
1994-01-01
Significant problems are faced by astronauts on orbit in the Space Station when trying to locate electrical faults in multi-segment avionics and communication cables. These problems necessitate the development of an automated portable device that will detect and locate cable faults using the pulse-echo technique known as Time Domain Reflectometry. A breadboard time domain reflectometer (TDR) circuit board was designed and developed at NASA-JSC. The TDR board works in conjunction with a GRiD laptop computer to automate the fault detection and isolation process. A software program was written to automatically display the nature and location of any possible faults. The breadboard system can isolate open-circuit and short-circuit faults to within two feet in a typical space station cable configuration. Follow-on efforts planned for 1994 will produce a compact, portable prototype Space Station TDR capable of automated switching in multi-conductor cables for high-fidelity evaluation. This device has many possible commercial applications, including commercial and military aircraft avionics, cable TV, telephone, communication, information and computer network systems. This paper describes the principle of time domain reflectometry and the methodology for on-orbit avionics utility distribution system repair, utilizing the newly developed device called the Space Station Time Domain Reflectometer (SSTDR).
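A minimal sketch of the underlying TDR computation: the fault distance follows from the echo round-trip time and the cable's velocity of propagation. The velocity factor of 0.66 is an assumed, cable-specific value:

```python
C = 299_792_458          # speed of light, m/s

def fault_distance(echo_delay_s, velocity_factor=0.66):
    """Distance to a cable fault from the round-trip echo delay.
    velocity_factor is cable-specific (0.66 assumed here)."""
    return velocity_factor * C * echo_delay_s / 2.0

# a reflection arriving 20 ns after the pulse implies a fault ~2 m away
print("%.2f m" % fault_distance(20e-9))
```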
System and method for motor fault detection using stator current noise cancellation
Zhou, Wei; Lu, Bin; Nowak, Michael P.; Dimino, Steven A.
2010-12-07
A system and method for detecting incipient mechanical motor faults by way of current noise cancellation is disclosed. The system includes a controller configured to detect indicia of incipient mechanical motor faults. The controller further includes a processor programmed to receive a baseline set of current data from an operating motor and define a noise component in the baseline set of current data. The processor is also programmed to acquire at least one additional set of real-time operating current data from the motor during operation, redefine the noise component present in each additional set of real-time operating current data, and remove the noise component from the operating current data in real-time to isolate any fault components present in the operating current data. The processor is then programmed to generate a fault index for the operating current data based on any isolated fault components.
Parameter Transient Behavior Analysis on Fault Tolerant Control System
NASA Technical Reports Server (NTRS)
Belcastro, Christine (Technical Monitor); Shin, Jong-Yeob
2003-01-01
In a fault tolerant control (FTC) system, a parameter varying FTC law is reconfigured based on fault parameters estimated by fault detection and isolation (FDI) modules. FDI modules require some time to detect fault occurrences in aero-vehicle dynamics. This paper illustrates analysis of a FTC system based on estimated fault parameter transient behavior which may include false fault detections during a short time interval. Using Lyapunov function analysis, the upper bound of an induced-L2 norm of the FTC system performance is calculated as a function of a fault detection time and the exponential decay rate of the Lyapunov function.
System and method for bearing fault detection using stator current noise cancellation
Zhou, Wei; Lu, Bin; Habetler, Thomas G.; Harley, Ronald G.; Theisen, Peter J.
2010-08-17
A system and method for detecting incipient mechanical motor faults by way of current noise cancellation is disclosed. The system includes a controller configured to detect indicia of incipient mechanical motor faults. The controller further includes a processor programmed to receive a baseline set of current data from an operating motor and define a noise component in the baseline set of current data. The processor is also programmed to repeatedly receive real-time operating current data from the operating motor and remove the noise component from the operating current data in real-time to isolate any fault components present in the operating current data. The processor is then programmed to generate a fault index for the operating current data based on any isolated fault components.
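A minimal sketch (an assumption, not the patented method) of the general noise-cancellation idea: subtract a healthy baseline spectrum from the operating spectrum and reduce what remains to a scalar fault index:

```python
import numpy as np

def fault_index(baseline, operating):
    """Crude noise cancellation: subtract the healthy baseline amplitude
    spectrum from the operating spectrum; remaining content is attributed
    to faults and summed into a scalar index (larger = worse)."""
    B = np.abs(np.fft.rfft(baseline))
    O = np.abs(np.fft.rfft(operating))
    residual = np.clip(O - B, 0.0, None)   # keep only new spectral content
    return residual.sum() / B.sum()

fs = 1000
t = np.arange(0, 1, 1 / fs)
baseline = np.sin(2 * np.pi * 60 * t) + 0.1 * np.random.randn(len(t))
operating = baseline + 0.2 * np.sin(2 * np.pi * 37 * t)  # fault-related component
print("fault index: %.3f" % fault_index(baseline, operating))
```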
Integral Sensor Fault Detection and Isolation for Railway Traction Drive.
Garramiola, Fernando; Del Olmo, Jon; Poza, Javier; Madina, Patxi; Almandoz, Gaizka
2018-05-13
Due to the increasing importance of reliability and availability of electric traction drives in Railway applications, early detection of faults has become an important key for Railway traction drive manufacturers. Sensor faults are important sources of failures. Among the different fault diagnosis approaches, in this article an integral diagnosis strategy for sensors in traction drives is presented. This strategy is composed of an observer-based approach for direct current (DC)-link voltage and catenary current sensors, a frequency analysis approach for motor current phase sensors, and a hardware redundancy solution for speed sensors. None of them requires any hardware change in the actual traction drive. All the fault detection and isolation approaches have been validated on a Hardware-in-the-loop platform comprising a Real Time Simulator and a commercial Traction Control Unit for a tram. In comparison to safety-critical systems in Aerospace applications, Railway applications do not need instantaneous detection, and the diagnosis is validated over a short time period for a reliable decision. Combining the different approaches and existing hardware redundancy, an integral fault diagnosis solution is provided to detect and isolate faults in all the sensors installed in the traction drive.
Functional Fault Modeling of a Cryogenic System for Real-Time Fault Detection and Isolation
NASA Technical Reports Server (NTRS)
Ferrell, Bob; Lewis, Mark; Perotti, Jose; Oostdyk, Rebecca; Brown, Barbara
2010-01-01
The purpose of this paper is to present the model development process used to create a Functional Fault Model (FFM) of a liquid hydrogen (LH2) system that will be used for real-time fault isolation in a Fault Detection, Isolation and Recovery (FDIR) system. The paper explains the steps in the model development process and the data products required at each step, including examples of how the steps were performed for the LH2 system. It also shows the relationship between the FDIR requirements and the steps in the model development process. The paper concludes with a description of a demonstration of the LH2 model developed using the process and future steps for integrating the model in a live operational environment.
Automatic Detection of Electric Power Troubles (ADEPT)
NASA Technical Reports Server (NTRS)
Wang, Caroline; Zeanah, Hugh; Anderson, Audie; Patrick, Clint; Brady, Mike; Ford, Donnie
1988-01-01
ADEPT is an expert system that integrates knowledge from three different suppliers to offer an advanced fault-detection system, and is designed for two modes of operation: real-time fault isolation and simulated modeling. Real-time fault isolation of components is accomplished on a power system breadboard through the Fault Isolation Expert System (FIES II) interface with a rule system developed in-house. Faults are quickly detected and displayed, and the rules and chain of reasoning optionally provided on a laser printer. This system consists of a simulated Space Station power module using direct-current power supplies for solar arrays on three power buses. For tests of the system's ability to locate faults inserted via switches, loads are configured by an INTEL microcomputer and the Symbolics artificial intelligence development system. As these loads are resistive in nature, Ohm's Law is used as the basis for the rules by which faults are located. The three-bus system can correct faults automatically where there is a surplus of power available on any of the three buses. Techniques developed and used can be applied readily to other control systems requiring rapid intelligent decisions. Simulated modeling, used for theoretical studies, is implemented using a modified version of Kennedy Space Center's KATE (Knowledge-Based Automatic Test Equipment), FIES II windowing, and an ADEPT knowledge base. A load scheduler and a fault recovery system are currently under development to support both modes of operation.
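A minimal sketch of an Ohm's-law rule of the kind the abstract describes (names and tolerance are illustrative):

```python
def check_load(bus_voltage, load_resistance, measured_current, tol=0.05):
    """Flag a load whose measured current disagrees with V/R by more
    than the tolerance fraction."""
    expected = bus_voltage / load_resistance
    if abs(measured_current - expected) > tol * expected:
        return "FAULT: expected %.2f A, measured %.2f A" % (expected,
                                                            measured_current)
    return "OK"

print(check_load(120.0, 60.0, 2.01))   # OK (~2 A expected)
print(check_load(120.0, 60.0, 0.40))   # FAULT: open or degraded load
```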
Hajihosseini, Payman; Anzehaee, Mohammad Mousavi; Behnam, Behzad
2018-05-22
The early detection and isolation of faults in industrial systems is a critical factor in preventing equipment damage. In the proposed method, instead of using the time signals of the sensors directly, the 2D image obtained by placing these signals next to each other in a matrix is used, and a novel fault detection and isolation procedure is then carried out based on image processing techniques. Different features, including texture, wavelet transform, and the mean and standard deviation of the image, together with MLP and RBF neural-network-based classifiers, are used for this purpose. The obtained results indicate the notable efficacy of the proposed method in detecting and isolating faults of the Tennessee Eastman benchmark process and its superiority over previous techniques.
Optimization of seismic isolation systems via harmony search
NASA Astrophysics Data System (ADS)
Melih Nigdeli, Sinan; Bekdaş, Gebrail; Alhan, Cenk
2014-11-01
In this article, the optimization of isolation system parameters via the harmony search (HS) optimization method is proposed for seismically isolated buildings subjected to both near-fault and far-fault earthquakes. To obtain optimum values of isolation system parameters, an optimization program was developed in Matlab/Simulink employing the HS algorithm. The objective was to obtain a set of isolation system parameters within a defined range that minimizes the acceleration response of a seismically isolated structure subjected to various earthquakes without exceeding a peak isolation system displacement limit. Several cases were investigated for different isolation system damping ratios and peak displacement limitations of seismic isolation devices. Time history analyses were repeated for the neighbouring parameters of optimum values and the results proved that the parameters determined via HS were true optima. The performance of the optimum isolation system was tested under a second set of earthquakes that was different from the first set used in the optimization process. The proposed optimization approach is applicable to linear isolation systems. Isolation systems composed of isolation elements that are inherently nonlinear are the subject of a future study. Investigation of the optimum isolation system parameters has been considered in parametric studies. However, obtaining the best performance of a seismic isolation system requires a true optimization by taking the possibility of both near-fault and far-fault earthquakes into account. HS optimization is proposed here as a viable solution to this problem.
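A minimal sketch of a generic harmony search loop (an assumption; the paper's Matlab/Simulink objective, which evaluates time-history responses, is replaced here by a toy two-parameter objective standing in for isolation period and damping):

```python
import random

def harmony_search(obj, bounds, hms=10, hmcr=0.9, par=0.3, bw=0.05, iters=2000):
    dim = len(bounds)
    # harmony memory: hms random candidate solutions
    hm = [[random.uniform(*bounds[d]) for d in range(dim)] for _ in range(hms)]
    for _ in range(iters):
        new = []
        for d in range(dim):
            if random.random() < hmcr:                 # memory consideration
                v = random.choice(hm)[d]
                if random.random() < par:              # pitch adjustment
                    v += random.uniform(-bw, bw) * (bounds[d][1] - bounds[d][0])
            else:                                      # random selection
                v = random.uniform(*bounds[d])
            new.append(min(max(v, bounds[d][0]), bounds[d][1]))
        worst = max(range(hms), key=lambda i: obj(hm[i]))
        if obj(new) < obj(hm[worst]):                  # replace worst harmony
            hm[worst] = new
    return min(hm, key=obj)

# toy objective: quadratic bowl with optimum at (2.0, 0.15)
obj = lambda x: (x[0] - 2.0) ** 2 + (x[1] - 0.15) ** 2
print(harmony_search(obj, bounds=[(0.5, 4.0), (0.02, 0.5)]))
```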
Flight experience with a fail-operational digital fly-by-wire control system
NASA Technical Reports Server (NTRS)
Brown, S. R.; Szalai, K. J.
1977-01-01
The NASA Dryden Flight Research Center is flight testing a triply redundant digital fly-by-wire (DFBW) control system installed in an F-8 aircraft. The full-time, full-authority system performs three-axis flight control computations, including stability and command augmentation, autopilot functions, failure detection and isolation, and self-test functions. Advanced control law experiments include an active flap mode for ride smoothing and maneuver drag reduction. This paper discusses research being conducted on computer synchronization, fault detection, fault isolation, and recovery from transient faults. The F-8 DFBW system has demonstrated immunity from nuisance fault declarations while quickly identifying truly faulty components.
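A minimal sketch (an assumption; the F-8 DFBW voting logic is not described in the abstract) of mid-value selection with miscompare monitoring, a common pattern for triply redundant channels:

```python
def mid_value_select(a, b, c, threshold=0.5):
    """Return the median of three channel outputs plus any channel whose
    disagreement with the median exceeds the threshold."""
    median = sorted([a, b, c])[1]
    suspects = [name for name, v in (("A", a), ("B", b), ("C", c))
                if abs(v - median) > threshold]
    return median, suspects

print(mid_value_select(10.1, 10.0, 10.2))   # (10.1, [])
print(mid_value_select(10.1, 10.0, 14.7))   # (10.1, ['C'])
```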
Fault Diagnosis of Power Systems Using Intelligent Systems
NASA Technical Reports Server (NTRS)
Momoh, James A.; Oliver, Walter E. , Jr.
1996-01-01
The power system operator's need for a reliable power delivery system calls for a real-time or near-real-time AI-based fault diagnosis tool. Such a tool will allow NASA ground controllers to re-establish a normal or near-normal degraded operating state of the EPS (a DC power system) for Space Station Alpha by isolating the faulted branches and loads of the system and, after isolation, re-energizing those branches and loads that have been found not to have any faults in them. A proposed solution involves using the Fault Diagnosis Intelligent System (FDIS) to perform near-real-time fault diagnosis of Alpha's EPS by downloading power transient telemetry at fault time from onboard data loggers. The FDIS uses an ANN clustering algorithm augmented with a wavelet transform feature extractor. This combination enables the system to perform pattern recognition of the power transient signatures to diagnose the fault type and its location down to the orbital replaceable unit. FDIS has been tested using a simulation of the LeRC Testbed Space Station Freedom configuration, including the topology from the DDCUs to the electrical loads attached to the TPDUs. FDIS works in conjunction with the Power Management Load Scheduler to determine what the state of the system was at the time of the fault condition. This information is used to activate the appropriate diagnostic section and to refine, if necessary, the solution obtained. In the latter case, if the FDIS reports that the faulty device is equally likely to be 'star tracker #1' or 'time generation unit', then based on a priori knowledge of the system's state, the refined solution would be 'star tracker #1', located in cabinet ITAS2. It is concluded from the present studies that artificial intelligence diagnostic abilities are improved with the addition of the wavelet transform, and that when a system such as FDIS is coupled to the Power Management Load Scheduler, a faulty device can be located and isolated from the rest of the system. These studies provide NASA with the ability to quickly restore the operating status of a space station from a critical state to a safe degraded mode, thereby saving costs in experiment rescheduling and fault diagnostics and helping prevent loss of life.
A comparative study of sensor fault diagnosis methods based on observer for ECAS system
NASA Astrophysics Data System (ADS)
Xu, Xing; Wang, Wei; Zou, Nannan; Chen, Long; Cui, Xiaoli
2017-03-01
The performance and practicality of an electronically controlled air suspension (ECAS) system are highly dependent on the state information supplied by various sensors, but sensor faults occur frequently. Based on a non-linearized 3-DOF quarter-vehicle model, different methods of fault detection and isolation (FDI) are used to diagnose sensor faults in the ECAS system. The considered approaches include an extended Kalman filter (EKF) with a concise algorithm, a strong tracking filter (STF) with robust tracking ability, and a cubature Kalman filter (CKF) with high numerical precision. We use the three filters (EKF, STF, and CKF) to design state observers for the ECAS system under typical sensor faults and noise. Results show that all three approaches can successfully detect and isolate faults despite environmental noise, although the FDI time delay and fault sensitivity differ among the algorithms; compared with EKF and STF, the CKF method performs best in FDI of sensor faults for the ECAS system.
GTEX: An expert system for diagnosing faults in satellite ground stations
NASA Technical Reports Server (NTRS)
Schlegelmilch, Richard F.; Durkin, John; Petrik, Edward J.
1991-01-01
A proof of concept expert system called Ground Terminal Expert (GTEX) was developed at The University of Akron in collaboration with NASA Lewis Research Center. The objective of GTEX is to aid in diagnosing data faults occurring with a digital ground terminal. This strategy can also be applied to the Very Small Aperture Terminal (VSAT) technology. An expert system which detects and diagnoses faults would enhance the performance of the VSAT by improving reliability and reducing maintenance time. GTEX is capable of detecting faults, isolating the cause and recommending appropriate actions. Isolation of faults is completed to board-level modules. A graphical user interface provides control and a medium where data can be requested and cryptic information logically displayed. Interaction with GTEX consists of user responses and input from data files. The use of data files provides a method of simulating dynamic interaction between the digital ground terminal and the expert system. GTEX as described is capable of both improving reliability and reducing the time required for necessary maintenance.
Qualitative Fault Isolation of Hybrid Systems: A Structural Model Decomposition-Based Approach
NASA Technical Reports Server (NTRS)
Bregon, Anibal; Daigle, Matthew; Roychoudhury, Indranil
2016-01-01
Quick and robust fault diagnosis is critical to ensuring safe operation of complex engineering systems. A large number of techniques are available to provide fault diagnosis in systems with continuous dynamics. However, many systems in aerospace and industrial environments are best represented as hybrid systems that consist of discrete behavioral modes, each with its own continuous dynamics. These hybrid dynamics make the on-line fault diagnosis task computationally more complex due to the large number of possible system modes and the existence of autonomous mode transitions. This paper presents a qualitative fault isolation framework for hybrid systems based on structural model decomposition. The fault isolation is performed by analyzing the qualitative information of the residual deviations. However, in hybrid systems this process becomes complex due to the possible existence of observation delays, which can cause observed deviations to be inconsistent with the expected deviations for the current mode of the system. The great advantage of structural model decomposition is that (i) it allows the design of residuals that respond to only a subset of the faults, and (ii) every time a mode change occurs, only a subset of the residuals needs to be reconfigured, thus reducing the complexity of the reasoning process for isolation purposes. To demonstrate and test the validity of our approach, we use an electric circuit simulation as the case study.
NASA ground terminal communication equipment automated fault isolation expert systems
NASA Technical Reports Server (NTRS)
Tang, Y. K.; Wetzel, C. R.
1990-01-01
The prototype expert systems are described that diagnose the Distribution and Switching Systems I and II (DSS1 and DSS2), Statistical Multiplexers (SM), and Multiplexer and Demultiplexer (MDM) systems at the NASA Ground Terminal (NGT). A system-level fault isolation expert system monitors the activities of a selected data stream, verifies that the fault exists in the NGT, and identifies the faulty equipment. Equipment-level fault isolation expert systems are then invoked to isolate the fault to the Line Replaceable Unit (LRU) level. Input and sometimes output data stream activities for the equipment are available. The system-level fault isolation expert system compares the equipment input and output status for a data stream and performs loopback tests (if necessary) to isolate the faulty equipment. The equipment-level fault isolation system utilizes the process of elimination and/or the maintenance personnel's fault isolation experience stored in its knowledge base. The DSS1, DSS2, and SM fault isolation systems, using knowledge of the current equipment configuration and the equipment circuitry, issue a set of test connections according to predefined rules. The faulty component or board can be identified by the expert system by analyzing the test results. The MDM fault isolation system correlates failure symptoms with the faulty component based on maintenance personnel experience; the faulty component can be determined from the failure symptoms. The DSS1, DSS2, SM, and MDM equipment simulators are implemented in PASCAL. The DSS1 fault isolation expert system was converted from VP-Expert to C and integrated into the NGT automation software for offline switch diagnoses. Potentially, the NGT fault isolation algorithms can be used for the DSS1, SM, and MDM located at Goddard Space Flight Center (GSFC).
Detection, isolation and diagnosability analysis of intermittent faults in stochastic systems
NASA Astrophysics Data System (ADS)
Yan, Rongyi; He, Xiao; Wang, Zidong; Zhou, D. H.
2018-02-01
Intermittent faults (IFs) have the properties of unpredictability, non-determinacy, inconsistency and repeatability, switching systems between faulty and healthy status. In this paper, the fault detection and isolation (FDI) problem of IFs in a class of linear stochastic systems is investigated. The detection and isolation of IFs includes: (1) detecting all the appearing times and disappearing times of an IF; (2) detecting each appearing (disappearing) time of the IF before the subsequent disappearing (appearing) time; and (3) determining where the IFs happen. Based on the outputs of the observers we design, a novel set of residuals is constructed using the sliding-time window technique, and two hypothesis tests are proposed to detect all the appearing times and disappearing times of IFs. The isolation problem of IFs is also considered. Furthermore, within a statistical framework, the definition of the diagnosability of IFs is proposed, and a sufficient condition for the diagnosability of IFs is brought forward. Quantitative performance analysis results for the false alarm rate and missing detection rate are discussed, and the influences of some key parameters of the proposed scheme on these performance indices are analysed rigorously. The effectiveness of the proposed scheme is illustrated via a simulation example of an unmanned helicopter longitudinal control system.
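A minimal sketch (an assumption, not the paper's hypothesis tests) of using a sliding-time window over a residual to mark the appearing and disappearing times of an intermittent fault; window length and threshold are illustrative:

```python
import numpy as np

def if_edges(residual, win=20, threshold=0.5):
    """Return (appearing, disappearing) sample indices of an intermittent
    fault, decided from the mean absolute residual over a sliding window."""
    active, appear, disappear = False, [], []
    for k in range(win, len(residual)):
        m = np.abs(residual[k - win:k]).mean()
        if not active and m > threshold:
            active = True
            appear.append(k)
        elif active and m < threshold:
            active = False
            disappear.append(k)
    return appear, disappear

r = np.concatenate([np.random.normal(0, 0.1, 100),   # healthy
                    np.random.normal(2, 0.1, 60),    # fault active
                    np.random.normal(0, 0.1, 100)])  # fault gone
print(if_edges(r))   # edges near samples 100 and 160 (plus window lag)
```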
A Unified Nonlinear Adaptive Approach for Detection and Isolation of Engine Faults
NASA Technical Reports Server (NTRS)
Tang, Liang; DeCastro, Jonathan A.; Zhang, Xiaodong; Farfan-Ramos, Luis; Simon, Donald L.
2010-01-01
A challenging problem in aircraft engine health management (EHM) system development is to detect and isolate faults in system components (i.e., compressor, turbine), actuators, and sensors. Existing nonlinear EHM methods often deal with component faults, actuator faults, and sensor faults separately, which may potentially lead to incorrect diagnostic decisions and unnecessary maintenance. Therefore, it would be ideal to address sensor faults, actuator faults, and component faults under one unified framework. This paper presents a systematic and unified nonlinear adaptive framework for detecting and isolating sensor faults, actuator faults, and component faults for aircraft engines. The fault detection and isolation (FDI) architecture consists of a parallel bank of nonlinear adaptive estimators. Adaptive thresholds are appropriately designed such that, in the presence of a particular fault, all components of the residual generated by the adaptive estimator corresponding to the actual fault type remain below their thresholds. If the faults are sufficiently different, then at least one component of the residual generated by each remaining adaptive estimator should exceed its threshold. Therefore, based on the specific response of the residuals, sensor faults, actuator faults, and component faults can be isolated. The effectiveness of the approach was evaluated using the NASA C-MAPSS turbofan engine model, and simulation results are presented.
NASA Technical Reports Server (NTRS)
Wilson, Edward (Inventor)
2008-01-01
The present invention is a method for detecting and isolating fault modes in a system having a model describing its behavior and regularly sampled measurements. The models are used to calculate past and present deviations from measurements that would result with no faults present, as well as with one or more potential fault modes present. Algorithms that calculate and store these deviations, along with memory of when said faults, if present, would have an effect on the said actual measurements, are used to detect when a fault is present. Related algorithms are used to exonerate false fault modes and finally to isolate the true fault mode. This invention is presented with application to detection and isolation of thruster faults for a thruster-controlled spacecraft. As a supporting aspect of the invention, a novel, effective, and efficient filtering method for estimating the derivative of a noisy signal is presented.
Multi-thresholds for fault isolation in the presence of uncertainties.
Touati, Youcef; Mellal, Mohamed Arezki; Benazzouz, Djamel
2016-05-01
Monitoring of faults is an important task in mechatronics. It involves the detection and isolation of faults, which are performed using residuals. These residuals are numerical values that define certain intervals called thresholds: a fault is detected if the residuals exceed the thresholds. In addition, each considered fault must activate a unique set of residuals to be isolated. However, in the presence of uncertainties, false decisions can occur due to the low sensitivity of certain residuals to faults. In this paper, an efficient approach to making decisions on fault isolation in the presence of uncertainties is proposed. Based on the bond graph tool, the approach is developed to systematically generate the relations between residuals and faults. The generated relations allow the estimation of the minimum detectable and isolable fault values, which are used to calculate the isolation thresholds for each residual.
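A minimal sketch of threshold-based residual evaluation with signature matching, the pattern the abstract builds on (thresholds and signatures are illustrative, not derived from a bond graph):

```python
THRESHOLDS = {"r1": 0.2, "r2": 0.15, "r3": 0.3}       # per-residual thresholds
SIGNATURES = {                                        # fault -> residuals fired
    "pump": {"r1", "r2"},
    "valve": {"r2", "r3"},
    "sensor": {"r1"},
}

def isolate(residuals):
    """Return the faults whose signature matches the set of residuals
    that exceed their thresholds."""
    fired = {name for name, v in residuals.items()
             if abs(v) > THRESHOLDS[name]}
    return [fault for fault, sig in SIGNATURES.items() if sig == fired]

print(isolate({"r1": 0.5, "r2": 0.4, "r3": 0.01}))    # ['pump']
```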
Flight elements: Fault detection and fault management
NASA Technical Reports Server (NTRS)
Lum, H.; Patterson-Hine, A.; Edge, J. T.; Lawler, D.
1990-01-01
Fault management for an intelligent computational system must be developed using a top-down integrated engineering approach. The proposed approach includes integrating the overall environment involving sensors and their associated data; design knowledge capture; operations; fault detection, identification, and reconfiguration; testability; causal models, including digraph matrix analysis; and overall performance impacts on the hardware and software architecture. Implementation of the concept to achieve a real-time intelligent fault detection and management system will be accomplished via several objectives: develop fault-tolerant/FDIR requirements and specifications at the systems level that carry through from conceptual design to implementation and mission operations; implement monitoring, diagnosis, and reconfiguration at all system levels, providing fault isolation and system integration; optimize system operations to manage degraded system performance through system integration; and lower development and operations costs through the implementation of an intelligent real-time fault detection and fault management system and an information management system.
AGSM Functional Fault Models for Fault Isolation Project
NASA Technical Reports Server (NTRS)
Harp, Janicce Leshay
2014-01-01
This project implements functional fault models to automate the isolation of failures during ground systems operations. FFMs will also be used to recommend sensor placement to improve fault isolation capabilities. The project enables the delivery of system health advisories to ground system operators.
Fault detection and isolation in motion monitoring system.
Kim, Duk-Jin; Suk, Myoung Hoon; Prabhakaran, B
2012-01-01
Pervasive computing has become a very active research field. A watch that can trace human movement can record motion boundaries as well as support the study of social life patterns based on one's localized visiting areas. Pervasive computing also helps patient monitoring: a daily monitoring system supports longitudinal studies of patients, such as Alzheimer's, Parkinson's, or obesity monitoring. Due to the nature of the monitoring sensors (on-body wireless sensors), however, signal noise or faulty-sensor errors can be present at any time. Many research works have addressed these problems, usually with a large number of deployed sensors. In this paper, we present faulty-sensor detection and isolation using only two on-body sensors. We investigate three different types of sensor errors: the SHORT error, the CONSTANT error, and the NOISY SENSOR error (see Section V for details). Our experimental results show that the success rate of isolating faulty signals averages over 91.5% for fault type 1, over 92% for fault type 2, and over 99% for fault type 3, with a fault prior of 30% sensor errors.
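A minimal sketch (an assumption, not the paper's classifier) of rule-of-thumb detectors for the three error types named in the abstract; all limits are illustrative:

```python
import numpy as np

def classify_window(x, spike=5.0, flat_eps=1e-6, noise_sd=2.0):
    """Classify one window of sensor samples as SHORT (spike), CONSTANT
    (stuck-at), NOISY (excessive variance), or OK."""
    x = np.asarray(x, dtype=float)
    dx = np.diff(x)
    if np.max(np.abs(dx)) > spike:      # SHORT: single-sample spike
        return "SHORT"
    if np.std(x) < flat_eps:            # CONSTANT: stuck-at value
        return "CONSTANT"
    if np.std(dx) > noise_sd:           # NOISY SENSOR: excessive variance
        return "NOISY"
    return "OK"

print(classify_window([1.0, 1.1, 9.9, 1.2, 1.1]))   # SHORT
print(classify_window([2.0] * 10))                   # CONSTANT
```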
Robust Fault Detection and Isolation for Stochastic Systems
NASA Technical Reports Server (NTRS)
George, Jemin; Gregory, Irene M.
2010-01-01
This paper outlines the formulation of a robust fault detection and isolation scheme that can precisely detect and isolate simultaneous actuator and sensor faults for uncertain linear stochastic systems. The given robust fault detection scheme based on the discontinuous robust observer approach would be able to distinguish between model uncertainties and actuator failures and therefore eliminate the problem of false alarms. Since the proposed approach involves precise reconstruction of sensor faults, it can also be used for sensor fault identification and the reconstruction of true outputs from faulty sensor outputs. Simulation results presented here validate the effectiveness of the robust fault detection and isolation system.
Advanced Ground Systems Maintenance Functional Fault Models For Fault Isolation Project
NASA Technical Reports Server (NTRS)
Perotti, Jose M. (Compiler)
2014-01-01
This project implements functional fault models (FFM) to automate the isolation of failures during ground systems operations. FFMs will also be used to recommend sensor placement to improve fault isolation capabilities. The project enables the delivery of system health advisories to ground system operators.
Fault-tolerant cooperative output regulation for multi-vehicle systems with sensor faults
NASA Astrophysics Data System (ADS)
Qin, Liguo; He, Xiao; Zhou, D. H.
2017-10-01
This paper presents a unified framework of fault diagnosis and fault-tolerant cooperative output regulation (FTCOR) for a linear discrete-time multi-vehicle system with sensor faults. The FTCOR control law is designed through three steps. A cooperative output regulation (COR) controller is designed based on the internal model principle when there are no sensor faults. A sufficient condition on the existence of the COR controller is given based on the discrete-time algebraic Riccati equation (DARE). Then, a decentralised fault diagnosis scheme is designed to cope with sensor faults occurring in followers. A residual generator is developed to detect sensor faults of each follower, and a bank of fault-matching estimators are proposed to isolate and estimate sensor faults of each follower. Unlike current distributed fault diagnosis for multi-vehicle systems, the presented decentralised fault diagnosis scheme in each vehicle reduces the communication and computation load by only using the information of that vehicle. By combining the sensor fault estimation and the COR control law, an FTCOR controller is proposed. Finally, the simulation results demonstrate the effectiveness of the FTCOR controller.
An architecture for the development of real-time fault diagnosis systems using model-based reasoning
NASA Technical Reports Server (NTRS)
Hall, Gardiner A.; Schuetzle, James; Lavallee, David; Gupta, Uday
1992-01-01
Presented here is an architecture for implementing real-time telemetry based diagnostic systems using model-based reasoning. First, we describe Paragon, a knowledge acquisition tool for offline entry and validation of physical system models. Paragon provides domain experts with a structured editing capability to capture the physical component's structure, behavior, and causal relationships. We next describe the architecture of the run time diagnostic system. The diagnostic system, written entirely in Ada, uses the behavioral model developed offline by Paragon to simulate expected component states as reflected in the telemetry stream. The diagnostic algorithm traces causal relationships contained within the model to isolate system faults. Since the diagnostic process relies exclusively on the behavioral model and is implemented without the use of heuristic rules, it can be used to isolate unpredicted faults in a wide variety of systems. Finally, we discuss the implementation of a prototype system constructed using this technique for diagnosing faults in a science instrument. The prototype demonstrates the use of model-based reasoning to develop maintainable systems with greater diagnostic capabilities at a lower cost.
NASA Astrophysics Data System (ADS)
Bhagat, Satish; Wijeyewickrema, Anil C.
2017-04-01
This paper reports on an investigation of the seismic response of base-isolated reinforced concrete buildings, which considers various isolation system parameters under bidirectional near-fault and far-fault motions. Three-dimensional models of 4-, 8-, and 12-story base-isolated buildings with nonlinear effects in the isolation system and the superstructure are investigated, and nonlinear response history analysis is carried out. The bounding values of isolation system properties that incorporate the aging effect of isolators are also taken into account, as is the current state of practice in the design and analysis of base-isolated buildings. The response indicators of the buildings are studied for near-fault and far-fault motions weight-scaled to represent the design earthquake (DE) level and the risk-targeted maximum considered earthquake (MCER) level. Results of the nonlinear response history analyses indicate no structural damage under DE-level motions for near-fault and far-fault motions and for MCER-level far-fault motions, whereas minor structural damage is observed under MCER-level near-fault motions. Results of the base-isolated buildings are compared with their fixed-base counterparts. Significant reduction of the superstructure response of the 12-story base-isolated building compared to the fixed-base condition indicates that base isolation can be effectively used in taller buildings to enhance performance. Additionally, the applicability of a rigid superstructure to predict the isolator displacement demand is also investigated. It is found that the isolator displacements can be estimated accurately using a rigid body model for the superstructure for the buildings considered.
Automatic detection of electric power troubles (AI application)
NASA Technical Reports Server (NTRS)
Wang, Caroline; Zeanah, Hugh; Anderson, Audie; Patrick, Clint
1987-01-01
The design goals for the Automatic Detection of Electric Power Troubles (ADEPT) system were to enhance fault diagnosis techniques in a very efficient way. The ADEPT system was designed with two modes of operation: (1) real-time fault isolation, and (2) a local simulator that simulates the models theoretically.
NASA Astrophysics Data System (ADS)
Jackson, C. A. L.; Bell, R. E.; Rotevatn, A.; Tvedt, A. B. M.
2015-12-01
Normal faulting accommodates stretching of the Earth's crust and is one of the fundamental controls on landscape evolution and sediment dispersal in rift basins. Displacement-length scaling relationships compiled from global datasets suggest normal faults grow via a sympathetic increase in these two parameters (the 'isolated fault model'). This model has dominated the structural geology literature for >20 years and underpins the structural and tectono-stratigraphic models developed for active rifts. However, relatively recent analysis of high-quality 3D seismic reflection data suggests faults may grow by rapid establishment of their near-final length prior to significant displacement accumulation (the 'coherent fault model'). The isolated and coherent fault models make very different predictions regarding the tectono-stratigraphic evolution of rift basins, so assessing their applicability is important. To date, however, very few studies have explicitly set out to critically test the coherent fault model; it may therefore be argued that it has yet to be widely accepted in the structural geology community. Displacement backstripping is a simple graphical technique typically used to determine how faults lengthen and accumulate displacement, and it should therefore allow us to test the competing fault models. In this talk, however, we use several subsurface case studies to show that the most commonly used backstripping methods (the 'original' and 'modified' methods) are of limited value, because choosing one over the other requires an a priori assumption of the model most applicable to any given fault; we argue this is illogical, given that the style of growth is exactly what the analysis is attempting to determine. We then revisit our case studies and demonstrate that, in the case of seismic-scale growth faults, growth strata thickness patterns and relay zone kinematics, rather than displacement backstripping, should be assessed to directly constrain fault length and thus tip behaviour through time. We conclude that rapid length establishment prior to displacement accumulation may be more common than is typically assumed, thus challenging the well-established, widely cited and perhaps overused isolated fault model.
Machine learning techniques for fault isolation and sensor placement
NASA Technical Reports Server (NTRS)
Carnes, James R.; Fisher, Douglas H.
1993-01-01
Fault isolation and sensor placement are vital for monitoring and diagnosis. A sensor conveys information about a system's state that guides troubleshooting if problems arise. We are using machine learning methods to uncover behavioral patterns over snapshots of system simulations that will aid fault isolation and sensor placement, with an eye towards minimality, fault coverage, and noise tolerance.
TES: A modular systems approach to expert system development for real time space applications
NASA Technical Reports Server (NTRS)
England, Brenda; Cacace, Ralph
1987-01-01
A major goal of the space station era is to reduce reliance on support from ground-based experts. The TIMES Expert System (TES) is an application that monitors and evaluates real-time data to perform fault detection and fault isolation as they would otherwise be carried out by a knowledgeable designer. The development process and primary features of TES, the modular systems approach, and the lessons learned are discussed.
Real-Time Diagnosis of Faults Using a Bank of Kalman Filters
NASA Technical Reports Server (NTRS)
Kobayashi, Takahisa; Simon, Donald L.
2006-01-01
A new robust method of automated real-time diagnosis of faults in an aircraft engine or a similar complex system involves the use of a bank of Kalman filters. In order to be highly reliable, a diagnostic system must be designed to account for the numerous failure conditions that an aircraft engine may encounter in operation. The method achieves this objective through the utilization of multiple Kalman filters, each of which is uniquely designed based on a specific failure hypothesis. A fault-detection-and-isolation (FDI) system, developed based on this method, is able to isolate faults in sensors and actuators while detecting component faults (abrupt degradation in engine component performance). By affording a capability for real-time identification of minor faults before they grow into major ones, the method promises to enhance safety and reduce operating costs. The robustness of this method is further enhanced by incorporating information regarding the aging condition of an engine. In general, real-time fault diagnostic methods use the nominal performance of a "healthy" new engine as a reference condition in the diagnostic process. Such an approach does not account for gradual changes in performance associated with aging of an otherwise healthy engine. By incorporating information on gradual, aging-related changes, the new method makes it possible to retain at least some of the sensitivity and accuracy needed to detect incipient faults while preventing false alarms that could result from erroneous interpretation of symptoms of aging as symptoms of failures. An FDI system according to the new method is integrated with an engine, from which it accepts two sets of input signals: sensor readings and actuator commands. Its two main parts are a bank of Kalman filters and a subsystem that implements FDI decision rules. Each Kalman filter is designed to detect a specific sensor or actuator fault. When a sensor or actuator fault occurs, large estimation errors are generated by all filters except the one using the correct hypothesis. By monitoring the residual output of each filter, the specific fault that has occurred can be detected and isolated on the basis of the decision rules. A set of parameters that indicate the performance of the engine components is estimated by the "correct" Kalman filter for use in detecting component faults. To reduce the loss of diagnostic accuracy and sensitivity in the face of aging, the FDI system accepts information from a steady-state condition-monitoring system. This information is used to update the Kalman filters and a data bank of trim values representative of the current aging condition.
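The bank-of-filters idea described above can be sketched in a few lines. In the hedged Python example below, each filter assumes a different constant sensor bias, and the hypothesis whose filter accumulates the smallest normalized innovation is declared the isolated fault; the scalar dynamics and bias values are invented, not an engine model.

```python
# Minimal sketch of a Kalman-filter bank for sensor-fault isolation.
# Each filter assumes a different constant sensor bias; the hypothesis
# whose filter has the smallest accumulated normalized innovation wins.
import numpy as np

rng = np.random.default_rng(0)
a, c, q, r = 0.95, 1.0, 0.01, 0.04         # scalar dynamics (assumed)
biases = [0.0, 0.5, -0.5]                  # hypothesized sensor biases

# Simulate a system whose sensor develops a +0.5 bias.
x, true_bias, ys = 0.0, 0.5, []
for _ in range(200):
    x = a * x + rng.normal(0, np.sqrt(q))
    ys.append(c * x + true_bias + rng.normal(0, np.sqrt(r)))

# One filter per hypothesis: state estimate, covariance, residual score.
xh = np.zeros(len(biases)); P = np.ones(len(biases)); score = np.zeros(len(biases))
for y in ys:
    for i, b in enumerate(biases):
        xp = a * xh[i]; Pp = a * P[i] * a + q       # predict
        S = c * Pp * c + r                           # innovation variance
        nu = y - (c * xp + b)                        # innovation under hypothesis i
        K = Pp * c / S
        xh[i] = xp + K * nu; P[i] = (1 - K * c) * Pp
        score[i] += nu**2 / S                        # accumulated normalized residual

print("isolated hypothesis: bias =", biases[int(np.argmin(score))])
```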
NASA Technical Reports Server (NTRS)
Bernath, Greg
1994-01-01
In order for a current satellite-based navigation system (such as the Global Positioning System, GPS) to meet integrity requirements, there must be a way of detecting erroneous measurements, without help from outside the system. This process is called Fault Detection and Isolation (FDI). Fault detection requires at least one redundant measurement, and can be done with a parity space algorithm. The best way around the fault isolation problem is not necessarily isolating the bad measurement, but finding a new combination of measurements which excludes it.
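The parity space algorithm mentioned above admits a compact sketch. In the following illustrative Python example, the parity matrix spans the left null space of an assumed measurement geometry, so the parity vector responds only to noise and faults; the geometry, noise level, and threshold are placeholders, not a GPS configuration.

```python
# Minimal parity-space sketch for redundant-measurement fault detection.
# H maps the unknown state to the measurements; V spans the left null
# space of H, so the parity vector p = V @ y is insensitive to the state.
import numpy as np
from scipy.linalg import null_space

H = np.array([[1.0, 0.0],
              [0.0, 1.0],
              [1.0, 1.0],
              [1.0, -1.0]])          # 4 measurements of a 2-state (assumed)
V = null_space(H.T).T                # parity matrix: V @ H == 0

x = np.array([2.0, -1.0])
y = H @ x + 0.01 * np.random.default_rng(1).normal(size=4)
y[2] += 0.5                          # inject a fault on measurement 3

p = V @ y                            # parity vector
detected = np.linalg.norm(p) > 0.1   # detection threshold (assumed)
# To exclude the bad measurement, one can re-solve with each measurement
# removed and keep the combination whose parity residual is smallest.
print("fault detected:", detected, "parity vector:", p)
```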
Qualitative Event-Based Diagnosis: Case Study on the Second International Diagnostic Competition
NASA Technical Reports Server (NTRS)
Daigle, Matthew; Roychoudhury, Indranil
2010-01-01
We describe a diagnosis algorithm entered into the Second International Diagnostic Competition. We focus on the first diagnostic problem of the industrial track of the competition, in which a diagnosis algorithm must detect, isolate, and identify faults in an electrical power distribution testbed and provide corresponding recovery recommendations. The diagnosis algorithm embodies a model-based approach, centered around qualitative event-based fault isolation. Faults produce deviations in measured values from model-predicted values. The sequence of these deviations is matched to those predicted by the model in order to isolate faults. We augment this approach with model-based fault identification, which determines fault parameters and helps to further isolate faults. We describe the diagnosis approach, provide diagnosis results from running the algorithm on the provided example scenarios, and discuss the issues faced, and lessons learned, from implementing the approach.
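The deviation-sequence matching at the heart of this approach can be illustrated with a toy sketch. The fault names and qualitative signatures below are invented for illustration, not taken from the competition testbed.

```python
# Toy sketch of qualitative event-based fault isolation: each fault is
# predicted to produce an ordered sequence of signed measurement
# deviations; candidate faults are pruned as deviations are observed.
signatures = {
    "relay_stuck_open": [("current", "-"), ("voltage", "+")],
    "sensor_bias":      [("current", "+")],
    "load_short":       [("current", "+"), ("voltage", "-")],
}

def isolate(observed):
    """Keep faults whose predicted deviation sequence starts with the
    observed sequence of (measurement, sign) events."""
    return [f for f, sig in signatures.items()
            if sig[:len(observed)] == observed]

print(isolate([("current", "+")]))                    # two candidates remain
print(isolate([("current", "+"), ("voltage", "-")]))  # isolates load_short
```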
Sequential Test Strategies for Multiple Fault Isolation
NASA Technical Reports Server (NTRS)
Shakeri, M.; Pattipati, Krishna R.; Raghavan, V.; Patterson-Hine, Ann; Kell, T.
1997-01-01
In this paper, we consider the problem of constructing near optimal test sequencing algorithms for diagnosing multiple faults in redundant (fault-tolerant) systems. The computational complexity of solving the optimal multiple-fault isolation problem is super-exponential, that is, it is much more difficult than the single-fault isolation problem, which, by itself, is NP-hard. By employing concepts from information theory and Lagrangian relaxation, we present several static and dynamic (on-line or interactive) test sequencing algorithms for the multiple fault isolation problem that provide a trade-off between the degree of suboptimality and computational complexity. Furthermore, we present novel diagnostic strategies that generate a static diagnostic directed graph (digraph), instead of a static diagnostic tree, for multiple fault diagnosis. Using this approach, the storage complexity of the overall diagnostic strategy reduces substantially. Computational results based on real-world systems indicate that the size of a static multiple fault strategy is strictly related to the structure of the system, and that the use of an on-line multiple fault strategy can diagnose faults in systems with as many as 10,000 failure sources.
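The information-theoretic flavor of these test-sequencing algorithms can be suggested with a greedy one-step sketch: pick the test that minimizes the expected posterior entropy over fault hypotheses. The fault set, priors, and outcome table below are invented, and real formulations also weigh test costs (e.g., via Lagrangian relaxation).

```python
# Greedy information-gain test selection, a simplified sketch of
# information-theoretic sequential diagnosis over invented hypotheses.
import math

faults = {"f1": 0.4, "f2": 0.3, "f3": 0.2, "f4": 0.1}   # prior probabilities
# outcome[test][fault] -> 1 if the test passes when that fault is present
outcome = {"tA": {"f1": 0, "f2": 0, "f3": 1, "f4": 1},
           "tB": {"f1": 0, "f2": 1, "f3": 0, "f4": 1},
           "tC": {"f1": 1, "f2": 0, "f3": 0, "f4": 0}}

def entropy(p):
    return -sum(v * math.log2(v) for v in p.values() if v > 0)

def expected_entropy(test, p):
    """Expected posterior entropy after observing the test outcome."""
    exp = 0.0
    for o in (0, 1):
        group = {f: pr for f, pr in p.items() if outcome[test][f] == o}
        mass = sum(group.values())
        if mass > 0:
            exp += mass * entropy({f: pr / mass for f, pr in group.items()})
    return exp

best = min(outcome, key=lambda t: expected_entropy(t, faults))
print("most informative first test:", best)
```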
Federal Register 2010, 2011, 2012, 2013, 2014
2013-05-24
... DEPARTMENT OF LABOR, Employment and Training Administration [TA-W-82,371]: T-Mobile USA, Inc., Core Fault Isolation Team, Engineering Division, Bethlehem, Pennsylvania; Notice of Affirmative Determination ...
NASA Astrophysics Data System (ADS)
Jackson, Christopher; Bell, Rebecca; Rotevatn, Atle; Tvedt, Anette
2016-04-01
Normal faulting accommodates stretching of the Earth's crust, and it is arguably the most fundamental tectonic process leading to continent rupture and oceanic crust emplacement. Furthermore, the incremental and finite geometries associated with normal faulting dictate landscape evolution, sediment dispersal and hydrocarbon systems development in rifts. Displacement-length scaling relationships compiled from global datasets suggest normal faults grow via a sympathetic increase in these two parameters (the 'isolated fault model'). This model has dominated the structural geology literature for >20 years and underpins the structural and tectono-stratigraphic models developed for active rifts. However, relatively recent analysis of high-quality 3D seismic reflection data suggests faults may grow by rapid establishment of their near-final length prior to significant displacement accumulation (the 'coherent fault model'). The isolated and coherent fault models make very different predictions regarding the tectono-stratigraphic evolution of rift basins, so assessing their applicability is important. To date, however, very few studies have explicitly set out to critically test the coherent fault model; it may therefore be argued that it has yet to be widely accepted in the structural geology community. Displacement backstripping is a simple graphical technique typically used to determine how faults lengthen and accumulate displacement, and it should therefore allow us to test the competing fault models. In this talk, however, we use several subsurface case studies to show that the most commonly used backstripping methods (the 'original' and 'modified' methods) are of limited value, because choosing one over the other requires an a priori assumption of the model most applicable to any given fault; we argue this is illogical, given that the style of growth is exactly what the analysis is attempting to determine. We then revisit our case studies and demonstrate that, in the case of seismic-scale growth faults, growth strata thickness patterns and relay zone kinematics, rather than displacement backstripping, should be assessed to directly constrain fault length and thus tip behaviour through time. We conclude that rapid length establishment prior to displacement accumulation may be more common than is typically assumed, thus challenging the well-established, widely cited and perhaps overused isolated fault model.
Real-Time Model-Based Leak-Through Detection within Cryogenic Flow Systems
NASA Technical Reports Server (NTRS)
Walker, M.; Figueroa, F.
2015-01-01
The timely detection of leaks within cryogenic fuel replenishment systems is of significant importance to operators on account of the safety and economic impacts associated with material loss and operational inefficiencies. The associated loss of pressure control also affects the stability and the ability to control the phase of cryogenic fluids during replenishment operations. Current research dedicated to providing Prognostics and Health Management (PHM) coverage of such cryogenic replenishment systems has focused on the detection of leaks to atmosphere using relatively simple model-based diagnostic approaches that, while effective, are unable to isolate the fault to specific piping system components. The authors have extended this research to focus on the detection of leaks through closed valves that are intended to isolate sections of the piping system from the flow and pressurization of cryogenic fluids. The described approach employs model-based detection of leak-through conditions based on correlations of pressure changes across isolation valves and attempts to isolate the faults to specific valves. Implementation of this capability is enabled by knowledge and information embedded in the domain model of the system. The approach has been used effectively to detect such leak-through faults during cryogenic operational testing at the Cryogenic Testbed at NASA's Kennedy Space Center.
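A minimal sketch of the leak-through logic, under assumed thresholds and synthetic pressure traces (none of which come from the paper), might look like the following.

```python
# Sketch of leak-through detection across a closed isolation valve: if
# the isolated downstream segment pressurizes steadily while a positive
# differential exists to drive flow across the valve seat, flag the
# valve. Thresholds, units, and data are illustrative only.
import numpy as np

def leak_through(p_up, p_down, dt, rate_thresh=0.02):
    """Flag a commanded-closed valve whose downstream pressure rises
    faster than rate_thresh (psi/s, assumed) under a positive differential."""
    rise_rate = (p_down[-1] - p_down[0]) / (dt * (len(p_down) - 1))
    driven = bool(np.all(p_up > p_down))
    return driven and rise_rate > rate_thresh

dt = 0.1
t = np.arange(0, 60, dt)
p_up = np.full_like(t, 50.0)     # pressurized upstream side (psi, assumed)
p_down = 5.0 + 0.05 * t + np.random.default_rng(2).normal(0, 0.05, t.size)

print("leak-through suspected:", leak_through(p_up, p_down, dt))
```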
Modeling and Performance Considerations for Automated Fault Isolation in Complex Systems
NASA Technical Reports Server (NTRS)
Ferrell, Bob; Oostdyk, Rebecca
2010-01-01
The purpose of this paper is to document the modeling considerations and performance metrics that were examined in the development of a large-scale Fault Detection, Isolation and Recovery (FDIR) system. The FDIR system is envisioned to perform health management functions for both a launch vehicle and the ground systems that support the vehicle during checkout and launch countdown by using a suite of complementary software tools that alert operators to anomalies and failures in real-time. The FDIR team members developed a set of operational requirements for the models that would be used for fault isolation and worked closely with the vendor of the software tools selected for fault isolation to ensure that the software was able to meet the requirements. Once the requirements were established, example models of sufficient complexity were used to test the performance of the software. The results of the performance testing demonstrated the need for enhancements to the software in order to meet the demands of the full-scale ground and vehicle FDIR system. The paper highlights the importance of the development of operational requirements and preliminary performance testing as a strategy for identifying deficiencies in highly scalable systems and rectifying those deficiencies before they imperil the success of the project.
NASA Technical Reports Server (NTRS)
Quinn, Todd M.; Walters, Jerry L.
1991-01-01
Future space explorations will require long term human presence in space. Space environments that provide working and living quarters for manned missions are becoming increasingly larger and more sophisticated. Monitoring and control of the space environment subsystems by expert system software, which emulates human reasoning processes, could maintain the health of the subsystems and help reduce the human workload. The autonomous power expert (APEX) system was developed to emulate a human expert's reasoning processes used to diagnose fault conditions in the domain of space power distribution. APEX is a fault detection, isolation, and recovery (FDIR) system, capable of autonomous monitoring and control of the power distribution system. APEX consists of a knowledge base, a data base, an inference engine, and various support and interface software. APEX provides the user with an easy-to-use interactive interface. When a fault is detected, APEX will inform the user of the detection. The user can direct APEX to isolate the probable cause of the fault. Once a fault has been isolated, the user can ask APEX to justify its fault isolation and to recommend actions to correct the fault. APEX implementation and capabilities are discussed.
PV Systems Reliability Final Technical Report: Ground Fault Detection
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lavrova, Olga; Flicker, Jack David; Johnson, Jay
We have examined ground faults in photovoltaic (PV) arrays and the efficacy of fuses, residual current detection (RCD), current sense monitoring/relays (CSM), isolation/insulation (Riso) monitoring, and Ground Fault Detection and Isolation (GFID), using simulations based on a SPICE (Simulation Program with Integrated Circuit Emphasis) ground fault circuit model, experimental ground faults installed on real arrays, and theoretical equations.
Fuzzy model-based fault detection and diagnosis for a pilot heat exchanger
NASA Astrophysics Data System (ADS)
Habbi, Hacene; Kidouche, Madjid; Kinnaert, Michel; Zelmat, Mimoun
2011-04-01
This article addresses the design and real-time implementation of a fuzzy model-based fault detection and diagnosis (FDD) system for a pilot co-current heat exchanger. The design method is based on a three-step procedure which involves the identification of data-driven fuzzy rule-based models, the design of a fuzzy residual generator and the evaluation of the residuals for fault diagnosis using statistical tests. The fuzzy FDD mechanism has been implemented and validated on the real co-current heat exchanger, and has been proven to be efficient in detecting and isolating process, sensor and actuator faults.
NASA Technical Reports Server (NTRS)
Lala, J. H.; Smith, T. B., III
1983-01-01
The experimental test and evaluation of the Fault-Tolerant Multiprocessor (FTMP) is described. Major objectives of this exercise include expanding the validation envelope, building confidence in the system, revealing any weaknesses in the architectural concepts and in their execution in hardware and software, and, in general, stressing the hardware and software. To this end, pin-level faults were injected into one line-replaceable unit (LRU) of the FTMP, and the FTMP response was measured in terms of fault detection, isolation, and recovery times. A total of 21,055 stuck-at-0, stuck-at-1 and invert-signal faults were injected in the CPU, memory, bus interface circuits, Bus Guardian Units, and voters and error latches. Of these, 17,418 were detected. At least 80 percent of the undetected faults are estimated to be on unused pins. The multiprocessor identified all detected faults correctly and recovered successfully in each case. Total recovery time for all faults averaged a little over one second; this can be reduced to half a second by including appropriate self-tests.
NASA Astrophysics Data System (ADS)
Le, Huy Xuan; Matunaga, Saburo
2014-12-01
This paper presents an adaptive unscented Kalman filter (AUKF) to recover the satellite attitude in a fault detection and diagnosis (FDD) subsystem of microsatellites. The FDD subsystem includes a filter and an estimator with residual generators, hypothesis tests for fault detection, and a reference logic table for fault isolation and recovery. The recovery process is based on monitoring the mean and variance of each attitude sensor's residual behavior. In the absence of faults, the residual vectors should take the form of Gaussian white noise with zero mean and fixed variance. When the hypothesis tests on the residual vectors detect something unusual by comparing the mean and variance values with dynamic thresholds, the AUKF, with a measurement noise covariance matrix updated in real time, is used to recover from the sensor faults. The scheme developed in this paper avoids the heavy and complex calculations of residual generation, and the delay in the isolation process is therefore reduced. Numerical simulations for TSUBAME, a demonstration microsatellite of the Tokyo Institute of Technology, are conducted and analyzed to demonstrate the working of the AUKF and the FDD subsystem.
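The residual mean/variance test described above can be sketched as a sliding-window check against thresholds; the window size, gains, and residual data below are assumed for illustration, not taken from the TSUBAME study.

```python
# Sketch of a residual mean/variance hypothesis test: under no fault the
# residuals are ~ Gaussian white noise with zero mean and known variance;
# a sliding window flags a sensor when either statistic drifts outside
# its threshold. All parameters are assumed.
import numpy as np

def residual_test(res, sigma0, window=50, k_mean=3.0, k_var=2.0):
    """Return start indices of windows whose mean exceeds
    k_mean*sigma0/sqrt(n) or whose variance exceeds k_var*sigma0**2."""
    alarms = []
    for i in range(0, len(res) - window, window):
        w = res[i:i + window]
        mean_limit = k_mean * sigma0 / np.sqrt(window)
        if abs(w.mean()) > mean_limit or w.var() > k_var * sigma0**2:
            alarms.append(i)
    return alarms

rng = np.random.default_rng(3)
res = rng.normal(0, 0.01, 1000)
res[600:] += 0.05                      # bias fault appears at sample 600
print("alarm windows start at:", residual_test(res, sigma0=0.01))
```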
Optimization of Second Fault Detection Thresholds to Maximize Mission POS
NASA Technical Reports Server (NTRS)
Anzalone, Evan
2018-01-01
In order to support manned spaceflight safety requirements, the Space Launch System (SLS) has defined program-level requirements for key systems to ensure successful operation under single fault conditions. To accommodate this with regard to navigation, the SLS utilizes an internally redundant Inertial Navigation System (INS) with built-in capability to detect, isolate, and recover from first-failure conditions while still maintaining adherence to performance requirements. The unit utilizes multiple hardware- and software-level techniques to enable detection, isolation, and recovery from these events through its built-in Fault Detection, Isolation, and Recovery (FDIR) algorithms. Successful operation is defined in terms of sufficient navigation accuracy at insertion while operating under worst-case single sensor outages (gyroscope and accelerometer faults at launch). In addition to first fault detection and recovery, the SLS program has also levied requirements relating to the capability of the INS to detect a second fault, tracking any unacceptable uncertainty in knowledge of the vehicle's state. This detection functionality is required in order to feed abort analysis and ensure crew safety. Increases in navigation state error and sensor faults can drive the vehicle outside of its operational as-designed environments and outside of its performance envelope, causing loss of mission or, worse, loss of crew. The criteria for operation under second faults allow for a larger set of achievable missions in terms of potential fault conditions, because the INS operates at the edge of its capability. As this performance is defined and controlled at the vehicle level, system-level margins can be used to increase the probability of mission success at the operational edges of the design space. Due to the implications of the vehicle response to abort conditions (such as a potentially failed INS), it is important to consider a wide range of failure scenarios in terms of both magnitude and time. As such, the Navigation team is taking advantage of the INS's capability to schedule and change fault detection thresholds in flight. These values are optimized along a nominal trajectory in order to maximize the probability of mission success and reduce the probability of false positives (defined as cases where the INS would report a second fault condition, resulting in loss of mission, while the vehicle would still meet insertion requirements within system-level margins). This paper describes an optimization approach using genetic algorithms to tune the threshold parameters to maximize vehicle resilience to second fault events as a function of potential fault magnitude and time of fault over an ascent mission profile. The analysis approach and a performance assessment of the results are presented to demonstrate the applicability of this process to second fault detection to maximize the mission probability of success.
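A toy genetic algorithm for tuning a threshold schedule is sketched below. The fitness function is an invented stand-in for the Monte Carlo mission-success evaluation a real analysis would run; population sizes, bounds, and schedule points are likewise assumptions.

```python
# Toy genetic algorithm tuning fault-detection thresholds at a few
# flight times, trading false alarms against missed detections. The
# fitness model is invented purely for illustration.
import numpy as np

rng = np.random.default_rng(4)
N_TIMES = 5                              # threshold schedule points

def fitness(th):
    # Penalize false alarms (thresholds too tight) and missed second
    # faults (thresholds too loose); 1.0 is an assumed "true" severity.
    false_alarm = np.exp(-4 * th).sum()
    missed = np.clip(th - 1.0, 0, None).sum()
    return -(false_alarm + 2.0 * missed)

pop = rng.uniform(0.1, 2.0, (40, N_TIMES))
for _ in range(100):
    scores = np.array([fitness(p) for p in pop])
    parents = pop[np.argsort(scores)[-20:]]           # truncation selection
    kids = parents[rng.integers(0, 20, 40)].copy()
    mates = parents[rng.integers(0, 20, 40)]
    cut = rng.integers(1, N_TIMES, 40)                # one-point crossover
    for i in range(40):
        kids[i, cut[i]:] = mates[i, cut[i]:]
    kids += rng.normal(0, 0.05, kids.shape)           # Gaussian mutation
    pop = np.clip(kids, 0.05, 3.0)

best = pop[np.argmax([fitness(p) for p in pop])]
print("tuned threshold schedule:", np.round(best, 2))
```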
NASA Astrophysics Data System (ADS)
Polverino, Pierpaolo; Frisk, Erik; Jung, Daniel; Krysander, Mattias; Pianese, Cesare
2017-07-01
The present paper proposes an advanced approach for fault detection and isolation in Polymer Electrolyte Membrane Fuel Cell (PEMFC) systems through a model-based diagnostic algorithm. The algorithm is developed upon a lumped-parameter model simulating a whole PEMFC system oriented towards automotive applications. This model is inspired by other models available in the literature, with further attention to stack thermal dynamics and water management. The model is analysed by means of Structural Analysis to identify the correlations among the involved physical variables, the defined equations, and a set of faults which may occur in the system (related to both auxiliary component malfunctions and stack degradation phenomena). Residual generators are designed by means of Causal Computation analysis, and the maximum theoretical fault isolability achievable with a minimal number of installed sensors is investigated. The results prove the capability of the algorithm to theoretically detect and isolate almost all faults using only stack voltage and temperature sensors, with significant advantages from an industrial point of view. The effective fault isolability is proved through fault simulations at a specific fault magnitude with an advanced residual evaluation technique that considers quantitative residual deviations from normal conditions and achieves unambiguous fault isolation.
NASA Astrophysics Data System (ADS)
Pourbabaee, Bahareh; Meskin, Nader; Khorasani, Khashayar
2016-08-01
In this paper, a novel robust sensor fault detection and isolation (FDI) strategy using a multiple-model (MM) approach is proposed that remains robust with respect to both time-varying parameter uncertainties and process and measurement noise in all channels. The scheme is composed of robust Kalman filters (RKF) constructed for multiple piecewise linear (PWL) models obtained at various operating points of an uncertain nonlinear system. The parameter uncertainty is modeled by a time-varying, norm-bounded admissible structure that affects all the PWL state-space matrices. The robust Kalman filter gain matrices are designed by solving two algebraic Riccati equations (AREs) that are expressed as two linear matrix inequality (LMI) feasibility conditions. The proposed multiple-RKF-based FDI scheme is simulated for a single-spool gas turbine engine to diagnose various sensor faults despite the presence of parameter uncertainties and process and measurement noise. Our comparative studies confirm the superiority of the proposed FDI method over methods available in the literature.
Application of a Bank of Kalman Filters for Aircraft Engine Fault Diagnostics
NASA Technical Reports Server (NTRS)
Kobayashi, Takahisa; Simon, Donald L.
2003-01-01
In this paper, a bank of Kalman filters is applied to aircraft gas turbine engine sensor and actuator fault detection and isolation (FDI) in conjunction with the detection of component faults. This approach uses multiple Kalman filters, each of which is designed for detecting a specific sensor or actuator fault. In the event that a fault does occur, all filters except the one using the correct hypothesis will produce large estimation errors, thereby isolating the specific fault. In the meantime, a set of parameters that indicate engine component performance is estimated for the detection of abrupt degradation. The proposed FDI approach is applied to a nonlinear engine simulation at nominal and aged conditions, and the evaluation results for various engine faults at cruise operating conditions are given. The ability of the proposed approach to reliably detect and isolate sensor and actuator faults is demonstrated.
GenSAA: A tool for advancing satellite monitoring with graphical expert systems
NASA Technical Reports Server (NTRS)
Hughes, Peter M.; Luczak, Edward C.
1993-01-01
During numerous contacts with a satellite each day, spacecraft analysts must closely monitor real time data for combinations of telemetry parameter values, trends, and other indications that may signify a problem or failure. As satellites become more complex and the number of data items increases, this task is becoming increasingly difficult for humans to perform at acceptable performance levels. At the NASA Goddard Space Flight Center, fault-isolation expert systems have been developed to support data monitoring and fault detection tasks in satellite control centers. Based on the lessons learned during these initial efforts in expert system automation, a new domain-specific expert system development tool named the Generic Spacecraft Analyst Assistant (GenSAA) is being developed to facilitate the rapid development and reuse of real-time expert systems to serve as fault-isolation assistants for spacecraft analysts. Although initially domain-specific in nature, this powerful tool will support the development of highly graphical expert systems for data monitoring purposes throughout the space and commercial industry.
NASA Technical Reports Server (NTRS)
Hughes, Peter M.; Luczak, Edward C.
1991-01-01
Flight Operations Analysts (FOAs) in the Payload Operations Control Center (POCC) are responsible for monitoring a satellite's health and safety. As satellites become more complex and data rates increase, FOAs are quickly approaching a level of information saturation. The FOAs in the spacecraft control center for the COBE (Cosmic Background Explorer) satellite are currently using a fault isolation expert system named the Communications Link Expert Assistance Resource (CLEAR) to assist in isolating and correcting communications link faults. Due to the success of CLEAR and several other systems in the control center domain, many other monitoring and fault isolation expert systems will likely be developed to support control center operations during the early 1990s. To facilitate the development of these systems, a project was initiated to develop a domain-specific tool, named the Generic Spacecraft Analyst Assistant (GenSAA). GenSAA will enable spacecraft analysts to easily build simple real-time expert systems that perform spacecraft monitoring and fault isolation functions. Lessons learned during the development of several expert systems at Goddard, which established the foundation of GenSAA's objectives and offer insight into how problems may be avoided in future projects, are described. This is followed by a description of the capabilities, architecture, and usage of GenSAA, along with a discussion of its application to future NASA missions.
A distributed fault-tolerant signal processor /FTSP/
NASA Astrophysics Data System (ADS)
Bonneau, R. J.; Evett, R. C.; Young, M. J.
1980-01-01
A digital fault-tolerant signal processor (FTSP), an example of a self-repairing programmable system, is analyzed. The design configuration is discussed in terms of fault tolerance, system-level fault detection and isolation, and common memory. Special attention is given to the FDIR (fault detection, isolation and reconfiguration) logic, noting that reconfiguration decisions are based on configuration, summary status, end-around tests, and north marker/synchro data. Several mechanisms of fault detection are described which initiate reconfiguration at different levels. It is concluded that the reliability of a signal processor can be significantly enhanced by the use of fault-tolerant techniques.
Retrieving rupture history using waveform inversions in time sequence
NASA Astrophysics Data System (ADS)
Yi, L.; Xu, C.; Zhang, X.
2017-12-01
The rupture history of large earthquakes is generally reconstructed by waveform inversion of seismological records. In the waveform inversion, based on the superposition principle, the rupture process is linearly parameterized. After discretizing the fault plane into sub-faults, the local source time function of each sub-fault is usually parameterized using the multi-time-window method, e.g., mutually overlapping triangular functions. The forward waveform of each sub-fault is then synthesized by convolving its source time function with its Green's function. According to the superposition principle, the forward waveforms generated across the fault plane are summed in the recorded waveforms after aligning the arrival times. The slip history is then retrieved by the waveform inversion method after superposing all forward waveforms for each corresponding seismological record. Apart from the isolation of the forward waveforms generated by each sub-fault, we also recognize that these waveforms are gradually and sequentially superimposed in the recorded waveforms. We therefore propose the idea that the rupture model may be separable into sequential rupture times. According to the constrained-waveform-length method emphasized in our previous work, the length of the waveforms used in the inversion is objectively constrained by the rupture velocity and rise time. One essential prior condition is the predetermined fault plane, which limits the duration of rupture; that is, the waveform inversion is restricted to a pre-set rupture duration. Therefore, we propose a strategy to invert the rupture process sequentially, using progressively shifted rupture times as the rupture front expands across the fault plane. We have designed a synthetic inversion to test the feasibility of the method. Our test result shows the promise of this idea, which requires further investigation.
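The linear forward problem described here can be sketched by convolving multi-time-window source time functions with Green's functions and summing with rupture-front delays; all signals and parameters in the Python example below are synthetic placeholders, not the study's data.

```python
# Sketch of the linear forward problem: each sub-fault's source time
# function (STF) is a sum of overlapping triangles, convolved with a
# Green's function and delayed by the rupture-front arrival time.
import numpy as np

dt = 0.1
tri = np.convolve(np.ones(10), np.ones(10)) / 10.0   # unit triangle, 2 s wide

def stf(weights, rise_samples=10):
    """Multi-time-window STF: overlapping triangles spaced rise/2 apart."""
    s = np.zeros(rise_samples * (len(weights) + 2))
    for k, w in enumerate(weights):
        i = k * rise_samples // 2
        s[i:i + len(tri)] += w * tri
    return s

# Toy damped-sinusoid Green's function (placeholder for a real one).
tg = np.arange(0, 5, dt)
green = np.exp(-tg) * np.sin(2 * np.pi * tg)

# Two sub-faults; the rupture front reaches the second one 1.5 s later.
u = np.zeros(400)
for slip, delay_s in [(1.0, 0.0), (0.6, 1.5)]:
    w = np.convolve(stf([slip, slip / 2]), green)     # sub-fault waveform
    i0 = int(delay_s / dt)
    u[i0:i0 + len(w)] += w[:len(u) - i0]              # superpose with delay
print("synthetic record, first samples:", u[:5])
```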
NASA Astrophysics Data System (ADS)
Ozbulut, O. E.; Silwal, B.
2014-04-01
This study investigates the optimum design parameters of a superelastic friction base isolator (S-FBI) system through a multi-objective genetic algorithm and a performance-based evaluation approach. The S-FBI system consists of a flat steel-PTFE sliding bearing and a superelastic NiTi shape memory alloy (SMA) device. The sliding bearing limits the transfer of shear across the isolation interface and provides damping from sliding friction. The SMA device provides restoring-force capability to the isolation system together with additional damping characteristics. A three-story building is modeled with the S-FBI isolation system. Multi-objective numerical optimization that simultaneously minimizes isolation-level displacements and superstructure response is carried out with a genetic algorithm (GA) in order to optimize the S-FBI system. Nonlinear time history analyses of the building with the S-FBI system are performed. A set of 20 near-field ground motion records is used in the numerical simulations. Results show that the S-FBI system successfully controls the response of the buildings to near-fault earthquakes without sacrificing isolation efficacy or producing large isolation-level deformations.
NASA Technical Reports Server (NTRS)
Hughes, Peter M.; Shirah, Gregory W.; Luczak, Edward C.
1994-01-01
At NASA's Goddard Space Flight Center, fault-isolation expert systems have been developed to support data monitoring and fault detection tasks in satellite control centers. Based on the lessons learned during these efforts in expert system automation, a new domain-specific expert system development tool named the Generic Spacecraft Analysts Assistant (GenSAA), was developed to facilitate the rapid development and reuse of real-time expert systems to serve as fault-isolation assistants for spacecraft analysts. This paper describes GenSAA's capabilities and how it is supporting monitoring functions of current and future NASA missions for a variety of satellite monitoring applications ranging from subsystem health and safety to spacecraft attitude. Finally, this paper addresses efforts to generalize GenSAA's data interface for more widespread usage throughout the space and commercial industry.
A hierarchically distributed architecture for fault isolation expert systems on the space station
NASA Technical Reports Server (NTRS)
Miksell, Steve; Coffer, Sue
1987-01-01
The Space Station Axiomatic Fault Isolating Expert Systems (SAFTIES) system deals with the hierarchical distribution of control and knowledge among independent expert systems performing fault isolation and scheduling of Space Station subsystems. On its lower level, fault isolation is performed on individual subsystems. These fault isolation expert systems contain knowledge about the performance requirements of their particular subsystem and corrective procedures which may be invoked in response to certain performance errors. They can control the functions of equipment in their system and coordinate system task schedules. On a higher level, the Executive contains knowledge of all resources, task schedules for all systems, and the relative priority of all resources and tasks. The Executive can override any subsystem task schedule in order to resolve usage conflicts or to resolve errors that require resources from multiple subsystems. Interprocessor communication is implemented using the SAFTIES Communications Interface (SCI). The SCI is an application-layer protocol which supports the SAFTIES distributed multi-level architecture.
Yan, Zhengbing; Kuang, Te-Hui; Yao, Yuan
2017-09-01
In recent years, multivariate statistical monitoring of batch processes has become a popular research topic, wherein multivariate fault isolation is an important step aimed at identifying the faulty variables contributing most to the detected process abnormality. Although contribution plots have been commonly used in statistical fault isolation, such methods suffer from the smearing effect between correlated variables. In particular, in batch process monitoring, the high autocorrelations and cross-correlations that exist in variable trajectories make the smearing effect unavoidable. To address such a problem, a variable selection-based fault isolation method is proposed in this research, which transforms the fault isolation problem into a variable selection problem in partial least squares discriminant analysis and solves it by calculating a sparse partial least squares model. Different from traditional methods, the proposed method emphasizes the relative importance of each process variable. Such information may help process engineers in conducting root-cause diagnosis.
Chen, Gang; Song, Yongduan; Lewis, Frank L
2016-05-03
This paper investigates the distributed fault-tolerant control problem of networked Euler-Lagrange systems with actuator and communication link faults. An adaptive fault-tolerant cooperative control scheme is proposed to achieve coordinated tracking control of networked uncertain Lagrange systems on a general directed communication topology, which contains a spanning tree with the root node being the active target system. The proposed algorithm is capable of simultaneously compensating for the actuator bias fault, the partial loss-of-effectiveness actuation fault, the communication link fault, the model uncertainty, and the external disturbance. The control scheme does not use any fault detection and isolation mechanism to detect, separate, and identify the actuator faults online, which largely reduces the online computation and expedites the responsiveness of the controller. To validate the effectiveness of the proposed method, a multiple robot-arm cooperative control test-bed is developed for real-time verification. Experiments on the networked robot-arms are conducted, and the results confirm the benefits and effectiveness of the proposed distributed fault-tolerant control algorithms.
NASA Astrophysics Data System (ADS)
Khoshmanesh, M.; Shirzaei, M.
2017-12-01
Recent seismic and geodetic observations indicate that interseismic creep rate varies in both time and space. The spatial extent of creep determines the earthquake potential, while its temporal evolution, known as slow slip events (SSEs), may trigger earthquakes. Although the conditions promoting fault creep are well-established, the mechanism for initiating self-sustaining and sometimes cyclic creep events is enigmatic. Here we investigate a time series of 19 years of surface deformation measured by radar interferometry between 1992 and 2011 along the Central San Andreas Fault (CSAF) to constrain the temporal evolution of creep. We show that the creep rate along the CSAF has a sporadic behavior, quantified by a Gumbel-like probability distribution with a longer tail toward extreme positive rates, which is a signature of burst-like creep dynamics. Defining creep avalanches as clusters of isolated creep with rates exceeding the shearing rate of the tectonic plates, we investigate the statistical properties of their size and length. We show that, similar to the frequency-magnitude distribution of seismic events, the distribution of potency estimated for creep avalanches along the CSAF follows a power law, dictated by the distribution of their along-strike lengths. We further show that an ensemble of concurrent creep avalanches, which aseismically rupture isolated fault compartments, forms the semi-periodic SSEs observed along the CSAF. Using a rate-and-state friction model, we show that normal stress is temporally variable on the fault, and we support this using seismic observations. We propose that, through a self-sustaining fault-valve behavior, compaction-induced elevation of pore pressure within hydraulically isolated fault compartments, followed by frictional dilation, is the cause of the observed episodic SSEs. We further suggest that the 2004 Parkfield Mw6 earthquake may have been triggered by the SSE on the adjacent creeping segment, which increased the Coulomb failure stress by up to 0.45 bar/yr. While creeping segments are suggested to act as barriers that arrest rupture, our study implies that SSEs on these zones may trigger seismic events on adjacent locked sections.
NASA Astrophysics Data System (ADS)
Utegulov, B. B.; Utegulov, A. B.; Meiramova, S.
2018-02-01
The paper proposes the development of a self-learning machine for creating models of microprocessor-based single-phase ground (earth) fault protection devices in networks with an isolated neutral at voltages above 1000 V. Such a machine makes it possible to effectively implement mathematical models of automatic adjustment of the settings of single-phase earth fault protection devices.
Fault Detection, Isolation and Recovery (FDIR) Portable Liquid Oxygen Hardware Demonstrator
NASA Technical Reports Server (NTRS)
Oostdyk, Rebecca L.; Perotti, Jose M.
2011-01-01
The Fault Detection, Isolation and Recovery (FDIR) hardware demonstration will highlight the effort being conducted by Constellation's Ground Operations (GO) to provide the Launch Control System (LCS) with system-level health management during vehicle processing and countdown activities. A proof-of-concept demonstration of the FDIR prototype established the capability of the software to provide real-time fault detection and isolation using generated liquid hydrogen data. The FDIR portable testbed unit (presented here) aims to enhance FDIR by providing a dynamic simulation of Constellation subsystems that feeds the FDIR software live data based on liquid oxygen system properties. The LO2 cryogenic ground system has key properties that are analogous to the properties of an electronic circuit. The LO2 system is modeled using electrical components, and an equivalent circuit is designed on a printed circuit board to simulate the live data. The portable testbed is also equipped with data acquisition and communication hardware to relay the measurements to the FDIR application running on a PC. This portable testbed is an ideal platform for FDIR software testing, troubleshooting, and training, among other uses.
The Buffer Diagnostic Prototype: A fault isolation application using CLIPS
NASA Technical Reports Server (NTRS)
Porter, Ken
1994-01-01
This paper describes problem domain characteristics and development experiences from using CLIPS 6.0 in a proof-of-concept troubleshooting application called the Buffer Diagnostic Prototype. The problem domain is a large digital communications subsystem called the real-time network (RTN), which was designed to upgrade the launch processing system used for shuttle support at KSC. The RTN enables up to 255 computers to share 50,000 data points with millisecond response times. The RTN's extensive built-in test capability, coupled with its lack of any automatic fault isolation capability, presents a unique opportunity for a diagnostic expert system application. The Buffer Diagnostic Prototype addresses RTN diagnosis with a multiple-strategy approach. A novel technique called 'faulty causality' employs inexact qualitative models to process test results. Experiential knowledge provides a capability to recognize symptom-fault associations. The implementation utilizes rule-based and procedural programming techniques, including a goal-directed control structure and a simple text-based generic user interface that may be reusable for other rapid prototyping applications. Although limited in scope, this project demonstrates a diagnostic approach that may be adapted to troubleshoot a broad range of equipment.
Fault management for the Space Station Freedom control center
NASA Technical Reports Server (NTRS)
Clark, Colin; Jowers, Steven; Mcnenny, Robert; Culbert, Chris; Kirby, Sarah; Lauritsen, Janet
1992-01-01
This paper describes model-based reasoning fault isolation in complex systems using automated digraph analysis. It discusses the use of the digraph representation as the paradigm for modeling physical systems, and a method for executing these failure models to provide real-time failure analysis. It also discusses the generality, ease of development and maintenance, complexity management, and susceptibility to verification and validation of digraph failure models. It specifically describes how a NASA-developed digraph evaluation tool, and an automated process working with that tool, can identify failures in a monitored system when supplied with one or more fault indications. This approach is well suited to commercial applications of real-time failure analysis in complex systems because it is both powerful and cost effective.
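A minimal sketch of digraph-based fault isolation is shown below: with edges pointing from cause to effect, the candidate set is the intersection of the upstream ancestor sets of all active fault indications. The graph is a made-up example, not a Space Station Freedom model or the NASA tool's representation.

```python
# Minimal sketch of digraph failure-model isolation: propagate each
# active fault indication upstream through cause->effect edges and
# intersect the resulting ancestor sets to find consistent sources.
causes = {                    # effect -> direct causes (invented example)
    "low_bus_voltage": ["psu_fail", "breaker_open"],
    "sensor_flatline": ["sensor_fail", "psu_fail"],
    "breaker_open":    ["overcurrent"],
}

def upstream(node, seen=None):
    """All nodes that can propagate a failure to `node` (incl. itself)."""
    seen = seen or set()
    seen.add(node)
    for c in causes.get(node, []):
        if c not in seen:
            upstream(c, seen)
    return seen

def isolate(indications):
    return set.intersection(*(upstream(i) for i in indications))

# Both indications active -> psu_fail is the single consistent source.
print(isolate(["low_bus_voltage", "sensor_flatline"]))
```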
NASA Astrophysics Data System (ADS)
Fagereng, A.; Hodge, M.; Biggs, J.; Mdala, H. S.; Goda, K.
2016-12-01
Faults grow through the interaction and linkage of isolated fault segments. Continuous fault systems are those where segments interact, link and may slip synchronously, whereas non-continuous fault systems comprise isolated faults. As seismic moment is related to fault length (Wells and Coppersmith, 1994), understanding whether a fault system is continuous or not is critical in evaluating seismic hazard. Maturity may be a control on fault continuity: immature, low displacement faults are typically assumed to be non-continuous. Here, we study two overlapping, 20 km long, normal fault segments of the N-S striking Bilila-Mtakataka fault, Malawi, in the southern section of the East African Rift System. Despite its relative immaturity, previous studies concluded the Bilila-Mtakataka fault is continuous for its entire 100 km length, with the most recent event equating to an Mw8.0 earthquake (Jackson and Blenkinsop, 1997). We explore whether segment geometry and relationship to pre-existing high-grade metamorphic foliation has influenced segment interaction and fault development. Fault geometry and scarp height is constrained by DEMs derived from SRTM, Pleiades and `Structure from Motion' photogrammetry using a UAV, alongside direct field observations. The segment strikes differ on average by 10°, but up to 55° at their adjacent tips. The southern segment is sub-parallel to the foliation, whereas the northern segment is highly oblique to the foliation. Geometrical surface discontinuities suggest two isolated faults; however, displacement-length profiles and Coulomb stress change models suggest segment interaction, with potential for linkage at depth. Further work must be undertaken on other segments to assess the continuity of the entire fault, concluding whether an earthquake greater than that of the maximum instrumentally recorded (1910 M7.4 Rukwa) is possible.
Usage of Fault Detection Isolation & Recovery (FDIR) in Constellation (CxP) Launch Operations
NASA Technical Reports Server (NTRS)
Ferrell, Rob; Lewis, Mark; Perotti, Jose; Oostdyk, Rebecca; Spirkovska, Lilly; Hall, David; Brown, Barbara
2010-01-01
This paper will explore the usage of Fault Detection Isolation & Recovery (FDIR) in the Constellation Program (CxP), in particular Launch Operations at Kennedy Space Center (KSC). NASA's Exploration Technology Development Program (ETDP) is currently funding a project that is developing a prototype FDIR to demonstrate the feasibility of incorporating FDIR into the CxP Ground Operations Launch Control System (LCS). An architecture that supports multiple FDIR tools has been formulated for integration into the LCS. In addition, tools have been selected that provide fault detection, fault isolation, and anomaly detection, along with integration between flight and ground elements.
Autonomous power expert system advanced development
NASA Technical Reports Server (NTRS)
Quinn, Todd M.; Walters, Jerry L.
1991-01-01
The autonomous power expert (APEX) system is being developed at Lewis Research Center to function as a fault diagnosis advisor for a space power distribution test bed. APEX is a rule-based system capable of detecting faults and isolating their probable causes. APEX also has a justification facility to provide natural language explanations about conclusions reached during fault isolation. To help maintain the health of the power distribution system, additional capabilities were added to APEX. These capabilities allow detection and isolation of incipient faults and enable the expert system to recommend actions/procedures to correct the suspected fault conditions. New capabilities for incipient fault detection consist of storage and analysis of historical data and new user interface displays. After the cause of a fault is determined, appropriate recommended actions are selected by rule-based inferencing, which provides corrective/extended test procedures. Color graphics displays and improved mouse-selectable menus were also added to provide a friendlier user interface. A discussion of APEX in general and a more detailed description of the incipient fault detection, recommended actions, and user interface developments during the last year are presented.
Fault detection, isolation, and diagnosis of self-validating multifunctional sensors.
Yang, Jing-Li; Chen, Yin-Sheng; Zhang, Li-Li; Sun, Zhen
2016-06-01
A novel fault detection, isolation, and diagnosis (FDID) strategy for self-validating multifunctional sensors is presented in this paper. The sparse non-negative matrix factorization-based method can effectively detect faults by using the squared prediction error (SPE) statistic, and variable contribution plots based on the SPE statistic help to locate and isolate the faulty sensitive units. Complete ensemble empirical mode decomposition is employed to decompose the fault signals into a series of intrinsic mode functions (IMFs) and a residual. The sample entropy (SampEn)-weighted energy values of each IMF and the residual are estimated to represent the characteristics of the fault signals. A multi-class support vector machine is introduced to identify the fault mode, with the purpose of diagnosing the status of the faulty sensitive units. The performance of the proposed strategy is compared with other fault detection strategies such as principal component analysis and independent component analysis, and with fault diagnosis strategies such as empirical mode decomposition coupled with a support vector machine. The proposed strategy is fully evaluated on a real self-validating multifunctional sensor experimental system, and the experimental results demonstrate that the proposed strategy provides an excellent solution to the FDID research topic for self-validating multifunctional sensors.
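The SPE-based detection and contribution analysis can be sketched as follows, using ordinary NMF from scikit-learn as a stand-in for the paper's sparse NMF; the data, basis size, and control limit are placeholders, not the experimental system's values.

```python
# Sketch of SPE-based detection with per-variable contributions. Plain
# NMF stands in for sparse NMF; data are random placeholders for
# multifunctional-sensor readings.
import numpy as np
from sklearn.decomposition import NMF

rng = np.random.default_rng(5)
X = np.abs(rng.normal(1.0, 0.1, (200, 6)))       # normal training data
model = NMF(n_components=3, init="nndsvda", max_iter=500).fit(X)
W = model.components_                             # 3 x 6 basis

def spe_and_contrib(x):
    h, *_ = np.linalg.lstsq(W.T, x, rcond=None)   # project onto basis
    e = x - W.T @ h                               # reconstruction error
    return float(e @ e), e**2                     # SPE, per-variable parts

limit = np.percentile([spe_and_contrib(x)[0] for x in X], 99)  # control limit

x_f = np.abs(rng.normal(1.0, 0.1, 6)); x_f[2] += 0.8  # fault on unit index 2
spe, contrib = spe_and_contrib(x_f)
print("fault:", spe > limit, "most contributing unit:", int(np.argmax(contrib)))
```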
Methodologies for Adaptive Flight Envelope Estimation and Protection
NASA Technical Reports Server (NTRS)
Tang, Liang; Roemer, Michael; Ge, Jianhua; Crassidis, Agamemnon; Prasad, J. V. R.; Belcastro, Christine
2009-01-01
This paper reports the latest development of several techniques for adaptive flight envelope estimation and protection for aircraft under damage upset conditions. Through the integration of advanced fault detection algorithms, real-time system identification of the damaged/faulted aircraft, and flight envelope estimation, real-time decision support can be executed autonomously to improve damage tolerance and flight recoverability. In particular, a bank of adaptive nonlinear fault detection and isolation estimators was developed for flight control actuator faults; a real-time system identification method was developed for assessing the dynamics and performance limitations of the impaired aircraft; and online learning neural networks were used to approximate selected aircraft dynamics, which were then inverted to estimate command margins. As off-line training of network weights is not required, the method has the advantage of adapting to varying flight conditions and different vehicle configurations. The key benefit of the envelope estimation and protection system is that it allows the aircraft to fly close to its limit boundary by constantly updating the controller command limits during flight. The developed techniques were demonstrated in NASA's Generic Transport Model (GTM) simulation environment with simulated actuator faults. Simulation results and remarks on future work are presented.
Model-Based Diagnostics for Propellant Loading Systems
NASA Technical Reports Server (NTRS)
Daigle, Matthew John; Foygel, Michael; Smelyanskiy, Vadim N.
2011-01-01
The loading of spacecraft propellants is a complex, risky operation. Therefore, diagnostic solutions are necessary to quickly identify when a fault occurs, so that recovery actions can be taken or an abort procedure can be initiated. Model-based diagnosis solutions, established using an in-depth analysis and understanding of the underlying physical processes, offer the advanced capability to quickly detect and isolate faults, identify their severity, and predict their effects on system performance. We develop a physics-based model of a cryogenic propellant loading system, which describes the complex dynamics of liquid hydrogen filling from a storage tank to an external vehicle tank, as well as the influence of different faults on this process. The model takes into account the main physical processes such as highly nonequilibrium condensation and evaporation of the hydrogen vapor, pressurization, and also the dynamics of liquid hydrogen and vapor flows inside the system in the presence of helium gas. Since the model incorporates multiple faults in the system, it provides a suitable framework for model-based diagnostics and prognostics algorithms. Using this model, we analyze the effects of faults on the system, derive symbolic fault signatures for the purposes of fault isolation, and perform fault identification using a particle filter approach. We demonstrate the detection, isolation, and identification of a number of faults using simulation-based experiments.
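The particle-filter identification step can be suggested with a compact bootstrap filter estimating a single fault parameter; the one-tank surrogate dynamics and all numbers below are invented, not the paper's propellant-loading model.

```python
# Compact bootstrap particle filter identifying a hypothetical leak
# coefficient in a one-tank surrogate system. Purely illustrative.
import numpy as np

rng = np.random.default_rng(6)
dt, true_leak = 1.0, 0.02

def step(level, leak):
    # Level rises with a fixed inflow and drains through the leak.
    return level + dt * (0.05 - leak * np.sqrt(np.maximum(level, 0)))

# Simulate measured tank level with a leak fault present.
level, zs = 4.0, []
for _ in range(100):
    level = step(level, true_leak)
    zs.append(level + rng.normal(0, 0.02))

# Particles carry (level, leak); leak is the fault parameter to identify.
N = 2000
parts = np.column_stack([np.full(N, 4.0), rng.uniform(0.0, 0.05, N)])
for z in zs:
    parts[:, 0] = step(parts[:, 0], parts[:, 1]) + rng.normal(0, 0.005, N)
    w = np.exp(-0.5 * ((z - parts[:, 0]) / 0.02) ** 2)   # likelihood weights
    w /= w.sum()
    parts = parts[rng.choice(N, N, p=w)]                 # resample
    parts[:, 1] += rng.normal(0, 0.0005, N)              # parameter jitter

print(f"estimated leak coefficient: {parts[:, 1].mean():.3f} (true {true_leak})")
```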
Learning in the model space for cognitive fault diagnosis.
Chen, Huanhuan; Tino, Peter; Rodan, Ali; Yao, Xin
2014-01-01
The emergence of large sensor networks has facilitated the collection of large amounts of real-time data to monitor and control complex engineering systems. However, in many cases the collected data may be incomplete or inconsistent, while the underlying environment may be time-varying or unformulated. In this paper, we develop an innovative cognitive fault diagnosis framework that tackles the above challenges. This framework investigates fault diagnosis in the model space instead of the signal space. Learning in the model space is implemented by fitting a series of models using a series of signal segments selected with a sliding window. By investigating the learning techniques in the fitted model space, faulty models can be discriminated from healthy models using a one-class learning algorithm. The framework enables us to construct a fault library when unknown faults occur, which can be regarded as cognitive fault isolation. This paper also theoretically investigates how to measure the pairwise distance between two models in the model space and incorporates the model distance into the learning algorithm in the model space. The results on three benchmark applications and one simulated model for the Barcelona water distribution network confirm the effectiveness of the proposed framework.
Multiple incipient sensor faults diagnosis with application to high-speed railway traction devices.
Wu, Yunkai; Jiang, Bin; Lu, Ningyun; Yang, Hao; Zhou, Yang
2017-03-01
This paper deals with the problem of incipient fault diagnosis for a class of Lipschitz nonlinear systems with sensor biases and explores further results of the total measurable fault information residual (ToMFIR). Firstly, state and output transformations are introduced to transform the original system into two subsystems. The first subsystem is subject to system disturbances and free from sensor faults, while the second subsystem contains sensor faults but no system disturbances. Sensor faults in the second subsystem are then recast as actuator faults by using a pseudo-actuator based approach. Since the effects of system disturbances on the residual are completely decoupled, multiple incipient sensor faults can be detected by constructing the ToMFIR, and a fault detectability condition is then derived for discriminating the detectable incipient sensor faults. Further, a sliding-mode observer (SMO) based fault isolation scheme is designed to guarantee accurate isolation of multiple sensor faults. Finally, simulation results conducted on a CRH2 high-speed railway traction device are given to demonstrate the effectiveness of the proposed approach. Copyright © 2016 ISA. Published by Elsevier Ltd. All rights reserved.
NASA Technical Reports Server (NTRS)
Simon, Dan; Simon, Donald L.
2009-01-01
Given a system which can fail in one of n different ways, a fault detection and isolation (FDI) algorithm uses sensor data to determine which fault is the most likely to have occurred. The effectiveness of an FDI algorithm can be quantified by a confusion matrix, which indicates the probability that each fault is isolated given that each fault has occurred. Confusion matrices are often generated with simulation data, particularly for complex systems. In this paper we perform FDI using sums of squares of sensor residuals (SSRs). We assume that the sensor residuals are Gaussian, which gives the SSRs a chi-squared distribution. We then generate analytic lower and upper bounds on the confusion matrix elements. This allows for the generation of optimal sensor sets without numerical simulations. The confusion matrix bounds are verified with simulated aircraft engine data.
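A minimal sketch of the SSR test, assuming unit-variance Gaussian residuals: under the no-fault hypothesis the SSR follows a central chi-squared distribution, so a detection threshold can be set analytically. The sensor count, fault signature, and false-alarm rate below are hypothetical, and the sketch covers only detection, not the paper's analytic confusion-matrix bounds.

```python
# FDI via sums of squares of residuals (SSRs). With Gaussian unit-variance
# residuals, the SSR is chi-squared with n_sensors degrees of freedom;
# a fault shifts the residual mean, inflating the SSR.
import numpy as np
from scipy.stats import chi2

rng = np.random.default_rng(2)
n_sensors = 6
threshold = chi2.ppf(0.99, df=n_sensors)   # 1% false-alarm design point

def ssr(residuals):
    return float(np.sum(residuals ** 2))

# Nominal case: residuals are pure noise.
r_nominal = rng.normal(0, 1, n_sensors)
# Fault case: a hypothetical signature biases sensors 1 and 3.
signature = np.array([0, 3.0, 0, 2.5, 0, 0])
r_fault = rng.normal(0, 1, n_sensors) + signature

for name, r in [("nominal", r_nominal), ("fault", r_fault)]:
    s = ssr(r)
    print(f"{name}: SSR={s:.1f}, threshold={threshold:.1f}, alarm={s > threshold}")
```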
NASA Technical Reports Server (NTRS)
Rinehart, Aidan W.; Simon, Donald L.
2015-01-01
This paper presents a model-based architecture for performance trend monitoring and gas path fault diagnostics designed for analyzing streaming transient aircraft engine measurement data. The technique analyzes residuals between sensed engine outputs and model predicted outputs for fault detection and isolation purposes. Diagnostic results from the application of the approach to test data acquired from an aircraft turbofan engine are presented. The approach is found to avoid false alarms when presented nominal fault-free data. Additionally, the approach is found to successfully detect and isolate gas path seeded-faults under steady-state operating scenarios although some fault misclassifications are noted during engine transients. Recommendations for follow-on maturation and evaluation of the technique are also presented.
NASA Technical Reports Server (NTRS)
Rinehart, Aidan W.; Simon, Donald L.
2014-01-01
This paper presents a model-based architecture for performance trend monitoring and gas path fault diagnostics designed for analyzing streaming transient aircraft engine measurement data. The technique analyzes residuals between sensed engine outputs and model predicted outputs for fault detection and isolation purposes. Diagnostic results from the application of the approach to test data acquired from an aircraft turbofan engine are presented. The approach is found to avoid false alarms when presented nominal fault-free data. Additionally, the approach is found to successfully detect and isolate gas path seeded-faults under steady-state operating scenarios although some fault misclassifications are noted during engine transients. Recommendations for follow-on maturation and evaluation of the technique are also presented.
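One common way to reduce the transient misclassifications noted in the two abstracts above is to require residual exceedances to persist before declaring a fault. The sketch below is a generic illustration of that idea, not the architecture's actual logic; the threshold and persistence count are arbitrary.

```python
# Residual-based detection with a persistence check to suppress
# short-lived exceedances during transients (illustrative only).
import numpy as np

def detect(residuals, threshold, persistence=5):
    """Flag a fault only after `persistence` consecutive exceedances."""
    count = 0
    for t, r in enumerate(np.abs(residuals)):
        count = count + 1 if r > threshold else 0
        if count >= persistence:
            return t  # first confirmed detection index
    return None

rng = np.random.default_rng(3)
r = rng.normal(0, 1, 200)
r[120:] += 4.0            # bias fault injected at t=120
print("confirmed at t =", detect(r, threshold=3.0))
```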
Fault-Tolerant Local-Area Network
NASA Technical Reports Server (NTRS)
Morales, Sergio; Friedman, Gary L.
1988-01-01
Local-area network (LAN) for computers prevents single-point failure from interrupting communication between nodes of network. Includes two complete cables, LAN 1 and LAN 2. Microprocessor-based slave switches link cables to such network-node devices as workstations, print servers, and file servers. Slave switches respond to commands from master switch, connecting nodes to two cable networks or disconnecting them so they are completely isolated. System monitor and control computer (SMC) acts as gateway, allowing nodes on either cable to communicate with each other and ensuring that LAN 1 and LAN 2 are fully used when functioning properly. Network monitors and controls itself, automatically routes traffic for efficient use of resources, and isolates and corrects its own faults, with potential dramatic reduction in time out of service.
Fault Tree Based Diagnosis with Optimal Test Sequencing for Field Service Engineers
NASA Technical Reports Server (NTRS)
Iverson, David L.; George, Laurence L.; Patterson-Hine, F. A.; Lum, Henry, Jr. (Technical Monitor)
1994-01-01
When field service engineers go to customer sites to service equipment, they want to diagnose and repair failures quickly and cost effectively. Symptoms exhibited by failed equipment frequently suggest several possible causes which require different approaches to diagnosis. This can lead the engineer to follow several fruitless paths in the diagnostic process before finding the actual failure. To assist in this situation, we have developed the Fault Tree Diagnosis and Optimal Test Sequence (FTDOTS) software system, which performs automated diagnosis and ranks diagnostic hypotheses based on failure probability and the time or cost required to isolate and repair each failure. FTDOTS first finds a set of possible failures that explain the exhibited symptoms by using a fault tree reliability model as a diagnostic knowledge base. It then ranks the hypothesized failures based on how likely they are and how long it would take or how much it would cost to isolate and repair them. This ordering suggests an optimal sequence for the field service engineer to investigate the hypothesized failures in order to minimize the time or cost required to accomplish the repair task. Previously, field service personnel would arrive at the customer site and choose which components to investigate based on past experience and service manuals. Using FTDOTS running on a portable computer, they can now enter a set of symptoms and get a list of possible failures ordered in an optimal test sequence to help them in their decisions. If facilities are available, the field engineer can connect the portable computer to the malfunctioning device for automated data gathering. FTDOTS is currently being applied to field service of medical test equipment. The techniques are flexible enough to use for many different types of devices. If a fault tree model of the equipment and information about component failure probabilities and isolation times or costs are available, a diagnostic knowledge base for that device can be developed easily.
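A well-known result for sequentially searching independent hypotheses is that investigating candidates in decreasing order of probability-to-cost ratio minimizes the expected cost of finding the fault; the sketch below illustrates such a ranking. The failure names, probabilities, and repair times are hypothetical and not drawn from FTDOTS.

```python
# Rank hypothesized failures by probability per unit isolation/repair cost.
failures = [
    # (name, probability of being the cause, hours to isolate and repair)
    ("power supply", 0.40, 2.0),
    ("sensor board", 0.30, 0.5),
    ("cabling",      0.20, 0.25),
    ("firmware",     0.10, 4.0),
]

# Classic result: investigating in decreasing p/c order minimizes the
# expected cost of a sequential search over independent hypotheses.
sequence = sorted(failures, key=lambda f: f[1] / f[2], reverse=True)
for name, p, c in sequence:
    print(f"{name}: p={p:.2f}, cost={c:.2f} h, p/c={p/c:.2f}")
```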
A satellite-based digital data system for low-frequency geophysical data
Silverman, S.; Mortensen, C.; Johnston, M.
1989-01-01
A reliable method for collection, display, and analysis of low-frequency geophysical data from isolated sites, which can be located throughout North and South America and the Pacific Rim, has been developed for use with the Geostationary Operational Environmental Satellite (GOES) system. This system provides real-time monitoring of crustal deformation parameters such as tilt, strain, fault displacement, local magnetic field, crustal geochemistry, and water levels, as well as meteorological and other parameters, along faults in California and Alaska, and in volcanic regions in the western United States, Rabaul, and other locations in the New Britain region of the South Pacific. Various mathematical, statistical, and graphical algorithms process the incoming data to detect changes in crustal deformation and fault slip that may indicate the first stages of catastrophic fault failure. -from Authors
NASA Technical Reports Server (NTRS)
Dumas, A.
1981-01-01
Three major areas considered in the development of an overall maintenance scheme for computer equipment are described. The areas of concern related to fault isolation techniques are the programmer (or user), the company and its policies, and the equipment manufacturer.
NASA Astrophysics Data System (ADS)
Kong, Changduk; Lim, Semyeong
2011-12-01
Recently, health monitoring systems for the major gas path components of gas turbines have mostly used model-based methods such as Gas Path Analysis (GPA). This method finds quantitative changes in component performance characteristic parameters, such as isentropic efficiency and mass flow parameter, by comparing measured engine performance parameters (temperatures, pressures, rotational speeds, fuel consumption, etc.) with clean-engine performance parameters, free of any engine faults, calculated by a base engine performance model. Expert engine diagnostic systems using artificial intelligence methods such as Neural Networks (NNs), Fuzzy Logic, and Genetic Algorithms (GAs) have been studied to improve on the model-based method. Among them, NNs are most often used in engine fault diagnostic systems because of their good learning performance, but they suffer from low accuracy and long training times when the learning database must be built from a large amount of learning data. In addition, they require a very complex structure to effectively identify single-type or multiple-type faults of gas path components. This work inversely builds a base performance model of a turboprop engine, to be used in a high-altitude UAV, from measured performance data, and proposes a fault diagnostic system that combines the base engine performance model with artificial intelligence methods, namely Fuzzy Logic and Neural Networks. The proposed diagnostic system first isolates the faulted components using Fuzzy Logic, then quantifies the faults of the identified components using a NN trained on a fault learning database, which is obtained from the developed base performance model. In training the NN, the Feed Forward Back Propagation (FFBP) method is used. Finally, it is verified through several test examples that component faults implanted arbitrarily in the engine are well isolated and quantified by the proposed diagnostic system.
NASA Technical Reports Server (NTRS)
Simon, Donald L.
2010-01-01
Aircraft engine performance trend monitoring and gas path fault diagnostics are closely related technologies that assist operators in managing the health of their gas turbine engine assets. Trend monitoring is the process of monitoring the gradual performance change that an aircraft engine will naturally incur over time due to turbomachinery deterioration, while gas path diagnostics is the process of detecting and isolating the occurrence of any faults impacting engine flow-path performance. Today, performance trend monitoring and gas path fault diagnostic functions are performed by a combination of on-board and off-board strategies. On-board engine control computers contain logic that monitors for anomalous engine operation in real-time. Off-board ground stations are used to conduct fleet-wide engine trend monitoring and fault diagnostics based on data collected from each engine each flight. Continuing advances in avionics are enabling the migration of portions of the ground-based functionality on-board, giving rise to more sophisticated on-board engine health management capabilities. This paper reviews the conventional engine performance trend monitoring and gas path fault diagnostic architecture commonly applied today, and presents a proposed enhanced on-board architecture for future applications. The enhanced architecture gains real-time access to an expanded quantity of engine parameters, and provides advanced on-board model-based estimation capabilities. The benefits of the enhanced architecture include the real-time continuous monitoring of engine health, the early diagnosis of fault conditions, and the estimation of unmeasured engine performance parameters. A future vision to advance the enhanced architecture is also presented and discussed.
An Integrated Approach for Aircraft Engine Performance Estimation and Fault Diagnostics
NASA Technical Reports Server (NTRS)
Simon, Donald L.; Armstrong, Jeffrey B.
2012-01-01
A Kalman filter-based approach for integrated on-line aircraft engine performance estimation and gas path fault diagnostics is presented. This technique is specifically designed for underdetermined estimation problems where there are more unknown system parameters representing deterioration and faults than available sensor measurements. A previously developed methodology is applied to optimally design a Kalman filter to estimate a vector of tuning parameters, appropriately sized to enable estimation. The estimated tuning parameters can then be transformed into a larger vector of health parameters representing system performance deterioration and fault effects. The results of this study show that basing fault isolation decisions solely on the estimated health parameter vector does not provide ideal results. Furthermore, expanding the number of the health parameters to address additional gas path faults causes a decrease in the estimation accuracy of those health parameters representative of turbomachinery performance deterioration. However, improved fault isolation performance is demonstrated through direct analysis of the estimated tuning parameters produced by the Kalman filter. This was found to provide equivalent or superior accuracy compared to the conventional fault isolation approach based on the analysis of sensed engine outputs, while simplifying online implementation requirements. Results from the application of these techniques to an aircraft engine simulation are presented and discussed.
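The tuner-to-health expansion can be pictured with a small linear-algebra sketch: a low-dimensional tuner vector q is what the Kalman filter can actually estimate, and a fixed matrix expands it into health-parameter space. The matrix below is a random stand-in for the optimally designed transformation, so the imperfect reconstruction it shows is exactly the limitation the abstract describes.

```python
# Underdetermined estimation: more health parameters than estimable tuners.
# A stand-in map V expands the estimated tuner vector q into health space.
import numpy as np

rng = np.random.default_rng(4)
n_health, n_tuners = 10, 4          # 10 health parameters, 4 estimable tuners

h_true = np.zeros(n_health)
h_true[2] = -0.03                   # e.g., a compressor efficiency fault

V = rng.normal(size=(n_health, n_tuners))            # stand-in tuner-to-health map
q_true, *_ = np.linalg.lstsq(V, h_true, rcond=None)  # best tuner representation
q_est = q_true + rng.normal(0, 0.001, n_tuners)      # filter output (noisy)

h_est = V @ q_est
print(f"true h[2] = {h_true[2]:.3f}, reconstructed h[2] = {h_est[2]:.3f}")
# The reconstruction is only the projection of h onto the tuner subspace,
# which is why the paper finds direct analysis of q more reliable for
# fault isolation than analysis of the expanded health vector.
```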
NASA Astrophysics Data System (ADS)
Kodali, Anuradha
In this thesis, we develop dynamic multiple fault diagnosis (DMFD) algorithms to diagnose faults that are sporadic and coupled. Firstly, we formulate a coupled factorial hidden Markov model-based (CFHMM) framework to diagnose dependent faults occurring over time (the dynamic case). Here, we implement a mixed-memory Markov coupling model to determine the most likely sequence of (dependent) fault states, the one that best explains the observed test outcomes over time. An iterative Gauss-Seidel coordinate ascent optimization method is proposed for solving the problem. A soft Viterbi algorithm is also implemented within the framework for decoding dependent fault states over time. We demonstrate the algorithm on simulated and real-world systems with coupled faults; the results show that this approach improves the correct isolation rate compared to the formulation where independent fault states are assumed. Secondly, we formulate a generalization of set-covering, termed dynamic set-covering (DSC), which involves a series of coupled set-covering problems over time. The objective of the DSC problem is to infer the most probable time sequence of a parsimonious set of failure sources that explains the observed test outcomes over time. The DSC problem is NP-hard and intractable due to the fault-test dependency matrix that couples the failed tests and faults via the constraint matrix, and due to the temporal dependence of failure sources over time. Here, the DSC problem is motivated from the viewpoint of a dynamic multiple fault diagnosis problem, but it has wide applications in operations research, e.g., the facility location problem. Thus, we also formulate the DSC problem in the context of a dynamically evolving facility location problem. Here, a facility can be opened, closed, or temporarily unavailable at any time for a given requirement of demand points. These activities are associated with costs or penalties, viz., phase-in or phase-out costs for the opening or closing of a facility, respectively. The set-covering matrix encapsulates the relationship among the rows (tests or demand points) and columns (faults or locations) of the system at each time. By relaxing the coupling constraints using Lagrange multipliers, the DSC problem can be decoupled into independent subproblems, one for each column. Each subproblem is solved using the Viterbi decoding algorithm, and a primal feasible solution is constructed by modifying the Viterbi solutions via a heuristic. The proposed Viterbi-Lagrangian relaxation algorithm (VLRA) provides a measure of suboptimality via an approximate duality gap. As a major practical extension of the above problem, we also consider the problem of diagnosing faults with delayed test outcomes, termed delay-dynamic set-covering (DDSC), and experiment with real-world problems that exhibit masking faults. Also, we present simulation results on OR-library datasets (set-covering formulations are predominantly validated on these matrices in the literature), posed as facility location problems. Finally, we implement these algorithms to solve problems in aerospace and automotive applications. Firstly, we address the diagnostic ambiguity problem in aerospace and automotive applications by developing a dynamic fusion framework that includes dynamic multiple fault diagnosis algorithms.
This improves the correct fault isolation rate, while minimizing false alarm rates, by considering multiple faults instead of the traditional data-driven techniques based on the single fault (class), single epoch (static) assumption. The dynamic fusion problem is formulated as a maximum a posteriori decision problem of inferring the fault sequence based on uncertain outcomes of multiple binary classifiers over time. The fusion process involves three steps: the first step transforms the multi-class problem into dichotomies using error correcting output codes (ECOC), thereby solving the concomitant binary classification problems; the second step fuses the outcomes of multiple binary classifiers over time using a sliding window or block dynamic fusion method that exploits temporal data correlations. We solve this NP-hard optimization problem via a Lagrangian relaxation (variational) technique. The third step optimizes the classifier parameters, viz., probabilities of detection and false alarm, using a genetic algorithm. The proposed algorithm is demonstrated by computing the diagnostic performance metrics on a twin-spool commercial jet engine, an automotive engine, and UCI datasets (problems with high classification error are specifically chosen for experimentation). We show that the primal-dual optimization framework performed consistently better than any traditional fusion technique, even when forced to give a single fault decision, across a range of classification problems. Secondly, we implement the inference algorithms to diagnose faults in vehicle systems that are controlled by a network of electronic control units (ECUs). The faults, originating from various interactions, especially between hardware and software, are particularly challenging to address. Our basic strategy is to divide the fault universe of such cyber-physical systems in a hierarchical manner and monitor the critical variables/signals that have impact at different levels of interaction. The proposed diagnostic strategy is validated on an electrical power generation and storage system (EPGS) controlled by two ECUs in a CANoe/MATLAB co-simulation environment. Eleven faults are injected, with the failures originating in actuator hardware, sensor, controller hardware, and software components. A diagnostic matrix is established via simulations to represent the relationship between the faults and the test outcomes (also known as fault signatures). The results show that the proposed diagnostic strategy is effective in addressing the interaction-caused faults.
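The per-fault Viterbi subproblem that arises after the Lagrangian decoupling described above can be illustrated compactly: decode the most likely on/off state sequence of a single fault from noisy test outcomes over time. All transition, detection, and false-alarm probabilities below are hypothetical.

```python
# Viterbi decoding of a single fault's on/off state sequence from noisy
# test outcomes, the per-column subproblem in the DSC decomposition.
import numpy as np

p_appear, p_disappear = 0.05, 0.10      # fault-state transition probabilities
p_detect, p_false = 0.90, 0.05          # test detection / false-alarm rates

trans = np.log(np.array([[1 - p_appear, p_appear],
                         [p_disappear, 1 - p_disappear]]))
# emit[state][outcome]: probability a test passes (0) or fails (1)
emit = np.log(np.array([[1 - p_false, p_false],
                        [1 - p_detect, p_detect]]))

outcomes = [0, 0, 1, 1, 1, 0, 1, 1]     # observed test results over time

n = len(outcomes)
score = np.full((n, 2), -np.inf)
back = np.zeros((n, 2), dtype=int)
score[0] = np.log([0.99, 0.01]) + emit[:, outcomes[0]]
for t in range(1, n):
    for s in (0, 1):
        cand = score[t - 1] + trans[:, s]
        back[t, s] = int(np.argmax(cand))
        score[t, s] = cand.max() + emit[s, outcomes[t]]

# Backtrack the maximum a posteriori state sequence.
states = [int(np.argmax(score[-1]))]
for t in range(n - 1, 0, -1):
    states.append(back[t, states[-1]])
print("decoded fault states:", states[::-1])
```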
NASA Technical Reports Server (NTRS)
Ferrell, Bob; Lewis, Mark; Perotti, Jose; Oostdyk, Rebecca; Goerz, Jesse; Brown, Barbara
2010-01-01
This paper's main purpose is to detail issues and lessons learned regarding designing, integrating, and implementing Fault Detection Isolation and Recovery (FDIR) for Constellation Exploration Program (CxP) Ground Operations at Kennedy Space Center (KSC).
Simplified Interval Observer Scheme: A New Approach for Fault Diagnosis in Instruments
Martínez-Sibaja, Albino; Astorga-Zaragoza, Carlos M.; Alvarado-Lassman, Alejandro; Posada-Gómez, Rubén; Aguila-Rodríguez, Gerardo; Rodríguez-Jarquin, José P.; Adam-Medina, Manuel
2011-01-01
There are different observer-based schemes to detect and isolate faults in dynamic processes. In the case of fault diagnosis in instruments (FDI) there are different diagnosis schemes based on the number of observers: the Simplified Observer Scheme (SOS) requires only one observer, uses all the inputs and only one output, and detects faults in one sensor; the Dedicated Observer Scheme (DOS) again uses all the inputs and just one output per observer, but this time a bank of observers is capable of locating multiple faults in sensors; and the Generalized Observer Scheme (GOS) involves a reduced bank of observers, where each observer uses all the inputs and m-1 outputs, and allows the localization of unique faults. This work proposes a new scheme named the Simplified Interval Observer (SIOS-FDI), which does not require the measurement of any input and, with just one output, allows the detection of unique faults in sensors. Because it does not require any input, it greatly simplifies fault diagnosis in processes in which it is difficult to measure all the inputs, as in the case of biological reactors. PMID:22346593
Detecting and isolating abrupt changes in linear switching systems
NASA Astrophysics Data System (ADS)
Nazari, Sohail; Zhao, Qing; Huang, Biao
2015-04-01
In this paper, a novel fault detection and isolation (FDI) method for switching linear systems is developed. All input and output signals are assumed to be corrupted by measurement noise. In the proposed method, a 'lifted' linear model named the stochastic hybrid decoupling polynomial (SHDP) is introduced. The SHDP model governs the dynamics of the switching linear system across all of its modes and is independent of the switching sequence. The errors-in-variables (EIV) representation of the SHDP is derived and used for fault residual generation and isolation following the well-adopted local approach. The proposed FDI method can detect and isolate fault-induced abrupt changes in the switching models' parameters without estimating the switching modes. Furthermore, the analytical expressions of the gradient vector and Hessian matrix are obtained based on the EIV SHDP formulation, so that they can be used to implement the online fault detection scheme. The performance of the proposed method is illustrated by simulation examples.
Multiple sensor fault diagnosis for dynamic processes.
Li, Cheng-Chih; Jeng, Jyh-Cheng
2010-10-01
Modern industrial plants are usually large in scale and contain a great number of sensors. Sensor fault diagnosis is crucial and necessary for process safety and optimal operation. This paper proposes a systematic approach to detect, isolate and identify multiple sensor faults for multivariate dynamic systems. The current work first defines deviation vectors for sensor observations, and then defines and derives the basic sensor fault matrix (BSFM), consisting of the normalized basic fault vectors, by several different methods. By projecting a process deviation vector onto the space spanned by the BSFM, this research uses a vector of the resulting weights in each direction for multiple sensor fault diagnosis. This study also proposes a novel monitoring index and derives the corresponding sensor fault detectability, and it further utilizes that weight vector to isolate and identify multiple sensor faults, discussing isolatability and identifiability. Simulation examples and comparison with two conventional PCA-based contribution plots are presented to demonstrate the effectiveness of the proposed methodology. Copyright © 2010 ISA. Published by Elsevier Ltd. All rights reserved.
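The projection step can be sketched in a few lines of linear algebra: express the observed deviation vector in a basis of normalized fault directions and read large weights as the isolated sensors. The identity fault matrix and threshold below are hypothetical simplifications (additive bias faults), not the paper's derivation of the BSFM.

```python
# Project a deviation vector onto a basis of sensor-fault directions;
# large weights point to the faulty sensors.
import numpy as np

rng = np.random.default_rng(5)
n_sensors = 5
# Basic sensor fault matrix: column j is the normalized deviation direction
# caused by a fault on sensor j (identity for additive bias faults).
BSFM = np.eye(n_sensors)

# Observed deviation: biases on sensors 1 and 3 plus noise.
deviation = rng.normal(0, 0.05, n_sensors)
deviation[1] += 2.0
deviation[3] -= 1.5

weights, *_ = np.linalg.lstsq(BSFM, deviation, rcond=None)
faulty = np.flatnonzero(np.abs(weights) > 3 * 0.05)   # 3-sigma threshold
print("weights:", np.round(weights, 2), "-> isolated sensors:", faulty)
```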
Multiscale Dynamics of Aseismic Slip on Central San Andreas Fault
NASA Astrophysics Data System (ADS)
Khoshmanesh, M.; Shirzaei, M.
2018-03-01
Understanding the evolution of aseismic slip enables constraining the fault's seismic budget and provides insight into the dynamics of creep. Inverting the time series of surface deformation measured along the Central San Andreas Fault obtained from interferometric synthetic aperture radar, in combination with measurements of repeating earthquakes, we constrain the spatiotemporal distribution of creep during 1992-2010. We identify a new class of intermediate-term creep rate variations that evolve over decadal scale, releasing stress on the accelerating zone and loading adjacent decelerating patches. We further show that in the short term (<2 year period), creep avalanches, that is, isolated clusters of accelerated aseismic slip with velocities exceeding the long-term rate, govern the dynamics of creep. The statistical properties of these avalanches suggest the existence of elevated pore pressure in the fault zone, consistent with laboratory experiments.
DOE Office of Scientific and Technical Information (OSTI.GOV)
None
2012-01-09
GENI Project: General Atomics is developing a direct current (DC) circuit breaker that could protect the grid from faults 100 times faster than its alternating current (AC) counterparts. Circuit breakers are critical elements in any electrical system. At the grid level, their main function is to isolate parts of the grid where a fault has occurred—such as a downed power line or a transformer explosion—from the rest of the system. DC circuit breakers must interrupt the system during a fault much faster than AC circuit breakers to prevent possible damage to cables, converters and other grid-level components. General Atomics' high-voltage DC circuit breaker would react in less than 1/1,000th of a second to interrupt current during a fault, preventing potential hazards to people and equipment.
Functional Fault Modeling of a Cryogenic System for Real-Time Fault Detection and Isolation
NASA Technical Reports Server (NTRS)
Ferrell, Bob; Lewis, Mark; Oostdyk, Rebecca; Perotti, Jose
2009-01-01
When setting out to model and/or simulate a complex mechanical or electrical system, a modeler is faced with a vast array of tools, software, equations, algorithms and techniques that may individually or in concert aid in the development of the model. Mature requirements and a well understood purpose for the model may considerably shrink the field of possible tools and algorithms that will suit the modeling solution. Is the model intended to be used in an offline fashion or in real-time? On what platform does it need to execute? How long will the model be allowed to run before it outputs the desired parameters? What resolution is desired? Do the parameters need to be qualitative or quantitative? Is it more important to capture the physics or the function of the system in the model? Does the model need to produce simulated data? All these questions and more will drive the selection of the appropriate tools and algorithms, but the modeler must be diligent to bear in mind the final application throughout the modeling process to ensure the model meets its requirements without needless iterations of the design. The purpose of this paper is to describe the considerations and techniques used in the process of creating a functional fault model of a liquid hydrogen (LH2) system that will be used in a real-time environment to automatically detect and isolate failures.
NASA Astrophysics Data System (ADS)
Haziza, M.
1990-10-01
The DIAMS satellite fault isolation expert system shell concept is described. The project, initiated in 1985, has led to the development of a prototype Expert System (ES) dedicated to the Telecom 1 attitude and orbit control system. The prototype ES has been installed in the Telecom 1 satellite control center and evaluated by Telecom 1 operations. The development of a fault isolation ES covering a whole spacecraft (the French telecommunication satellite Telecom 2) is currently being undertaken. Full scale industrial applications raise stringent requirements in terms of knowledge management and software development methodology. The approach used by MATRA ESPACE to face this challenge is outlined.
Earthquake behavior along the Levant fault from paleoseismology (Invited)
NASA Astrophysics Data System (ADS)
Klinger, Y.; Le Beon, M.; Wechsler, N.; Rockwell, T. K.
2013-12-01
The Levant fault is a major continental structure, 1200 km long, that bounds the Arabian plate to the west. The finite offset of this left-lateral strike-slip fault is estimated to be 105 km for the section located south of the restraining bend corresponding roughly to Lebanon. Along this southern section the slip rate has been estimated over a large range of time scales, from a few years to a few hundred thousand years. Over these different time scales, studies agree on a slip rate of 5 ± 2 mm/yr. The southern section of the Levant fault is particularly attractive for studying earthquake behavior through time for several reasons: 1/ The fault geometry is simple and well constrained. 2/ The fault system is isolated and does not interact with obvious neighboring fault systems. 3/ The Middle East, where the Levant fault is located, is the region of the world with the longest and most complete historical record of past earthquakes. About 30 km north of the city of Aqaba, we opened a trench in the southern part of the Yotvata playa, along the Wadi Araba fault segment. The stratigraphy presents silty sand playa units alternating with coarser sand sediments from alluvial fans flowing westwards from the Jordan plateau. Two fault zones can be recognized in the trench and a minimum of 8 earthquakes can be identified, based on upward terminations of ground ruptures. Dense 14C dating through the entire exposure allows matching the 4 most recent events with historical events in AD 1458, AD 1212, AD 1068 and AD 748. The size of the ground ruptures suggests a bimodal distribution of earthquakes, with earthquakes rupturing the entire Wadi Araba segment and earthquakes ending in the extensional jog forming the playa. The timing of earthquakes shows that no earthquake has occurred at this site for about 600 years, suggesting earthquake clustering along this section of the fault and potential for a large earthquake in the near future. 3D paleoseismological trenches at the Beteiha site, north of Lake Tiberias, show that earthquake activity there varies significantly through time, with periods of intense seismic activity associated with small horizontal offsets and periods of bigger earthquakes with larger offsets. Hence, earthquake clustering also seems to govern earthquake occurrence along this segment of the Levant fault. On the contrary, further north, where the fault bends and deformation is spread between several parallel faults, paleoseismological trenches at the Yammouneh site show that earthquakes seem to recur fairly regularly, every 800 years. Such a difference in behavior along different sections of the fault suggests that fault geometry might play an important role in the way earthquakes are distributed through time.
NASA Astrophysics Data System (ADS)
McLaskey, G. C.; Glaser, S. D.; Thomas, A.; Burgmann, R.
2011-12-01
Repeating earthquake sequences (RES) are thought to occur on isolated patches of a fault that fail in repeated stick-slip fashion. RES enable researchers to study the effect of variations in earthquake recurrence time and the relationship between fault healing and earthquake generation. Fault healing is thought to be the physical process responsible for the 'state' variable in widely used rate- and state-dependent friction equations. We analyze RES created in laboratory stick-slip experiments on a direct shear apparatus instrumented with an array of very high frequency (1 kHz to 1 MHz) displacement sensors. Tests are conducted on the model material polymethylmethacrylate (PMMA). While the frictional properties of this glassy polymer can be characterized with the rate- and state-dependent friction laws, the rate of healing in PMMA is higher than in room-temperature rock. Our experiments show that in addition to a modest increase in fault strength and stress drop with increasing healing time, there are distinct spectral changes in the recorded laboratory earthquakes. Using the impact of a tiny sphere on the surface of the test specimen as a known source calibration function, we are able to remove the instrument and apparatus response from recorded signals so that the source spectrum of the laboratory earthquakes can be accurately estimated. The rupture of a fault that was allowed to heal produces a laboratory earthquake with increased high frequency content compared to one produced by a fault which has had less time to heal. These laboratory results are supported by observations of RES on the Calaveras and San Andreas faults, which show similar spectral changes when recurrence time is perturbed by a nearby large earthquake. Healing is typically attributed to a creep-like relaxation of the material which causes the true area of contact of interacting asperity populations to increase with time in a quasi-logarithmic way. The increase in high frequency seismicity shown here suggests that fault healing produces an increase in fault strength heterogeneity on a small spatial scale. A fault which has healed may possess an asperity population which will allow less slip to be accumulated aseismically, will rupture faster and more violently, and will produce more high frequency seismic waves than one which has not healed.
NASA Astrophysics Data System (ADS)
Davoodi, M.; Meskin, N.; Khorasani, K.
2018-03-01
The problem of simultaneous fault detection, isolation and tracking (SFDIT) control design for linear systems subject to both bounded energy and bounded peak disturbances is considered in this work. A dynamic observer is proposed and implemented by using the H∞/H-/L1 formulation of the SFDIT problem. A single dynamic observer module is designed that generates the residuals as well as the control signals. The objective of the SFDIT module is to ensure that simultaneously the effects of disturbances and control signals on the residual signals are minimised (in order to accomplish the fault detection goal) subject to the constraint that the transfer matrix from the faults to the residuals is equal to a pre-assigned diagonal transfer matrix (in order to accomplish the fault isolation goal), while the effects of disturbances, reference inputs and faults on the specified control outputs are minimised (in order to accomplish the fault-tolerant and tracking control goals). A set of linear matrix inequality (LMI) feasibility conditions are derived to ensure solvability of the problem. In order to illustrate and demonstrate the effectiveness of our proposed design methodology, the developed and proposed schemes are applied to an autonomous unmanned underwater vehicle (AUV).
Fault Analysis in Solar Photovoltaic Arrays
NASA Astrophysics Data System (ADS)
Zhao, Ye
Fault analysis in solar photovoltaic (PV) arrays is a fundamental task to increase reliability, efficiency and safety in PV systems. Conventional fault protection methods usually add fuses or circuit breakers in series with PV components. But these protection devices are only able to clear faults and isolate faulty circuits if they carry a large fault current. However, this research shows that faults in PV arrays may not be cleared by fuses under some fault scenarios, due to the current-limiting nature and non-linear output characteristics of PV arrays. First, this thesis introduces new simulation and analytic models that are suitable for fault analysis in PV arrays. Based on the simulation environment, this thesis studies a variety of typical faults in PV arrays, such as ground faults, line-line faults, and mismatch faults. The effect of a maximum power point tracker on fault current is discussed and shown to, at times, prevent the fault current protection devices from tripping. A small-scale experimental PV benchmark system has been developed at Northeastern University to further validate the simulation conclusions. Additionally, this thesis examines two types of unique faults found in a PV array that have not been studied in the literature. One is a fault that occurs under low-irradiance conditions. The other is a fault evolution in a PV array during the night-to-day transition. Our simulation and experimental results show that overcurrent protection devices are unable to clear the fault under "low irradiance" and "night-to-day transition" conditions. However, the overcurrent protection devices may work properly when the same PV fault occurs in daylight. As a result, a fault under "low irradiance" or "night-to-day transition" might be hidden in the PV array and become a potential hazard for system efficiency and reliability.
NASA Technical Reports Server (NTRS)
Sweet, Adam
2008-01-01
The IVHM Project in the Aviation Safety Program has funded research in electrical power system (EPS) health management. This problem domain contains both discrete and continuous behavior, and thus is directly relevant for the hybrid diagnostic tool HyDE. In FY2007 work was performed to expand the HyDE diagnosis model of the ADAPT system. The work completed resulted in a HyDE model with the capability to diagnose five times the number of ADAPT components previously tested. The expanded diagnosis model passed a corresponding set of new ADAPT fault injection scenario tests with no incorrect faults reported. The time required for the HyDE diagnostic system to isolate the fault varied widely between tests; this variance was reduced by tuning HyDE input parameters. These results and other diagnostic design trade-offs are discussed. Finally, possible future improvements for both the HyDE diagnostic model and HyDE itself are presented.
Ares I-X Ground Diagnostic Prototype
NASA Technical Reports Server (NTRS)
Schwabacher, Mark A.; Martin, Rodney Alexander; Waterman, Robert D.; Oostdyk, Rebecca Lynn; Ossenfort, John P.; Matthews, Bryan
2010-01-01
The automation of pre-launch diagnostics for launch vehicles offers three potential benefits: improving safety, reducing cost, and reducing launch delays. The Ares I-X Ground Diagnostic Prototype demonstrated anomaly detection, fault detection, fault isolation, and diagnostics for the Ares I-X first-stage Thrust Vector Control and for the associated ground hydraulics while the vehicle was in the Vehicle Assembly Building at Kennedy Space Center (KSC) and while it was on the launch pad. The prototype combines three existing tools. The first tool, TEAMS (Testability Engineering and Maintenance System), is a model-based tool from Qualtech Systems Inc. for fault isolation and diagnostics. The second tool, SHINE (Spacecraft Health Inference Engine), is a rule-based expert system that was developed at the NASA Jet Propulsion Laboratory. We developed SHINE rules for fault detection and mode identification, and used the outputs of SHINE as inputs to TEAMS. The third tool, IMS (Inductive Monitoring System), is an anomaly detection tool that was developed at NASA Ames Research Center. The three tools were integrated and deployed to KSC, where they were interfaced with live data. This paper describes how the prototype performed during the period of time before the launch, including accuracy and computer resource usage. The paper concludes with some of the lessons that we learned from the experience of developing and deploying the prototype.
KEA-71 Smart Current Signature Sensor (SCSS)
NASA Technical Reports Server (NTRS)
Perotti, Jose M.
2010-01-01
This slide presentation reviews the development and uses of the Smart Current Signature Sensor (SCSS), also known as the Valve Health Monitor (VHM) system. SCSS provides a way not only to monitor the valve's operation in real time in a non-invasive manner, but also to monitor its health (Fault Detection and Isolation) and to identify potential faults and/or degradation in the near future (Prediction/Prognosis). This technology approach is not only applicable to solenoid valves; it could also be extrapolated to other electrical components with repeatable electrical current signatures, such as motors.
NASA Astrophysics Data System (ADS)
Jeppesen, Christian; Araya, Samuel Simon; Sahlin, Simon Lennart; Thomas, Sobi; Andreasen, Søren Juhl; Kær, Søren Knudsen
2017-08-01
This study proposes a data-driven, impedance-based methodology for fault detection and isolation of low and high cathode stoichiometry, high CO concentration in the anode gas, high methanol vapour concentration in the anode gas, and low anode stoichiometry for high temperature PEM fuel cells. The fault detection and isolation algorithm is based on an artificial neural network classifier, which uses three extracted features as input. Two of the proposed features are based on angles in the impedance spectrum, and are therefore relative to specific points, and are shown to be independent of degradation, contrary to other available feature extraction methods in the literature. The experimental data are based on a 35-day experiment, in which 2010 unique electrochemical impedance spectroscopy measurements were recorded. The test of the algorithm resulted in good detectability of the faults, except for the high methanol vapour concentration fault, which was found to be difficult to distinguish from normal operational data. The achieved accuracy for faults related to CO pollution and anode and cathode stoichiometry is a 100% success rate. The overall accuracy on the test data is 94.6%.
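A minimal sketch of the classification stage, assuming scikit-learn is available: a small feed-forward network maps three impedance-derived features to operating-condition classes. The feature clusters and class list below are synthetic stand-ins for the paper's EIS measurements.

```python
# Small neural-network classifier mapping three impedance-derived
# features to fault classes (synthetic data, illustrative only).
import numpy as np
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(6)
classes = ["normal", "low_cathode_stoich", "high_CO", "low_anode_stoich"]
centers = np.array([[0.0, 0.0, 0.0],
                    [1.5, 0.2, -0.3],
                    [0.1, 1.8, 0.4],
                    [-0.4, 0.3, 1.6]])   # one feature cluster per condition

X = np.vstack([c + rng.normal(0, 0.3, (200, 3)) for c in centers])
y = np.repeat(np.arange(len(classes)), 200)

clf = MLPClassifier(hidden_layer_sizes=(16,), max_iter=2000, random_state=0)
clf.fit(X[::2], y[::2])                        # train on half the data
acc = clf.score(X[1::2], y[1::2])              # test on the other half
print(f"held-out accuracy: {acc:.3f}")
```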
The Generic Spacecraft Analyst Assistant (gensaa): a Tool for Developing Graphical Expert Systems
NASA Technical Reports Server (NTRS)
Hughes, Peter M.
1993-01-01
During numerous contacts with a satellite each day, spacecraft analysts must closely monitor real-time data. The analysts must watch for combinations of telemetry parameter values, trends, and other indications that may signify a problem or failure. As the satellites become more complex and the number of data items increases, this task is becoming increasingly difficult for humans to perform at acceptable performance levels. At NASA GSFC, fault-isolation expert systems are in operation supporting this data monitoring task. Based on the lessons learned during these initial efforts in expert system automation, a new domain-specific expert system development tool named the Generic Spacecraft Analyst Assistant (GenSAA) is being developed to facilitate the rapid development and reuse of real-time expert systems to serve as fault-isolation assistants for spacecraft analysts. Although initially domain-specific in nature, this powerful tool will readily support the development of highly graphical expert systems for data monitoring purposes throughout the space and commercial industry.
Optimal Sensor Allocation for Fault Detection and Isolation
NASA Technical Reports Server (NTRS)
Azam, Mohammad; Pattipati, Krishna; Patterson-Hine, Ann
2004-01-01
Automatic fault diagnostic schemes rely on various types of sensors (e.g., temperature, pressure, vibration, etc.) to measure the system parameters. The efficacy of a diagnostic scheme is largely dependent on the amount and quality of information available from these sensors. The reliability of sensors, as well as weight, volume, power, and cost constraints, often makes it impractical to monitor a large number of system parameters. An optimized sensor allocation that maximizes fault diagnosability, subject to specified weight, volume, power, and cost constraints, is required. Use of optimal sensor allocation strategies during the design phase can ensure better diagnostics at a reduced cost for a system incorporating a high degree of built-in testing. In this paper, we propose an approach that employs multiple fault diagnosis (MFD) and optimization techniques for optimal sensor placement for fault detection and isolation (FDI) in complex systems. Keywords: sensor allocation, multiple fault diagnosis, Lagrangian relaxation, approximate belief revision, multidimensional knapsack problem.
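As a rough illustration of the knapsack-type trade-off, the sketch below greedily adds the sensor with the best gain in fault-pair discrimination per unit cost until a budget is exhausted. The signature matrix, costs, and budget are hypothetical, and greedy selection is only a surrogate for the Lagrangian relaxation machinery named in the abstract.

```python
# Cost-constrained sensor selection: greedily maximize the number of
# fault pairs that at least one chosen sensor can distinguish.
from itertools import combinations

# signatures[s][f] = 1 if sensor s responds to fault f (hypothetical)
signatures = {
    "temp":  (1, 1, 0, 0),
    "press": (1, 0, 1, 0),
    "vibe":  (0, 1, 1, 1),
    "flow":  (0, 0, 0, 1),
}
costs = {"temp": 3.0, "press": 2.0, "vibe": 4.0, "flow": 1.0}
budget, n_faults = 7.0, 4

def pairs_separated(chosen):
    # Count fault pairs distinguished by at least one chosen sensor.
    return sum(any(signatures[s][i] != signatures[s][j] for s in chosen)
               for i, j in combinations(range(n_faults), 2))

chosen, spent = [], 0.0
while True:
    best, best_gain = None, 0.0
    for s in signatures:
        if s in chosen or spent + costs[s] > budget:
            continue
        gain = (pairs_separated(chosen + [s]) - pairs_separated(chosen)) / costs[s]
        if gain > best_gain:
            best, best_gain = s, gain
    if best is None:
        break
    chosen.append(best)
    spent += costs[best]

print("selected:", chosen, "cost:", spent,
      "pairs separated:", pairs_separated(chosen), "of 6")
```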
DOE Office of Scientific and Technical Information (OSTI.GOV)
Almasi, Gheorghe; Blumrich, Matthias Augustin; Chen, Dong
Methods and apparatus perform fault isolation in multiple node computing systems using commutative error detection values (for example, checksums) to identify and to isolate faulty nodes. When information associated with a reproducible portion of a computer program is injected into a network by a node, a commutative error detection value is calculated. At intervals, node fault detection apparatus associated with the multiple node computer system retrieves commutative error detection values associated with the node and stores them in memory. When the computer program is executed again by the multiple node computer system, new commutative error detection values are created and stored in memory. The node fault detection apparatus identifies faulty nodes by comparing commutative error detection values associated with reproducible portions of the application program generated by a particular node from different runs of the application program. Differences in values indicate a possible faulty node.
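The core idea is easy to demonstrate: fold each node's injected packets with a commutative, associative operation (XOR here) so that arrival order is irrelevant, then compare values across two runs of the same reproducible section. The node names and packet words below are synthetic.

```python
# Commutative error-detection values: XOR-fold each node's packets, then
# compare the fold across two runs of the same reproducible section.
from functools import reduce

def checksum(packets):
    # XOR is commutative and associative, so packet order is irrelevant.
    return reduce(lambda a, b: a ^ b, packets, 0)

run1 = {"node0": [0x1A2B, 0x3C4D, 0x5E6F], "node1": [0x1111, 0x2222]}
run2 = {"node0": [0x5E6F, 0x1A2B, 0x3C4D],          # same data, new order
        "node1": [0x1111, 0x2223]}                   # one corrupted word

for node in run1:
    ok = checksum(run1[node]) == checksum(run2[node])
    print(f"{node}: {'consistent' if ok else 'MISMATCH -> suspect node'}")
```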
DOE Office of Scientific and Technical Information (OSTI.GOV)
Yang, S.Y.; Watkins, J.S.
Mapping of Miocene stratigraphy and structure of the Sabine Pass, West Cameron, and East Cameron areas of the western Louisiana outer continental shelf - based on over 1300 mi of seismic data on a 4-mi grid, paleotops from 60 wells, and logs from 35 wells - resulted in time-structure and isochron maps at six intervals from the upper Pliocene to lower Miocene. The most pronounced structural features are the fault systems, which trend east-northeast to east along the Miocene stratigraphic trend. Isolated normal faults with small displacements characterize the inner inner shelf, whereas interconnected faults with greater displacements characterize the outer inner shelf. The inner inner shelf faults exhibit little growth, but expansion across the interconnected outer inner shelf fault ranges up to 1 sec two-way traveltime. The interconnected faults belong to two structurally independent fault families. The innermost shelf faults appear to root in the sediment column. A third set of faults located in the Sabine Pass area trends north-south. This fault set is thought to be related to basement movement and/or basement structure. Very little salt is evident in the area. A single diapir is located in West Cameron Block 110 and vicinity. There is little evidence of deep salt. Overall sediment thickness probably exceeds 20,000 ft, with the middle Miocene accounting for 8000 ft.
NASA Space Flight Vehicle Fault Isolation Challenges
NASA Technical Reports Server (NTRS)
Bramon, Christopher; Inman, Sharon K.; Neeley, James R.; Jones, James V.; Tuttle, Loraine
2016-01-01
The Space Launch System (SLS) is the new NASA heavy lift launch vehicle and is scheduled for its first mission in 2017. The goal of the first mission, which will be uncrewed, is to demonstrate the integrated system performance of the SLS rocket and spacecraft before a crewed flight in 2021. SLS has many of the same logistics challenges as any other large-scale program. Common logistics concerns for SLS include integration of geographically separated discrete programs, multiple prime contractors with distinct and different goals, schedule pressures and funding constraints. However, SLS also faces unique challenges. The new program is a confluence of new and heritage hardware, with heritage hardware constituting seventy-five percent of the program. This unique approach to design makes logistics concerns such as testability of the integrated flight vehicle especially problematic. The cost of fully automated diagnostics can be completely justified for a large fleet, but not so for a single flight vehicle. Fault detection is mandatory to assure the vehicle is capable of a safe launch, but fault isolation is another issue. SLS has considered various methods for fault isolation which can provide a reasonable balance between adequacy, timeliness and cost. This paper will address the analyses and decisions the NASA Logistics engineers are making to mitigate risk while providing a reasonable testability solution for fault isolation.
Comparison of Fault Detection Algorithms for Real-time Diagnosis in Large-Scale System. Appendix E
NASA Technical Reports Server (NTRS)
Kirubarajan, Thiagalingam; Malepati, Venkat; Deb, Somnath; Ying, Jie
2001-01-01
In this paper, we present a review of different real-time capable algorithms to detect and isolate component failures in large-scale systems in the presence of inaccurate test results. A sequence of imperfect test results (as a row vector of 1's and 0's) is available to the algorithms. In this case, the problem is to recover the uncorrupted test result vector and match it to one of the rows in the test dictionary, which in turn will isolate the faults. In order to recover the uncorrupted test result vector, one needs the accuracy of each test; that is, its detection and false alarm probabilities are required. In this problem, their true values are not known and, therefore, have to be estimated online. Other major aspects of this problem are its large-scale nature and the real-time capability requirement. Test dictionaries of sizes up to 1000 x 1000 are to be handled; that is, results from 1000 tests measuring the state of 1000 components are available. However, at any time, only 10-20% of the test results are available. The objective then becomes real-time fault diagnosis using incomplete and inaccurate test results with online estimation of test accuracies. It should also be noted that the test accuracies can vary with time; one needs a mechanism to update them after processing each test result vector. Using Qualtech's TEAMS-RT (system simulation and real-time diagnosis tool), we test the performances of 1) TEAMS-RT's built-in diagnosis algorithm, 2) Hamming distance based diagnosis, 3) Maximum Likelihood based diagnosis, and 4) Hidden Markov Model based diagnosis.
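The Hamming-distance variant is simple enough to sketch directly: match an observed (possibly corrupted, possibly incomplete) test-outcome vector to the closest row of the test dictionary. The dictionary and outcome vector below are hypothetical toy values.

```python
# Hamming-distance diagnosis against a test dictionary. `None` marks a
# test whose result never arrived (only 10-20% may be available).
def hamming(observed, row):
    # Count disagreements, skipping missing results.
    return sum(1 for o, r in zip(observed, row) if o is not None and o != r)

dictionary = {                     # rows: expected outcomes per fault
    "fault_A": [1, 0, 1, 0, 0],
    "fault_B": [1, 1, 0, 0, 1],
    "fault_C": [0, 0, 1, 1, 0],
}

observed = [1, 0, 1, None, 1]      # one flipped bit, one missing result
for f, row in dictionary.items():
    print(f"{f}: distance {hamming(observed, row)}")
best = min(dictionary, key=lambda f: hamming(observed, dictionary[f]))
print("isolated:", best)
```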
Real-time diagnostics for a reusable rocket engine
NASA Technical Reports Server (NTRS)
Guo, T. H.; Merrill, W.; Duyar, A.
1992-01-01
A hierarchical, decentralized diagnostic system is proposed for the Real-Time Diagnostic System component of the Intelligent Control System (ICS) for reusable rocket engines. The proposed diagnostic system has three layers of information processing: condition monitoring, fault mode detection, and expert system diagnostics. The condition monitoring layer is the first level of signal processing. Here, important features of the sensor data are extracted. These processed data are then used by the higher level fault mode detection layer to do preliminary diagnosis on potential faults at the component level. Because of the closely coupled nature of the rocket engine propulsion system components, it is expected that a given engine condition may trigger more than one fault mode detector. Expert knowledge is needed to resolve the conflicting reports from the various failure mode detectors. This is the function of the diagnostic expert layer. Here, the heuristic nature of this decision process makes it desirable to use an expert system approach. Implementation of the real-time diagnostic system described above requires a wide spectrum of information processing capability. Generally, in the condition monitoring layer, fast data processing is often needed for feature extraction and signal conditioning. This is usually followed by some detection logic to determine the selected faults on the component level. Three different techniques are used to attack different fault detection problems in the NASA LeRC ICS testbed simulation. The first technique employed is the neural network application for real-time sensor validation which includes failure detection, isolation, and accommodation. The second approach demonstrated is the model-based fault diagnosis system using on-line parameter identification. Besides these model based diagnostic schemes, there are still many failure modes which need to be diagnosed by the heuristic expert knowledge. The heuristic expert knowledge is implemented using a real-time expert system tool called G2 by Gensym Corp. Finally, the distributed diagnostic system requires another level of intelligence to oversee the fault mode reports generated by component fault detectors. The decision making at this level can best be done using a rule-based expert system. This level of expert knowledge is also implemented using G2.
Improved Sensor Fault Detection, Isolation, and Mitigation Using Multiple Observers Approach
Wang, Zheng; Anand, D. M.; Moyne, J.; Tilbury, D. M.
2017-01-01
Traditional Fault Detection and Isolation (FDI) methods analyze a residual signal to detect and isolate sensor faults. The residual signal is the difference between the sensor measurements and the estimated outputs of the system based on an observer. The traditional residual-based FDI methods, however, have some limitations. First, they require that the observer has reached its steady state. In addition, residual-based methods may not detect some sensor faults, such as faults on critical sensors that result in an unobservable system. Furthermore, the system may be in jeopardy if actions required for mitigating the impact of the faulty sensors are not taken before the faulty sensors are identified. The contribution of this paper is to propose three new methods to address these limitations. Faults that occur during the observers' transient state can be detected by analyzing the convergence rate of the estimation error. Open-loop observers, which do not rely on sensor information, are used to detect faults on critical sensors. By switching among different observers, we can potentially mitigate the impact of the faulty sensor during the FDI process. These three methods are systematically integrated with a previously developed residual-based method to provide an improved FDI and mitigation capability framework. The overall approach is validated mathematically, and the effectiveness of the overall approach is demonstrated through simulation on a 5-state suspension system. PMID:28924303
Methanogenic archaea isolated from Taiwan's Chelungpu fault.
Wu, Sue-Yao; Lai, Mei-Chin
2011-02-01
Terrestrial rocks, petroleum reservoirs, faults, coal seams, and subseafloor gas hydrates contain an abundance of diverse methanoarchaea. However, reports on the isolation, purification, and characterization of methanoarchaea in the subsurface environment are rare. Currently, no studies investigating methanoarchaea within fault environments exist. In this report, we succeeded in obtaining two new methanogen isolates, St545Mb(T) of the newly proposed species Methanolobus chelungpuianus and Methanobacterium palustre FG694aF, from the Chelungpu fault, which is the fault that caused a devastating earthquake in central Taiwan in 1999. Strain FG694aF was isolated from a fault gouge sample obtained at 694 m below land surface (mbls) and is an autotrophic, mesophilic, nonmotile, thin, filamentous-rod-shaped organism capable of using H(2)-CO(2) and formate as substrates for methanogenesis. The morphological, biochemical, and physiological characteristics and 16S rRNA gene sequence analysis revealed that this isolate belongs to Methanobacterium palustre. The mesophilic strain St545Mb(T), isolated from a sandstone sample at 545 mbls, is a nonmotile, irregular, coccoid organism that uses methanol and trimethylamine as substrates for methanogenesis. The 16S rRNA gene sequence of strain St545Mb(T) was 99.0% similar to that of Methanolobus psychrophilus strain R15 and was 96 to 97.5% similar to those of other Methanolobus species. However, the optimal growth temperature and total cell protein profile of strain St545Mb(T) were different from those of M. psychrophilus strain R15, and whole-genome DNA-DNA hybridization revealed less than 20% relatedness between these two strains. On the basis of these observations, we propose that strain St545Mb(T) (DSM 19953(T); BCRC AR10030; JCM 15159) be named Methanolobus chelungpuianus sp. nov. Moreover, the environmental DNA database survey indicates that both Methanolobus chelungpuianus and Methanobacterium palustre are widespread in the subsurface environment.
Orion GN&C Fault Management System Verification: Scope And Methodology
NASA Technical Reports Server (NTRS)
Brown, Denise; Weiler, David; Flanary, Ronald
2016-01-01
In order to ensure long-term ability to meet mission goals and to provide for the safety of the public, ground personnel, and any crew members, nearly all spacecraft include a fault management (FM) system. For a manned vehicle such as Orion, the safety of the crew is of paramount importance. The goal of the Orion Guidance, Navigation and Control (GN&C) fault management system is to detect, isolate, and respond to faults before they can result in harm to the human crew or loss of the spacecraft. Verification of fault management/fault protection capability is challenging due to the large number of possible faults in a complex spacecraft, the inherent unpredictability of faults, the complexity of interactions among the various spacecraft components, and the inability to easily quantify human reactions to failure scenarios. The Orion GN&C Fault Detection, Isolation, and Recovery (FDIR) team has developed a methodology for bounding the scope of FM system verification while ensuring sufficient coverage of the failure space and providing high confidence that the fault management system meets all safety requirements. The methodology utilizes a swarm search algorithm to identify failure cases that can result in catastrophic loss of the crew or the vehicle and rare event sequential Monte Carlo to verify safety and FDIR performance requirements.
NASA Technical Reports Server (NTRS)
Kobayashi, Takahisa; Simon, Donald L.
2004-01-01
In this paper, an approach for in-flight fault detection and isolation (FDI) of aircraft engine sensors based on a bank of Kalman filters is developed. This approach utilizes multiple Kalman filters, each of which is designed based on a specific fault hypothesis. When the propulsion system experiences a fault, only one Kalman filter with the correct hypothesis is able to maintain the nominal estimation performance. Based on this knowledge, the isolation of faults is achieved. Since the propulsion system may experience component and actuator faults as well, a sensor FDI system must be robust in terms of avoiding misclassifications of any anomalies. The proposed approach utilizes a bank of (m+1) Kalman filters where m is the number of sensors being monitored. One Kalman filter is used for the detection of component and actuator faults while each of the other m filters detects a fault in a specific sensor. With this setup, the overall robustness of the sensor FDI system to anomalies is enhanced. Moreover, numerous component fault events can be accounted for by the FDI system. The sensor FDI system is applied to a commercial aircraft engine simulation, and its performance is evaluated at multiple power settings at a cruise operating point using various fault scenarios.
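A minimal sketch of the isolation logic behind such a filter bank, under assumptions: each hypothesis filter is reduced to a callable returning its innovation and innovation covariance, and the active hypothesis is taken to be the filter with the smallest weighted sum of squared residuals (WSSR). This is a generic reading of the approach, not the paper's implementation.

    import numpy as np

    def wssr(innov, S):
        """Weighted sum of squared residuals for one filter's innovation."""
        return float(innov.T @ np.linalg.solve(S, innov))

    def isolate(filters, y):
        """filters: list of callables, one per fault hypothesis; each returns
        (innovation, innovation_covariance) for measurement y. The hypothesis
        whose filter keeps the smallest WSSR is declared active: index 0 is
        the component/actuator-fault filter, indices 1..m the sensor filters."""
        stats = [wssr(*f(y)) for f in filters]
        return int(np.argmin(stats))

    # Toy demo with precomputed innovations standing in for real filter updates:
    f0 = lambda y: (np.array([2.0]), np.array([[1.0]]))  # large innovation
    f1 = lambda y: (np.array([0.1]), np.array([[1.0]]))  # matches the data best
    print(isolate([f0, f1], y=None))                     # -> 1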
NASA Astrophysics Data System (ADS)
Zhang, Yanhua; Clennell, Michael B.; Delle Piane, Claudio; Ahmed, Shakil; Sarout, Joel
2016-12-01
This generic 2D elastic-plastic modelling investigated the reactivation of a small isolated and critically-stressed fault in carbonate rocks at a reservoir depth level for fluid depletion and normal-faulting stress conditions. The model properties and boundary conditions are based on field and laboratory experimental data from a carbonate reservoir. The results show that a pore pressure perturbation of -25 MPa by depletion can lead to the reactivation of the fault and parts of the surrounding damage zones, producing normal-faulting downthrows and strain localization. The mechanism triggering fault reactivation in a carbonate field is the increase of shear stresses with pore-pressure reduction, due to the decrease of the absolute horizontal stress, which leads to an expanded Mohr's circle and mechanical failure, consistent with the predictions of previous poroelastic models. Two scenarios for fault and damage-zone permeability development are explored: (1) large permeability enhancement of a sealing fault upon reactivation, and (2) fault and damage zone permeability development governed by effective mean stress. In the first scenario, the fault becomes highly permeable to across- and along-fault fluid transport, removing local pore pressure highs/lows arising from the presence of the initially sealing fault. In the second scenario, reactivation induces small permeability enhancement in the fault and parts of damage zones, followed by small post-reactivation permeability reduction. Such permeability changes do not appear to change the original flow capacity of the fault or modify the fluid flow velocity fields dramatically.
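A worked toy version of the quoted mechanism, assuming a Mohr-Coulomb fault and a linear depletion stress path; all numbers are invented for illustration and differ from the paper's field-calibrated model.

    import numpy as np

    # Illustrative numbers only (MPa); the paper's field-calibrated model differs.
    SV, SH0, P0 = 60.0, 42.0, 30.0   # vertical stress, initial horizontal stress, pore pressure
    GAMMA = 0.8                      # depletion stress path dSh/dPp (assumed)
    MU = 0.6                         # fault friction coefficient
    LIMIT = np.sin(np.arctan(MU))    # slip when (s1-s3)/(s1+s3) exceeds sin(phi)

    def mohr_ratio(dp):
        """Effective-stress Mohr circle ratio after pore-pressure change dp (negative = depletion)."""
        p = P0 + dp
        s1 = SV - p                  # effective vertical stress grows as pressure falls
        s3 = SH0 + GAMMA * dp - p    # total horizontal stress also falls, so the circle expands
        return (s1 - s3) / (s1 + s3)

    for dp in (0.0, -12.5, -25.0):
        r = mohr_ratio(dp)
        print(f"dPp = {dp:6.1f} MPa: ratio = {r:.3f} -> {'reactivated' if r > LIMIT else 'stable'}")

With these assumed values, the -25 MPa depletion step crosses the frictional limit, qualitatively reproducing the reactivation described above.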
A Solid-State Fault Current Limiting Device for VSC-HVDC Systems
NASA Astrophysics Data System (ADS)
Larruskain, D. Marene; Zamora, Inmaculada; Abarrategui, Oihane; Iturregi, Araitz
2013-08-01
Faults in the DC circuit constitute one of the main limitations of voltage source converter (VSC) HVDC systems, as the high fault currents can seriously damage the converters. In this article, a new design for a fault current limiter (FCL) is proposed, which is capable of limiting the fault current as well as interrupting it, isolating the DC grid. The operation of the proposed FCL is analysed and verified for the most common faults that can occur in overhead lines.
Strategy Developed for Selecting Optimal Sensors for Monitoring Engine Health
NASA Technical Reports Server (NTRS)
2004-01-01
Sensor indications during rocket engine operation are the primary means of assessing engine performance and health. Effective selection and location of sensors in the operating engine environment enables accurate real-time condition monitoring and rapid engine controller response to mitigate critical fault conditions. These capabilities are crucial to ensure crew safety and mission success. Effective sensor selection also facilitates postflight condition assessment, which contributes to efficient engine maintenance and reduced operating costs. Under the Next Generation Launch Technology program, the NASA Glenn Research Center, in partnership with Rocketdyne Propulsion and Power, has developed a model-based procedure for systematically selecting an optimal sensor suite for assessing rocket engine system health. This optimization process is termed the systematic sensor selection strategy. Engine health management (EHM) systems generally employ multiple diagnostic procedures including data validation, anomaly detection, fault isolation, and information fusion. The effectiveness of each diagnostic component is affected by the quality, availability, and compatibility of sensor data. Therefore, systematic sensor selection is an enabling technology for EHM. Information in three categories is required by the systematic sensor selection strategy. The first category consists of targeted engine fault information, including the description and estimated risk-reduction factor for each identified fault. Risk-reduction factors are used to define and rank the potential merit of timely fault diagnoses. The second category is composed of candidate sensor information, including type, location, and estimated variance in normal operation. The final category includes the definition of fault scenarios characteristic of each targeted engine fault. These scenarios are defined in terms of engine model hardware parameters. Values of these parameters define engine simulations that generate expected sensor values for targeted fault scenarios. Taken together, this information provides an efficient condensation of the engineering experience and engine flow physics needed for sensor selection. The systematic sensor selection strategy is composed of three primary algorithms. The core of the selection process is a genetic algorithm that iteratively improves a defined quality measure of selected sensor suites. A merit algorithm is employed to compute the quality measure for each test sensor suite presented by the selection process. The quality measure is based on the fidelity of fault detection and the level of fault source discrimination provided by the test sensor suite. An inverse engine model, whose function is to derive hardware performance parameters from sensor data, is an integral part of the merit algorithm. The final component is a statistical evaluation algorithm that characterizes the impact of interference effects, such as control-induced sensor variation and sensor noise, on the probability of fault detection and isolation for optimal and near-optimal sensor suites.
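A toy sketch of the genetic-algorithm layer described above, with the merit algorithm replaced by an assumed stand-in score (the real one evaluates detection fidelity through an inverse engine model); population size, generations, and the sensor budget are invented values.

    import random

    N_SENSORS, POP, GENS, BUDGET = 12, 30, 40, 5

    def merit(suite):
        """Stand-in quality measure: rewards covered toy 'fault classes' and
        penalizes oversized suites. The real merit algorithm scores detection
        fidelity and fault-source discrimination via an inverse engine model."""
        coverage = len({s % 4 for s, on in enumerate(suite) if on})
        return coverage - 0.1 * max(0, sum(suite) - BUDGET)

    def ga():
        pop = [[random.randint(0, 1) for _ in range(N_SENSORS)] for _ in range(POP)]
        for _ in range(GENS):
            pop.sort(key=merit, reverse=True)
            elite = pop[:POP // 2]                      # keep the better half
            children = []
            for _ in range(POP - len(elite)):
                a, b = random.sample(elite, 2)
                cut = random.randrange(1, N_SENSORS)
                child = a[:cut] + b[cut:]               # one-point crossover
                i = random.randrange(N_SENSORS)
                child[i] ^= 1                           # single-bit mutation
                children.append(child)
            pop = elite + children
        return max(pop, key=merit)

    print(ga())    # best sensor-suite mask found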
CLEAR: Communications Link Expert Assistance Resource
NASA Technical Reports Server (NTRS)
Hull, Larry G.; Hughes, Peter M.
1987-01-01
Communications Link Expert Assistance Resource (CLEAR) is a real-time fault diagnosis expert system for the Cosmic Background Explorer (COBE) Mission Operations Room (MOR). The CLEAR expert system is an operational prototype which assists the MOR operator/analyst by isolating and diagnosing faults in the spacecraft communication link with the Tracking and Data Relay Satellite (TDRS) during periods of real-time data acquisition. The mission domain, user requirements, hardware configuration, expert system concept, tool selection, development approach, and system design are discussed, with emphasis on the development approach and system implementation. Also discussed are system architecture, tool selection, operation, and future plans.
Neural networks and fault probability evaluation for diagnosis issues.
Kourd, Yahia; Lefebvre, Dimitri; Guersi, Noureddine
2014-01-01
This paper presents a new FDI technique for fault detection and isolation in unknown nonlinear systems. The objective of the research is to construct and analyze residuals by means of artificial intelligence and probabilistic methods. Artificial neural networks are first used for modeling issues. Neural network models are designed to learn the fault-free and the faulty behaviors of the considered systems. Once the residuals are generated, an evaluation using probabilistic criteria is applied to them to determine the most likely fault among a set of candidate faults. The study also includes a comparison between the contributions of these tools and their limitations, particularly through the establishment of quantitative indicators to assess their performance. Through the computation of a confidence factor, the proposed method can evaluate the reliability of the FDI decision. The approach is applied to detect and isolate 19 fault candidates in the DAMADICS benchmark. The results obtained with the proposed scheme are compared with those obtained using a conventional thresholding method.
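The decision step might be sketched as follows, assuming Gaussian residuals with known standard deviations; the residual values, variances, and the simple normalized-likelihood confidence factor are illustrative assumptions, not the paper's indicators.

    import numpy as np

    def most_likely_fault(residuals, sigmas):
        """residuals[k]: residual of the model trained on fault candidate k
        (k = 0 is the fault-free model); sigmas[k]: assumed residual std devs.
        Returns the candidate with the highest Gaussian log-likelihood and a
        crude confidence factor from the normalized likelihoods."""
        r, s = np.asarray(residuals), np.asarray(sigmas)
        loglik = -0.5 * (r / s) ** 2 - np.log(s)
        lik = np.exp(loglik - loglik.max())
        conf = lik / lik.sum()
        k = int(np.argmax(conf))
        return k, float(conf[k])

    k, cf = most_likely_fault([0.9, 0.1, 1.4], [0.2, 0.2, 0.3])
    print(f"candidate {k} with confidence {cf:.2f}")   # the second model fits best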
Mission Management Computer and Sequencing Hardware for RLV-TD HEX-01 Mission
NASA Astrophysics Data System (ADS)
Gupta, Sukrat; Raj, Remya; Mathew, Asha Mary; Koshy, Anna Priya; Paramasivam, R.; Mookiah, T.
2017-12-01
The Reusable Launch Vehicle-Technology Demonstrator Hypersonic Experiment (RLV-TD HEX-01) mission posed some unique challenges in the design and development of avionics hardware. This work presents the details of mission-critical avionics hardware, mainly the Mission Management Computer (MMC) and the sequencing hardware. The Navigation, Guidance and Control (NGC) chain for RLV-TD is dual redundant, with cross-strapped Remote Terminals (RTs) interfaced through a MIL-STD-1553B bus. The MMC is the Bus Controller on the 1553 bus and performs GPS-aided navigation, guidance, digital autopilot, and sequencing functions for the RLV-TD launch vehicle at different periodicities (10, 20, 500 ms). Digital autopilot execution in the MMC with a periodicity of 10 ms (in the ascent phase) was introduced for the first time and successfully demonstrated in flight. The MMC is built around the Intel i960 processor and has built-in fault tolerance features such as ECC for memories. Fault Detection and Isolation schemes are implemented to isolate a failed MMC. The sequencing hardware comprises the Stage Processing System (SPS) and the Command Execution Module (CEM). The SPS is an RT on the 1553 bus which receives the sequencing and control related commands from the MMCs and posts them to downstream modules, after proper error handling, for final execution. The SPS is designed as a high-reliability system by incorporating various fault tolerance and fault detection features. The CEM is a relay-based module for sequence command execution.
NASA Technical Reports Server (NTRS)
Kobayashi, Takahisa; Simon, Donald L.
2008-01-01
In this paper, a baseline system which utilizes dual-channel sensor measurements for aircraft engine on-line diagnostics is developed. This system is composed of a linear on-board engine model (LOBEM) and fault detection and isolation (FDI) logic. The LOBEM provides the analytical third channel against which the dual-channel measurements are compared. When the discrepancy among the triplex channels exceeds a tolerance level, the FDI logic determines the cause of the discrepancy. Through this approach, the baseline system achieves the following objectives: (1) anomaly detection, (2) component fault detection, and (3) sensor fault detection and isolation. The performance of the baseline system is evaluated in a simulation environment using faults in sensors and components.
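A plausible sketch of the triplex comparison logic, with assumed per-sensor tolerances; the actual FDI logic in the paper is more elaborate.

    def triplex_fdi(ch_a, ch_b, model_est, tol):
        """Classify the discrepancy among two hardware channels and the
        model-based third channel (tol is an assumed per-sensor tolerance)."""
        ab = abs(ch_a - ch_b) > tol
        am = abs(ch_a - model_est) > tol
        bm = abs(ch_b - model_est) > tol
        if not (ab or am or bm):
            return "nominal"
        if ab and am and not bm:
            return "channel A sensor fault"      # A disagrees with both others
        if ab and bm and not am:
            return "channel B sensor fault"
        if am and bm and not ab:
            return "component fault suspected"   # sensors agree, model disagrees
        return "anomaly: ambiguous pattern"

    print(triplex_fdi(10.2, 10.1, 10.15, tol=0.3))   # -> "nominal"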
Implementing a real time reasoning system for robust diagnosis
NASA Technical Reports Server (NTRS)
Hill, Tim; Morris, William; Robertson, Charlie
1993-01-01
The objective of the Thermal Control System Automation Project (TCSAP) is to develop an advanced fault detection, isolation, and recovery (FDIR) capability for use on the Space Station Freedom (SSF) External Active Thermal Control System (EATCS). Real-time monitoring, control, and diagnosis of the EATCS will be performed with a knowledge based system (KBS). Implementation issues for the current version of the KBS are discussed.
Fail-Safe Design for Large Capacity Lithium-Ion Battery Systems
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kim, G. H.; Smith, K.; Ireland, J.
2012-07-15
A fault leading to a thermal runaway in a lithium-ion battery is believed to grow over time from a latent defect. Significant efforts have been made to detect lithium-ion battery safety faults to proactively facilitate actions minimizing subsequent losses. Scaling up a battery greatly changes the thermal and electrical signals of a system developing a defect and its consequent behaviors during fault evolution. In a large-capacity system such as a battery for an electric vehicle, detecting a fault signal and confining the fault locally in the system are extremely challenging. This paper introduces a fail-safe design methodology for large-capacity lithium-ion battery systems. Analysis using an internal short circuit response model for multi-cell packs is presented that demonstrates the viability of the proposed concept for various design parameters and operating conditions. Locating a faulty cell in a multiple-cell module and determining the status of the fault's evolution can be achieved using signals easily measured from the electric terminals of the module. A methodology is introduced for electrical isolation of a faulty cell from the healthy cells in a system to prevent further electrical energy feed into the fault. Experimental demonstration is presented supporting the model results.
NASA Astrophysics Data System (ADS)
Redfield, T. F.; Osmundsen, P. T.
2009-09-01
On February 22, 1756, approximately 15.7 million cubic meters of bedrock were catastrophically released as a giant rockslide into the Langfjorden. Subsequently, three ~40-meter-high tsunami waves overwhelmed the village of Tjelle and several other local communities. Inherited structures had isolated a compartment in the hanging wall damage zone of the fjord-dwelling Tjellefonna fault. Because the region is seismically active in oblique-normal mode, and in accordance with scant historical sources, we speculate that an earthquake on a nearby fault may have caused the already-weakened Tjelle hillside to fail. From interpretation of structural, geomorphic, and thermo-chronological data we suggest that today's escarpment topography of Møre og Trøndelag is controlled to a first order by post-rift reactivation of faults parallel to the Mesozoic passive margin. In turn, a number of these faults reactivated Late Caledonian or early post-Caledonian fabrics. Normal-sense reactivation of inherited structures along much of coastal Norway suggests that a structural link exists between the processes that destroy today's mountains and those that created them. The Paleozoic Møre-Trøndelag Fault Complex was reactivated as a normal fault during the Mesozoic and, probably, throughout the Cenozoic until the present day. Its NE-SW trending strands crop out between the coast and the base of a c. 1.7 km high NW-facing topographic 'Great Escarpment.' Well-preserved kinematic indicators and multiple generations of fault products are exposed along the Tjellefonna fault, a well-defined structural and topographic lineament parallel to both the Langfjorden and the Great Escarpment. The slope instability that was formerly present at Tjelle, and additional instabilities currently present throughout the region, may be viewed as the direct product of past and ongoing development of tectonic topography in Møre og Trøndelag county. In the Langfjorden region in particular, structural geometry suggests additional unreleased rock compartments may be isolated and under normal fault control. Although post-glacial rebound and topographically-derived horizontal spreading stresses might in part help drive present-day oblique normal seismicity, the normal-fault-controlled escarpments of Norway were at least partly erected in pre-glacial times. Cretaceous to Early Tertiary post-rift subsidence was interrupted by normal faulting at the innermost portion of the passive margin, imposing a strong tectonic imprint on the developing landscape.
Operations management system advanced automation: Fault detection isolation and recovery prototyping
NASA Technical Reports Server (NTRS)
Hanson, Matt
1990-01-01
The purpose of this project is to address the global fault detection, isolation and recovery (FDIR) requirements for Operations Management System (OMS) automation within the Space Station Freedom program. This shall be accomplished by developing a selected FDIR prototype for the Space Station Freedom distributed processing systems. The prototype shall be based on advanced automation methodologies in addition to traditional software methods to meet the requirements for automation. A secondary objective is to expand the scope of the prototyping to encompass multiple aspects of station-wide fault management (SWFM) as discussed in OMS requirements documentation.
Control and protection system for paralleled modular static inverter-converter systems
NASA Technical Reports Server (NTRS)
Birchenough, A. G.; Gourash, F.
1973-01-01
A control and protection system was developed for use with a paralleled 2.5-kWe-per-module static inverter-converter system. The control and protection system senses internal and external fault parameters such as voltage, frequency, current, and paralleling current unbalance. A logic system controls contactors to isolate defective power conditioners or loads. The system sequences contactor operation to automatically control parallel operation, startup, and fault isolation. Transient overload protection and fault checking sequences are included. The operation and performance of a control and protection system, with detailed circuit descriptions, are presented.
Neocrystallization, fabrics and age of clay minerals from an exposure of the Moab Fault, Utah
Solum, J.G.; van der Pluijm, B.A.; Peacor, D.R.
2005-01-01
Pronounced changes in clay mineral assemblages are preserved along the Moab Fault (Utah). Gouge is enriched up to ~40% in 1Md illite relative to protolith, whereas altered protolith in the damage zone is enriched ~40% in illite-smectite relative to gouge and up to ~50% relative to protolith. These mineralogical changes indicate that clay gouge is formed not solely through mechanical incorporation of protolith, but also through fault-related authigenesis. The timing of mineralization is determined using 40Ar/39Ar dating of size fractions of fault rocks with varying detrital and authigenic clay content. We applied Ar dating of illite-smectite samples, as well as a newer approach that uses illite polytypes. Our analysis yields overlapping, early Paleocene ages for neoformed (1Md) gouge illite (63±2 Ma) and illite-smectite in the damage zone (60±2 Ma), which are compatible with results elsewhere. These ages represent the latest period of major fault motion, and demonstrate that the fault fabrics are not the result of recent alteration. The clay fabrics in fault rocks are poorly developed, indicating that fluids were not confined to the fault zone by preferentially oriented clays; rather we propose that fluids in the illite-rich gouge were isolated by adjacent lower permeability, illite-smectite-bearing rocks in the damage zone. © 2005 Elsevier Ltd. All rights reserved.
Simultaneous Sensor and Process Fault Diagnostics for Propellant Feed System
NASA Technical Reports Server (NTRS)
Cao, J.; Kwan, C.; Figueroa, F.; Xu, R.
2006-01-01
The main objective of this research is to extract fault features from sensor faults and process faults by using advanced fault detection and isolation (FDI) algorithms. A tank system that shares some characteristics with a NASA testbed at Stennis Space Center was used to verify the proposed algorithms. First, a generic tank system was modeled. Second, a mathematical model suitable for FDI was derived for the tank system. Third, a new and general FDI procedure was designed to distinguish between process faults and sensor faults. Extensive simulations clearly demonstrated the advantages of the new design.
Verification of an IGBT Fusing Switch for Over-current Protection of the SNS HVCM
DOE Office of Scientific and Technical Information (OSTI.GOV)
Benwell, Andrew; Kemp, Mark; Burkhart, Craig
2010-06-11
An IGBT-based over-current protection system has been developed to detect faults and limit the damage caused by faults in high voltage converter modulators. During normal operation, an IGBT enables energy to be transferred from storage capacitors to an H-bridge. When a fault occurs, the over-current protection system detects the fault, limits the fault current, and opens the IGBT to isolate the remaining stored energy from the fault. This paper presents an experimental verification of the over-current protection system under applicable conditions.
Magma-tectonic Interaction at Laguna del Maule, Chile
NASA Astrophysics Data System (ADS)
Keranen, K. M.; Peterson, D. E.; Miller, C. A.; Garibaldi, N.; Tikoff, B.; Williams-Jones, G.
2016-12-01
The Laguna del Maule Volcanic Field (LdM), Chile, the largest concentration of rhyolite <20 kyr globally, exhibits crustal deformation at rates higher than any non-erupting volcano. The interaction of large magmatic systems with faulting is poorly understood; however, the Chaitén rhyolitic system demonstrated that faults can serve as magma pathways during an eruption. We present a complex fault system at LdM in close proximity to the magma reservoir. In March 2016, 18 CHIRP seismic reflection lines were acquired at LdM to identify faults and analyze potential spatial and temporal impacts of the fault system on volcanic activity. We mapped three key horizons on each line, bounding sediment packages between Holocene onset, 870 ybp, and the present date. Faults were mapped on each line and offset was calculated across key horizons. Our results indicate a system of normal-component faults in the northern lake sector, striking subparallel to the mapped Troncoso Fault SW of the lake. These faults correlate to prominent magnetic lineations mapped by boat magnetic data acquired in February 2016, which are interpreted as dykes intruding along faults. We also imaged a vertical fault, interpreted as a strike-slip fault, and a series of normal faults in the SW lake sector near the center of magmatic inflation. Isochron and fault offset maps illuminate areas of growth strata and indicate migration and increase of fault activity from south to north through time. We identify a domal structure in the SW lake sector, coincident with an area of low magnetization, in the region of maximum deformation from InSAR results. The dome experienced ~10 ms TWT (~10 m) of uplift throughout the past 16 kybp, which we interpret as magmatic inflation in a shallow magma reservoir. This inflation is isolated to a 1.5 km diameter region in the hanging wall of the primary normal fault system, indicating possible fault-facilitated inflation.
NASA Space Flight Vehicle Fault Isolation Challenges
NASA Technical Reports Server (NTRS)
Neeley, James R.; Jones, James V.; Bramon, Christopher J.; Inman, Sharon K.; Tuttle, Loraine
2016-01-01
The Space Launch System (SLS) is the new NASA heavy lift launch vehicle in development and is scheduled for its first mission in 2018. SLS has many of the same logistics challenges as any other large-scale program. However, SLS also faces unique challenges related to testability. This presentation will address the SLS challenges for diagnostics and fault isolation, along with the analyses and decisions to mitigate risk.
NASA Astrophysics Data System (ADS)
Calugaru, Vladimir
This dissertation pursues three main objectives: (1) to investigate the seismic response of tall reinforced concrete core wall buildings, designed following current building codes, subjected to pulse type near-fault ground motion, with special focus on the relation between the characteristics of the ground motion and the higher-modes of response; (2) to determine the characteristics of a base isolation system that results in nominally elastic response of the superstructure of a tall reinforced concrete core wall building at the maximum considered earthquake level of shaking; and (3) to demonstrate that the seismic performance, cost, and constructability of a base-isolated tall reinforced concrete core wall building can be significantly improved by incorporating a rocking core-wall in the design. First, this dissertation investigates the seismic response of tall cantilever wall buildings subjected to pulse type ground motion, with special focus on the relation between the characteristics of ground motion and the higher-modes of response. Buildings 10, 20, and 40 stories high were designed such that inelastic deformation was concentrated at a single flexural plastic hinge at their base. Using nonlinear response history analysis, the buildings were subjected to near-fault seismic ground motions as well as simple close-form pulses, which represented distinct pulses within the ground motions. Euler-Bernoulli beam models with lumped mass and lumped plasticity were used to model the buildings. Next, this dissertation investigates numerically the seismic response of six seismically base-isolated (BI) 20-story reinforced concrete buildings and compares their response to that of a fixed-base (FB) building with a similar structural system above ground. Located in Berkeley, California, 2 km from the Hayward fault, the buildings are designed with a core wall that provides most of the lateral force resistance above ground. For the BI buildings, the following are investigated: two isolation systems (both implemented below a three-story basement), isolation periods equal to 4, 5, and 6 s, and two levels of flexural strength of the wall. The first isolation system combines tension-resistant friction pendulum bearings and nonlinear fluid viscous dampers (NFVDs); the second combines low-friction tension-resistant cross-linear bearings, lead-rubber bearings, and NFVDs. Finally, this dissertation investigates the seismic response of four 20-story buildings hypothetically located in the San Francisco Bay Area, 0.5 km from the San Andreas fault. One of the four studied buildings is fixed-base (FB), two are base-isolated (BI), and one uses a combination of base isolation and a rocking core wall (BIRW). Above the ground level, a reinforced concrete core wall provides the majority of the lateral force resistance in all four buildings. The FB and BI buildings satisfy requirements of ASCE 7-10. The BI and BIRW buildings use the same isolation system, which combines tension-resistant friction pendulum bearings and nonlinear fluid viscous dampers. The rocking core-wall includes post-tensioning steel, buckling-restrained devices, and at its base is encased in a steel shell to maximize confinement of the concrete core. The total amount of longitudinal steel in the wall of the BIRW building is 0.71 to 0.87 times that used in the BI buildings. 
Response history two-dimensional analysis is performed, including the vertical components of excitation, for a set of ground motions scaled to the design earthquake and to the maximum considered earthquake (MCE). While the FB building at MCE level of shaking develops inelastic deformations and shear stresses in the wall that may correspond to irreparable damage, the BI and the BIRW buildings experience nominally elastic response of the wall, with floor accelerations and shear forces which are 0.36 to 0.55 times those experienced by the FB building. The response of the four buildings to two historical and two simulated near-fault ground motions is also studied, demonstrating that the BIRW building has the largest deformation capacity at the onset of structural damage. (Abstract shortened by UMI.).
Sensor fault detection and isolation system for a condensation process.
Castro, M A López; Escobar, R F; Torres, L; Aguilar, J F Gómez; Hernández, J A; Olivares-Peregrino, V H
2016-11-01
This article presents the design of a sensor Fault Detection and Isolation (FDI) system for a condensation process based on a nonlinear model. The condenser is modeled by dynamic and thermodynamic equations. For this work, the dynamic equations are described by three pairs of differential equations which represent the energy balance between the fluids. The thermodynamic equations consist of algebraic heat transfer equations and empirical equations that allow for the estimation of heat transfer coefficients. The FDI system consists of a bank of two nonlinear high-gain observers, used to detect, estimate, and isolate a fault in either of the two outlet temperature sensors. The main contributions of this work are the experimental validation of the condenser nonlinear model and of the FDI system. Copyright © 2016 ISA. Published by Elsevier Ltd. All rights reserved.
NASA Technical Reports Server (NTRS)
Gross, Anthony R.; Gerald-Yamasaki, Michael; Trent, Robert P.
2009-01-01
As part of the Fault Detection, Isolation, and Recovery (FDIR) Project for the Constellation Program, a task called the Legacy Benchmarking Task was designed to document as accurately as possible the FDIR processes and resources that were used by the Space Shuttle ground support equipment (GSE) during the Shuttle flight program. These results served as a comparison baseline for results obtained from the new FDIR capability. The task team assessed Shuttle and EELV (Evolved Expendable Launch Vehicle) historical data for GSE-related launch delays to identify expected benefits and impact. This analysis included a study of complex fault isolation situations that required a lengthy troubleshooting process. Specifically, four elements of that system were considered: LH2 (liquid hydrogen), LO2 (liquid oxygen), hydraulic test, and ground special power.
Intelligent fault isolation and diagnosis for communication satellite systems
NASA Technical Reports Server (NTRS)
Tallo, Donald P.; Durkin, John; Petrik, Edward J.
1992-01-01
Discussed here is a prototype diagnosis expert system to provide the Advanced Communication Technology Satellite (ACTS) System with autonomous diagnosis capability. The system, the Fault Isolation and Diagnosis EXpert (FIDEX) system, is a frame-based system that uses hierarchical structures to represent such items as the satellite's subsystems, components, sensors, and fault states. This overall frame architecture integrates the hierarchical structures into a lattice that provides a flexible representation scheme and facilitates system maintenance. FIDEX uses an inexact reasoning technique based on the incrementally acquired evidence approach developed by Shortliffe. The system is designed with a primitive learning ability through which it maintains a record of past diagnosis studies.
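One plausible reading of the Shortliffe-style incremental evidence scheme is the classic MYCIN certainty-factor combination rule, sketched below; FIDEX's actual rule base and factor values are not reproduced here, and the evidence numbers are invented.

    def combine_cf(cf1, cf2):
        """Shortliffe/MYCIN-style combination of two certainty factors in [-1, 1]."""
        if cf1 >= 0 and cf2 >= 0:
            return cf1 + cf2 * (1 - cf1)         # confirming evidence accumulates
        if cf1 <= 0 and cf2 <= 0:
            return cf1 + cf2 * (1 + cf1)         # disconfirming evidence accumulates
        return (cf1 + cf2) / (1 - min(abs(cf1), abs(cf2)))

    cf = 0.0
    for evidence in (0.4, 0.3, -0.2):            # incrementally acquired evidence
        cf = combine_cf(cf, evidence)
    print(round(cf, 3))                          # -> 0.475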
NASA Astrophysics Data System (ADS)
Naderi, E.; Khorasani, K.
2018-02-01
In this work, a data-driven fault detection, isolation, and estimation (FDI&E) methodology is proposed and developed specifically for monitoring the aircraft gas turbine engine actuator and sensors. The proposed FDI&E filters are directly constructed by using only the available system I/O data at each operating point of the engine. The healthy gas turbine engine is stimulated by a sinusoidal input containing a limited number of frequencies. First, the associated system Markov parameters are estimated by using the FFT of the input and output signals to obtain the frequency response of the gas turbine engine. These data are then used for direct design and realization of the fault detection, isolation and estimation filters. Our proposed scheme therefore does not require any a priori knowledge of the system linear model or its number of poles and zeros at each operating point. We have investigated the effects of the size of the frequency response data on the performance of our proposed schemes. We have shown through comprehensive case studies simulations that desirable fault detection, isolation and estimation performance metrics defined in terms of the confusion matrix criterion can be achieved by having access to only the frequency response of the system at only a limited number of frequencies.
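The first step, estimating the frequency response at the excited tones from FFTs of the input and output records, can be sketched as follows; the sample rate, the two tones, and the one-pole plant standing in for the engine are assumptions for illustration.

    import numpy as np

    def freq_response(u, y, fs, freqs):
        """Estimate H(jw) = Y(w)/U(w) at the excited frequencies only."""
        n = len(u)
        U, Y = np.fft.rfft(u), np.fft.rfft(y)
        bins = [int(round(f * n / fs)) for f in freqs]   # FFT bin per excited tone
        return np.array([Y[b] / U[b] for b in bins])

    # Toy check: a one-pole discrete filter excited by two exact-bin tones.
    fs, n = 128.0, 4096
    t = np.arange(n) / fs
    u = np.sin(2 * np.pi * 1.0 * t) + np.sin(2 * np.pi * 5.0 * t)
    y = np.zeros(n)
    for k in range(1, n):
        y[k] = 0.9 * y[k - 1] + 0.1 * u[k]               # stand-in plant
    print(np.abs(freq_response(u, y, fs, [1.0, 5.0])))   # gains at the two tones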
NASA Astrophysics Data System (ADS)
Mazza, Mirko
2015-12-01
Reinforced concrete (r.c.) framed buildings designed in compliance with inadequate seismic classifications and code provisions in many cases exhibit high vulnerability and need to be retrofitted. To this end, the insertion of a base isolation system allows a considerable reduction of the seismic loads transmitted to the superstructure. However, strong near-fault ground motions, which are characterised by long-duration horizontal pulses, may amplify the inelastic response of the superstructure and induce a failure of the isolation system. The above considerations point out the importance of checking the effectiveness of different isolation systems for retrofitting a r.c. framed structure. For this purpose, a numerical investigation is carried out with reference to a six-storey r.c. framed building, which, originally designed as a fixed-base structure in compliance with the previous Italian code (DM96) for a medium-risk seismic zone, has to be retrofitted by insertion of an isolation system at the base for attaining performance levels imposed by the current Italian code (NTC08) in a high-risk seismic zone. Besides the (fixed-base) original structure, three cases of base isolation are studied: elastomeric bearings acting alone (e.g. high-damping laminated rubber bearings, HDLRBs); an in-parallel combination of elastomeric and friction bearings (e.g. HDLRBs and steel-PTFE sliding bearings, SBs); and friction bearings acting alone (e.g. friction pendulum bearings, FPBs). The nonlinear analysis of the fixed-base and base-isolated structures subjected to horizontal components of near-fault ground motions is performed for checking plastic conditions at the potential critical (end) sections of the girders and columns as well as critical conditions of the isolation systems. Unexpectedly high values of ductility demand are highlighted at the lower floors of all base-isolated structures, while re-centring problems of the base isolation systems under near-fault earthquakes are expected in the case of friction bearings acting alone (i.e. FPBs) or in combination with HDLRBs (i.e. SBs).
NASA Astrophysics Data System (ADS)
Zielke, Olaf; Arrowsmith, Ramon
2010-05-01
Slip rates along individual faults may differ as a function of measurement time scale. Short-term slip rates may be higher than the long-term rate and vice versa. For example, vertical slip rates along the Wasatch Fault, Utah are 1.7±0.5 mm/yr since 6 ka, <0.6 mm/yr since 130 ka, and 0.5-0.7 mm/yr since 10 Ma (Friedrich et al., 2003). Following conventional earthquake recurrence models like the characteristic earthquake model, this observation implies that the driving strain accumulation rates may have changed over the respective time scales as well. While potential explanations for such slip-rate variations may be found, for example, in the reorganization of plate tectonic motion or mantle flow dynamics, causing changes in the crustal velocity field over long spatial wavelengths, no single geophysical explanation exists. Temporal changes in earthquake rate (i.e., event clustering) due to elastic interactions within a complex fault system may present an alternative explanation that requires neither variations in strain accumulation rate nor changes in fault constitutive behavior for frictional sliding. In the present study, we explore this scenario and investigate how fault geometric complexity, fault segmentation, and fault (segment) interaction affect the seismic behavior and slip rate along individual faults while keeping tectonic stressing rate and frictional behavior constant in time. For that, we used FIMozFric--a physics-based numerical earthquake simulator, based on Okada's (1992) formulations for internal displacements and strains due to shear and tensile faults in a half-space. Faults are divided into a large number of equal-sized fault patches which communicate via elastic interaction, allowing implementation of geometrically complex, non-planar faults. Each patch is assigned a static and a dynamic friction coefficient. The difference between those values is a function of depth--corresponding to the temperature dependence of velocity weakening that is observed in laboratory friction experiments and expressed in an [a-b] term in Rate-State-Friction (RSF) theory. Patches in the seismic zone are incrementally loaded during the interseismic phase. An earthquake initiates if shear stress along at least one (seismic) patch exceeds its static frictional strength, and it may grow in size due to elastic interaction with other fault patches (static stress transfer). Aside from investigating slip-rate variations due to the elastic interactions within a fault system with this tool, we want to show how such modeling results can be useful in exploring the physics underlying the patterns that paleoseismology sees, and how simulations and observations can be merged, with both making important contributions. Using FIMozFric, we generated synthetic seismic records for a large number of fault geometries and structural scenarios to investigate along-fault slip accumulation patterns and the variability of slip at a point. Our simulations show that fault geometric complexity and the accompanying fault interactions and multi-fault ruptures may cause temporal deviations from the average fault slip rate, in other words phases of earthquake clustering or relative quiescence. Slip rates along faults within an interacting fault system may change even when the loading function (stressing rate) remains constant, and the magnitude of slip-rate change is suggested to be proportional to the magnitude of fault interaction.
Thus, spatially isolated and structurally mature faults are expected to experience smaller slip-rate changes than strongly interacting, less mature faults. The magnitude of slip-rate change may serve as a proxy for the magnitude of fault interaction and vice versa.
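A heavily simplified, one-dimensional toy version of such a simulator's core loop, with nearest-neighbor stress transfer standing in for the Okada elastic kernels; all parameter values are assumed and the sketch omits depth-dependent friction and fault geometry entirely.

    import numpy as np

    rng = np.random.default_rng(0)
    N, STEPS = 100, 20000
    TAU_S, TAU_D = 1.0, 0.5          # static / dynamic strength (assumed uniform)
    LOAD, TRANSFER = 1e-4, 0.3       # constant stressing rate; neighbor transfer fraction

    stress = rng.uniform(TAU_D, TAU_S, N)
    events = []
    for t in range(STEPS):
        stress += LOAD                                   # interseismic loading
        failing = np.flatnonzero(stress >= TAU_S)
        size = 0
        while failing.size:                              # rupture cascade via stress transfer
            for i in failing:
                drop = stress[i] - TAU_D
                stress[i] = TAU_D                        # patch drops to dynamic strength
                for j in (i - 1, i + 1):                 # nearest neighbors receive part of the drop
                    if 0 <= j < N:
                        stress[j] += TRANSFER * drop
                size += 1
            failing = np.flatnonzero(stress >= TAU_S)
        if size:
            events.append((t, size))
    print(f"{len(events)} events; largest rupture spans {max(s for _, s in events)} patch failures")

Even with a constant loading rate, this kind of interaction produces clusters of large multi-patch ruptures separated by quieter intervals, which is the qualitative point made above.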
NASA Astrophysics Data System (ADS)
Arriola, David; Thielecke, Frank
2017-09-01
Electromechanical actuators have become a key technology for the onset of power-by-wire flight control systems in the next generation of commercial aircraft. The design of robust control and monitoring functions for these devices capable to mitigate the effects of safety-critical faults is essential in order to achieve the required level of fault tolerance. A primary flight control system comprising two electromechanical actuators nominally operating in active-active mode is considered. A set of five signal-based monitoring functions are designed using a detailed model of the system under consideration which includes non-linear parasitic effects, measurement and data acquisition effects, and actuator faults. Robust detection thresholds are determined based on the analysis of parametric and input uncertainties. The designed monitoring functions are verified experimentally and by simulation through the injection of faults in the validated model and in a test-rig suited to the actuation system under consideration, respectively. They guarantee a robust and efficient fault detection and isolation with a low risk of false alarms, additionally enabling the correct reconfiguration of the system for an enhanced operational availability. In 98% of the performed experiments and simulations, the correct faults were detected and confirmed within the time objectives set.
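A minimal sketch of one such signal-based monitor: a robust threshold plus a persistence (confirmation) counter that trades detection delay against false-alarm risk. The threshold and confirmation count are assumed values, not the paper's validated settings.

    def make_monitor(threshold, confirm_count):
        """Return a monitor that confirms a fault only after the residual has
        exceeded the threshold for confirm_count consecutive samples."""
        state = {"count": 0}
        def step(residual):
            state["count"] = state["count"] + 1 if abs(residual) > threshold else 0
            return state["count"] >= confirm_count
        return step

    monitor = make_monitor(threshold=0.8, confirm_count=5)
    for r in [0.1, 0.9, 0.9, 0.2, 1.0, 1.1, 0.9, 1.2, 1.0]:
        if monitor(r):
            print("fault confirmed")    # fires once, after five consecutive exceedances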
Real-Time Simulation for Verification and Validation of Diagnostic and Prognostic Algorithms
NASA Technical Reports Server (NTRS)
Aguilar, Robet; Luu, Chuong; Santi, Louis M.; Sowers, T. Shane
2005-01-01
To verify that a health management system (HMS) performs as expected, a virtual system simulation capability, including interaction with the associated platform or vehicle, very likely will need to be developed. The rationale for developing this capability is discussed and includes the limited capability to seed faults into the actual target system due to the risk of potential damage to high value hardware. The capability envisioned would accurately reproduce the propagation of a fault or failure as observed by sensors located at strategic locations on and around the target system and would also accurately reproduce the control system and vehicle response. In this way, HMS operation can be exercised over a broad range of conditions to verify that it meets requirements for accurate, timely response to actual faults with adequate margin against false and missed detections. An overview is also presented of a real-time rocket propulsion health management system laboratory which is available for future rocket engine programs. The health management elements and approaches of this lab are directly applicable for future space systems. In this paper, the various components are discussed and the general fault detection, diagnosis, isolation, and response (FDIR) concept is presented. Additionally, the complexities of V&V (Verification and Validation) for advanced algorithms and the simulation capabilities required to meet the changing state-of-the-art in HMS are discussed.
NASA Astrophysics Data System (ADS)
Khan, Umer Amir; Lee, Jong-Geon; Seo, In-Jin; Amir, Faisal; Lee, Bang-Wook
2015-11-01
Voltage source converter-based HVDC systems (VSC-HVDC) are a better alternative than conventional thyristor-based HVDC systems, especially for developing multi-terminal HVDC systems (MTDC). However, one of the key obstacles in developing MTDC is the absence of an adequate protection system that can quickly detect faults, locate the faulty line and trip the HVDC circuit breakers (DCCBs) to interrupt the DC fault current. In this paper, a novel hybrid-type superconducting circuit breaker (SDCCB) is proposed and feasibility analyses of its application in MTDC are presented. The SDCCB has a superconducting fault current limiter (SFCL) located in the main current path to limit fault currents until the final trip signal is received. After the trip signal the IGBT located in the main line commutates the current into a parallel line where DC current is forced to zero by the combination of IGBTs and surge arresters. Fault simulations for three-, four- and five-terminal MTDC were performed and SDCCB performance was evaluated in these MTDC. Passive current limitation by SFCL caused a significant reduction of fault current interruption stress in the SDCCB. It was observed that the DC current could change direction in MTDC after a fault and the SDCCB was modified to break the DC current in both the forward and reverse directions. The simulation results suggest that the proposed SDCCB could successfully suppress the DC fault current, cause a timely interruption, and isolate the faulty HVDC line in MTDC.
Fault Tree Analysis: Its Implications for Use in Education.
ERIC Educational Resources Information Center
Barker, Bruce O.
This study introduces the concept of Fault Tree Analysis as a systems tool and examines the implications of Fault Tree Analysis (FTA) as a technique for isolating failure modes in educational systems. A definition of FTA and discussion of its history, as it relates to education, are provided. The step by step process for implementation and use of…
NASA Astrophysics Data System (ADS)
Villalobos, A.
2015-12-01
On 2007 April 21, a Mw = 6.2 earthquake hit the Aysén region, an area of low seismicity in southern Chile. This event corresponds to the main shock of a sequence of earthquakes that were felt from January 10, beginning with a small earthquake of magnitude ML <3, to February 2008 as recurrent aftershocks. This area is characterized by the presence of the Liquiñe-Ofqui Fault System (LOFS), a neotectonic feature and the main seismotectonic system of southern Chile. In this research we use improved sub-aqueous paleoseismological techniques together with geomorphological evidence to constrain the seismogenic source of this event as crustal in origin. It is established that the Punta Cola Fault, a dextral-reverse structure which exhibits in seismic profiles a complex fault zone with a distinctive positive flower geometry, is responsible for the main shock. This fault caused vertical offsets that reached the seafloor, generating fault scarps in a mass movement deposit triggered by the same earthquake. Following this idea, a model of surface rupture is proposed for this structure. Further evidence that this crustal phenomenon is not an isolated event in time is presented through paleoseismological trench-like mappings in sub-bottom profiles.
TES: A modular systems approach to expert system development for real-time space applications
NASA Technical Reports Server (NTRS)
Cacace, Ralph; England, Brenda
1988-01-01
A major goal of the Space Station era is to reduce reliance on support from ground-based experts. The development of software programs using expert systems technology is one means of reaching this goal without requiring crew members to become intimately familiar with the many complex spacecraft subsystems. Development of an expert systems program requires a validation of the software with actual flight hardware. By combining accurate hardware and software modelling techniques with a modular systems approach to expert systems development, the validation of these software programs can be successfully completed with minimum risk and effort. The TIMES Expert System (TES) is an application that monitors and evaluates real-time data to perform fault detection and fault isolation tasks as they would otherwise be carried out by a knowledgeable designer. The development process and primary features of TES, a modular systems approach, and the lessons learned are discussed.
Seyed Moosavi, Seyed Mohsen; Moaveni, Bijan; Moshiri, Behzad; Arvan, Mohammad Reza
2018-02-27
The present study designed skewed redundant accelerometers for a Measurement While Drilling (MWD) tool and carried out auto-calibration, fault diagnosis, and isolation of the accelerometers in this tool. The optimal structure, which includes four accelerometers, was selected and designed precisely in accordance with the physical shape of the existing MWD tool. A new four-accelerometer structure was designed, implemented, and installed on the current system, replacing the conventional orthogonal structure. Auto-calibration of the skewed redundant accelerometers and of all combinations of three accelerometers was performed; consequently, the biases, scale factors, and misalignment factors of the accelerometers were successfully estimated. By injecting defects into the sensors of the new optimal skewed redundant structure, faults were detected using the proposed FDI method and the faulty sensor was diagnosed and isolated. The results indicate that the system can continue to operate with at least three correctly functioning sensors.
Seyed Moosavi, Seyed Mohsen; Moshiri, Behzad; Arvan, Mohammad Reza
2018-01-01
The present study designed skewed redundant accelerometers for a Measurement While Drilling (MWD) tool and carried out auto-calibration, fault diagnosis, and isolation of the accelerometers in this tool. The optimal structure, which includes four accelerometers, was selected and designed precisely in accordance with the physical shape of the existing MWD tool. A new four-accelerometer structure was designed, implemented, and installed on the current system, replacing the conventional orthogonal structure. Auto-calibration of the skewed redundant accelerometers and of all combinations of three accelerometers was performed; consequently, the biases, scale factors, and misalignment factors of the accelerometers were successfully estimated. By injecting defects into the sensors of the new optimal skewed redundant structure, faults were detected using the proposed FDI method and the faulty sensor was diagnosed and isolated. The results indicate that the system can continue to operate with at least three correctly functioning sensors. PMID:29495434
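For a skewed redundant array like this, the standard parity-space check offers a compact illustration. The geometry below (three orthogonal axes plus one skewed axis) is an assumption, not the paper's actual installation; with four sensors on three axes the single parity relation detects a fault, while isolation falls back on the three-sensor sub-combinations the study describes.

    import numpy as np

    s = 1 / np.sqrt(3)
    # Assumed geometry: rows are the unit sensing axes of the four accelerometers.
    H = np.array([[1, 0, 0],
                  [0, 1, 0],
                  [0, 0, 1],
                  [s, s, s]], dtype=float)

    # Least-squares fusion matrix and the projector onto the 1-D parity space.
    H_pinv = np.linalg.pinv(H)
    P = np.eye(4) - H @ H_pinv

    def fuse_and_check(y, thresh=0.05):
        """Fuse four readings into a 3-axis estimate; flag any inconsistency."""
        a_hat = H_pinv @ y                      # fused specific-force estimate
        inconsistent = np.linalg.norm(P @ y) > thresh
        return a_hat, inconsistent

    a_true = np.array([0.1, -9.81, 0.2])
    y = H @ a_true
    y[3] += 0.5                                 # inject a bias fault on the skewed sensor
    print(fuse_and_check(y))                    # parity residual exposes the fault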
Spacecraft fault tolerance: The Magellan experience
NASA Technical Reports Server (NTRS)
Kasuda, Rick; Packard, Donna Sexton
1993-01-01
Interplanetary and earth orbiting missions are now imposing unique fault tolerant requirements upon spacecraft design. Mission success is the prime motivator for building spacecraft with fault tolerant systems. The Magellan spacecraft had many such requirements imposed upon its design. Magellan met these requirements by building redundancy into all the major subsystem components and designing the onboard hardware and software with the capability to detect a fault, isolate it to a component, and issue commands to achieve a back-up configuration. This discussion is limited to fault protection, which is the autonomous capability to respond to a fault. The Magellan fault protection design is discussed, as well as the developmental and flight experiences and a summary of the lessons learned.
Smoothing of Fault Slip Surfaces by Scale Invariant Wear
NASA Astrophysics Data System (ADS)
Dascher-Cousineau, K.; Kirkpatrick, J. D.
2017-12-01
Fault slip surface roughness plays a determining role in the overall strength, friction, and dynamic behavior of fault systems. Previous wear models and field observations suggest that roughness decreases with increasing displacement. However, measurements have yet to isolate the effect of displacement from other possible controls, such as lithology or tectonic setting. In an effort to understand the effect of displacement, we present a comprehensive qualitative and quantitative description of the evolution of fault slip surfaces in and around the San Rafael Desert, SE Utah, United States. In the study area, faults accommodated regional extension at shallow (1 to 3 km) depth and are hosted in the massive, well-sorted, high-porosity Navajo and Entrada sandstones. Existing displacement profiles along with tight displacement controls readily measurable in the field, combined with uniform lithology and tectonic history, allowed us to isolate the effect of displacement during the embryonic stages of faulting (0 to 60 m in displacement). Our field observations indicate a clear compositional and morphological progression from isolated joints or deformation bands towards smooth, continuous, and mirror-like fault slip surfaces with increasing displacement. We scanned pristine slip surfaces with a white light interferometer, a laser scanner, and a ground-based LiDAR, producing and analyzing more than 120 individual scans of fault slip surfaces. Results for the surfaces with the best displacement constraints indicate that roughness as defined by the power spectral density at any given length scale decreases with displacement according to a power law with an exponent of -1. Roughness measurements associated with only maximum constraints on displacements corroborate this result. Moreover, maximum roughness for any given fault is bounded by a primordial roughness corresponding to that of joint surfaces and deformation band edges. Building upon these results, we propose a multi-scale wear model to explain the evolution of faults with displacement. We suggest that asperity failure as a scale-invariant process, together with the stochastic strength of host rocks, is consistent with the qualitative and quantitative observational constraints made in this study.
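The roughness metric used here can be illustrated on a synthetic self-affine profile standing in for a scanned slip surface; the profile length, sampling interval, and spectral exponent below are assumptions, not the study's scan parameters.

    import numpy as np
    from scipy.signal import welch

    def roughness_psd(profile, dx):
        """Power spectral density of a 1-D surface height profile sampled every dx."""
        f, p = welch(profile, fs=1.0 / dx, nperseg=min(1024, len(profile)))
        return f[1:], p[1:]                        # drop the zero-frequency bin

    # Synthetic self-affine profile standing in for a scanned slip surface.
    rng = np.random.default_rng(2)
    n, dx = 4096, 1e-3                             # ~4 m profile at 1 mm spacing (assumed)
    phases = np.exp(2j * np.pi * rng.random(n // 2 + 1))
    k = np.arange(n // 2 + 1, dtype=float); k[0] = 1.0
    z = np.fft.irfft(k ** -1.5 * phases, n)        # power-law amplitude spectrum
    f, p = roughness_psd(z, dx)
    slope = np.polyfit(np.log(f), np.log(p), 1)[0]
    print(f"PSD power-law slope ~ {slope:.2f}")    # roughness compared across displacement bins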
Lacustrine Paleoseismology Reveals Earthquake Segmentation of the Alpine Fault, New Zealand
NASA Astrophysics Data System (ADS)
Howarth, J. D.; Fitzsimons, S.; Norris, R.; Langridge, R. M.
2013-12-01
Transform plate boundary faults accommodate high rates of strain and are capable of producing large (Mw>7.0) to great (Mw>8.0) earthquakes that pose significant seismic hazard. The Alpine Fault in New Zealand is one of the longest, straightest and fastest slipping plate boundary transform faults on Earth and produces earthquakes at quasi-periodic intervals. Theoretically, the fault's linearity, isolation from other faults and quasi-periodicity should promote the generation of earthquakes that have similar magnitudes over multiple seismic cycles. We test the hypothesis that the Alpine Fault produces quasi-regular earthquakes that contiguously rupture the southern and central fault segments, using a novel lacustrine paleoseismic proxy to reconstruct spatial and temporal patterns of fault rupture over the last 2000 years. In three lakes located close to the Alpine Fault the last nine earthquakes are recorded as megaturbidites formed by co-seismic subaqueous slope failures, which occur when shaking exceeds Modified Mercalli (MM) VII. When the fault ruptures adjacent to a lake the co-seismic megaturbidites are overlain by stacks of turbidites produced by enhanced fluvial sediment fluxes from earthquake-induced landslides. The turbidite stacks record shaking intensities of MM>IX in the lake catchments and can be used to map the spatial location of fault rupture. The lake records can be dated precisely, facilitating meaningful along strike correlations, and the continuous records allow earthquakes closely spaced in time on adjacent fault segments to be distinguished. The results show that while multi-segment ruptures of the Alpine Fault occurred during most seismic cycles, sequential earthquakes on adjacent segments and single segment ruptures have also occurred. The complexity of the fault rupture pattern suggests that the subtle variations in fault geometry, sense of motion and slip rate that have been used to distinguish the central and southern segments of the Alpine Fault can inhibit rupture propagation, producing a soft earthquake segment boundary. The study demonstrates the utility of lakes as paleoseismometers that can be used to reconstruct the spatial and temporal patterns of earthquakes on a fault.
Enhanced data validation strategy of air quality monitoring network.
Harkat, Mohamed-Faouzi; Mansouri, Majdi; Nounou, Mohamed; Nounou, Hazem
2018-01-01
Quick validation and detection of faults in measured air quality data is a crucial step towards achieving the objectives of air quality networks. Therefore, the objectives of this paper are threefold: (i) to develop a modeling technique that can be used to predict the normal behavior of air quality variables and help provide an accurate reference for monitoring purposes; (ii) to develop a fault detection method that can effectively and quickly detect any anomalies in measured air quality data. For this purpose, a new fault detection method based on the combination of the generalized likelihood ratio test (GLRT) and the exponentially weighted moving average (EWMA) will be developed. GLRT is a well-known statistical fault detection method that relies on maximizing the detection probability for a given false alarm rate. In this paper, we propose to develop a GLRT-based EWMA fault detection method that is able to detect changes in the values of certain air quality variables; (iii) to develop a fault isolation and identification method that allows the fault source(s) to be defined so that appropriate corrective actions can be applied. In this paper, a reconstruction approach based on a Midpoint-Radii Principal Component Analysis (MRPCA) model will be developed to handle the types of data and models associated with air quality monitoring networks. All air quality modeling, fault detection, fault isolation, and reconstruction methods developed in this paper will be validated using real air quality data (such as particulate matter, ozone, and nitrogen and carbon oxide measurements). Copyright © 2017 Elsevier Inc. All rights reserved.
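A hedged sketch of the combined detector: residuals are EWMA-smoothed and a windowed GLRT for a mean shift is applied to the smoothed sequence. The smoothing constant, window, and threshold are assumed tuning values, not the paper's settings, and the simulated drift stands in for a real sensor fault.

    import numpy as np

    def ewma_glrt(residuals, lam=0.2, sigma=1.0, window=20, h=60.0):
        """EWMA-smooth the residuals, then apply a windowed GLRT for a mean shift."""
        sigma_z = sigma * np.sqrt(lam / (2 - lam))   # asymptotic EWMA std dev
        z, buf, alarms = 0.0, [], []
        for k, r in enumerate(residuals):
            z = lam * r + (1 - lam) * z              # EWMA update
            buf = (buf + [z])[-window:]
            T = len(buf) * np.mean(buf) ** 2 / sigma_z ** 2   # GLRT statistic, mean shift vs zero
            if T > h:
                alarms.append(k)
        return alarms

    rng = np.random.default_rng(1)
    r = rng.normal(0.0, 1.0, 200)
    r[120:] += 1.5                                   # injected sensor drift
    alarms = ewma_glrt(r)
    print("first alarm after the change:", next(k for k in alarms if k >= 120))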
Voltage Based Detection Method for High Impedance Fault in a Distribution System
NASA Astrophysics Data System (ADS)
Thomas, Mini Shaji; Bhaskar, Namrata; Prakash, Anupama
2016-09-01
High-impedance faults (HIFs) on distribution feeders cannot be detected by conventional protection schemes, as HIFs are characterized by their low fault current level and waveform distortion due to the nonlinearity of the ground return path. This paper proposes a method to identify HIFs in a distribution system and isolate the faulty section, to reduce downtime. This method is based on voltage measurements along the distribution feeder and utilizes the sequence components of the voltages. Three models of high impedance faults have been considered, and source side and load side breaking of the conductor have been studied in this work to capture a wide range of scenarios. The effect of neutral grounding of the source side transformer is also accounted for in this study. The results show that the algorithm detects HIFs accurately and rapidly. Thus, the faulty section can be isolated and service can be restored to the rest of the consumers.
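The sequence components that the method relies on can be computed with the standard Fortescue transform. The sketch below is illustrative only; the unbalance-ratio indicator at the end is an assumption, not the paper's exact detection rule.

```python
import numpy as np

# Fortescue transform: phase voltage phasors -> (zero, positive,
# negative) sequence components, using the rotation operator a = 1/_120deg
A = np.exp(2j * np.pi / 3)

def sequence_components(va, vb, vc):
    """Return (V0, V1, V2) symmetrical components of phasors va, vb, vc."""
    v0 = (va + vb + vc) / 3.0
    v1 = (va + A * vb + A**2 * vc) / 3.0
    v2 = (va + A**2 * vb + A * vc) / 3.0
    return v0, v1, v2

# An HIF tends to unbalance the feeder, raising |V2|/|V1| (and often |V0|)
va, vb, vc = 1.0 + 0j, 0.92 * A**2, 1.0 * A   # slightly unbalanced set
v0, v1, v2 = sequence_components(va, vb, vc)
print(abs(v2) / abs(v1))  # simple unbalance ratio used as an indicator
```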
ERIC Educational Resources Information Center
Barker, Bruce O.; Petersen, Paul D.
This paper explores the fault-tree analysis approach to isolating failure modes within a system. Fault-tree analysis investigates potentially undesirable events and then looks for sequences of failures that would lead to their occurrence. Relationships among these events are symbolized by AND or OR logic gates, AND used when single events must coexist to…
FINDS: A fault inferring nonlinear detection system programmers manual, version 3.0
NASA Technical Reports Server (NTRS)
Lancraft, R. E.
1985-01-01
Detailed software documentation of the digital computer program FINDS (Fault Inferring Nonlinear Detection System) Version 3.0 is provided. FINDS is a highly modular and extensible computer program designed to monitor and detect sensor failures, while at the same time providing reliable state estimates. In this version of the program, the FINDS methodology is used to detect, isolate, and compensate for failures in simulated avionics sensors used by the Advanced Transport Operating Systems (ATOPS) Transport System Research Vehicle (TSRV) in a Microwave Landing System (MLS) environment. It is intended that this report serve as a programmer's guide to aid in the maintenance, modification, and revision of the FINDS software.
Fault detection and diagnosis in a spacecraft attitude determination system
NASA Astrophysics Data System (ADS)
Pirmoradi, F. N.; Sassani, F.; de Silva, C. W.
2009-09-01
This paper presents a new scheme for fault detection and diagnosis (FDD) in spacecraft attitude determination (AD) sensors. An integrated attitude determination system, which includes measurements of rate and angular position using rate gyros and vector sensors, is developed. Measurement data from all sensors are fused by a linearized Kalman filter, which is designed based on the system kinematics, to provide attitude estimation and the values of the gyro bias. Using this information, the erroneous sensor measurements are corrected, and unbounded sensor measurement errors are avoided. The resulting bias-free data are used in the FDD scheme. The FDD algorithm uses model-based state estimation, combining the information from the rotational dynamics and kinematics of a spacecraft with the sensor measurements to predict the future sensor outputs. Fault isolation is performed through extended Kalman filters (EKFs). The innovation sequences of the EKFs are monitored by several statistical tests to detect the presence of a failure and to localize the failures in all AD sensors. The isolation procedure is developed in two phases. In the first phase, two EKFs are designed, which use subsets of measurements to provide state estimates and form residuals, which are used to verify the source of the fault. In the second phase of isolation, testing of multiple hypotheses is performed. The generalized likelihood ratio test is utilized to identify the faulty components. In the scheme developed in this paper a relatively small number of hypotheses is used, which results in faster isolation and highly distinguishable fault signatures. An important feature of the developed FDD scheme is that it can provide attitude estimations even if only one type of sensor is functioning properly.
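A common way to implement the statistical monitoring of EKF innovation sequences described above is a normalized innovation squared (chi-square) test. The sketch below is a generic version of such a test, not the authors' exact battery of tests, and the threshold value is an illustrative assumption.

```python
import numpy as np

def innovation_test(nu, S, threshold):
    """Normalized innovation squared (NIS) test for one filter update.

    nu : innovation vector (measurement minus predicted measurement)
    S  : innovation covariance from the Kalman filter
    Returns True if the innovation is statistically inconsistent with
    the no-fault hypothesis (NIS above a chi-square threshold)."""
    nis = float(nu.T @ np.linalg.solve(S, nu))
    return nis > threshold

# Example: 3-dimensional innovation, 99% chi-square threshold ~ 11.34
nu = np.array([0.5, -4.0, 0.2])
S = np.eye(3)
print(innovation_test(nu, S, 11.34))  # True -> flag a possible fault
```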
On-board fault management for autonomous spacecraft
NASA Technical Reports Server (NTRS)
Fesq, Lorraine M.; Stephan, Amy; Doyle, Susan C.; Martin, Eric; Sellers, Suzanne
1991-01-01
The dynamic nature of the Cargo Transfer Vehicle's (CTV) mission and the high level of autonomy required mandate a complete fault management system capable of operating under uncertain conditions. Such a fault management system must take into account the current mission phase and the environment (including the target vehicle), as well as the CTV's state of health. This level of capability is beyond the scope of current on-board fault management systems. This presentation will discuss work in progress at TRW to apply artificial intelligence to the problem of on-board fault management. The goal of this work is to develop fault management systems that can meet the needs of spacecraft that have long-range autonomy requirements. We have implemented a model-based approach to fault detection and isolation that does not require explicit characterization of failures prior to launch. It is thus able to detect failures that were not considered in the failure modes and effects analysis. We have applied this technique to several different subsystems and tested our approach against both simulations and an electrical power system hardware testbed. We present findings from simulation and hardware tests which demonstrate the ability of our model-based system to detect and isolate failures, and describe our work in porting the Ada version of this system to a flight-qualified processor. We also discuss current research aimed at expanding our system to monitor the entire spacecraft.
Tectono-stratigraphic evolution of normal fault zones: Thal Fault Zone, Suez Rift, Egypt
NASA Astrophysics Data System (ADS)
Leppard, Christopher William
The evolution of linkage of normal fault populations to form continuous, basin-bounding normal fault zones is recognised as an important control on the stratigraphic evolution of rift basins. This project aims to investigate the temporal and spatial evolution of normal fault populations and associated syn-rift deposits from the initiation of early-formed, isolated normal faults (rift-initiation) to the development of a through-going fault zone (rift-climax) by documenting the tectono-stratigraphic evolution of the Sarbut El Gamal segment of the exceptionally well-exposed Thal fault zone, Suez Rift, Egypt. A number of dated stratal surfaces mapped around the syn-rift depocentre of the Sarbut El Gamal segment allow constraints to be placed on the timing and style of deformation, and the spatial variability of facies along this segment of the fault zone. The data collected indicate that during the first 3.5 My of rifting the structural style was characterised by numerous, closely spaced, short (< 3 km), low displacement (< 200 m) synthetic and antithetic normal faults within 1 - 2 km of the present-day fault segment trace, accommodating surface deformation associated with the development of a fault propagation monocline above the buried, pre-cursor strands of the Sarbut El Gamal fault segment. The progressive localisation of displacement onto the fault segment during rift-climax resulted in the development of a major, surface-breaking fault 3.5 - 5 My after the onset of rifting and is recorded by the death of early-formed synthetic and antithetic faults up-section, and thickening of syn-rift strata towards the fault segment. The influence of intrabasinal highs at the tips of the Sarbut El Gamal fault segment on the pre-rift sub-crop level, combined with observations from the early-formed structures and coeval deposits, suggests that the overall length of the fault segment was fixed from an early stage. The fault segment is interpreted to have grown through rapid lateral propagation and early linkage of the precursor fault strands at depth before the fault segment broke surface, followed by the accumulation of displacement on the linked fault segment with minimal lateral propagation. This style of fault growth contrasts with conventional fault growth models in which growth occurs through incremental increases in both displacement and length through time. The evolution of normal fault populations and fault zones exerts a first-order control on basin physiography and sediment supply, and therefore, the architecture and distribution of coeval syn-rift stratigraphy. The early syn-rift continental Abu Zenima Formation and shallow marine Nukhul Formation show a pronounced westward increase in thickness controlled by the series of synthetic and antithetic faults up to 3 km west of the present-day Thal fault. The orientation of these faults controlled the location of the fluvial conglomerates, sandstones and mudstones, which shifted towards the topographic lows created. The progressive localisation of displacement onto the Sarbut El Gamal fault segment during rift-climax resulted in an overall change in basin geometry. Accelerated subsidence rates led to sedimentation being outpaced by subsidence, resulting in the development of a marine, sediment-starved, underfilled hangingwall depocentre characterised by slope-to-basinal depositional environments, with a laterally continuous slope apron in the immediate hangingwall, and point-sourced submarine fans.
Controls on the spatial distribution, three-dimensional architecture, and facies stacking patterns of coeval syn-rift deposits are identified as: i) structural style of the evolution and linkage of normal fault populations, ii) basin physiography, iii) evolution of drainage catchments, iv) bedrock lithology, and v) variations in sea/lake level.
NASA Technical Reports Server (NTRS)
Kobayashi, Takahisa; Simon, Donald L.
2005-01-01
In-flight sensor fault detection and isolation (FDI) is critical to maintaining reliable engine operation during flight. The aircraft engine control system, which computes control commands on the basis of sensor measurements, operates the propulsion systems at the demanded conditions. Any undetected sensor faults, therefore, may cause the control system to drive the engine into an undesirable operating condition. It is critical to detect and isolate failed sensors as soon as possible so that such scenarios can be avoided. A challenging issue in developing reliable sensor FDI systems is to make them robust to changes in engine operating characteristics due to degradation with usage and other faults that can occur during flight. A sensor FDI system that cannot appropriately account for such scenarios may result in false alarms, missed detections, or misclassifications when such faults do occur. To address this issue, an enhanced bank of Kalman filters was developed, and its performance and robustness were demonstrated in a simulation environment. The bank of filters is composed of m + 1 Kalman filters, where m is the number of sensors being used by the control system and, thus, in need of monitoring. Each Kalman filter is designed on the basis of a unique fault hypothesis so that it will be able to maintain its performance if a particular fault scenario, hypothesized by that particular filter, takes place.
NASA Technical Reports Server (NTRS)
Motyka, P.
1983-01-01
A methodology is developed and applied for quantitatively analyzing the reliability of a dual, fail-operational redundant strapdown inertial measurement unit (RSDIMU). A Markov evaluation model is defined in terms of the operational states of the RSDIMU to predict system reliability. A 27 state model is defined based upon a candidate redundancy management system which can detect and isolate a spectrum of failure magnitudes. The results of parametric studies are presented which show the effect on reliability of the gyro failure rate, both the gyro and accelerometer failure rates together, false alarms, probability of failure detection, probability of failure isolation, and probability of damage effects and mission time. A technique is developed and evaluated for generating dynamic thresholds for detecting and isolating failures of the dual, separated IMU. Special emphasis is given to the detection of multiple, nonconcurrent failures. Digital simulation time histories are presented which show the thresholds obtained and their effectiveness in detecting and isolating sensor failures.
One-man, self-contained CO2 concentrating system
NASA Technical Reports Server (NTRS)
Wynveen, R. A.; Schubert, F. H.; Powell, J. D.
1972-01-01
A program to design, fabricate, and test a 1-man, self-contained, electrochemical CO2 concentrating system is described. The system was designed with electronic controls and instrumentation to regulate performance, to analyze and display performance trends, and to detect and isolate faults. Ground support accessories were included to provide power, fluids, and a Parametric Data Display allowing real time indication of operating status in engineering units.
An experimental study of fault propagation in a jet-engine controller. M.S. Thesis
NASA Technical Reports Server (NTRS)
Choi, Gwan Seung
1990-01-01
An experimental analysis of the impact of transient faults on a microprocessor-based jet engine controller, used in the Boeing 747 and 757 aircraft, is described. A hierarchical simulation environment which allows the injection of transients during run-time and the tracing of their impact is described. Verification of the accuracy of this approach is also provided. A determination of the probability that a transient results in latch, pin or functional errors is made. Given a transient fault, there is approximately an 80 percent chance that there is no impact on the chip. An empirical model to depict the process of error generation and propagation in the target system is derived. The model shows that, if no latch errors occur within eight clock cycles, no significant damage is likely to happen. Thus, the overall impact of a transient is well contained. A state transition model is also derived from the measured data, to describe the error propagation characteristics within the chip, and to quantify the impact of transients on the external environment. The model is used to identify and isolate the critical fault propagation paths, the module most sensitive to fault propagation and the module with the highest potential of causing external pin errors.
Method and apparatus for in-situ detection and isolation of aircraft engine faults
NASA Technical Reports Server (NTRS)
Bonanni, Pierino Gianni (Inventor); Brunell, Brent Jerome (Inventor)
2007-01-01
A method for performing a fault estimation based on residuals of detected signals includes: determining an operating regime based on a plurality of parameters; extracting predetermined noise standard deviations of the residuals corresponding to the operating regime and scaling the residuals; calculating the magnitude of a measurement vector of the scaled residuals and comparing the magnitude to a decision threshold value; extracting an average (mean) direction and a fault level mapping for each of a plurality of fault types, based on the operating regime; calculating the projection of the measurement vector onto the average direction of each of the plurality of fault types; determining a fault type based on which projection is maximum; and mapping the projection to a continuous-valued fault level using a lookup table.
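A hedged sketch of the claimed processing chain follows. The function and variable names are hypothetical, and the final projection-to-fault-level lookup is reduced to returning the projection itself.

```python
import numpy as np

def classify_fault(residuals, noise_std, mean_dirs, threshold):
    """Sketch of the residual-projection logic described above.

    residuals : raw residual vector for the current operating regime
    noise_std : per-residual noise standard deviations for that regime
    mean_dirs : dict fault_type -> unit vector (average fault direction)
    Returns (fault_type, projection) or (None, 0.0) if below threshold."""
    r = residuals / noise_std                 # scale by regime noise levels
    if np.linalg.norm(r) <= threshold:        # magnitude vs decision threshold
        return None, 0.0
    # Project the scaled measurement vector onto each fault direction
    projections = {f: float(r @ d) for f, d in mean_dirs.items()}
    fault = max(projections, key=projections.get)
    return fault, projections[fault]          # projection -> level via lookup

dirs = {"fan": np.array([0.9, 0.44, 0.0]),
        "compressor": np.array([0.0, 0.6, 0.8])}
print(classify_fault(np.array([3.0, 1.5, 0.2]), np.ones(3), dirs, 2.0))
```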
Finite Moment Tensors of Southern California Earthquakes
NASA Astrophysics Data System (ADS)
Jordan, T. H.; Chen, P.; Zhao, L.
2003-12-01
We have developed procedures for inverting broadband waveforms for the finite moment tensors (FMTs) of regional earthquakes. The FMT is defined in terms of second-order polynomial moments of the source space-time function and provides the lowest order representation of a finite fault rupture; it removes the fault-plane ambiguity of the centroid moment tensor (CMT) and yields several additional parameters of seismological interest: the characteristic length L_c, width W_c, and duration T_c of the faulting, as well as the directivity vector v_d of the fault slip. To formulate the inverse problem, we follow and extend the methods of McGuire et al. [2001, 2002], who have successfully recovered the second-order moments of large earthquakes using low-frequency teleseismic data. We express the Fourier spectra of a synthetic point-source waveform in its exponential (Rytov) form and represent the observed waveform relative to the synthetic in terms of two frequency-dependent differential times, a phase delay δτ_p(ω) and an amplitude-reduction time δτ_q(ω), which we measure using Gee and Jordan's [1992] isolation-filter technique. We numerically calculate the FMT partial derivatives in terms of second-order spatiotemporal gradients, which allows us to use 3D finite-difference seismograms as our isolation filters. We have applied our methodology to a set of small to medium-sized earthquakes in Southern California. Errors in the anelastic structure introduced perturbations larger than the signal caused by finite-source effects. We have therefore employed a joint inversion technique that recovers the CMT parameters of the aftershocks, as well as the CMT and FMT parameters of the mainshock, under the assumption that the source finiteness of the aftershocks can be ignored. The joint system of equations relating the δτ_p and δτ_q data to the source parameters of the mainshock-aftershock cluster is denuisanced for path anomalies in both observables; this projection operation effectively corrects the mainshock data for path-related amplitude anomalies in a way similar to, but more flexible than, empirical Green function (EGF) techniques.
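The Rytov representation referred to above can be written explicitly. A plausible form of the relation, under Gee-and-Jordan-style conventions (sign conventions differ between authors), is:

```latex
% Sketch: observed spectrum relative to the point-source synthetic,
% written in Rytov (exponential) form; \delta\tau_p is the phase delay
% and \delta\tau_q the amplitude-reduction time at frequency \omega.
\[
  \tilde{u}_{\mathrm{obs}}(\omega) \;\approx\;
  \tilde{u}_{\mathrm{syn}}(\omega)\,
  \exp\!\left[\, i\,\omega\,\delta\tau_p(\omega) \;-\; \omega\,\delta\tau_q(\omega) \right]
\]
```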
Advanced information processing system: Fault injection study and results
NASA Technical Reports Server (NTRS)
Burkhardt, Laura F.; Masotto, Thomas K.; Lala, Jaynarayan H.
1992-01-01
The objective of the AIPS program is to achieve a validated fault tolerant distributed computer system. The goals of the AIPS fault injection study were: (1) to present the fault injection study components addressing the AIPS validation objective; (2) to obtain feedback for fault removal from the design implementation; (3) to obtain statistical data regarding fault detection, isolation, and reconfiguration responses; and (4) to obtain data regarding the effects of faults on system performance. The parameters are described that must be varied to create a comprehensive set of fault injection tests, the subset of test cases selected, the test case measurements, and the test case execution. Both pin level hardware faults using a hardware fault injector and software injected memory mutations were used to test the system. An overview is provided of the hardware fault injector and the associated software used to carry out the experiments. Detailed specifications are given of fault and test results for the I/O Network and the AIPS Fault Tolerant Processor, respectively. The results are summarized and conclusions are given.
Talebi, H A; Khorasani, K; Tafazoli, S
2009-01-01
This paper presents a robust fault detection and isolation (FDI) scheme for a general class of nonlinear systems using a neural-network-based observer strategy. Both actuator and sensor faults are considered. The nonlinear system considered is subject to both state and sensor uncertainties and disturbances. Two recurrent neural networks are employed to identify general unknown actuator and sensor faults, respectively. The neural network weights are updated according to a modified backpropagation scheme. Unlike many previous methods developed in the literature, our proposed FDI scheme does not rely on the availability of full state measurements. The stability of the overall FDI scheme in the presence of unknown sensor and actuator faults as well as plant and sensor noise and uncertainties is shown by using Lyapunov's direct method. The stability analysis developed requires no restrictive assumptions on the system and/or the FDI algorithm. Magnetorquer-type actuators and magnetometer-type sensors that are commonly employed in the attitude control subsystem (ACS) of low-Earth orbit (LEO) satellites for attitude determination and control are considered in our case studies. The effectiveness and capabilities of our proposed fault diagnosis strategy are demonstrated and validated through extensive simulation studies.
FIESTA: An operational decision aid for space network fault isolation
NASA Technical Reports Server (NTRS)
Lowe, Dawn; Quillin, Bob; Matteson, Nadine; Wilkinson, Bill; Miksell, Steve
1987-01-01
The Fault Tolerance Expert System for Tracking and Data Relay Satellite System (TDRSS) Applications (FIESTA) is a fault detection and fault diagnosis expert system being developed as a decision aid to support operations in the Network Control Center (NCC) for NASA's Space Network. The operational objectives which influenced FIESTA development are presented, and an overview of the architecture used to achieve these goals is provided. The approach to the knowledge engineering effort and the methodology employed are also presented and illustrated with examples drawn from the FIESTA domain.
NASA Technical Reports Server (NTRS)
Kobayashi, Takahisa; Simon, Donald L.
2008-01-01
In this paper, an enhanced on-line diagnostic system which utilizes dual-channel sensor measurements is developed for the aircraft engine application. The enhanced system is composed of a nonlinear on-board engine model (NOBEM), the hybrid Kalman filter (HKF) algorithm, and fault detection and isolation (FDI) logic. The NOBEM provides the analytical third channel against which the dual-channel measurements are compared. The NOBEM is further utilized as part of the HKF algorithm which estimates measured engine parameters. Engine parameters obtained from the dual-channel measurements, the NOBEM, and the HKF are compared against each other. When the discrepancy among the signals exceeds a tolerance level, the FDI logic determines the cause of discrepancy. Through this approach, the enhanced system achieves the following objectives: 1) anomaly detection, 2) component fault detection, and 3) sensor fault detection and isolation. The performance of the enhanced system is evaluated in a simulation environment using faults in sensors and components, and it is compared to an existing baseline system.
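The three-way comparison logic can be illustrated with a toy voting scheme for a single measured parameter. This is an assumption-laden simplification, not the paper's FDI logic, and the single shared tolerance is deliberately crude.

```python
def isolate(ch_a, ch_b, model_est, tol):
    """Toy version of the dual-channel / analytical-channel comparison.

    Compares two hardware channels against a model-based estimate and
    returns a coarse diagnosis for one measured parameter."""
    ab = abs(ch_a - ch_b) > tol          # channels disagree with each other
    am = abs(ch_a - model_est) > tol     # channel A disagrees with model
    bm = abs(ch_b - model_est) > tol     # channel B disagrees with model
    if not (ab or am or bm):
        return "no anomaly"
    if ab and am and not bm:
        return "channel A sensor fault"
    if ab and bm and not am:
        return "channel B sensor fault"
    if am and bm and not ab:
        return "component fault or model mismatch"  # both channels vs model
    return "anomaly: further tests needed"

print(isolate(100.0, 100.2, 103.5, 1.0))  # -> component fault or model mismatch
```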
NASA Astrophysics Data System (ADS)
Wang, H.; Jing, X. J.
2017-07-01
This paper presents a virtual beam based approach suitable for conducting diagnosis of multiple faults in complex structures with limited prior knowledge of the faults involved. The "virtual beam", a recently-proposed concept for fault detection in complex structures, is applied, which consists of a chain of sensors representing a vibration energy transmission path embedded in the complex structure. Statistical tests and an adaptive threshold are adopted for fault detection due to limited prior knowledge of normal operational conditions and fault conditions. To isolate the multiple faults within a specific structure or substructure of a more complex one, a 'biased running' strategy is developed and embedded within the bacterial-based optimization method to construct effective virtual beams and thus to improve the accuracy of localization. The proposed method is easy and efficient to implement for multiple fault localization with limited prior knowledge of normal conditions and faults. With extensive experimental results, it is validated that the proposed method can localize both single and multiple faults more effectively than the classical trust index subtract on negative add on positive (TI-SNAP) method.
Thermal Expert System (TEXSYS): Systems autonomy demonstration project, volume 2. Results
NASA Technical Reports Server (NTRS)
Glass, B. J. (Editor)
1992-01-01
The Systems Autonomy Demonstration Project (SADP) produced a knowledge-based real-time control system for control and fault detection, isolation, and recovery (FDIR) of a prototype two-phase Space Station Freedom external active thermal control system (EATCS). The Thermal Expert System (TEXSYS) was demonstrated in recent tests to be capable of reliable fault anticipation and detection, as well as ordinary control of the thermal bus. Performance requirements were addressed by adopting a hierarchical symbolic control approach, layering model-based expert system software on a conventional numerical data acquisition and control system. The model-based reasoning capabilities of TEXSYS were shown to be advantageous over typical rule-based expert systems, particularly for detection of unforeseen faults and sensor failures. Volume 1 gives a project overview and testing highlights. Volume 2 provides detail on the EATCS testbed, test operations, and online test results. Appendix A is a test archive, while Appendix B is a compendium of design and user manuals for the TEXSYS software.
Thermal Expert System (TEXSYS): Systems autonomy demonstration project, volume 1. Overview
NASA Technical Reports Server (NTRS)
Glass, B. J. (Editor)
1992-01-01
The Systems Autonomy Demonstration Project (SADP) produced a knowledge-based real-time control system for control and fault detection, isolation, and recovery (FDIR) of a prototype two-phase Space Station Freedom external active thermal control system (EATCS). The Thermal Expert System (TEXSYS) was demonstrated in recent tests to be capable of reliable fault anticipation and detection, as well as ordinary control of the thermal bus. Performance requirements were addressed by adopting a hierarchical symbolic control approach, layering model-based expert system software on a conventional numerical data acquisition and control system. The model-based reasoning capabilities of TEXSYS were shown to be advantageous over typical rule-based expert systems, particularly for detection of unforeseen faults and sensor failures. Volume 1 gives a project overview and testing highlights. Volume 2 provides detail on the EATCS testbed, test operations, and online test results. Appendix A is a test archive, while Appendix B is a compendium of design and user manuals for the TEXSYS software.
Thermal Expert System (TEXSYS): Systems autonomy demonstration project, volume 2. Results
NASA Astrophysics Data System (ADS)
Glass, B. J.
1992-10-01
The Systems Autonomy Demonstration Project (SADP) produced a knowledge-based real-time control system for control and fault detection, isolation, and recovery (FDIR) of a prototype two-phase Space Station Freedom external active thermal control system (EATCS). The Thermal Expert System (TEXSYS) was demonstrated in recent tests to be capable of reliable fault anticipation and detection, as well as ordinary control of the thermal bus. Performance requirements were addressed by adopting a hierarchical symbolic control approach, layering model-based expert system software on a conventional numerical data acquisition and control system. The model-based reasoning capabilities of TEXSYS were shown to be advantageous over typical rule-based expert systems, particularly for detection of unforeseen faults and sensor failures. Volume 1 gives a project overview and testing highlights. Volume 2 provides detail on the EATCS testbed, test operations, and online test results. Appendix A is a test archive, while Appendix B is a compendium of design and user manuals for the TEXSYS software.
Expert systems applied to fault isolation and energy storage management, phase 2
NASA Technical Reports Server (NTRS)
1987-01-01
A user's guide for the Fault Isolation and Energy Storage (FIES) II system is provided. Included are a brief discussion of the background and scope of this project, a discussion of basic and advanced operating installation and problem determination procedures for the FIES II system and information on hardware and software design and implementation. A number of appendices are provided including a detailed specification for the microprocessor software, a detailed description of the expert system rule base and a description and listings of the LISP interface software.
Reports on work in support of NASA's tracking and communication division
NASA Technical Reports Server (NTRS)
Feagin, Terry; Lekkos, Anthony
1991-01-01
This is a report on the research conducted during the period October 1, 1991 through December 31, 1991. The research is divided into two primary areas: (1) generalization of the Fault Isolation using Bit Strings (FIBS) technique to permit fuzzy information to be used to isolate faults in the tracking and communications system of the Space Station; and (2) a study of the activity that should occur in the on board systems in order to attempt to recover from failures that are external to the Space Station.
A Negative Selection Immune System Inspired Methodology for Fault Diagnosis of Wind Turbines.
Alizadeh, Esmaeil; Meskin, Nader; Khorasani, Khashayar
2017-11-01
High operational and maintenance costs represent major economic constraints in the wind turbine (WT) industry. These concerns have made investigation into fault diagnosis of WT systems an extremely important and active area of research. In this paper, an immune system (IS) inspired methodology for performing fault detection and isolation (FDI) of a WT system is proposed and developed. The proposed scheme is based on the self/nonself discrimination paradigm of a biological IS. Specifically, the negative selection mechanism [negative selection algorithm (NSA)] of the human body is utilized. In this paper, a hierarchical bank of NSAs is designed to detect and isolate both individual as well as simultaneously occurring faults common to WTs. A smoothing moving window filter is then utilized to further improve the reliability and performance of the FDI scheme. Moreover, the performance of our proposed scheme is compared with another state-of-the-art data-driven technique, namely support vector machines (SVMs), to demonstrate and illustrate the superiority and advantages of our proposed NSA-based FDI scheme. Finally, a nonparametric statistical comparison test is implemented to evaluate our proposed methodology against that of the SVM under various fault severities.
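A minimal negative selection sketch follows, assuming real-valued detectors with a fixed matching radius; the paper's hierarchical bank of NSAs, window filtering, and WT-specific features are omitted.

```python
import numpy as np

def train_detectors(self_samples, n_detectors, radius, rng, dim):
    """Generate detectors that do not match any 'self' (healthy) sample."""
    detectors = []
    while len(detectors) < n_detectors:
        d = rng.uniform(0.0, 1.0, dim)
        if np.min(np.linalg.norm(self_samples - d, axis=1)) > radius:
            detectors.append(d)          # keep only nonself detectors
    return np.array(detectors)

def is_fault(x, detectors, radius):
    """A sample is flagged as nonself if any detector covers it."""
    return bool(np.any(np.linalg.norm(detectors - x, axis=1) <= radius))

rng = np.random.default_rng(1)
healthy = rng.uniform(0.4, 0.6, size=(200, 2))      # normal operating region
det = train_detectors(healthy, 50, 0.1, rng, 2)
print(is_fault(np.array([0.5, 0.5]), det, 0.1))     # False: healthy point
print(is_fault(np.array([0.9, 0.1]), det, 0.1))     # likely True: nonself
```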
Analytical concepts for health management systems of liquid rocket engines
NASA Technical Reports Server (NTRS)
Williams, Richard; Tulpule, Sharayu; Hawman, Michael
1990-01-01
Substantial improvement in health management systems performance can be realized by implementing advanced analytical methods of processing existing liquid rocket engine sensor data. In this paper, such techniques ranging from time series analysis to multisensor pattern recognition to expert systems to fault isolation models are examined and contrasted. The performance of several of these methods is evaluated using data from test firings of the Space Shuttle main engines.
Integrated Maintenance Information System (IMIS): A Maintenance Information Delivery Concept.
1987-11-01
[Figure 2: Portable Maintenance Computer Concept] … provide advice for difficult fault-isolation problems. The technician will be able to accomplish … faced with an ever-growing number of paper-based technical orders (TOs). This has greatly increased costs and distribution problems. In addition, it has compounded problems associated with ensuring accurate data and the lengthy correction times involved. To improve the accuracy of technical data and
Numerical modeling of mountain formation on Io
NASA Astrophysics Data System (ADS)
Turtle, E. P.; Jaeger, W. L.; McEwen, A. S.; Keszthelyi, L.
2000-10-01
Io has ~ 100 mountains [1] that, although often associated with patera [2], do not appear to be volcanic structures. The mountains are up to 16 km high [3] and are generally isolated from each other. We have performed finite-element simulations of the formation of these mountains, investigating several mountain building scenarios: (1) a volcanic construct due to heterogeneous resurfacing on a coherent, homogeneous lithosphere; (2) a volcanic construct on a faulted, homogeneous lithosphere; (3) a volcanic construct on a faulted, homogeneous lithosphere under compression induced by subsidence due to Io's high resurfacing rate; (4) a faulted, homogeneous lithosphere under subsidence-induced compression; (5) a faulted, heterogeneous lithosphere under subsidence-induced compression; and (6) a mantle upwelling beneath a coherent, homogeneous lithosphere under subsidence-induced compression. The models of volcanic constructs do not produce mountains similar to those observed on Io. Neither do those of pervasively faulted lithospheres under compression; these predict a series of tilted lithospheric blocks or plateaus, as opposed to the isolated structures that are observed. Our models show that rising mantle material impinging on the base of the lithosphere can focus the compressional stresses to localize thrust faulting and mountain building. Such faults could also provide conduits along which magma could reach the surface as is observed near several mountains. [1] Carr et al., Icarus 135, pp. 146-165, 1998. [2] McEwen et al., Science 288, pp. 1193-1198, 2000. [3] Schenk and Bulmer, Science 279, pp. 1514-1517, 1998.
Ontology and Knowledgebase of Fractures and Faults
NASA Astrophysics Data System (ADS)
Aydin, A.; Zhong, J.
2007-12-01
Fractures and faults are related to many societal and industrial problems including oil and gas exploration and production, CO2 sequestration, and waste isolation. Therefore, an ontology focusing on fractures and faults is desirable to facilitate sound education and communication among this highly diverse community. We developed an ontology for this field. Some high-level classes in our ontology include geological structure, deformation mechanism, and property or factor. Throughout our ontology, we emphasize the relationships among the classes, such as structures being formed by mechanisms, and properties affecting which mechanism will occur. At this stage, there are about 1,000 classes, referencing about 150 articles or textbooks and supplemented by about 350 photographs, diagrams, and illustrations. With limited time and resources, we chose a simple application for our ontology: transforming it into a knowledgebase made of a series of web pages. Each web page corresponds to one class in the ontology, with discussion, figures, links to subclasses and related concepts, as well as references. We believe that our knowledgebase is a valuable resource for finding information about fractures and faults, both for practicing geologists and for students who are interested in the related issues in application, education, or training.
Fault detection monitor circuit provides ''self-heal capability'' in electronic modules - A concept
NASA Technical Reports Server (NTRS)
Kennedy, J. J.
1970-01-01
A self-checking technique detects defective solid-state modules used in electronic test and checkout instrumentation. A ten-bit register provides failure monitoring and indication for 1023 comparator circuits, and the automatic fault-isolation capability permits the electronic subsystems to be repaired by replacing the defective module.
NASA Astrophysics Data System (ADS)
Alfonsi, L.; Brunori, C. A.; Cinti, F. R.
2014-12-01
The Sybaris town was founded by the Greeks in 720 B.C. and its life continued up to late Roman times (VI-VII century A.D.). The town was located within the Sibari Plain near the Crati River mouth (Ionian northern Calabria, southern Italy). Sybaris lies in an area repeatedly affected by damaging natural phenomena, such as frequent flooding, high local subsidence, marine storms, and earthquakes. The 2700-year-long history of Sybaris preserves the traces of these natural events and their influence on the ancient human environment through time. Among the natural disasters, we recognize two Roman-age earthquakes that struck the town. We isolate the damage caused by these seismic events, set their time of occurrence, and map a shear zone crossing the site. These results were obtained through i) survey of coseismic features on the ruins, ii) geoarchaeological stratigraphic analysis and TL and C14 dating, and iii) analysis of high-resolution topographic data (1 m pixel LiDAR DEM). The Sybaris town showed a persistent resilience to the earthquakes: following their occurrences the site was not abandoned but underwent remodeling of the urban topography. The combination of the different approaches reveals the presence of a previously unknown fault crossing the archeological site, the Sybaris fault. The high-resolution topography allows the characterization of subtle geomorphological features and hydrological anomalies, tracing the fault extension, whose Holocene activity is controlling the local morphology and the present Crati river course.
Model-Based Diagnosis and Prognosis of a Water Recycling System
NASA Technical Reports Server (NTRS)
Roychoudhury, Indranil; Hafiychuk, Vasyl; Goebel, Kai Frank
2013-01-01
A water recycling system (WRS) deployed at NASA Ames Research Center's Sustainability Base (an energy efficient office building that integrates some novel technologies developed for space applications) will serve as a testbed for long duration testing of next generation spacecraft water recycling systems for future human spaceflight missions. This system cleans graywater (waste water collected from sinks and showers) and recycles it into clean water. Like all engineered systems, the WRS is prone to standard degradation due to regular use, as well as other faults. Diagnostic and prognostic applications will be deployed on the WRS to ensure its safe, efficient, and correct operation. The diagnostic and prognostic results can be used to enable condition-based maintenance to avoid unplanned outages, and perhaps extend the useful life of the WRS. Diagnosis involves detecting when a fault occurs, isolating the root cause of the fault, and identifying the extent of damage. Prognosis involves predicting when the system will reach its end of life irrespective of whether an abnormal condition is present or not. In this paper, first, we develop a physics model of both nominal and faulty system behavior of the WRS. Then, we apply an integrated model-based diagnosis and prognosis framework to the simulation model of the WRS for several different fault scenarios to detect, isolate, and identify faults, and predict the end of life in each fault scenario, and present the experimental results.
NASA Astrophysics Data System (ADS)
Zoback, Mark
2017-04-01
In this talk, I will address the likelihood for fault slip to occur in response to fluid injection and the likely magnitude of potentially induced earthquakes. First, I will review a methodology that applies Quantitative Risk Assessment to calculate the probability of a fault exceeding Mohr-Coulomb slip criteria. The methodology utilizes information about the local state of stress, fault strike and dip, and the estimated pore pressure perturbation to predict the probability of fault slip as a function of time. Uncertainties in the input parameters are utilized to assess the probability of slip on known faults due to the predictable pore pressure perturbations. Application to known faults in Oklahoma has been presented by Walsh and Zoback (Geology, 2016). This has been updated with application to the previously unknown faults associated with M > 5 earthquakes in the state. Second, I will discuss two geologic factors that limit the magnitudes of earthquakes (either natural or induced) in sedimentary sequences. Fundamentally, the layered nature of sedimentary rocks means that seismogenic fault slip will be limited by i) the velocity strengthening frictional properties of clay- and carbonate-rich rock sequences (Kohli and Zoback, JGR, 2013; in prep) and ii) viscoplastic stress relaxation in rocks with similar composition (Sone and Zoback, Geophysics, 2013a, b; IJRM, 2014; Rassouli and Zoback, in prep). In the former case, if fault slip is triggered in these types of rocks, it would likely be aseismic due to the velocity strengthening behavior of faults. In the latter case, the stress relaxation could result in rupture termination in viscoplastic formations. In both cases, the stratified nature of sedimentary rock sequences could limit the magnitude of potentially induced earthquakes. Moreover, even when injection into sedimentary rocks initiates fault slip, earthquakes large enough to cause damage will usually require slip on faults sufficiently large that they extend into basement. This suggests that an important criterion for large-scale CO2 sequestration projects is that the injection zone is isolated from crystalline basement rocks by viscoplastic shales to prevent rupture propagation from extending down into basement.
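The QRA-style probability calculation can be sketched as a simple Monte Carlo over the Mohr-Coulomb criterion. The distributions, friction coefficient, and stress values below are hypothetical, not those used in the talk.

```python
import numpy as np

def slip_probability(tau, sigma_n, dp, mu=0.6, n=100_000, seed=0):
    """Monte Carlo probability that a fault exceeds Mohr-Coulomb failure.

    tau, sigma_n : (mean, std) of resolved shear and normal stress, MPa
    dp           : (mean, std) of the injection-induced pressure rise, MPa
    Counts samples with tau > mu * (sigma_n - dp), i.e. a positive
    Coulomb failure function after the pore pressure perturbation."""
    rng = np.random.default_rng(seed)
    t = rng.normal(*tau, n)
    s = rng.normal(*sigma_n, n)
    p = rng.normal(*dp, n)
    cff = t - mu * (s - p)
    return float(np.mean(cff > 0.0))

# Hypothetical fault: 18+/-2 MPa shear, 40+/-4 MPa normal, 5+/-1 MPa rise
print(slip_probability((18.0, 2.0), (40.0, 4.0), (5.0, 1.0)))
```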
NASA Astrophysics Data System (ADS)
Babb, A.; Thomas, A.; Bletery, Q.
2017-12-01
Low frequency earthquakes (LFEs) are detected at depths of 16-30 km on a 150 km section of the San Andreas Fault centered at Parkfield, CA. The LFEs are divided into 88 families based on waveform similarity. Each family is thought to represent a brittle asperity on the fault surface that repeatedly slips during aseismic slip of the surrounding fault. LFE occurrence is irregular, which allows families to be divided into continuous and episodic. In continuous families a burst of a few LFE events recurs every few days, while episodic families experience essentially quiescent periods often lasting months followed by bursts of hundreds of events over a few days. The occurrence of LFEs has also been shown to be sensitive to extremely small (~1 kPa) tidal stress perturbations. However, the clustered nature of LFE occurrence could potentially bias estimates of tidal sensitivity. Here we re-evaluate the tidal sensitivity of LFE families on the deep San Andreas using a declustered catalog. In this catalog LFE bursts are isolated based on the recurrence intervals between individual LFE events for each family. Preliminary analysis suggests that declustered LFE families are still highly sensitive to tidal stress perturbations, primarily right-lateral shear stress (RLSS) and to a lesser extent fault normal stress (FNS). We also find inferred creep episodes initiate preferentially during times of positive RLSS.
A signal-based fault detection and classification method for heavy haul wagons
NASA Astrophysics Data System (ADS)
Li, Chunsheng; Luo, Shihui; Cole, Colin; Spiryagin, Maksym; Sun, Yanquan
2017-12-01
This paper proposes a signal-based fault detection and isolation (FDI) system for heavy haul wagons considering the special requirements of low cost and robustness. The sensor network of the proposed system consists of just two accelerometers mounted on the front left and rear right of the carbody. Seven fault indicators (FIs) are proposed based on the cross-correlation analyses of the sensor-collected acceleration signals. Bolster spring fault conditions are focused on in this paper, including two different levels (small faults and moderate faults) and two locations (faults in the left and right bolster springs of the first bogie). A fully detailed dynamic model of a typical 40t axle load heavy haul wagon is developed to evaluate the deterioration of dynamic behaviour under proposed fault conditions and demonstrate the detectability of the proposed FDI method. Even though the fault conditions considered in this paper did not deteriorate the wagon dynamic behaviour dramatically, the proposed FIs show great sensitivity to the bolster spring faults. The most effective and efficient FIs are chosen for fault detection and classification. Analysis results indicate that it is possible to detect changes in bolster stiffness of ±25% and identify the fault location.
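As a rough illustration of a cross-correlation-based fault indicator for the two carbody accelerometers: the paper defines seven specific FIs, so this single indicator is an assumption, not one of them.

```python
import numpy as np

def correlation_fi(front_left, rear_right):
    """One plausible fault indicator: peak normalized cross-correlation
    between the two carbody acceleration signals. Degraded bolster
    springs change the transmission path and lower the correlation."""
    a = (front_left - front_left.mean()) / front_left.std()
    b = (rear_right - rear_right.mean()) / rear_right.std()
    xc = np.correlate(a, b, mode="full") / len(a)
    return float(np.max(np.abs(xc)))

rng = np.random.default_rng(2)
base = rng.normal(0, 1, 2000)
healthy = correlation_fi(base, np.roll(base, 3) + 0.1 * rng.normal(0, 1, 2000))
faulty = correlation_fi(base, 0.5 * np.roll(base, 3) + rng.normal(0, 1, 2000))
print(healthy, faulty)  # the 'faulty' indicator drops toward zero
```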
NASA Astrophysics Data System (ADS)
Williams, J. R.; Hawthorne, J.; Rost, S.; Wright, T. J.
2017-12-01
Earthquakes on oceanic transform faults often show unusual behaviour. They tend to occur in swarms, have large numbers of foreshocks, and have high stress drops. We estimate stress drops for approximately 60 M > 4 earthquakes along the Blanco oceanic transform fault, a right-lateral fault separating the Juan de Fuca and Pacific plates offshore of Oregon. We find stress drops with a median of 4.4 ± 19.3 MPa and examine how they vary with earthquake moment. We calculate stress drops using a recently developed method based on inter-station phase coherence. We compare seismic records of co-located earthquakes at a range of stations. At each station, we apply an empirical Green's function (eGf) approach to remove phase path effects and isolate the relative apparent source time functions. The apparent source time functions at each earthquake should vary among stations at periods shorter than a P wave's travel time across the earthquake rupture area. Therefore we compute the rupture length of the larger earthquake by identifying the frequency at which the relative apparent source time functions start to vary among stations, leading to low inter-station phase coherence. We determine a stress drop from the rupture length and moment of the larger earthquake. Our initial stress drop estimates increase with increasing moment, suggesting that earthquakes on the Blanco fault are not self-similar. However, these stress drops may be biased by several factors, including depth phases, trace alignment, and source co-location. We find that the inclusion of depth phases (such as pP) in the analysis time window has a negligible effect on the phase coherence of our relative apparent source time functions. We find that trace alignment must be accurate to within 0.05 s to allow us to identify variations in the apparent source time functions at periods relevant for M > 4 earthquakes. We check that the alignments are accurate enough by comparing P wave arrival times across groups of earthquakes. Finally, we note that the eGf path effect removal will be unsuccessful if earthquakes are too far apart. We therefore calculate relative earthquake locations from our estimated differential P wave arrival times, then we examine how our stress drop estimates vary with inter-earthquake distance.
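Turning a rupture length estimate and a moment into a stress drop is commonly done with a circular-crack relation. The following standard form is a plausible stand-in for the paper's unstated choice, with k and β as model-dependent constants:

```latex
% Eshelby circular-crack relation: static stress drop from seismic
% moment M_0 and rupture radius r, with r inferred from the corner
% frequency f_c at which inter-station coherence is lost.
\[
  \Delta\sigma \;=\; \frac{7}{16}\,\frac{M_0}{r^{3}},
  \qquad
  r \;\approx\; \frac{k\,\beta}{f_c}
\]
% k is a rupture-model-dependent constant and \beta the shear-wave speed.
```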
Towards Certification of a Space System Application of Fault Detection and Isolation
NASA Technical Reports Server (NTRS)
Feather, Martin S.; Markosian, Lawrence Z.
2008-01-01
Advanced fault detection, isolation and recovery (FDIR) software is being investigated at NASA as a means to improve the reliability and availability of its space systems. Certification is a critical step in the acceptance of such software. Its attainment hinges on performing the necessary verification and validation to show that the software will fulfill its requirements in the intended setting. Presented herein is our ongoing work to plan for the certification of a pilot application of advanced FDIR software in a NASA setting. We describe the application, and the key challenges and opportunities it offers for certification.
Task Identification and Evaluation System (TIES)
1991-08-01
… Calibrate AN/AVM-11A HUD test sets. 127. Calibrate AN/AWM-55 ASCU test sets. 128. Calibrate 500 … tally punched tape readers. 129. Perform … AN/AVM-11A HUD test sets. … 132. Perform fault isolation of AN/AWM-55 ASCU test sets. 133. Perform fault isolation of 500 … tally punched tape readers. … AN/AVM-11A HUD test sets. 137. Perform self-tests of AN/AWM-55 ASCU test sets. G. MAINTAINING A-7D MANUAL TEST SETS. 138. Adjust SM-661/AS-388 air…
Advanced instrumentation concepts for environmental control subsystems
NASA Technical Reports Server (NTRS)
Yang, P. Y.; Schubert, F. H.; Gyorki, J. R.; Wynveen, R. A.
1978-01-01
Design, evaluation and demonstration of advanced instrumentation concepts for improving performance of manned spacecraft environmental control and life support systems were successfully completed. Concepts to aid maintenance following fault detection and isolation were defined. A computer-guided fault correction instruction program was developed and demonstrated in a packaged unit which also contains the operator/system interface.
Galileo spacecraft power distribution and autonomous fault recovery
NASA Technical Reports Server (NTRS)
Detwiler, R. C.
1982-01-01
There is a trend in current spacecraft design to achieve greater fault tolerance through the implementation of on-board software dedicated to detecting and isolating failures. A combination of hardware and software is utilized in the Galileo power system for autonomous fault recovery. Galileo is a dual-spun spacecraft designed to carry a number of scientific instruments into a series of orbits around the planet Jupiter. In addition to its self-contained scientific payload, it will also carry a probe system which will be separated from the spacecraft some 150 days prior to Jupiter encounter. The Galileo spacecraft is scheduled to be launched in 1985. Attention is given to the power system, the fault protection requirements, and the power fault recovery implementation.
A structural model decomposition framework for systems health management
NASA Astrophysics Data System (ADS)
Roychoudhury, I.; Daigle, M.; Bregon, A.; Pulido, B.
Systems health management (SHM) is an important set of technologies aimed at increasing system safety and reliability by detecting, isolating, and identifying faults; and predicting when the system reaches end of life (EOL), so that appropriate fault mitigation and recovery actions can be taken. Model-based SHM approaches typically make use of global, monolithic system models for online analysis, which results in a loss of scalability and efficiency for large-scale systems. Improvement in scalability and efficiency can be achieved by decomposing the system model into smaller local submodels and operating on these submodels instead. In this paper, the global system model is analyzed offline and structurally decomposed into local submodels. We define a common model decomposition framework for extracting submodels from the global model. This framework is then used to develop algorithms for solving model decomposition problems for the design of three separate SHM technologies, namely, estimation (which is useful for fault detection and identification), fault isolation, and EOL prediction. We solve these model decomposition problems using a three-tank system as a case study.
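A toy version of the submodel-extraction idea follows, assuming a simple causal structure rather than the paper's full structural framework; the names and the three-tank structure below are illustrative.

```python
# Minimal sketch (not the paper's algorithm): given a causal structure
# mapping each equation to its output variable and input variables,
# collect the smallest submodel needed to evaluate one residual equation.
def extract_submodel(structure, residual_eq, measured):
    """structure: eq -> (output variable or None, set of input variables).
    Variables in `measured` are cut points: their sensor values are used
    directly, so the submodel need not compute them."""
    var_of = {out: eq for eq, (out, _) in structure.items() if out is not None}
    needed, stack = set(), [residual_eq]
    while stack:
        eq = stack.pop()
        if eq in needed:
            continue
        needed.add(eq)
        for v in structure[eq][1]:
            if v not in measured and v in var_of:
                stack.append(var_of[v])   # recurse into the defining equation
    return needed

# eq -> (output variable, input variables) for a toy three-tank model
structure = {"e1": ("h1", {"u"}), "e2": ("h2", {"h1"}),
             "e3": ("h3", {"h2"}), "r3": (None, {"h3", "y3"})}
print(extract_submodel(structure, "r3", measured={"h2", "y3", "u"}))
```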
A Structural Model Decomposition Framework for Systems Health Management
NASA Technical Reports Server (NTRS)
Roychoudhury, Indranil; Daigle, Matthew J.; Bregon, Anibal; Pulido, Belamino
2013-01-01
Systems health management (SHM) is an important set of technologies aimed at increasing system safety and reliability by detecting, isolating, and identifying faults; and predicting when the system reaches end of life (EOL), so that appropriate fault mitigation and recovery actions can be taken. Model-based SHM approaches typically make use of global, monolithic system models for online analysis, which results in a loss of scalability and efficiency for large-scale systems. Improvement in scalability and efficiency can be achieved by decomposing the system model into smaller local submodels and operating on these submodels instead. In this paper, the global system model is analyzed offline and structurally decomposed into local submodels. We define a common model decomposition framework for extracting submodels from the global model. This framework is then used to develop algorithms for solving model decomposition problems for the design of three separate SHM technologies, namely, estimation (which is useful for fault detection and identification), fault isolation, and EOL prediction. We solve these model decomposition problems using a three-tank system as a case study.
Aircraft Engine Sensor/Actuator/Component Fault Diagnosis Using a Bank of Kalman Filters
NASA Technical Reports Server (NTRS)
Kobayashi, Takahisa; Simon, Donald L. (Technical Monitor)
2003-01-01
In this report, a fault detection and isolation (FDI) system which utilizes a bank of Kalman filters is developed for aircraft engine sensor and actuator FDI in conjunction with the detection of component faults. This FDI approach uses multiple Kalman filters, each of which is designed based on a specific hypothesis for detecting a specific sensor or actuator fault. In the event that a fault does occur, all filters except the one using the correct hypothesis will produce large estimation errors, from which a specific fault is isolated. In the meantime, a set of parameters that indicate engine component performance is estimated for the detection of abrupt degradation. The performance of the FDI system is evaluated against a nonlinear engine simulation for various engine faults at cruise operating conditions. In order to mimic the real engine environment, the nonlinear simulation is executed not only at the nominal, or healthy, condition but also at aged conditions. When the FDI system designed at the healthy condition is applied to an aged engine, the effectiveness of the FDI system is impacted by the mismatch in the engine health condition. Depending on its severity, this mismatch can cause the FDI system to generate incorrect diagnostic results, such as false alarms and missed detections. To partially recover the nominal performance, two approaches, which incorporate information regarding the engine's aging condition in the FDI system, will be discussed and evaluated. The results indicate that the proposed FDI system is promising for reliable diagnostics of aircraft engines.
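The hypothesis-selection step can be sketched as choosing the filter with the smallest time-averaged normalized innovation energy. This is a generic illustration of the bank-of-filters idea; the gate value and toy residuals are assumptions.

```python
import numpy as np

def isolate_fault(residuals, covariances, hypotheses, gate):
    """Pick the fault hypothesis whose filter best explains the data.

    residuals   : list of innovation sequences, one per Kalman filter
    covariances : matching innovation covariance matrices
    The filter designed with the correct hypothesis keeps small,
    consistent innovations; all others grow. The no-fault filter is
    assumed to be hypotheses[0]."""
    wssr = []
    for nu_seq, S in zip(residuals, covariances):
        w = sum(float(nu.T @ np.linalg.solve(S, nu)) for nu in nu_seq)
        wssr.append(w / len(nu_seq))
    best = int(np.argmin(wssr))
    if best == 0 or wssr[0] < gate:   # nominal filter still consistent
        return "no fault"
    return hypotheses[best]

rng = np.random.default_rng(3)
S = np.eye(2)
nominal = [rng.normal(0, 1, 2) + 2.0 for _ in range(50)]   # biased: fault present
f1 = [rng.normal(0, 1, 2) for _ in range(50)]              # hypothesis 1 fits
f2 = [rng.normal(0, 1, 2) + 1.5 for _ in range(50)]
print(isolate_fault([nominal, f1, f2], [S, S, S],
                    ["no fault", "sensor 1 bias", "sensor 2 bias"], gate=3.0))
```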
Kasagi, M; Fujita, K; Tsuji, M; Takewaki, I
2016-02-01
A base-isolated building may sometimes exhibit an undesirably large response to a long-duration, long-period earthquake ground motion, and a connected building system without base-isolation may show a large response to a near-fault (rather high-frequency) earthquake ground motion. To overcome both deficiencies, a new hybrid control system of base-isolation and building-connection is proposed and investigated. In this new hybrid building system, a base-isolated building is connected to a stiffer free wall with oil dampers. It has been demonstrated in preliminary research that the proposed hybrid system is effective both for near-fault (rather high-frequency) and long-duration, long-period earthquake ground motions and has sufficient redundancy and robustness for a broad range of earthquake ground motions. An algorithm for the automatic generation of this kind of smart base-isolation and building-connection hybrid system is presented in this paper. It is shown that, while the proposed algorithm does not work well in a building without the connecting-damper system, it works well in the proposed smart hybrid system with the connecting-damper system.
NASA Technical Reports Server (NTRS)
Jammu, V. B.; Danai, K.; Lewicki, D. G.
1998-01-01
This paper presents the experimental evaluation of the Structure-Based Connectionist Network (SBCN) fault diagnostic system introduced in the preceding article. For this, vibration data from two different helicopter gearboxes, the OH-58A and S-61, are used. A salient feature of SBCN is its reliance on knowledge of the gearbox structure and the type of features obtained from processed vibration signals as a substitute for training. To formulate this knowledge, approximate vibration transfer models are developed for the two gearboxes and utilized to derive the connection weights representing the influence of component faults on vibration features. The validity of the structural influences is evaluated by comparing them with those obtained from experimental RMS values. These influences are also evaluated by comparing them with the weights of a connectionist network trained through supervised learning. The results indicate general agreement between the modeled and experimentally obtained influences. The vibration data from the two gearboxes are also used to evaluate the performance of SBCN in fault diagnosis. The diagnostic results indicate that the SBCN is effective in detecting the presence of faults and isolating them within gearbox subsystems based on structural influences, but its performance is not as good in isolating faulty components, mainly due to the lack of appropriate vibration features.
Fault Tolerance Middleware for a Multi-Core System
NASA Technical Reports Server (NTRS)
Some, Raphael R.; Springer, Paul L.; Zima, Hans P.; James, Mark; Wagner, David A.
2012-01-01
Fault Tolerance Middleware (FTM) provides a framework to run on a dedicated core of a multi-core system and handles detection of single-event upsets (SEUs), and the responses to those SEUs, occurring in an application running on multiple cores of the processor. This software was written expressly for a multi-core system and can support different kinds of fault strategies, such as introspection, algorithm-based fault tolerance (ABFT), and triple modular redundancy (TMR). It focuses on providing fault tolerance for the application code, and represents the first step in a plan to eventually include fault tolerance in message passing and the FTM itself. In the multi-core system, the FTM resides on a single, dedicated core, separate from the cores used by the application. This is done in order to isolate the FTM from application faults and to allow it to swap out any application core for a substitute. The structure of the FTM consists of an interface to a fault tolerant strategy module, a responder module, a fault manager module, an error factory, and an error mapper that determines the severity of the error. In the present reference implementation, the only fault tolerant strategy implemented is introspection. The introspection code waits for an application node to send an error notification to it. It then uses the error factory to create an error object, and at this time, a severity level is assigned to the error. The introspection code uses its built-in knowledge base to generate a recommended response to the error. Responses might include ignoring the error, logging it, rolling back the application to a previously saved checkpoint, swapping in a new node to replace a bad one, or restarting the application. The original error and recommended response are passed to the top-level fault manager module, which invokes the response. The responder module also notifies the introspection module of the generated response. This provides additional information to the introspection module that it can use in generating its next response. For example, if the responder triggers an application rollback and errors are still occurring, the introspection module may decide to recommend an application restart.
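The control flow described above (error factory, severity assignment, knowledge-base recommendation, responder feedback) might be sketched as follows. Every name here is hypothetical rather than the middleware's actual API, and the knowledge base is reduced to a couple of rules.

```python
from dataclasses import dataclass
from enum import Enum

class Severity(Enum):
    LOW = 1
    MEDIUM = 2
    HIGH = 3

@dataclass
class ErrorObject:          # produced by the "error factory"
    node: int
    kind: str
    severity: Severity

def error_factory(node, kind):
    """Map a raw notification to an error object with an assigned severity."""
    table = {"parity": Severity.LOW, "hang": Severity.HIGH}
    return ErrorObject(node, kind, table.get(kind, Severity.MEDIUM))

class Introspection:
    """Recommends a response and escalates when a prior response failed."""
    def __init__(self):
        self.last_response = {}

    def recommend(self, err):
        if err.severity is Severity.LOW:
            return "log"
        if self.last_response.get(err.node) == "rollback":
            return "restart"            # rollback already tried -> escalate
        if err.severity is Severity.HIGH:
            return "swap_node"
        return "rollback"

    def notify(self, node, response):   # responder feeds its action back
        self.last_response[node] = response

class FaultManager:
    """Top-level module: invokes the response and notifies introspection."""
    def __init__(self):
        self.introspection = Introspection()

    def handle(self, node, kind):
        err = error_factory(node, kind)
        response = self.introspection.recommend(err)
        print(f"node {node}: {kind} ({err.severity.name}) -> {response}")
        self.introspection.notify(node, response)

ftm = FaultManager()
ftm.handle(3, "parity")   # -> log
ftm.handle(5, "ecc")      # -> rollback
ftm.handle(5, "ecc")      # error persists after rollback -> restart
```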
Model-Based Fault Diagnosis for Turboshaft Engines
NASA Technical Reports Server (NTRS)
Green, Michael D.; Duyar, Ahmet; Litt, Jonathan S.
1998-01-01
Tests are described which, when used to augment the existing periodic maintenance and pre-flight checks of T700 engines, can greatly improve the chances of uncovering a problem compared to the current practice. These test signals can be used to expose and differentiate between faults in various components by comparing the responses of particular engine variables to those expected. The responses can be processed on-line in a variety of ways which have been shown to reveal and identify faults. The combination of specific test signals and on-line processing methods provides an ad hoc approach to the isolation of faults which might not otherwise be detected during pre-flight checkout.
Software-implemented fault insertion: An FTMP example
NASA Technical Reports Server (NTRS)
Czeck, Edward W.; Siewiorek, Daniel P.; Segall, Zary Z.
1987-01-01
This report presents a model for fault insertion through software; describes its implementation on a fault-tolerant computer, FTMP; presents a summary of fault detection, identification, and reconfiguration data collected with software-implemented fault insertion; and compares the results to hardware fault insertion data. Experimental results show detection time to be a function of time of insertion and system workload. For the fault detection time, there is no correlation between software-inserted faults and hardware-inserted faults; this is because hardware-inserted faults must manifest as errors before detection, whereas software-inserted faults immediately exercise the error detection mechanisms. In summary, software-implemented fault insertion can be used as an evaluation technique for the fault-handling capabilities of a system in fault detection, identification, and recovery. Although software-inserted faults do not map directly to hardware-inserted faults, experiments show software-implemented fault insertion is capable of emulating hardware fault insertion, with greater ease and automation.
A distributed fault-detection and diagnosis system using on-line parameter estimation
NASA Technical Reports Server (NTRS)
Guo, T.-H.; Merrill, W.; Duyar, A.
1991-01-01
The development of a model-based fault-detection and diagnosis system (FDD) is reviewed. The system can be used as an integral part of an intelligent control system. It determines the faults of a system from comparison of the measurements of the system with a priori information represented by the model of the system. The method of modeling a complex system is described, and a description of diagnosis models which include process faults is presented. There are three distinct classes of fault modes covered by the system performance model equation: actuator faults, sensor faults, and performance degradation. A system equation for a complete model that describes all three classes of faults is given. The strategy for detecting the fault and estimating the fault parameters using a distributed on-line parameter identification scheme is presented. A two-step approach is proposed. The first step is composed of a group of hypothesis testing modules (HTMs) operating in parallel to test each class of faults. The second step is the fault diagnosis module, which checks all the information obtained from the HTM level, isolates the fault, and determines its magnitude. The proposed FDD system was demonstrated by applying it to detect actuator and sensor faults added to a simulation of the Space Shuttle Main Engine. The simulation results show that the proposed FDD system can adequately detect the faults and estimate their magnitudes.
NASA Technical Reports Server (NTRS)
Kobayashi, Takahisa; Simon, Donald L.
2002-01-01
As part of the NASA Aviation Safety Program, a unique model-based diagnostics method that employs neural networks and genetic algorithms for aircraft engine performance diagnostics has been developed and demonstrated at the NASA Glenn Research Center against a nonlinear gas turbine engine model. Neural networks are applied to estimate the internal health condition of the engine, and genetic algorithms are used for sensor fault detection, isolation, and quantification. This hybrid architecture combines the excellent nonlinear estimation capabilities of neural networks with the capability to rank the likelihood of various faults given a specific sensor suite signature. The method requires a significantly smaller data training set than a neural network approach alone does, and it performs the combined engine health monitoring objectives of performance diagnostics and sensor fault detection and isolation in the presence of nominal and degraded engine health conditions.
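The genetic-algorithm half of such a hybrid scheme might look like the toy below: candidate sensor-fault hypotheses (which sensor, what bias) are evolved to best explain the residual between measurements and a model estimate, which here stands in for the neural network's output. The three-sensor "engine," population size, and mutation rates are invented for illustration and are not the NASA Glenn implementation.

```python
import numpy as np

rng = np.random.default_rng(4)
truth = np.array([5.0, 3.0, 7.0])                 # "healthy" model estimate
z = truth + rng.normal(0, 0.05, 3)
z[2] += 1.2                                       # sensor 2 carries a bias fault

def fitness(h):
    """Score a hypothesis h = (sensor index, bias) by residual fit."""
    sensor, bias = int(h[0]) % 3, h[1]
    pred = truth.copy()
    pred[sensor] += bias
    return -np.sum((z - pred) ** 2)               # higher is better

# Initial population: random sensor indices and bias guesses.
pop = np.column_stack([rng.integers(0, 3, 40), rng.normal(0, 1, 40)]).astype(float)
for _ in range(60):
    scores = np.array([fitness(h) for h in pop])
    parents = pop[np.argsort(scores)[-20:]]                # selection
    kids = parents[rng.integers(0, 20, 20)].copy()
    kids[:, 1] += rng.normal(0, 0.1, 20)                   # mutate bias
    flip = rng.random(20) < 0.1                            # mutate sensor index
    kids[flip, 0] = rng.integers(0, 3, flip.sum())
    pop = np.vstack([parents, kids])

best = pop[np.argmax([fitness(h) for h in pop])]
print(f"isolated sensor {int(best[0])} with bias {best[1]:.2f}")  # sensor 2, ~1.2
```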
NASA Technical Reports Server (NTRS)
2001-01-01
Qualtech Systems, Inc. developed a complete software system with capabilities of multisignal modeling, diagnostic analysis, run-time diagnostic operations, and intelligent interactive reasoners. Commercially available as the TEAMS (Testability Engineering and Maintenance System) tool set, the software can be used to reveal unanticipated system failures. The TEAMS software package is broken down into four companion tools: TEAMS-RT, TEAMATE, TEAMS-KB, and TEAMS-RDS. TEAMS-RT identifies good, bad, and suspect components in the system in real time. It reports system health results from onboard tests, and detects and isolates failures within the system, allowing for rapid fault isolation. TEAMATE takes over where TEAMS-RT leaves off by intelligently guiding the maintenance technician through the troubleshooting procedure, repair actions, and operational checkout. TEAMS-KB serves as a model management and collection tool. TEAMS-RDS (TEAMS-Remote Diagnostic Server) has the ability to continuously assess a system and isolate any failure in that system or its components, in real time. RDS incorporates TEAMS-RT, TEAMATE, and TEAMS-KB in a large-scale server architecture capable of providing advanced diagnostic and maintenance functions over a network, such as the Internet, with a web browser user interface.
Extended Testability Analysis Tool
NASA Technical Reports Server (NTRS)
Melcher, Kevin; Maul, William A.; Fulton, Christopher
2012-01-01
The Extended Testability Analysis (ETA) Tool is a software application that supports fault management (FM) by performing testability analyses on the fault propagation model of a given system. Fault management includes the prevention of faults through robust design margins and quality assurance methods, or the mitigation of system failures. Fault management requires an understanding of the system design and operation, potential failure mechanisms within the system, and the propagation of those potential failures through the system. The purpose of the ETA Tool software is to process the testability analysis results from a commercial software program called TEAMS Designer in order to provide a detailed set of diagnostic assessment reports. The ETA Tool is a command-line process with several user-selectable report output options. The ETA Tool also extends the COTS testability analysis, enabling variation studies of sensor sensitivity impacts on system diagnostics and component isolation from a single set of testability output files. The following analysis reports are available to the user: (1) the Detectability Report provides a breakdown of how each tested failure mode was detected, (2) the Test Utilization Report identifies all the failure modes that each test detects, (3) the Failure Mode Isolation Report demonstrates the system's ability to discriminate between failure modes, (4) the Component Isolation Report demonstrates the system's ability to discriminate between failure modes relative to the components containing the failure modes, (5) the Sensor Sensitivity Analysis Report shows the diagnostic impact due to loss of sensor information, and (6) the Effect Mapping Report identifies failure modes that result in specified system-level effects.
Using Seismic Interferometry to Investigate Seismic Swarms
NASA Astrophysics Data System (ADS)
Matzel, E.; Morency, C.; Templeton, D. C.
2017-12-01
Seismicity provides a direct means of measuring the physical characteristics of active tectonic features such as fault zones. Hundreds of small earthquakes often occur along a fault during a seismic swarm. This seismicity helps define the tectonically active region. By processing it with novel geophysical techniques, we can isolate the energy sensitive to the fault itself. Here we focus on two methods of seismic interferometry, ambient noise correlation (ANC) and the virtual seismometer method (VSM). ANC is based on the observation that the Earth's background noise includes coherent energy, which can be recovered by observing over long time periods and allowing the incoherent energy to cancel out. The cross correlation of ambient noise between a pair of stations results in a waveform that is identical to the seismogram that would result if an impulsive source located at one of the stations were recorded at the other: the Green function (GF). The calculation of the GF is often stable after a few weeks of continuous data correlation; any perturbations to the GF after that point are directly related to changes in the subsurface and can be used for 4D monitoring. VSM is a style of seismic interferometry that provides fast, precise, high-frequency estimates of the GF between earthquakes. VSM illuminates the subsurface precisely where the pressures are changing and has the potential to image the evolution of seismicity over time, including changes in the style of faulting. With hundreds of earthquakes, we can calculate thousands of waveforms. At the same time, VSM collapses the computational domain, often by 2-3 orders of magnitude. This allows us to do high-frequency 3D modeling in the fault region. Using data from a swarm of earthquakes near the Salton Sea, we demonstrate the power of these techniques, illustrating our ability to scale from the far field, where sources are well separated, to the near field, where their locations fall within each other's uncertainty ellipse. We use ANC to create a 3D model of the crust in the region. VSM provides better illumination of the active fault zone. Measures of amplitude and shape are used to refine source properties and locations in space, and waveform modeling allows us to estimate near-fault seismic structure.
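A toy demonstration of the ANC principle, assuming purely synthetic records: windows of noise sharing a coherent component are one-bit normalized, cross-correlated, and stacked, and the stack peaks at the inter-station delay that an empirical Green function would carry. Window counts, lengths, and noise levels are illustrative choices.

```python
import numpy as np

rng = np.random.default_rng(1)
nwin, wlen, lag = 20, 2048, 25   # windows, window length, true delay (samples)

stack = np.zeros(2 * wlen - 1)
for _ in range(nwin):
    src = rng.standard_normal(wlen + lag)        # common coherent noise field
    a = src[:wlen] + 0.5 * rng.standard_normal(wlen)           # station A
    b = src[lag:lag + wlen] + 0.5 * rng.standard_normal(wlen)  # station B
    # one-bit normalization suppresses earthquakes and instrument glitches
    a, b = np.sign(a), np.sign(b)
    stack += np.correlate(a, b, mode="full")     # correlate and stack

lags = np.arange(-(wlen - 1), wlen)
print("recovered delay:", lags[np.argmax(stack)], "samples")  # ~ +25 (a lags b)
```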
A Generic Modeling Process to Support Functional Fault Model Development
NASA Technical Reports Server (NTRS)
Maul, William A.; Hemminger, Joseph A.; Oostdyk, Rebecca; Bis, Rachael A.
2016-01-01
Functional fault models (FFMs) are qualitative representations of a system's failure space that are used to provide a diagnostic of the modeled system. An FFM simulates the failure effect propagation paths within a system between failure modes and observation points. These models contain a significant amount of information about the system, including the design, operation, and off-nominal behavior. The development and verification of the models can be costly in both time and resources. In addition, models depicting similar components can be distinct, both in appearance and function, when created individually, because there are numerous ways of representing the failure space within each component. Generic application of FFMs has the advantages of software code reuse: reduction of time and resources in both development and verification, and a standard set of component models from which future system models can be generated with common appearance and diagnostic performance. This paper outlines the motivation to develop a generic modeling process for FFMs at the component level and the effort to implement that process through modeling conventions and a software tool. The implementation of this generic modeling process within a fault isolation demonstration for NASA's Advanced Ground System Maintenance (AGSM) Integrated Health Management (IHM) project is presented and the impact discussed.
Detection of arcing location on photovoltaic systems using filters
Johnson, Jay
2018-02-20
The present invention relates to photovoltaic systems capable of identifying the location of an arc-fault. In particular, such systems include a unique filter connected to each photovoltaic (PV) string, thereby providing a unique filtered noise profile associated with a particular PV string. Also described herein are methods for identifying and isolating such arc-faults.
PDSS/IMC requirements and functional specifications
NASA Technical Reports Server (NTRS)
1983-01-01
The system (software and hardware) requirements for the Payload Development Support System (PDSS)/Image Motion Compensator (IMC) are provided. The PDSS/IMC system provides the capability for performing Image Motion Compensator Electronics (IMCE) flight software test, checkout, and verification and provides the capability for monitoring the IMC flight computer system during qualification testing for fault detection and fault isolation.
A High Power Solid State Circuit Breaker for Military Hybrid Electric Vehicle Applications
2012-08-01
... the SSCB to isolate a fault, breaker opening is latched and can be reset to reclose the breaker via remote logic input. SSCB state and health ... rated load current (125 A). Figure 10 shows that after the SSCB detects a fault and opens, it can also be repeatedly reclosed remotely to attempt to ...
NASA Technical Reports Server (NTRS)
Glass, B. J.; Hack, E. C.
1990-01-01
A knowledge-based control system for real-time control and fault detection, isolation and recovery (FDIR) of a prototype two-phase Space Station Freedom external thermal control system (TCS) is discussed in this paper. The Thermal Expert System (TEXSYS) has been demonstrated in recent tests to be capable of both fault anticipation and detection and real-time control of the thermal bus. Performance requirements were achieved by using a symbolic control approach, layering model-based expert system software on a conventional numerical data acquisition and control system. The model-based capabilities of TEXSYS were shown to be advantageous during software development and testing. One representative example is given from on-line TCS tests of TEXSYS. The integration and testing of TEXSYS with a live TCS testbed provides some insight on the use of formal software design, development and documentation methodologies to qualify knowledge-based systems for on-line or flight applications.
Distributed reconfigurable control strategies for switching topology networked multi-agent systems.
Gallehdari, Z; Meskin, N; Khorasani, K
2017-11-01
In this paper, distributed control reconfiguration strategies for directed switching topology networked multi-agent systems are developed and investigated. The proposed control strategies are invoked when the agents are subject to actuator faults and while the available fault detection and isolation (FDI) modules provide inaccurate and unreliable information on the estimation of fault severities. Our proposed strategies ensure that the agents reach a consensus while an upper bound on the team performance index is satisfied. Three types of actuator faults are considered, namely: the loss of effectiveness fault, the outage fault, and the stuck fault. By utilizing quadratic and convex hull (composite) Lyapunov functions, two cooperative and distributed recovery strategies are designed and provided to select the gains of the proposed control laws such that the team objectives are guaranteed. Our proposed reconfigurable control laws are applied to a team of autonomous underwater vehicles (AUVs) under directed switching topologies and subject to simultaneous actuator faults. Simulation results demonstrate the effectiveness of our proposed distributed reconfiguration control laws in compensating for the effects of sudden actuator faults in the presence of fault diagnosis module uncertainties and unreliability.
Tracer Transport Along a Vertical Fault Located in Welded Tuffs
NASA Astrophysics Data System (ADS)
Salve, R.; Liu, H.; Hu, Q.
2002-12-01
A near-vertical fault that intercepts the fractured welded tuff formation in the underground Exploratory Studies Facility (ESF) at Yucca Mountain, Nevada, has provided a unique opportunity to evaluate important hydrological parameters associated with faults (e.g., flow velocity, matrix diffusion, fault-fracture-matrix interactions). Alcove 8, which intersects the fault and is located in the cross drift of the ESF, has been excavated for liquid releases through this fault and a network of fractures. Located 25 m below Alcove 8 in the main drift of the ESF, Niche 3, which also intercepts the fault, serves as the site for monitoring the wetting front and for collecting seepage following liquid releases in Alcove 8. To investigate the importance of matrix diffusion and the extent of the area subject to fracture-matrix interactions, we released a mix of conservative tracers (pentafluorobenzoic acid [PFBA] and lithium bromide [LiBr]) along the fault. The ceiling of Niche 3 was blanketed with an array of trays to capture seepage, and seepage rates were continuously monitored by a water collection system connected to the trays. Additionally, a water sampling device, the passive-discrete water sampler (PDWS), was connected to three of the collection trays in Niche 3 into which water was seeping. The PDWS, designed to isolate continuous seepage from each tray into discrete samples for chemical analysis, remained connected to the trays over a period of three months. During this time, all water that seeped into the three trays was captured sequentially into sampling bottles and analyzed for concentrations of PFBA and LiBr. Water released along the fault initially traveled the 25 m vertical distance over a period of 36 days (at a velocity of ~0.7 m/day). The seepage recovered in Niche 3 was less than 10% of the injected water, with significant spatial and temporal fluctuations in seepage rates. Along a fast flow path, the benzoic tracer (PFBA) and LiBr were first detected ~12 days after they were released into the fault. Along slower flow paths the tracers appeared ~2 weeks later, with PFBA preceding the LiBr. The differing travel times of the two conservative tracers suggest the impact of matrix diffusion in the transport process. This work was supported by the Director, Office of Civilian Radioactive Waste Management, U.S. Department of Energy, through Memorandum Purchase Order EA9013MC5X between Bechtel SAIC Company, LLC, and the Ernest Orlando Lawrence Berkeley National Laboratory (Berkeley Lab). The support is provided to Berkeley Lab through the U.S. Department of Energy Contract No. DE-AC03-76SF00098.
Communications and tracking expert systems study
NASA Technical Reports Server (NTRS)
Leibfried, T. F.; Feagin, Terry; Overland, David
1987-01-01
The original objectives of the study consisted of five broad areas of investigation: criteria and issues for explanation of communication and tracking system anomaly detection, isolation, and recovery; data storage simplification issues for fault detection expert systems; data selection procedures for decision tree pruning and optimization to enhance the abstraction of pertinent information for clear explanation; criteria for establishing levels of explanation suited to needs; and analysis of expert system interaction and modularization. Progress was made in all areas, but to a lesser extent in the criteria for establishing levels of explanation suited to needs. Among the types of expert systems studied were those related to anomaly or fault detection, isolation, and recovery.
14 CFR 25.1707 - System separation: EWIS.
Code of Federal Regulations, 2013 CFR
2013-01-01
... installed to ensure adequate physical separation and electrical isolation so that damage to circuits... ensure adequate physical separation and electrical isolation so that a fault in any one airplane power... minimize potential for abrasion/chafing, vibration damage, and other types of mechanical damage. ...
14 CFR 25.1707 - System separation: EWIS.
Code of Federal Regulations, 2014 CFR
2014-01-01
... installed to ensure adequate physical separation and electrical isolation so that damage to circuits... ensure adequate physical separation and electrical isolation so that a fault in any one airplane power... minimize potential for abrasion/chafing, vibration damage, and other types of mechanical damage. ...
14 CFR 25.1707 - System separation: EWIS.
Code of Federal Regulations, 2010 CFR
2010-01-01
... installed to ensure adequate physical separation and electrical isolation so that damage to circuits... ensure adequate physical separation and electrical isolation so that a fault in any one airplane power... minimize potential for abrasion/chafing, vibration damage, and other types of mechanical damage. ...
14 CFR 25.1707 - System separation: EWIS.
Code of Federal Regulations, 2012 CFR
2012-01-01
... installed to ensure adequate physical separation and electrical isolation so that damage to circuits... ensure adequate physical separation and electrical isolation so that a fault in any one airplane power... minimize potential for abrasion/chafing, vibration damage, and other types of mechanical damage. ...
14 CFR 25.1707 - System separation: EWIS.
Code of Federal Regulations, 2011 CFR
2011-01-01
... installed to ensure adequate physical separation and electrical isolation so that damage to circuits... ensure adequate physical separation and electrical isolation so that a fault in any one airplane power... minimize potential for abrasion/chafing, vibration damage, and other types of mechanical damage. ...
NASA IVHM Technology Experiment for X-vehicles (NITEX)
NASA Technical Reports Server (NTRS)
Hayden, Sandra; Bajwa, Anupa
2001-01-01
The purpose of the NASA IVHM Technology Experiment for X-vehicles (NITEX) is to advance the development of selected IVHM technologies in a flight environment and to demonstrate the potential for reusable launch vehicle ground processing savings. The technologies to be developed and demonstrated include system-level and detailed diagnostics for real-time fault detection and isolation, prognostics for fault prediction, automated maintenance planning based on diagnostic and prognostic results, and a microelectronics hardware platform. Complete flight IVHM consists of advanced sensors, distributed data acquisition, data processing that includes model-based diagnostics, prognostics and vehicle autonomy for control or suggested action, and advanced data storage. Complete ground IVHM consists of evolved control room architectures and advanced applications including automated maintenance planning and automated ground support equipment. This experiment will advance the development of a subset of complete IVHM.
Intelligent Engine Systems Work Element 1.3: Sub System Health Management
NASA Technical Reports Server (NTRS)
Ashby, Malcolm; Simpson, Jeffrey; Singh, Anant; Ferguson, Emily; Frontera, Mark
2005-01-01
The objectives of this program were to develop health monitoring systems and physics-based fault detection models for engine sub-systems including the start, lubrication, and fuel systems. These models will ultimately be used to provide more effective sub-system fault identification and isolation to reduce engine maintenance costs and engine down-time. Additionally, the bearing sub-system health is addressed in this program through identification of sensing requirements, a review of available technologies, and a demonstration of a conceptual monitoring system for a differential roller bearing. This report is divided into four sections, one for each of the subtasks. The start system subtask is documented in section 2.0, the oil system is covered in section 3.0, the bearing sub-system in section 4.0, and the fuel system is presented in section 5.0.
NASA Technical Reports Server (NTRS)
Johnson, Stephen B.; Ghoshal, Sudipto; Haste, Deepak; Moore, Craig
2017-01-01
This paper describes the theory and considerations in the application of metrics to measure the effectiveness of fault management. Fault management refers here to the operational aspect of system health management, and as such is considered as a meta-control loop that operates to preserve or maximize the system's ability to achieve its goals in the face of current or prospective failure. As a suite of control loops, the metrics to estimate and measure the effectiveness of fault management are similar to those of classical control loops in being divided into two major classes: state estimation, and state control. State estimation metrics can be classified into lower-level subdivisions for detection coverage, detection effectiveness, fault isolation and fault identification (diagnostics), and failure prognosis. State control metrics can be classified into response determination effectiveness and response effectiveness. These metrics are applied to each and every fault management control loop in the system, for each failure to which they apply, and probabilistically summed to determine the effectiveness of these fault management control loops to preserve the relevant system goals that they are intended to protect.
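A small worked example of the probabilistic summation described above, with invented numbers: each failure's state-estimation metrics (detection, isolation) and state-control metric (response effectiveness) multiply into a per-failure effectiveness, weighted by the failure's relative likelihood.

```python
# (probability of failure, P(detect), P(isolate | detect), P(effective response))
failures = [
    (0.50, 0.99, 0.95, 0.90),
    (0.30, 0.90, 0.80, 0.85),
    (0.20, 0.60, 0.50, 0.70),
]

# Probability-weighted product of the per-loop metrics for each failure mode.
overall = sum(p * pd * pi * pr for p, pd, pi, pr in failures)
overall /= sum(p for p, _, _, _ in failures)
print(f"overall fault-management effectiveness ~ {overall:.2f}")   # ~0.65
```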
Fault latency in the memory - An experimental study on VAX 11/780
NASA Technical Reports Server (NTRS)
Chillarege, Ram; Iyer, Ravishankar K.
1986-01-01
Fault latency is the time between the physical occurrence of a fault and its corruption of data, causing an error. The measure of this time is difficult to obtain because the time of occurrence of a fault and the exact moment of generation of an error are not known. This paper describes an experiment to accurately study the fault latency in the memory subsystem. The experiment employs real memory data from a VAX 11/780 at the University of Illinois. Fault latency distributions are generated for s-a-0 and s-a-1 permanent fault models. Results show that the mean fault latency of a s-a-0 fault is nearly 5 times that of the s-a-1 fault. Large variations in fault latency are found for different regions in memory. An analysis of a variance model to quantify the relative influence of various workload measures on the evaluated latency is also given.
Fault recovery for real-time, multi-tasking computer system
NASA Technical Reports Server (NTRS)
Hess, Richard (Inventor); Kelly, Gerald B. (Inventor); Rogers, Randy (Inventor); Stange, Kent A. (Inventor)
2011-01-01
System and methods for providing a recoverable real time multi-tasking computer system are disclosed. In one embodiment, a system comprises a real time computing environment, wherein the real time computing environment is adapted to execute one or more applications and wherein each application is time and space partitioned. The system further comprises a fault detection system adapted to detect one or more faults affecting the real time computing environment and a fault recovery system, wherein upon the detection of a fault the fault recovery system is adapted to restore a backup set of state variables.
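A minimal sketch of the checkpoint-and-restore behavior described in this abstract, in ordinary application code rather than the patented avionics implementation; the class and state variables are hypothetical.

```python
import copy

class RecoverableTask:
    """Holds a backup set of state variables for rollback on a detected fault."""
    def __init__(self, state):
        self.state = state
        self._backup = copy.deepcopy(state)

    def checkpoint(self):        # called at a known-good point in the frame
        self._backup = copy.deepcopy(self.state)

    def recover(self):           # invoked by the fault recovery system
        self.state = copy.deepcopy(self._backup)

task = RecoverableTask({"mode": "cruise", "integrator": 0.0})
task.state["integrator"] = 12.5
task.checkpoint()
task.state["integrator"] = float("nan")   # a fault corrupts a state variable
task.recover()
print(task.state)                # {'mode': 'cruise', 'integrator': 12.5}
```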
NASA Technical Reports Server (NTRS)
Truong, Long V.; Walters, Jerry L.; Roth, Mary Ellen; Quinn, Todd M.; Krawczonek, Walter M.
1990-01-01
The goal of the Autonomous Power System (APS) program is to develop and apply intelligent problem solving and control to the Space Station Freedom Electrical Power System (SSF/EPS) testbed being developed and demonstrated at NASA Lewis Research Center. The objectives of the program are to establish artificial intelligence technology paths, to craft knowledge-based tools with advanced human-operator interfaces for power systems, and to interface and integrate knowledge-based systems with conventional controllers. The Autonomous Power EXpert (APEX) portion of the APS program will integrate a knowledge-based fault diagnostic system and a power resource planner-scheduler. Then APEX will interface on-line with the SSF/EPS testbed and its Power Management Controller (PMC). The key tasks include establishing knowledge bases for system diagnostics, fault detection and isolation analysis, on-line information accessing through PMC, enhanced data management, and multiple-level, object-oriented operator displays. The first prototype of the diagnostic expert system for fault detection and isolation has been developed. The knowledge bases and the rule-based model that were developed for the Power Distribution Control Unit subsystem of the SSF/EPS testbed are described. A corresponding troubleshooting technique is also described.
System for detecting and limiting electrical ground faults within electrical devices
Gaubatz, Donald C.
1990-01-01
An electrical ground fault detection and limitation system for employment with a nuclear reactor utilizing a liquid metal coolant. Elongate electromagnetic pumps submerged within the liquid metal coolant and electrical support equipment experiencing an insulation breakdown occasion the development of electrical ground fault current. Without some form of detection and control, these currents may build to damaging power levels and expose the pump drive components to liquid metal coolant such as sodium, with resultant undesirable secondary effects. Such electrical ground fault currents are detected and controlled through the employment of an isolated power input to the pumps and the use of a ground fault control conductor providing a direct return path from the affected components to the power source. By incorporating a resistance arrangement with the ground fault control conductor, the amount of fault current permitted to flow may be regulated to the extent that the reactor may remain in operation until maintenance may be performed, notwithstanding the existence of the fault. Monitors such as synchronous demodulators may be employed to identify and evaluate fault currents for each phase of the polyphase power and control input to the submerged pump and associated support equipment.
A real-time simulator of a turbofan engine
NASA Technical Reports Server (NTRS)
Litt, Jonathan S.; Delaat, John C.; Merrill, Walter C.
1989-01-01
A real-time digital simulator of a Pratt and Whitney F100 engine has been developed for real-time code verification and for actuator diagnosis during full-scale engine testing. This self-contained unit can operate in an open-loop stand-alone mode or as part of a closed-loop control system. It can also be used for control system design and development. Tests conducted in conjunction with the NASA Advanced Detection, Isolation, and Accommodation program show that the simulator is a valuable tool for real-time code verification and as a real-time actuator simulator for actuator fault diagnosis. Although currently a small-perturbation model, advances in microprocessor hardware should allow the simulator to evolve into a real-time, full-envelope, full engine simulation.
1999-06-18
... and 1.54 microns and to compute the spectral extinction coefficient. 3. Near-IR (1.54 um) laser rangefinders measure the time-of-flight of a short ... quantitative understanding ... Research (long term): encourage research in adaptive systems (evolutionary programming, genetic algorithms, neural nets) ... measures, such as false alarm rate, are not measurable in field applications. Other measures, such as Incremental Fault Resolution and Operational Isolation ...
NASA Astrophysics Data System (ADS)
Arai, H.; Ando, R.; Aoki, Y.
2017-12-01
The 2016 Kumamoto earthquake sequence hit SW Japan from April 14th to 16th; the sequence includes two M6-class foreshocks and the main shock (Mw 7.0). Importantly, the detailed surface displacement caused solely by the two foreshocks could be captured by a SAR observation isolated from the mainshock deformation. The foreshocks ruptured the previously mapped Hinagu fault, and their hypocentral locations and the aftershock distribution indicate the involvement of two different subparallel faults. Therefore we assumed that the 1st and the 2nd foreshocks respectively ruptured each of the subparallel faults (faults A and B). One of the interesting points of this earthquake is that the two major foreshocks had a temporal gap of 2.5 hours even though faults A and B are quite close to each other. This suggests that the stress perturbation due to the 1st foreshock is not large enough to trigger the 2nd one right away but is large enough to bring about the following earthquake after a delay time. We aim to reproduce the foreshock sequence, such as the rupture jumping over the subparallel faults, by using dynamic rupture simulations. We employed a spatiotemporal-boundary integral equation method accelerated by the Fast Domain Partitioning Method (Ando, 2016, GJI), since this method allows us to construct a complex fault geometry in 3D media. Our model has two faults and a free ground surface. We conducted rupture simulations with various sets of parameters to identify the optimal condition describing the observation. Our simulation results are roughly categorized into 3 cases with regard to the criticality of the rupture jumping. In case 1 (supercritical), faults A and B ruptured consecutively without any temporal gap. In case 2 (nearly critical), the rupture on fault B started with a temporal gap after fault A finished rupturing, which is what we expected as a reproduction. In case 3 (subcritical), only fault A ruptured and its rupture did not transfer to fault B. We succeeded in reproducing rupture jumping over two faults with a temporal gap due to the nucleation by taking account of a velocity-strengthening (direct) effect. With a detailed analysis of case 2, we can constrain ranges of parameters strictly, and this gives us deeper insights into the physics underlying the delayed foreshock activity.
Deformation Record Associated To The Valdoviño Fault (Variscan Orogeny, NW Iberia)
NASA Astrophysics Data System (ADS)
Llana-Funez, S.; Fernández, F. J.
2013-12-01
The Valdoviño Fault is a subvertical left-lateral strike-slip fault, exceeding a hundred kilometers in length, that formed in the late stages of the Variscan orogeny in NW Iberia. The fault cuts through the pile of allochthonous thrust sheets that form the suture zone of the orogen and constitutes the eastern boundary of one of them, the Ordenes complex. In the section along the Atlantic coast, the fault core is about 100 m wide, with foliated rocks showing a subvertical attitude. It is formed by several rock types; beginning from the west these are: coarse-grained foliated granitoids, tectonic breccia with fragments of high-grade mafic rocks, fine-grained gneiss, serpentinites, fine-grained amphibolites, and two-mica granites. The fault zone samples some of the lithologies found toward the base of the Ordenes complex, emplaced and deformed prior to the nucleation of the Valdoviño Fault. Intense deformation produces extreme grain comminution, particularly in felsic and basic rocks. Planolinear fabrics are predominant, with a subhorizontal lineation. The intensity of the deformation and the reduction in thickness of the various lithotypes are interpreted as indicative of the amount of strain accumulated during the fault's tectonic history. Two types of tectonites stand out along the trace of the fault: the tectonic breccias at the coastal section (nucleated in basic rocks and in serpentinites) and the SC fabrics in syntectonic granitoids. Both evidence different deformation conditions during the activity of the fault. The band of tectonic breccias developed in basic rocks is a few meters thick and has a number of mm-thick ultracataclasites cutting sharply across the breccia. The ultracataclasites show one straight side that cuts through the various components of the breccias (either earlier fault rocks or fragments of metabasites). The slipping surfaces all have a subvertical attitude consistent with the current orientation of the major fault. Earlier ultracataclastic bands are fractured and deformed prior to being overprinted by late ultracataclastic bands, indicating that the fracturing process that produces the extreme grain comminution was recurrent and repeated in time. These slipping surfaces show no clear indication of the sense of shear during fast movements, although more distributed cataclastic deformation between single slip events seems compatible in places with left-lateral movement. The Valdoviño Fault is intruded by two types of granitoids: granodiorites and two-mica granites. Courrieux (1984) showed the distribution in map view of sinistral SC fabrics, predominantly in the granitoid to the east of the Valdoviño Fault. Toward the core of the fault zone, strain intensity increases to the point of obliterating the S fabric, developing thicker shear zones with extreme grain size reduction. Isolated mica fish and porphyroclasts of feldspar clearly indicate a left-lateral sense of shear. Work in progress aims to relate the timing of the slip events in the basic breccias to the development of ultramylonitic SC fabrics in the granitoids. Ultimately we aim to establish the nature and conditions of tectonic activity along the Valdoviño Fault.
Fault Detection and Isolation for Hydraulic Control
NASA Technical Reports Server (NTRS)
1987-01-01
Pressure sensors and isolation valves act to shut down a defective servochannel. The redundant hydraulic system indirectly senses a failure in any of its electrical control channels and mechanically isolates the hydraulic channel controlled by the faulty electrical channel so that it cannot participate in operating the system. With this failure-detection and isolation technique, the system can sustain two failed channels and still function at full performance levels. The scheme is useful on aircraft or other systems with hydraulic servovalves where failure cannot be tolerated.
Fault evolution in the Potiguar rift termination, equatorial margin of Brazil
NASA Astrophysics Data System (ADS)
de Castro, D. L.; Bezerra, F. H. R.
2015-02-01
The transform shearing between the South American and African plates in the Cretaceous generated a series of sedimentary basins on both plate margins. In this study, we use gravity, aeromagnetic, and resistivity surveys to identify the architecture of fault systems and to analyze the evolution of the eastern equatorial margin of Brazil. Our study area is the southern onshore termination of the Potiguar rift, which is an aborted NE-trending rift arm developed during the breakup of Pangea. The basin is located along the NNE margin of South America that faces the main transform zone that separates the North and the South Atlantic. The Potiguar rift is a Neocomian structure located at the intersection of the equatorial and western South Atlantic and is composed of a series of NE-trending horsts and grabens. This study reveals new grabens in the Potiguar rift and indicates that stretching in the southern rift termination created a WNW-trending, 10 km wide, and ~40 km long right-lateral strike-slip fault zone. This zone encompasses at least eight depocenters, which are bounded by a left-stepping, en echelon system of NW-SE- to NS-striking normal faults. These depocenters form grabens up to 1200 m deep with a rhomb-shaped geometry, which are filled with rift sedimentary units and capped by postrift sedimentary sequences. The evolution of the rift termination is consistent with the right-lateral shearing of the equatorial margin in the Cretaceous and occurs not only at the rift termination but also as isolated structures away from the main rift. This study indicates that the strike-slip shearing between two plates propagated to the interior of one of these plates, where faults with orientation, kinematics, geometry, and timing similar to those of the major transform are observed. These faults also influence rift geometry.
NASA Astrophysics Data System (ADS)
Miller, N. C.; Lizarralde, D.; McGuire, J.; Hole, J. A.
2006-12-01
We consider methodologies, including survey design and processing algorithms, best suited to imaging vertical reflectors in oceanic crust using marine seismic techniques. The ability to image the reflectivity structure of transform faults as a function of depth, for example, may provide new insights into what controls seismicity along these plate boundaries. Turning-wave migration has been used with success to image vertical faults on land. With synthetic datasets we find that this approach has unique difficulties in the deep ocean. The fault-reflected crustal refraction phase (Pg-r) typically used in pre-stack migrations is difficult to isolate in marine seismic data. An "imagable" Pg-r is only observed in a time window between the first arrivals and arrivals from the sediments and the thick, slow water layer at offsets beyond ~25 km. Ocean-bottom seismometers (OBSs), as opposed to a long surface streamer, must be used to acquire data suitable for crustal-scale vertical imaging. The critical distance for Moho reflections (PmP) in oceanic crust is also ~25 km, thus Pg-r and PmP-r are observed with very little separation, and the fault-reflected mantle refraction (Pn-r) arrives prior to Pg-r as the observation window opens with increased OBS-to-fault distance. This situation presents difficulties for "first-arrival" based Kirchhoff migration approaches and suggests that wave-equation approaches, which in theory can image all three phases simultaneously, may be more suitable for vertical imaging in oceanic crust. We will present a comparison of these approaches as applied to a synthetic dataset generated from realistic, stochastic velocity models. We will assess their suitability, the migration artifacts unique to the deep ocean, and the ideal instrument layout for such an experiment.
NASA Technical Reports Server (NTRS)
Lee, Harry
1994-01-01
A highly accurate transmission line fault locator based on the traveling-wave principle was developed and successfully operated within B.C. Hydro. A transmission line fault produces a fast-risetime traveling wave at the fault point which propagates along the transmission line. This fault locator system consists of traveling wave detectors located at key substations which detect and time tag the leading edge of the fault-generated traveling wave as it passes through. A master station gathers the time-tagged information from the remote detectors and determines the location of the fault. Precise time is a key element to the success of this system. This fault locator system derives its timing from the Global Positioning System (GPS) satellites. System tests confirmed the accuracy of locating faults to within the design objective of +/-300 meters.
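A worked example of the double-ended traveling-wave calculation implied by this abstract: GPS-synchronized time tags at the two line ends bracket the fault position. The line length, propagation speed, and time tags below are invented; note that the +/-300 m objective corresponds to roughly a microsecond of timing error at these wave speeds.

```python
C = 299_792_458.0      # speed of light, m/s
v = 0.98 * C           # assumed wave speed on an overhead line
L = 120_000.0          # line length between substations A and B, m

# GPS-synchronized time tags of the wavefront at each end (seconds).
t_A = 100.000_135_2
t_B = 100.000_265_1

# The wave reaches A after d/v and B after (L - d)/v, where d is the fault
# distance from A, so t_A - t_B = (2d - L)/v.
d = (L + v * (t_A - t_B)) / 2.0
print(f"fault located {d / 1000:.2f} km from substation A")   # ~40.9 km
```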
NASA Technical Reports Server (NTRS)
Narasimhan, Sriram; Dearden, Richard; Benazera, Emmanuel
2004-01-01
Fault detection and isolation are critical tasks to ensure correct operation of systems. When we consider stochastic hybrid systems, diagnosis algorithms need to track both the discrete mode and the continuous state of the system in the presence of noise. Deterministic techniques like Livingstone cannot deal with the stochasticity in the system and models. Conversely Bayesian belief update techniques such as particle filters may require many computational resources to get a good approximation of the true belief state. In this paper we propose a fault detection and isolation architecture for stochastic hybrid systems that combines look-ahead Rao-Blackwellized Particle Filters (RBPF) with the Livingstone 3 (L3) diagnosis engine. In this approach RBPF is used to track the nominal behavior, a novel n-step prediction scheme is used for fault detection and L3 is used to generate a set of candidates that are consistent with the discrepant observations which then continue to be tracked by the RBPF scheme.
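A toy bootstrap particle filter over a hybrid (mode, state) model, sketching how belief in a discrete fault mode is tracked. A true RBPF would marginalize the continuous state with one Kalman filter per particle, and the dynamics, noise levels, and mode-transition probability here are illustrative.

```python
import numpy as np

rng = np.random.default_rng(5)
MODES = {"nominal": 1.0, "degraded": 0.5}   # mode -> actuator effectiveness
N = 500                                      # number of particles

def step(x, mode):                           # per-mode continuous dynamics
    return 0.9 * x + MODES[mode]

# Simulate the plant; the "degraded" fault mode begins at k = 25.
x, zs, mode = 0.0, [], "nominal"
for k in range(50):
    if k == 25:
        mode = "degraded"
    x = step(x, mode) + rng.normal(0, 0.05)
    zs.append(x + rng.normal(0, 0.1))

# Each particle carries a (mode, state) pair; modes may switch spontaneously.
modes = np.array(["nominal"] * N, dtype=object)
states = np.zeros(N)
for z in zs:
    switch = rng.random(N) < 0.02                    # mode-transition prior
    modes = np.where(switch, "degraded", modes)
    states = np.array([step(s, m) for s, m in zip(states, modes)])
    states += rng.normal(0, 0.05, N)                 # process noise
    w = np.exp(-0.5 * ((z - states) / 0.1) ** 2)     # measurement likelihood
    idx = rng.choice(N, N, p=w / w.sum())            # resample
    modes, states = modes[idx], states[idx]

print("P(degraded) =", np.mean(modes == "degraded"))  # near 1 after the fault
```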
NASA Astrophysics Data System (ADS)
Gold, Ryan D.; Briggs, Richard W.; Crone, Anthony J.; DuRoss, Christopher B.
2017-11-01
Faulted terrace risers are semi-planar features commonly used to constrain Quaternary slip rates along strike-slip faults. These landforms are difficult to date directly and therefore their ages are commonly bracketed by age estimates of the adjacent upper and lower terrace surfaces. However, substantial differences in the ages of the upper and lower terrace surfaces (a factor of 2.4 difference observed globally) produce large uncertainties in the slip-rate estimate. In this investigation, we explore how the full range of displacements and bounding ages from multiple faulted terrace risers can be combined to yield a more accurate fault slip rate. We use 0.25-m cell size digital terrain models derived from airborne lidar data to analyze three sites where terrace risers are offset right-laterally by the Honey Lake fault in NE California, USA. We use ages for locally extensive subhorizontal surfaces to bracket the time of riser formation: an upper surface is the bed of abandoned Lake Lahontan having an age of 15.8 ± 0.6 ka and a lower surface is a fluvial terrace abandoned at 4.7 ± 0.1 ka. We estimate lateral offsets of the risers ranging between 6.6 and 28.3 m (median values), a greater than fourfold difference in values. The amount of offset corresponds to the riser's position relative to modern stream meanders: the smallest offset is in a meander cutbank position, whereas the larger offsets are in straight channel or meander point-bar positions. Taken in isolation, the individual terrace-riser offsets yield slip rates ranging from 0.3 to 7.1 mm/a. However, when the offset values are collectively assessed in a probabilistic framework, we find that a uniform (linear) slip rate of 1.6 mm/a (1.4-1.9 mm/a at 95% confidence) can satisfy the data, within their respective uncertainties. This investigation demonstrates that integrating observations of multiple offset elements (crest, midpoint, and base) from numerous faulted and dated terrace risers at closely spaced sites can refine slip-rate estimates on strike-slip faults.
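A minimal Monte Carlo sketch of the probabilistic framework described above: a candidate uniform slip rate is accepted only when every sampled riser offset implies a riser age between the sampled lower (4.7 ka) and upper (15.8 ka) bounding-surface ages. The intermediate offset and all standard deviations are illustrative stand-ins for the paper's measured values.

```python
import numpy as np

rng = np.random.default_rng(2)
risers = [(6.6, 1.0), (14.0, 2.0), (28.3, 3.0)]   # offset mean, sigma (m)
upper_age, lower_age = (15.8, 0.6), (4.7, 0.1)    # bounding ages (ka)

accepted = []
for _ in range(200_000):
    rate = rng.uniform(0.1, 8.0)                  # candidate uniform rate, mm/a
    t_u, t_l = rng.normal(*upper_age), rng.normal(*lower_age)
    # offset / rate (m per mm/a) is the implied riser age in ka; it must fall
    # between the lower and upper bounding surfaces for every riser at once
    if all(t_l <= rng.normal(o, s) / rate <= t_u for o, s in risers):
        accepted.append(rate)

accepted = np.array(accepted)
print(f"slip rate ~ {np.median(accepted):.1f} mm/a "
      f"({np.percentile(accepted, 2.5):.1f}-{np.percentile(accepted, 97.5):.1f} "
      f"at 95%)")   # typically recovers a rate near the reported ~1.6 mm/a
```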
Gold, Ryan D.; Briggs, Richard; Crone, Anthony J.; Duross, Christopher
2017-01-01
Faulted terrace risers are semi-planar features commonly used to constrain Quaternary slip rates along strike-slip faults. These landforms are difficult to date directly and therefore their ages are commonly bracketed by age estimates of the adjacent upper and lower terrace surfaces. However, substantial differences in the ages of the upper and lower terrace surfaces (a factor of 2.4 difference observed globally) produce large uncertainties in the slip-rate estimate. In this investigation, we explore how the full range of displacements and bounding ages from multiple faulted terrace risers can be combined to yield a more accurate fault slip rate. We use 0.25-m cell size digital terrain models derived from airborne lidar data to analyze three sites where terrace risers are offset right-laterally by the Honey Lake fault in NE California, USA. We use ages for locally extensive subhorizontal surfaces to bracket the time of riser formation: an upper surface is the bed of abandoned Lake Lahontan having an age of 15.8 ± 0.6 ka and a lower surface is a fluvial terrace abandoned at 4.7 ± 0.1 ka. We estimate lateral offsets of the risers ranging between 6.6 and 28.3 m (median values), a greater than fourfold difference in values. The amount of offset corresponds to the riser's position relative to modern stream meanders: the smallest offset is in a meander cutbank position, whereas the larger offsets are in straight channel or meander point-bar positions. Taken in isolation, the individual terrace-riser offsets yield slip rates ranging from 0.3 to 7.1 mm/a. However, when the offset values are collectively assessed in a probabilistic framework, we find that a uniform (linear) slip rate of 1.6 mm/a (1.4–1.9 mm/a at 95% confidence) can satisfy the data, within their respective uncertainties. This investigation demonstrates that integrating observations of multiple offset elements (crest, midpoint, and base) from numerous faulted and dated terrace risers at closely spaced sites can refine slip-rate estimates on strike-slip faults.
Implementation of an Integrated On-Board Aircraft Engine Diagnostic Architecture
NASA Technical Reports Server (NTRS)
Armstrong, Jeffrey B.; Simon, Donald L.
2012-01-01
An on-board diagnostic architecture for aircraft turbofan engine performance trending, parameter estimation, and gas-path fault detection and isolation has been developed and evaluated in a simulation environment. The architecture incorporates two independent models: a real-time self-tuning performance model providing parameter estimates and a performance baseline model for diagnostic purposes reflecting long-term engine degradation trends. This architecture was evaluated using flight profiles generated from a nonlinear model with realistic fleet engine health degradation distributions and sensor noise. The architecture was found to produce acceptable estimates of engine health and unmeasured parameters, and the integrated diagnostic algorithms were able to perform correct fault isolation in approximately 70 percent of the tested cases.
Ground Software Maintenance Facility (GSMF) system manual
NASA Technical Reports Server (NTRS)
Derrig, D.; Griffith, G.
1986-01-01
The Ground Software Maintenance Facility (GSMF) is designed to support development and maintenance of Spacelab ground support software. The GSMF consists of a Perkin Elmer 3250 (host computer) and a MITRA 125s (ATE computer), with appropriate interface devices and software to simulate the Electrical Ground Support Equipment (EGSE). This document is presented in three sections: (1) GSMF Overview; (2) Software Structure; and (3) Fault Isolation Capability. The overview contains information on hardware and software organization along with their corresponding block diagrams. The Software Structure section describes the modes of software structure, including source files, link information, and database files. The Fault Isolation section describes the capabilities of the Ground Computer Interface Device, the Perkin Elmer host, and the MITRA ATE.
Fault Identification by Unsupervised Learning Algorithm
NASA Astrophysics Data System (ADS)
Nandan, S.; Mannu, U.
2012-12-01
Contemporary fault identification techniques predominantly rely on the surface expression of the fault. This biased observation is inadequate to yield detailed fault structures in areas with surface cover (cities, deserts, vegetation, etc.) or to capture changes in fault patterns with depth. Furthermore, it is difficult to estimate the structure of faults that do not generate any surface rupture. Many disastrous events have been attributed to these blind faults. Faults and earthquakes are very closely related, as earthquakes occur on faults and faults grow by accumulation of coseismic rupture. For a better seismic risk evaluation, it is imperative to recognize and map these faults. We implement a novel approach to identify seismically active fault planes from the three-dimensional hypocenter distribution by making use of unsupervised learning algorithms. We employ the K-means clustering algorithm and the Expectation Maximization (EM) algorithm, modified to identify planar structures in the spatial distribution of hypocenters after filtering out isolated events. We examine the difference between the faults reconstructed by deterministic assignment in K-means and probabilistic assignment in the EM algorithm. The method is conceptually identical to methodologies developed by Ouillon et al. (2008, 2010) and has been extensively tested on synthetic data. We determined the sensitivity of the methodology to uncertainties in hypocenter location, density of clustering, and cross-cutting fault structures. The method has been applied to datasets from two contrasting regions. While the Kumaon Himalaya is a convergent plate boundary, Koyna-Warna lies in the middle of the Indian Plate but has a history of triggered seismicity. The reconstructed faults were validated by examining the orientation of mapped faults and the focal mechanisms of these events determined through waveform inversion. The reconstructed faults could be used to resolve the fault plane ambiguity in focal mechanism determination and to constrain fault orientations for finite source inversions. The faults produced by the method exhibited good correlation with the fault planes obtained from focal mechanism solutions and previously mapped faults.
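A minimal sketch of the clustering step on synthetic catalogs: hypocenters from two planar faults are grouped by K-means, and each cluster's plane orientation is read off the eigenvector of its covariance with the smallest eigenvalue (the plane normal, up to sign). Real catalogs would also need the isolated-event filtering and a model-selection step for the number of planes, as described in the abstract.

```python
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(3)

def synth_fault(n, normal, center):
    """Scatter n hypocenters on a plane through `center` with normal `normal`."""
    normal = normal / np.linalg.norm(normal)
    in_plane = np.linalg.svd(normal[None, :])[2][1:]   # two in-plane directions
    pts = rng.normal(0, 5.0, (n, 2)) @ in_plane        # spread within the plane
    pts += rng.normal(0, 0.2, (n, 3))                  # hypocenter location error
    return pts + np.asarray(center, dtype=float)

events = np.vstack([synth_fault(300, np.array([1.0, 0.0, 0.3]), [0.0, 0.0, 10.0]),
                    synth_fault(300, np.array([0.2, 1.0, 0.0]), [15.0, 0.0, 12.0])])

labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(events)
for c in range(2):
    cluster = events[labels == c]
    eigval, eigvec = np.linalg.eigh(np.cov(cluster.T))
    print(f"fault {c}: recovered normal ~ {np.round(eigvec[:, 0], 2)}")
```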
Intelligent fault-tolerant controllers
NASA Technical Reports Server (NTRS)
Huang, Chien Y.
1987-01-01
A system with fault tolerant controls is one that can detect, isolate, and estimate failures and perform necessary control reconfiguration based on this new information. Artificial intelligence (AI) is concerned with semantic processing, and it has evolved to include the topics of expert systems and machine learning. This research represents an attempt to apply AI to fault tolerant controls, hence, the name intelligent fault tolerant control (IFTC). A generic solution to the problem is sought, providing a system based on logic in addition to analytical tools, and offering machine learning capabilities. The advantages are that redundant system specific algorithms are no longer needed, that reasonableness is used to quickly choose the correct control strategy, and that the system can adapt to new situations by learning about its effects on system dynamics.
An Integrated Framework for Model-Based Distributed Diagnosis and Prognosis
NASA Technical Reports Server (NTRS)
Bregon, Anibal; Daigle, Matthew J.; Roychoudhury, Indranil
2012-01-01
Diagnosis and prognosis are necessary tasks for system reconfiguration and fault-adaptive control in complex systems. Diagnosis consists of detection, isolation and identification of faults, while prognosis consists of prediction of the remaining useful life of systems. This paper presents a novel integrated framework for model-based distributed diagnosis and prognosis, where system decomposition is used to enable the diagnosis and prognosis tasks to be performed in a distributed way. We show how different submodels can be automatically constructed to solve the local diagnosis and prognosis problems. We illustrate our approach using a simulated four-wheeled rover for different fault scenarios. Our experiments show that our approach correctly performs distributed fault diagnosis and prognosis in an efficient and robust manner.
Fail-safe designs for large capacity battery systems
Kim, Gi-Heon; Smith, Kandler; Ireland, John; Pesaran, Ahmad A.; Neubauer, Jeremy
2016-05-17
Fail-safe systems and design methodologies for large capacity battery systems are disclosed. The disclosed systems and methodologies serve to locate a faulty cell in a large capacity battery, such as a cell having an internal short circuit, determine whether the fault is evolving, and electrically isolate the faulty cell from the rest of the battery, preventing further electrical energy from feeding into the fault.
Testability analysis on a hydraulic system in a certain equipment based on simulation model
NASA Astrophysics Data System (ADS)
Zhang, Rui; Cong, Hua; Liu, Yuanhong; Feng, Fuzhou
2018-03-01
To address the problems of structural complexity and the shortage of fault statistics in hydraulic systems, a multi-value testability analysis method based on a simulation model is proposed. Using an AMESim simulation model, the method injects simulated faults and records the variation of test parameters, such as pressure and flow rate, at each test point relative to normal conditions; a multi-value fault-test dependency matrix is thereby established. The fault detection rate (FDR) and fault isolation rate (FIR) are then calculated from the dependency matrix. Finally, the testability and fault diagnosis capability of the system are analyzed and evaluated; they reach only 54% (FDR) and 23% (FIR). To improve the testability performance of the system, the number and positions of the test points are optimized. Results show the proposed test placement scheme can address the difficulty, inefficiency, and high cost of system maintenance.
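How FDR and FIR follow from a fault-test dependency matrix can be sketched as below. The binary matrix and the unique-signature isolation criterion are simplifications assumed for illustration; the paper's matrix is multi-valued.

    import numpy as np

    # Illustrative fault-test dependency matrix D: rows = faults, columns = test
    # points; D[i, j] = 1 if fault i produces a detectable deviation at test j.
    D = np.array([
        [1, 0, 0],   # fault 0: unique signature -> isolatable
        [1, 1, 0],   # fault 1
        [1, 1, 0],   # fault 2: same signature as fault 1 -> not isolatable
        [0, 0, 0],   # fault 3: invisible to every test -> not detectable
    ])

    detectable = D.any(axis=1)
    fdr = detectable.mean()                    # fault detection rate

    signatures = [tuple(row) for row in D]     # a fault isolates if its row is unique
    isolatable = np.array([detectable[i] and signatures.count(signatures[i]) == 1
                           for i in range(len(D))])
    fir = isolatable.sum() / max(detectable.sum(), 1)   # fault isolation rate

    print(f"FDR = {fdr:.0%}, FIR = {fir:.0%}")          # FDR = 75%, FIR = 33%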
A review of fault tolerant control strategies applied to proton exchange membrane fuel cell systems
NASA Astrophysics Data System (ADS)
Dijoux, Etienne; Steiner, Nadia Yousfi; Benne, Michel; Péra, Marie-Cécile; Pérez, Brigitte Grondin
2017-08-01
Fuel cells are powerful systems for power generation. They have good efficiency and do not generate greenhouse gases. This technology involves many scientific fields, which leads to strongly inter-dependent parameters. This makes the system particularly hard to control and increases the frequency of fault occurrence. These two issues make it necessary to maintain system performance at the expected level, even in faulty operating conditions; this is called "fault tolerant control" (FTC). The present paper aims to give the state of the art of FTC applied to the proton exchange membrane fuel cell (PEMFC). The FTC approach is composed of two parts. First, a diagnosis part allows the identification and isolation of a fault; it requires good a priori knowledge of all the possible faults. Then, a control part allows an optimal control strategy to find the best operating point to recover from or mitigate the fault; it requires knowledge of the degradation phenomena and their mitigation strategies.
Passive fault current limiting device
Evans, Daniel J.; Cha, Yung S.
1999-01-01
A passive current limiting device and isolator is particularly adapted for use at high power levels for limiting excessive currents in a circuit in a fault condition such as an electrical short. The current limiting device comprises a magnetic core wound with two magnetically opposed, parallel connected coils of copper, a high temperature superconductor or other electrically conducting material, and a fault element connected in series with one of the coils. Under normal operating conditions, the magnetic flux densities produced by the two coils cancel each other. Under a fault condition, the fault element is triggered to cause an imbalance in the magnetic flux density between the two coils, which results in an increase in the impedance of the coils. While the fault element may be a separate current limiter, switch, fuse, bimetal strip or the like, it preferably is a superconductor current limiter conducting one-half of the current load compared to the same limiter wired to carry the total current of the circuit. In a preferred embodiment, the major voltage drop during a fault condition is across the coils wound on the common core.
Transient Region Coverage in the Propulsion IVHM Technology Experiment
NASA Technical Reports Server (NTRS)
Balaban, Edward; Sweet, Adam; Bajwa, Anupa; Maul, William; Fulton, Chris; Chicatelli, Amy
2004-01-01
Over the last several years, researchers at NASA Glenn and Ames Research Centers have developed a real-time fault detection and isolation system for propulsion subsystems of future space vehicles. The Propulsion IVHM Technology Experiment (PITEX), as it is called, follows the model-based diagnostic methodology and employs Livingstone, developed at NASA Ames, as its reasoning engine. The system has been tested on flight-like hardware through a series of nominal and fault scenarios. These scenarios have been developed using a highly detailed simulation of the X-34 flight demonstrator main propulsion system and include realistic failures involving valves, regulators, microswitches, and sensors. This paper focuses on one of the recent research and development efforts under PITEX: providing more complete transient region coverage. It describes the development of the transient monitors, the corresponding modeling methodology, and the interface software responsible for coordinating the flow of information between the quantitative monitors and the qualitative, discrete representation used by Livingstone.
Rapid changes in the electrical state of the 1999 Izmit earthquake rupture zone
Honkura, Yoshimori; Oshiman, Naoto; Matsushima, Masaki; Barış, Şerif; Kemal Tunçer, Mustafa; Bülent Tank, Sabri; Çelik, Cengiz; Çiftçi, Elif Tolak
2013-01-01
Crustal fluids exist near fault zones, but their relation to the processes that generate earthquakes, including slow-slip events, is unclear. Fault-zone fluids are characterized by low electrical resistivity. Here we investigate the time-dependent crustal resistivity in the rupture area of the 1999 Mw 7.6 Izmit earthquake using electromagnetic data acquired at four sites before and after the earthquake. Most estimates of apparent resistivity in the frequency range of 0.05 to 2.0 Hz show abrupt co-seismic decreases on the order of tens of per cent. Data acquired at two sites 1 month after the Izmit earthquake indicate that the resistivity had already returned to pre-seismic levels. We interpret such changes as the pressure-induced transition between isolated and interconnected fluids. Some data show pre-seismic changes and this suggests that the transition is associated with foreshocks and slow-slip events before large earthquakes. PMID:23820970
Asynchronous spore germination in isogenic natural isolates of Saccharomyces paradoxus.
Stelkens, Rike B; Miller, Eric L; Greig, Duncan
2016-05-01
Spores from wild yeast isolates often show great variation in the size of colonies they produce, for largely unknown reasons. Here we measure the colonies produced from single spores from six different wild Saccharomyces paradoxus strains. We found remarkable variation in spore colony sizes, even among spores that were genetically identical. Different strains had different amounts of variation in spore colony sizes, and variation was not affected by the number of preceding meioses, or by spore maturation time. We used time-lapse photography to show that wild strains also have high variation in spore germination timing, providing a likely mechanism for the variation in spore colony sizes. When some spores from a laboratory strain make small colonies, or no colonies, it usually indicates a genetic or meiotic fault. Here, we demonstrate that in wild strains spore colony size variation is normal. We discuss and assess potential adaptive and non-adaptive explanations for this variation.
Do scaly clays control seismicity on faulted shale rocks?
NASA Astrophysics Data System (ADS)
Orellana, Luis Felipe; Scuderi, Marco M.; Collettini, Cristiano; Violay, Marie
2018-04-01
One of the major challenges regarding the disposal of radioactive waste in geological formations is to ensure isolation of radioactive contamination from the environment and the population. Shales are suitable candidates as geological barriers. However, the presence of tectonic faults within clay formations puts the long-term safety of geological repositories into question. In this study, we carry out frictional experiments on intact samples of Opalinus Clay, i.e. the host rock for nuclear waste storage in Switzerland. We report experimental evidence suggesting that scaly clays form at low normal stress (≤20 MPa) and sub-seismic velocities (≤300 μm/s), in relation to pre-existing bedding planes, through an ongoing process in which frictional sliding is the controlling deformation mechanism. We have found that scaly clays show velocity-weakening and -strengthening behaviour, low frictional strength, and poor re-strengthening over time, conditions required to allow the potential nucleation and propagation of earthquakes within the scaly clays portion of the formation. The strong similarities between the microstructures of natural and experimental scaly clays suggest important implications for the slip behaviour of shallow faults in shales. If natural and anthropogenic perturbations modify the stress conditions of the fault zone, earthquakes might have the potential to nucleate within zones of scaly clays, controlling the seismicity of the clay-rich tectonic system and thus potentially compromising the long-term safety of geological repositories situated in shales.
Development and analysis of the Software Implemented Fault-Tolerance (SIFT) computer
NASA Technical Reports Server (NTRS)
Goldberg, J.; Kautz, W. H.; Melliar-Smith, P. M.; Green, M. W.; Levitt, K. N.; Schwartz, R. L.; Weinstock, C. B.
1984-01-01
SIFT (Software Implemented Fault Tolerance) is an experimental, fault-tolerant computer system designed to meet the extreme reliability requirements for safety-critical functions in advanced aircraft. Errors are masked by performing a majority voting operation over the results of identical computations, and faulty processors are removed from service by reassigning computations to the nonfaulty processors. This scheme has been implemented in a special architecture using a set of standard Bendix BDX930 processors, augmented by a special asynchronous-broadcast communication interface that provides direct, processor-to-processor communication among all processors. Fault isolation is accomplished in hardware; all other fault-tolerance functions, together with scheduling and synchronization, are implemented exclusively by executive system software. The system reliability is predicted by a Markov model. Mathematical consistency of the system software with respect to the reliability model has been partially verified, using recently developed tools for machine-aided proof of program correctness.
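The error-masking vote described above reduces to a majority operation. A minimal sketch, assuming floating-point channel outputs; SIFT itself voted over the results of identical computations, so exact comparison sufficed.

    from collections import Counter

    def majority_vote(results):
        """Mask a faulty channel by voting over redundant computation results."""
        value, count = Counter(results).most_common(1)[0]
        if count * 2 <= len(results):
            raise RuntimeError("no majority: too many faulty channels")
        return value

    # Three redundant channels compute the same quantity; one is faulty.
    print(majority_vote([42.0, 42.0, 41.7]))   # -> 42.0; the bad value is outvoted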
NASA Astrophysics Data System (ADS)
Mattos, Nathalia H.; Alves, Tiago M.; Omosanya, Kamaldeen O.
2016-10-01
This paper uses 2D and high-quality 3D seismic reflection data to assess the geometry and kinematics of the Samson Dome, offshore Norway, revising the implications of the new data for hydrocarbon exploration in the Barents Sea. The study area was divided into three zones in terms of fault geometries and predominant strikes. Displacement-length (D-x) and throw-depth (T-z) plots showed faults to consist of several segments that were later dip-linked. Interpreted faults were categorised into three families, with Type A comprising crestal faults, Type B representing large E-W faults, and Type C consisting of polygonal faults. The Samson Dome was formed in three major stages: a) a first stage recording buckling of the post-salt overburden and generation of radial faults; b) a second stage involving dissolution and collapse of the dome, causing subsidence of the overburden and linkage of initially isolated fault segments; and c) a final stage in which large fault segments were developed. Late Cretaceous faults strike predominantly to the NW, whereas NE-trending faults comprise Triassic structures that were reactivated at a later stage. Our work finds little evidence for the escape of hydrocarbons in the Samson Dome. In addition, fault analyses based on present-day stress distributions indicate a tendency for 'locking' of faults at depth, with the largest leakage factors occurring close to the surface. The Samson Dome is an analogue to salt structures in the Barents Sea where oil and gas exploration has occurred with varied degrees of success.
Fault Diagnosis from Raw Sensor Data Using Deep Neural Networks Considering Temporal Coherence.
Zhang, Ran; Peng, Zhen; Wu, Lifeng; Yao, Beibei; Guan, Yong
2017-03-09
Intelligent condition monitoring and fault diagnosis by analyzing the sensor data can assure the safety of machinery. Conventional fault diagnosis and classification methods usually implement pretreatments to decrease noise and extract some time domain or frequency domain features from raw time series sensor data. Then, some classifiers are utilized to make diagnosis. However, these conventional fault diagnosis approaches depend on expertise in feature selection and do not consider the temporal coherence of time series data. This paper proposes a fault diagnosis model based on Deep Neural Networks (DNN). The model can directly recognize raw time series sensor data without feature selection and signal processing. It also takes advantage of the temporal coherence of the data. Firstly, raw time series training data collected by sensors are used to train the DNN until its cost function reaches a minimum; secondly, test data are used to test the classification accuracy of the DNN on local time series data. Finally, fault diagnosis considering temporal coherence with preceding time series data is implemented. Experimental results show that the classification accuracy for bearing faults can reach 100%. The proposed fault diagnosis approach is effective in recognizing the type of bearing faults.
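A reduced sketch of the paper's two ingredients, classifying raw windows without hand-crafted features and then enforcing temporal coherence across consecutive windows, is given below. The synthetic vibration windows and the small MLP are stand-ins for the bearing data and the DNN of the paper.

    import numpy as np
    from sklearn.neural_network import MLPClassifier

    rng = np.random.default_rng(1)

    def windows(label, freq, n=200, length=64):
        """Synthetic raw vibration windows for one class (no feature extraction)."""
        t = np.arange(length)
        X = np.sin(2 * np.pi * freq * t / length) + 0.5 * rng.standard_normal((n, length))
        return X, np.full(n, label)

    X0, y0 = windows(0, freq=3)   # class 0: e.g. healthy bearing
    X1, y1 = windows(1, freq=7)   # class 1: e.g. outer-race fault
    X, y = np.vstack([X0, X1]), np.concatenate([y0, y1])

    clf = MLPClassifier(hidden_layer_sizes=(32, 16), max_iter=500, random_state=0)
    clf.fit(X, y)                 # train directly on raw windows

    # Temporal coherence: smooth per-window predictions by a moving vote over
    # each window and its neighbours before reporting the final diagnosis.
    Xt, _ = windows(1, freq=7, n=9)
    raw = clf.predict(Xt)
    smoothed = np.rint(np.convolve(raw, np.ones(3) / 3, mode="same")).astype(int)
    print("raw:", raw, "smoothed:", smoothed)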
A System for Fault Management and Fault Consequences Analysis for NASA's Deep Space Habitat
NASA Technical Reports Server (NTRS)
Colombano, Silvano; Spirkovska, Liljana; Baskaran, Vijaykumar; Aaseng, Gordon; McCann, Robert S.; Ossenfort, John; Smith, Irene; Iverson, David L.; Schwabacher, Mark
2013-01-01
NASA's exploration program envisions the utilization of a Deep Space Habitat (DSH) for human exploration of the space environment in the vicinity of Mars and/or asteroids. Communication latencies with ground control of as long as 20+ minutes make it imperative that DSH operations be highly autonomous, as any telemetry-based detection of a systems problem on Earth could well occur too late to assist the crew with the problem. A DSH-based development program has been initiated to develop and test the automation technologies necessary to support highly autonomous DSH operations. One such technology is a fault management tool to support performance monitoring of vehicle systems operations and to assist with real-time decision making in connection with operational anomalies and failures. Toward that end, we are developing Advanced Caution and Warning System (ACAWS), a tool that combines dynamic and interactive graphical representations of spacecraft systems, systems modeling, automated diagnostic analysis and root cause identification, system and mission impact assessment, and mitigation procedure identification to help spacecraft operators (both flight controllers and crew) understand and respond to anomalies more effectively. In this paper, we describe four major architecture elements of ACAWS: Anomaly Detection, Fault Isolation, System Effects Analysis, and Graphic User Interface (GUI), and how these elements work in concert with each other and with other tools to provide fault management support to both the controllers and crew. We then describe recent evaluations and tests of ACAWS on the DSH testbed. The results of these tests support the feasibility and strength of our approach to failure management automation and enhanced operational autonomy.
NASA Astrophysics Data System (ADS)
Paredes, José Matildo; Plazibat, Silvana; Crovetto, Carolina; Stein, Julián; Cayo, Eric; Schiuma, Ariel
2013-10-01
Up to 10% of the liquid hydrocarbons of the Golfo San Jorge basin come from the Mina del Carmen Formation (Albian), an ash-dominated fluvial succession preserved in a variably integrated channel network that evolved coevally with an extensional tectonic event that has been poorly analyzed to date. Fault orientation, throw distribution and kinematics of fault populations affecting the Mina del Carmen Formation were investigated using a 3D seismic dataset in the Cerro Dragón field (Eastern Sector of the Golfo San Jorge basin). Thickness maps of the seismic sub-units that make up the Mina del Carmen Formation, named MEC-A to MEC-C in ascending order, and mapping of fluvial channels performed by applying geophysical visualization tools were integrated with the kinematic analysis of 20 main normal faults of the field. The study provides examples of changes in fault throw patterns with time, associated with faults of different orientations. The "main synrift phase" is characterized by NE-SW striking (mean Az = 49°), basement-involved normal faults that attain their maximum throw on top of the volcanic basement; this set of faults was active during deposition of the Las Heras Group and Pozo D-129 formation. A "second synrift phase" is recognized by E-W striking normal faults (mean Az = 91°) that nucleated and propagated from the Albian Mina del Carmen Formation. Fault activity was localized during deposition of the MEC-A sub-unit, but generalized during deposition of the MEC-B sub-unit, producing centripetal and partially isolated depocenters. An upward decrease in fault activity is inferred from the more gradual thickness variation of MEC-C and the overlying Lower Member of the Bajo Barreal Formation, evidencing passive infilling of relief associated with fault boundaries and the formation of wider depocenters with well-integrated networks of channels of larger dimensions but random orientation. Later, the Mina del Carmen Formation was affected by the downward propagation of E-W to ESE-WNW striking normal faults (mean Az = 98°) formed during the "third rifting phase", which occurred coevally with the deposition of the Upper Member of the Bajo Barreal Formation. The fault characteristics indicate a counterclockwise rotation of the stress field during the deposition of the Chubut Group of the Golfo San Jorge basin, likely associated with the rotation of Southern South America during the fragmentation of the Gondwana paleocontinent. Understanding the evolution of fault-controlled topography in continental basins allows inference of the location and orientation of coeval fluvial systems, providing a more reliable scenario for locating producing oil wells.
San Andreas tremor cascades define deep fault zone complexity
Shelly, David R.
2015-01-01
Weak seismic vibrations - tectonic tremor - can be used to delineate some plate boundary faults. Tremor on the deep San Andreas Fault, located at the boundary between the Pacific and North American plates, is thought to be a passive indicator of slow fault slip. San Andreas Fault tremor migrates at up to 30 m/s, but the processes regulating tremor migration are unclear. Here I use a 12-year catalogue of more than 850,000 low-frequency earthquakes to systematically analyse the high-speed migration of tremor along the San Andreas Fault. I find that tremor migrates most effectively through regions of greatest tremor production and does not propagate through regions with gaps in tremor production. I interpret the rapid tremor migration as a self-regulating cascade of seismic ruptures along the fault, which implies that tremor may be an active, rather than passive participant in the slip propagation. I also identify an isolated group of tremor sources that are offset eastwards beneath the San Andreas Fault, possibly indicative of the interface between the Monterey Microplate, a hypothesized remnant of the subducted Farallon Plate, and the North American Plate. These observations illustrate a possible link between the central San Andreas Fault and tremor-producing subduction zones.
NASA Technical Reports Server (NTRS)
Hruby, R. J.; Bjorkman, W. S.
1977-01-01
Flight test results of the strapdown inertial reference unit (SIRU) navigation system are presented. The fault-tolerant SIRU navigation system features a redundant inertial sensor unit and dual computers. System software provides for detection and isolation of inertial sensor failures and continued operation in the event of failures. Flight test results include assessments of the system's navigational performance and fault tolerance.
Aircraft applications of fault detection and isolation techniques
NASA Astrophysics Data System (ADS)
Marcos Esteban, Andres
In this thesis the problems of fault detection & isolation and fault tolerant systems are studied from the perspective of LTI frequency-domain, model-based techniques. Emphasis is placed on the applicability of these LTI techniques to nonlinear models, especially to aerospace systems. Two applications of H∞ LTI fault diagnosis are given using an open-loop (no controller) design approach: one for the longitudinal motion of a Boeing 747-100/200 aircraft, the other for a turbofan jet engine. An algorithm formalizing a robust identification approach based on model validation ideas is also given and applied to the previous jet engine. A general linear fractional transformation formulation is given in terms of the Youla and Dual Youla parameterizations for the integrated (control and diagnosis filter) approach. This formulation provides better insight into the trade-off between the control and the diagnosis objectives. It also provides the basic groundwork towards the development of nested schemes for the integrated approach. These nested structures allow iterative improvements of the control/filter Youla parameters based on successive identification of the system uncertainty (as given by the Dual Youla parameter). The thesis concludes with an application of H∞ LTI techniques to the integrated design for the longitudinal motion of the previous Boeing 747-100/200 model.
Fault detection and identification in missile system guidance and control: a filtering approach
NASA Astrophysics Data System (ADS)
Padgett, Mary Lou; Evers, Johnny; Karplus, Walter J.
1996-03-01
Real-world applications of computational intelligence can enhance the fault detection and identification capabilities of a missile guidance and control system. A simulation of a bank-to-turn missile demonstrates that actuator failure may cause the missile to roll and miss the target. Failure of one fin actuator can be detected using a filter and depicting the filter output as fuzzy numbers. The properties and limitations of artificial neural networks fed by these fuzzy numbers are explored. A suite of networks is constructed to (1) detect a fault and (2) determine which fin (if any) failed. Both the zero order moment term and the fin rate term show changes during actuator failure. Simulations address the following questions: (1) How bad does the actuator failure have to be for detection to occur? (2) How bad does the actuator failure have to be for fault detection and isolation to occur? (3) Are both the zero order moment and fin rate terms needed? A suite of target trajectories is simulated, and the properties and limitations of the approach are reported. In some cases, detection of the failed actuator occurs within 0.1 second, and isolation of the failure occurs 0.1 second after that. Suggestions for further research are offered.
Advanced Protection & Service Restoration for FREEDM Systems
NASA Astrophysics Data System (ADS)
Singh, Urvir
A smart electric power distribution system (FREEDM system) that incorporates DERs (Distributed Energy Resources), SSTs (Solid State Transformers - which can limit the fault current to two times the rated current) & RSC (Reliable & Secure Communication) capabilities has been studied in this work in order to develop appropriate protection & service restoration techniques. First, a solution is proposed that can make conventional protective devices provide effective protection for FREEDM systems. Results show that although this scheme can provide the required protection, it can be quite slow. Using the FREEDM system's communication capabilities, a communication-assisted overcurrent (O/C) protection scheme is proposed, and results show that by using communication (blocking signals) very fast operating times are achieved, thereby mitigating the problem of the conventional O/C scheme. Using the FREEDM system's DGI (Distributed Grid Intelligence) capability, an automated FLISR (Fault Location, Isolation & Service Restoration) scheme is proposed that is based on the concept of 'software agents' & uses less data than conventional centralized approaches. Test results illustrate that this scheme is able to provide a globally optimal system reconfiguration for service restoration.
The Active Fault Parameters for Time-Dependent Earthquake Hazard Assessment in Taiwan
NASA Astrophysics Data System (ADS)
Lee, Y.; Cheng, C.; Lin, P.; Shao, K.; Wu, Y.; Shih, C.
2011-12-01
Taiwan is located at the boundary between the Philippine Sea Plate and the Eurasian Plate, with a convergence rate of ~80 mm/yr in a ~N118E direction. The plate motion is so active that earthquakes are very frequent. In the Taiwan area, disaster-inducing earthquakes often result from active faults. For this reason, it is important to understand the activity and hazard of active faults. The active faults in Taiwan are mainly located in the Western Foothills and the eastern Longitudinal Valley. The active fault distribution map published by the Central Geological Survey (CGS) in 2010 shows that there are 31 active faults on the island of Taiwan, some of which are related to earthquakes. Many researchers have investigated these active faults and continuously update new data and results, but few have integrated them for time-dependent earthquake hazard assessment. In this study, we gather previous research and field work results and integrate these data into an active fault parameter table for time-dependent earthquake hazard assessment. We gather the seismic profiles or earthquake relocations for a fault and combine them with the fault trace on land to establish a 3D fault geometry model in a GIS system. We collect research on fault source scaling in Taiwan and estimate the maximum magnitude from fault length or fault area. We use the characteristic earthquake model to evaluate the active fault earthquake recurrence interval. For the other parameters, we collect previous studies or historical references to complete our parameter table of active faults in Taiwan. The WG08 project performed a time-dependent earthquake hazard assessment of active faults in California: it established fault models, deformation models, earthquake rate models, and probability models, and then computed rupture probabilities for faults in California. Following these steps, we have preliminarily evaluated the probability of earthquake-related hazards on certain faults in Taiwan. By completing the active fault parameter table for Taiwan, we can apply it to time-dependent earthquake hazard assessment. The results can also give engineers a reference for design. Furthermore, they can be applied in seismic hazard maps to mitigate disasters.
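The magnitude-from-length step can be illustrated with the global Wells and Coppersmith (1994) all-slip-type regression; both this relation and the example numbers (80 km rupture length, 10 mm/yr slip rate, 2 m characteristic slip) are stand-ins, since the abstract does not name the Taiwan-specific scaling relations actually used.

    import math

    def mw_from_rupture_length(srl_km):
        """Moment magnitude from surface rupture length (km), using the global
        Wells & Coppersmith (1994) all-slip-type regression as a stand-in."""
        return 5.08 + 1.16 * math.log10(srl_km)

    def characteristic_recurrence_yr(slip_rate_mm_yr, slip_per_event_m):
        """Mean recurrence interval if each characteristic event releases
        slip_per_event_m of slip accumulated at slip_rate_mm_yr."""
        return slip_per_event_m * 1000.0 / slip_rate_mm_yr

    print(f"Mw ~ {mw_from_rupture_length(80):.1f}")                      # ~7.3
    print(f"recurrence ~ {characteristic_recurrence_yr(10, 2):.0f} yr")  # 200 yr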
Geology of the southwestern Pasco Basin
DOE Office of Scientific and Technical Information (OSTI.GOV)
Not Available
1978-09-01
The objective of this study was to define those aspects of the stratigraphic, structural, and tectonic setting which are important to the integrity of a deep-mined waste-isolation cavern in the Columbia River basalts. Three principal structural features received the focus of the field effort in the 1,485-square-kilometer area. These are the northern end of the Horse Heaven uplift, the linear ridges of the Badger Mountain-Red Mountain trend, and the Rattlesnake uplift. The thickest sequence of basalt exposed in the study area is on the steep, northeastern slope of Rattlesnake Mountain; about 485 meters of stratigraphic section can be examined in the field area. Subsidence and weak deformation of the southwestern Pasco Basin area during Yakima time can be recognized in the disposition of flows and interbeds. In the southwestern Pasco Basin, most of the topographically expressed basalt bedrock mountains, ridges, hills, and knolls have developed since spreading of the Saddle Mountains flows. Deformation since Ice Harbor time (about 8 million years ago) has been by folding, faulting, and in some structures, by a combination of both. The doubly plunging anticlinal folds of Badger Mountain, Red Mountain, and easternmost Rattlesnake Hills have vertical structural amplitudes in the 80 to 200-meter range. The high-angle, possibly reverse Badger Mountain fault has offset up to 60 meters; offset is downward on the northeast. Rattlesnake Mountain is, in part, a tilted fault-block structure. The western end of the Rattlesnake uplift, Rattlesnake Hills, is principally a broad anticline with numerous minor folds and faults. Geomorphic relations suggest that the post-Ice Harbor structural movement in the study area is of one episode. 65 figures, 8 tables.
A Voyager attitude control perspective on fault tolerant systems
NASA Technical Reports Server (NTRS)
Rasmussen, R. D.; Litty, E. C.
1981-01-01
In current spacecraft design, a trend toward achieving greater fault tolerance through the application of on-board software dedicated to detecting and isolating failures can be observed. Whether fault tolerance through software can meet the desired objectives depends on very careful consideration and control of the system in which the software is embedded. The investigation considered here aims to provide some of the insight needed for the required analysis of the system. A description is given of the techniques developed in this connection during the development of the Voyager spacecraft. The Voyager Galileo Attitude and Articulation Control Subsystem (AACS) fault tolerant design is discussed to emphasize basic lessons learned from this experience. The central driver of hardware redundancy implementation on Voyager was known as the 'single point failure criterion'.
NASA Astrophysics Data System (ADS)
Misawa, A.; Arai, K.; Fujiwara, T.; Sato, M.; Shin'ichiro, Y.; Hirata, K.; Kanamatsu, T.
2017-12-01
The forearc slope of the Japan Trench is a typical subsidence region associated with subduction erosion. Arai et al. (2014) reported the existence of isolated basins with widths of up to several tens of kilometers, using seismic profiles acquired on the forearc slope before the 2011 Tohoku earthquake (Mw 9.0). The isolated basins probably formed due to subsidence accompanying the regional activity of normal fault systems in the forearc slope. Arai et al. (2014) suggested that the geological structures of the forearc slope along the Japan Trench are typical of those resulting from subduction erosion and proposed that episodic subsidence accompanied by normal faulting is the most recent deformation. During the 2011 earthquake, the seafloor on the landward slope of the Japan Trench moved 50 m east-southeast toward the trench (Fujiwara et al., 2011). In addition, aftershock activity after the 2011 earthquake has been dominated by activity on the normal fault system. Therefore, there is a possibility that a new isolated basin formed after the 2011 earthquake on the forearc slope of the Japan Trench. To capture structural change in the isolated basins, we compared seismic profiles acquired before (multi-channel seismic (MCS) data from the KR07-05 cruise) and after (single-channel seismic (SCS) data from the NT15-07 cruise) the 2011 earthquake. However, no large-scale structural changes were identified around the isolated basin. To capture small-scale structural change in the shallow part of the isolated basins using high-resolution data, we carried out a marine geological and geophysical survey in the offshore Tohoku region using R/V Shinsei-Maru of JAMSTEC (KS-17-8 cruise) in August 2017. In this cruise, we planned the following surveys: (1) swath bathymetric survey, (2) high-resolution parametric subbottom profiler (SBP) survey, (3) geomagnetic survey. In this presentation, we will show the latest results on the shallow structure of the isolated basin in the forearc slope.
Using shadow page cache to improve isolated drivers performance.
Zheng, Hao; Dong, Xiaoshe; Wang, Endong; Chen, Baoke; Zhu, Zhengdong; Liu, Chengzhe
2015-01-01
With the advantage of the reusability property of virtualization technology, users can reuse various types and versions of existing operating systems and drivers in a virtual machine so as to customize their application environment. In order to prevent users' virtualization environments from being impacted by driver faults in the virtual machine, Chariot examines the correctness of a driver's write operations by combining capture of the driver's write operations with a driver-private access control table. However, this method needs to keep the write permission of the shadow page table as read-only so as to capture the isolated driver's write operations through page faults, which adversely affects the performance of the driver. Based on delaying the setting of frequently used shadow pages' write permissions to read-only, this paper proposes an algorithm using a shadow page cache to improve the performance of isolated drivers and carefully studies the relationship between driver performance and the size of the shadow page cache. Experimental results show that, through the shadow page cache, the performance of isolated drivers can be greatly improved without impacting Chariot's reliability too much.
Delivery and application of precise timing for a traveling wave powerline fault locator system
NASA Technical Reports Server (NTRS)
Street, Michael A.
1990-01-01
The Bonneville Power Administration (BPA) has successfully operated an in-house developed powerline fault locator system since 1986. The BPA fault locator system consists of remotes installed at cardinal power transmission line system nodes and a central master which polls the remotes for traveling wave time-of-arrival data. A power line fault produces a fast rise-time traveling wave which emanates from the fault point and propagates throughout the power grid. The remotes time-tag the traveling wave leading edge as it passes through the power system cardinal substation nodes. A synchronizing pulse transmitted via the BPA analog microwave system on a wideband channel synchronizes the time-tagging counters in the remote units to an accuracy of better than one microsecond. The remote units correct the raw time tags for synchronizing pulse propagation delay and return these corrected values to the fault locator master. The master then calculates the location of the power system disturbance using the collected time tags. The system design objective is a fault location accuracy of 300 meters. BPA's fault locator system operation, error-producing phenomena, and method of distributing precise timing are described.
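The master's location calculation reduces to the standard two-terminal traveling-wave expression. The sketch below is illustrative; the line length, propagation speed, and time tags are invented rather than BPA system values.

    def fault_distance_km(line_km, v_km_per_us, t_a_us, t_b_us):
        """Distance of the fault from terminal A, given corrected time tags
        from both ends.  The wavefront reaches A after x / v and B after
        (L - x) / v, so x = (L + v * (t_a - t_b)) / 2."""
        return (line_km + v_km_per_us * (t_a_us - t_b_us)) / 2.0

    # Example: 300 km line, wave speed ~0.29 km/us (near the speed of light),
    # remote A tags the wavefront 100 us before remote B.
    print(f"fault at {fault_distance_km(300.0, 0.29, 0.0, 100.0):.1f} km from A")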
Health management and controls for earth to orbit propulsion systems
NASA Technical Reports Server (NTRS)
Bickford, R. L.
1992-01-01
Fault detection and isolation for advanced rocket engine controllers are discussed, focusing on advanced sensing systems and software which significantly improve component failure detection for engine safety and health management. Aerojet's Space Transportation Main Engine controller for the National Launch System is the state of the art in fault-tolerant engine avionics. Health management systems provide high levels of automated fault coverage and significantly improve vehicle delivered reliability and lower preflight operations costs. Key technologies, including the sensor data validation algorithms and flight-capable spectrometers, have been demonstrated in ground applications and are found to be suitable for bridging programs into flight applications.
Network Connectivity for Permanent, Transient, Independent, and Correlated Faults
NASA Technical Reports Server (NTRS)
White, Allan L.; Sicher, Courtney; Henry, Courtney
2012-01-01
This paper develops a method for the quantitative analysis of network connectivity in the presence of both permanent and transient faults. Even though transient noise is considered a common occurrence in networks, a survey of the literature reveals an emphasis on permanent faults. Transient faults introduce a time element into the analysis of network reliability. With permanent faults it is sufficient to consider the faults that have accumulated by the end of the operating period. With transient faults the arrival and recovery time must be included. The number and location of faults in the system is a dynamic variable. Transient faults also introduce system recovery into the analysis. The goal is the quantitative assessment of network connectivity in the presence of both permanent and transient faults. The approach is to construct a global model that includes all classes of faults: permanent, transient, independent, and correlated. A theorem is derived about this model that gives distributions for (1) the number of fault occurrences, (2) the type of fault occurrence, (3) the time of the fault occurrences, and (4) the location of the fault occurrence. These results are applied to compare and contrast the connectivity of different network architectures in the presence of permanent, transient, independent, and correlated faults. The examples below use a Monte Carlo simulation, but the theorem mentioned above could be used to guide fault-injections in a laboratory.
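A sketch of the kind of Monte Carlo simulation the abstract refers to, under assumed parameters: a ring network, per-step fault probabilities, a fixed transient recovery time, and a deliberately conservative connectivity test (any two simultaneous outages count as a loss).

    import random

    def simulate(n=8, steps=500, p_perm=1e-4, p_trans=1e-3, recovery=5,
                 trials=2000, seed=0):
        """Estimate the probability of losing ring connectivity within `steps`
        time steps when nodes suffer permanent and transient faults."""
        rng = random.Random(seed)
        losses = 0
        for _ in range(trials):
            perm = set()    # permanently failed nodes
            trans = {}      # node -> step at which its transient fault clears
            lost = False
            for t in range(steps):
                trans = {v: r for v, r in trans.items() if r > t}  # recoveries
                for v in range(n):
                    if v in perm or v in trans:
                        continue
                    u = rng.random()
                    if u < p_perm:
                        perm.add(v)                 # permanent: never recovers
                    elif u < p_perm + p_trans:
                        trans[v] = t + recovery     # transient: clears later
                if len(perm) + len(trans) >= 2:     # conservative ring criterion
                    lost = True
                    break
            losses += lost
        return losses / trials

    print(f"P(connectivity loss) ~= {simulate():.3f}")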
Flight test results of the strapdown hexad inertial reference unit (SIRU). Volume 2: Test report
NASA Technical Reports Server (NTRS)
Hruby, R. J.; Bjorkman, W. S.
1977-01-01
Results of flight tests of the Strapdown Inertial Reference Unit (SIRU) navigation system are presented. The fault tolerant SIRU navigation system features a redundant inertial sensor unit and dual computers. System software provides for detection and isolation of inertial sensor failures and continued operation in the event of failures. Flight test results include assessments of the system's navigational performance and fault tolerance. Performance shortcomings are analyzed.
Evolution of wear and friction along experimental faults
Boneh, Yeval; Chang, Jefferson C.; Lockner, David A.; Reches, Zeev
2014-01-01
We investigate the evolution of wear and friction along experimental faults composed of solid rock blocks. This evolution is analyzed through shear experiments along five rock types, and the experiments were conducted in a rotary apparatus at slip velocities of 0.002–0.97 m/s, slip distances from a few millimeters to tens of meters, and normal stress of 0.25–6.9 MPa. The wear and friction measurements and fault surface observations revealed three evolution phases: A) An initial stage (slip distances <50 mm) of wear by failure of isolated asperities associated with roughening of the fault surface; B) a running-in stage of slip distances of 1–3 m with intense wear-rate, failure of many asperities, and simultaneous reduction of the friction coefficient and wear-rate; and C) a steady-state stage that initiates when the fault surface is covered by a gouge layer, and during which both wear-rate and friction coefficient maintain quasi-constant, low levels. While these evolution stages are clearly recognizable for experimental faults made from bare rock blocks, our analysis suggests that natural faults “bypass” the first two stages and slip at gouge-controlled steady-state conditions.
NASA Astrophysics Data System (ADS)
Mazza, Fabio
2017-08-01
The curved surface sliding (CSS) system is one of the most in-demand techniques for the seismic isolation of buildings; yet there are still important aspects of its behaviour that need further attention. The CSS system presents variation of friction coefficient, depending on the sliding velocity of the CSS bearings, while friction force and lateral stiffness during the sliding phase are proportional to the axial load. Lateral-torsional response needs to be better understood for base-isolated structures located in near-fault areas, where fling-step and forward-directivity effects can produce long-period (horizontal) velocity pulses. To analyse these aspects, a six-storey reinforced concrete (r.c.) office framed building, with an L-shaped plan and setbacks in elevation, is designed assuming three values of the radius of curvature for the CSS system. Seven in-plan distributions of dynamic-fast friction coefficient for the CSS bearings, ranging from a constant value for all isolators to a different value for each, are considered in the case of low- and medium-type friction properties. The seismic analysis of the test structures is carried out considering an elastic-linear behaviour of the superstructure, while a nonlinear force-displacement law of the CSS bearings is considered in the horizontal direction, depending on sliding velocity and axial load. Given the lack of knowledge of the horizontal direction at which near-fault ground motions occur, the maximum torsional effects and residual displacements are evaluated with reference to different incidence angles, while the orientation of the strongest observed pulses is considered to obtain average values.
NASA Astrophysics Data System (ADS)
Polverino, Pierpaolo; Pianese, Cesare; Sorrentino, Marco; Marra, Dario
2015-04-01
The paper focuses on the design of a procedure for the development of an on-field diagnostic algorithm for solid oxide fuel cell (SOFC) systems. The diagnosis design phase relies on an in-depth analysis of the mutual interactions among all system components, exploiting the physical knowledge of the SOFC system as a whole. This phase consists of the Fault Tree Analysis (FTA), which identifies the correlations among possible faults and their corresponding symptoms at the system component level. The main outcome of the FTA is an inferential isolation tool (Fault Signature Matrix - FSM), which unambiguously links the faults to the symptoms detected during system monitoring. In this work the FTA is considered as a starting point to develop an improved FSM. Making use of a model-based investigation, a fault-to-symptoms dependency study is performed. To this purpose a dynamic model, previously developed by the authors, is exploited to simulate the system under faulty conditions. Five faults are simulated, one in the stack and four occurring at BOP level. Moreover, the robustness of the FSM design is increased by exploiting symptom thresholds defined for the investigation of the quantitative effects of the simulated faults on the affected variables.
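The inferential use of the FSM reduces to signature matching. The faults and symptom patterns below are invented and far simpler than the actual SOFC fault set.

    # Toy Fault Signature Matrix: each fault maps to the pattern of symptoms
    # (1 = symptom expected) across four monitored variables.
    FSM = {
        "stack degradation": (1, 1, 0, 0),
        "air blower fault":  (1, 0, 1, 0),
        "fuel leakage":      (0, 1, 1, 0),
        "reformer fault":    (0, 0, 1, 1),
    }

    def isolate(observed):
        """Return the faults whose signature matches the observed symptoms."""
        return [f for f, sig in FSM.items() if sig == tuple(observed)]

    print(isolate([1, 0, 1, 0]))   # -> ['air blower fault']
    print(isolate([1, 1, 1, 0]))   # -> []: pattern not in the matrix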
Escobar, R F; Astorga-Zaragoza, C M; Téllez-Anguiano, A C; Juárez-Romero, D; Hernández, J A; Guerrero-Ramírez, G V
2011-07-01
This paper deals with fault detection and isolation (FDI) in sensors applied to a concentric-pipe counter-flow heat exchanger. The proposed FDI is based on analytical redundancy, implementing nonlinear high-gain observers which are used as software sensors to generate residuals when a sensor fault is present. By evaluating the generated residual, it is possible to switch between the sensor and the observer when a failure is detected. Experiments on a heat exchanger pilot plant validate the effectiveness of the approach. The FDI technique is easy to implement, giving industry an excellent alternative tool to keep heat transfer processes under supervision. The main contribution of this work is a dynamic model with heat transfer coefficients that depend on temperature and flow, used to estimate the output temperatures of a heat exchanger. This model provides a satisfactory approximation of the states of the heat exchanger, allowing its implementation in an FDI system used to perform supervision tasks.
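The residual-and-switch logic can be shown with a scalar observer standing in for the paper's nonlinear high-gain observer; the first-order temperature model, observer gain, noise level, injected bias, and threshold below are all assumptions.

    import numpy as np

    a, b, L, dt = -0.5, 0.5, 2.0, 0.1   # model and observer gain (assumed)
    threshold = 0.5                     # residual threshold (assumed)

    rng = np.random.default_rng(0)
    x, x_hat, u = 20.0, 20.0, 30.0      # true state, estimate, input
    for k in range(100):
        x += dt * (a * x + b * u)                        # plant: outlet temperature
        y = x + 0.02 * rng.standard_normal()             # sensor reading
        if k >= 60:
            y += 3.0                                     # injected sensor bias fault
        x_hat += dt * (a * x_hat + b * u) + dt * L * (y - x_hat)  # observer update
        if abs(y - x_hat) > threshold:
            # Fault detected: switch to the observer as a software sensor.
            print(f"step {k}: residual {abs(y - x_hat):.2f} -> use x_hat = {x_hat:.2f}")
            break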
A Diagnostic Approach for Electro-Mechanical Actuators in Aerospace Systems
NASA Technical Reports Server (NTRS)
Balaban, Edward; Saxena, Abhinav; Bansal, Prasun; Goebel, Kai Frank; Stoelting, Paul; Curran, Simon
2009-01-01
Electro-mechanical actuators (EMA) are finding increasing use in aerospace applications, especially with the trend towards all-electric aircraft and spacecraft designs. However, electro-mechanical actuators still lack the knowledge base accumulated for other fielded actuator types, particularly with regard to fault detection and characterization. This paper presents a thorough analysis of some of the critical failure modes documented for EMAs and describes experiments conducted on detecting and isolating a subset of them. The list of failures has been prepared through an extensive Failure Mode, Effects, and Criticality Analysis (FMECA) reference, literature review, and accessible industry experience. Methods for data acquisition and validation of algorithms on EMA test stands are described. A variety of condition indicators were developed that enabled detection, identification, and isolation among the various fault modes. A diagnostic algorithm based on an artificial neural network is shown to operate successfully using these condition indicators and, furthermore, the robustness of these diagnostic routines to sensor faults is demonstrated by showing their ability to distinguish between sensor faults and component failures. The paper concludes with a roadmap leading from this effort towards developing successful prognostic algorithms for electromechanical actuators.
Diagnosing a Strong-Fault Model by Conflict and Consistency
Zhou, Gan; Feng, Wenquan
2018-01-01
The diagnosis method for a weak-fault model, with only normal behaviors of each component, has evolved over decades. However, many systems now demand strong-fault models, whose fault modes have specific behaviors as well. It is difficult to diagnose a strong-fault model due to its non-monotonicity. Currently, diagnosis methods usually employ conflicts to isolate possible faults, and the process can be expedited when some observed output is consistent with the model's prediction, where the consistency indicates probably normal components. This paper solves the problem of efficiently diagnosing a strong-fault model by proposing a novel Logic-based Truth Maintenance System (LTMS) with two search approaches based on conflict and consistency. First, the original strong-fault model is encoded with Boolean variables and converted into Conjunctive Normal Form (CNF). Then the proposed LTMS is employed to reason over the CNF and find multiple minimal conflicts and maximal consistencies when a fault exists. The search approaches efficiently offer the best candidates based on the reasoning result until the diagnosis results are obtained. The completeness, coverage, correctness, and complexity of the proposals are analyzed theoretically to show their strengths and weaknesses. Finally, the proposed approaches are demonstrated by applying them to a real-world domain, the heat control unit of a spacecraft, where the proposed methods perform significantly better than best-first and conflict-directed A* search methods. PMID:29596302
Code of Federal Regulations, 2013 CFR
2013-10-01
... reasonable to foresee fault currents or an unusual risk of lightning, you must protect the pipeline against... metallic structures, unless you electrically interconnect and cathodically protect the pipeline and the... isolation of a portion of a pipeline is necessary to facilitate the application of corrosion control. (c...
Code of Federal Regulations, 2014 CFR
2014-10-01
... reasonable to foresee fault currents or an unusual risk of lightning, you must protect the pipeline against... metallic structures, unless you electrically interconnect and cathodically protect the pipeline and the... isolation of a portion of a pipeline is necessary to facilitate the application of corrosion control. (c...
Code of Federal Regulations, 2012 CFR
2012-10-01
... reasonable to foresee fault currents or an unusual risk of lightning, you must protect the pipeline against... metallic structures, unless you electrically interconnect and cathodically protect the pipeline and the... isolation of a portion of a pipeline is necessary to facilitate the application of corrosion control. (c...
NASA Astrophysics Data System (ADS)
Huang, Huan; Baddour, Natalie; Liang, Ming
2018-02-01
In practice, bearings often run under time-varying rotational speed conditions. Under such circumstances, the bearing vibration signal is non-stationary, which renders ineffective the techniques used for bearing fault diagnosis under constant running conditions. One of the conventional methods of bearing fault diagnosis under time-varying speed conditions is resampling the non-stationary signal to a stationary signal via order tracking with the measured variable speed. With the resampled signal, the methods available for constant-condition cases become applicable. However, the accuracy of the order tracking is often inadequate and the time-varying speed is sometimes not measurable. Thus, resampling-free methods are of interest for bearing fault diagnosis under time-varying rotational speed for use without tachometers. With the development of time-frequency analysis, the time-varying fault character manifests as curves in the time-frequency domain. By extracting the Instantaneous Fault Characteristic Frequency (IFCF) from the Time-Frequency Representation (TFR) and converting the IFCF, its harmonics, and the Instantaneous Shaft Rotational Frequency (ISRF) into straight lines, the bearing fault can be detected and diagnosed without resampling. However, so far, the extraction of the IFCF for bearing fault diagnosis has mostly been based on the assumption that at each moment the IFCF has the highest amplitude in the TFR, which is not always true. Hence, a more reliable T-F curve extraction approach should be investigated. Moreover, if the T-F curves including the IFCF, its harmonics, and the ISRF can all be extracted from the TFR directly, no extra processing is needed for fault diagnosis. Therefore, this paper proposes an algorithm for multiple T-F curve extraction from the TFR based on a fast path optimization, which is more reliable for T-F curve extraction. Then, a new procedure for bearing fault diagnosis under unknown time-varying speed conditions is developed based on the proposed algorithm and a new fault diagnosis strategy. The average curve-to-curve ratios are utilized to describe the relationship of the extracted curves, and fault diagnosis can then be achieved by comparing the ratios to the fault characteristic coefficients. The effectiveness of the proposed method is validated by simulated and experimental signals.
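Once the T-F curves are extracted, the final diagnosis step reduces to comparing average curve-to-curve ratios with fault characteristic coefficients. In the sketch below the curves are synthetic and the coefficients are placeholders, since real values depend on bearing geometry.

    import numpy as np

    # Extracted T-F curves (Hz) over time; a speed ramp stands in for curves
    # obtained from a real time-frequency representation.
    t = np.linspace(0.0, 1.0, 200)
    isrf = 20.0 + 15.0 * t           # instantaneous shaft rotational frequency
    ifcf = 3.57 * isrf               # instantaneous fault characteristic frequency

    # Placeholder fault characteristic coefficients (order ratios) per fault type;
    # true values follow from bearing geometry (BPFO, BPFI, BSF, ...).
    coefficients = {"outer race": 3.57, "inner race": 5.43, "rolling element": 2.32}

    ratio = float(np.mean(ifcf / isrf))     # average curve-to-curve ratio
    fault = min(coefficients, key=lambda f: abs(coefficients[f] - ratio))
    print(f"mean ratio {ratio:.2f} -> closest to the {fault} coefficient")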
Care 3 phase 2 report, maintenance manual
NASA Technical Reports Server (NTRS)
Bryant, L. A.; Stiffler, J. J.
1982-01-01
CARE 3 (Computer-Aided Reliability Estimation, version three) is a computer program designed to help estimate the reliability of complex, redundant systems. Although the program can model a wide variety of redundant structures, it was developed specifically for fault-tolerant avionics systems--systems distinguished by the need for extremely reliable performance, since a system failure could well result in the loss of human life. It substantially generalizes the class of redundant configurations that can be accommodated, and includes a coverage model to determine the various coverage probabilities as a function of the applicable fault recovery mechanisms (detection delay, diagnostic scheduling interval, isolation and recovery delay, etc.). CARE 3 further generalizes the class of system structures that can be modeled and greatly expands the coverage model to take into account such effects as intermittent and transient faults, latent faults, error propagation, etc.
Dynamic test input generation for multiple-fault isolation
NASA Technical Reports Server (NTRS)
Schaefer, Phil
1990-01-01
Recent work in Causal Reasoning has provided practical techniques for multiple fault diagnosis. These techniques provide a hypothesis/measurement diagnosis cycle. Using probabilistic methods, they choose the best measurements to make, then update fault hypotheses in response. For many applications such as computers and spacecraft, few measurement points may be accessible, or values may change quickly as the system under diagnosis operates. In these cases, a hypothesis/measurement cycle is insufficient. A technique is presented for a hypothesis/test-input/measurement diagnosis cycle. In contrast to generating tests a priori for determining device functionality, it dynamically generates tests in response to current knowledge about fault probabilities. It is shown how the mathematics previously used for measurement specification can be applied to the test input generation process. An example from an efficient implementation called Multi-Purpose Causal (MPC) is presented.
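The measurement-selection mathematics referred to here can be sketched as a one-step expected-entropy-reduction (information-gain) criterion over fault hypotheses. The names and the binary-outcome simplification below are assumptions for illustration, not MPC's actual interface.

    import math

    def entropy(probs):
        return -sum(p * math.log2(p) for p in probs if p > 0)

    def best_measurement(hypotheses, measurements, likelihood):
        """hypotheses: {h: prior prob}; likelihood(m, z, h) = P(outcome z | h, m),
        with outcomes simplified to {0, 1}. Returns the measurement with the
        largest expected reduction in hypothesis entropy."""
        h0 = entropy(hypotheses.values())
        best, best_gain = None, -1.0
        for m in measurements:
            gain = h0
            for z in (0, 1):
                joint = {h: p * likelihood(m, z, h) for h, p in hypotheses.items()}
                pz = sum(joint.values())
                if pz > 0:
                    gain -= pz * entropy(v / pz for v in joint.values())
            if gain > best_gain:
                best, best_gain = m, gain
        return best, best_gain

Test-input generation fits the same mold: candidate inputs replace candidate measurements, with the likelihood conditioned on the device's predicted response to the applied input.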
Path Searching Based Fault Automated Recovery Scheme for Distribution Grid with DG
NASA Astrophysics Data System (ADS)
Xia, Lin; Qun, Wang; Hui, Xue; Simeng, Zhu
2016-12-01
Path searching based on distribution-network topology has proven effective in relay-setting software, and a path-searching method that accounts for DG power sources is also applicable to the automatic generation and division of planned islands after a fault. This paper applies a path-searching algorithm to the automatic division of planned islands after faults: the search starts from the fault-isolation switch and ends at each power source; according to the line load traversed by the search path and the important loads integrated along the optimized path, an optimized division scheme of planned islands is formed in which each DG serves as a power source balanced against the local important loads. Finally, the COBASE software and the applied distribution-network automation software are used to illustrate the effectiveness of this automatic restoration scheme.
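A minimal sketch of the island-division idea: grow a connected region outward from the fault-isolation switch through the network topology, admitting loads only while a DG's capacity is respected. The data structures and function names are assumptions, and the paper's optimization over important loads is reduced here to a simple capacity check.

    from collections import deque

    def planned_island(topology, isolation_switch, dg_node, dg_capacity, load):
        """topology: {node: [neighbor, ...]}; load: {node: kW}.
        Returns (island nodes, total load) if the DG is reachable within
        its capacity, else None."""
        island = {isolation_switch}
        total = load.get(isolation_switch, 0.0)
        queue = deque([isolation_switch])
        while queue:
            node = queue.popleft()
            for nb in topology.get(node, []):
                nb_load = load.get(nb, 0.0)
                if nb not in island and total + nb_load <= dg_capacity:
                    island.add(nb)
                    total += nb_load
                    queue.append(nb)
        return (island, total) if dg_node in island else None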
NASA Astrophysics Data System (ADS)
Mbaya, Timmy
Embedded Aerospace Systems have to perform safety and mission critical operations in a real-time environment where timing and functional correctness are extremely important. Guidance, Navigation, and Control (GN&C) systems substantially rely on complex software interfacing with hardware in real-time; any faults in software or hardware, or their interaction could result in fatal consequences. Integrated Software Health Management (ISWHM) provides an approach for detection and diagnosis of software failures while the software is in operation. The ISWHM approach is based on probabilistic modeling of software and hardware sensors using a Bayesian network. To meet memory and timing constraints of real-time embedded execution, the Bayesian network is compiled into an Arithmetic Circuit, which is used for on-line monitoring. This type of system monitoring, using an ISWHM, provides automated reasoning capabilities that compute diagnoses in a timely manner when failures occur. This reasoning capability enables time-critical mitigating decisions and relieves the human agent from the time-consuming and arduous task of foraging through a multitude of isolated---and often contradictory---diagnosis data. For the purpose of demonstrating the relevance of ISWHM, modeling and reasoning is performed on a simple simulated aerospace system running on a real-time operating system emulator, the OSEK/Trampoline platform. Models for a small satellite and an F-16 fighter jet GN&C (Guidance, Navigation, and Control) system have been implemented. Analysis of the ISWHM is then performed by injecting faults and analyzing the ISWHM's diagnoses.
NASA Astrophysics Data System (ADS)
Bozionelos, George; Galea, Pauline; D'Amico, Sebastiano; Agius, Matthew
2017-04-01
The tectonic setting of the Maltese islands is mainly influenced by two dominant rift systems of different ages and trends. The first and older rift created the horst-and-graben structure of northern Malta. The second rift generation, in the south, which includes the Maghlaq Fault, is associated with the Pantelleria Rift. The Maghlaq Fault is a spectacular NW-SE trending, left-stepping normal fault running along the southern coastline of the Maltese islands, cutting the Oligo-Miocene pre- to syn-rift carbonates. Its surface expression is traceable along 4 km of the coastline, where vertical displacements of the island's Tertiary stratigraphic sequence are clearly visible and exceed 210 m. These displacements have given rise to sheer, slickensided fault scarps, and have isolated the small island of Filfla 4 km offshore of the southern coast. Identification and assessment of the seismic activity related to the Maghlaq Fault in recent years is performed by re-evaluating and redetermining the hypocentral locations and source parameters of both recent and older events. The earthquakes that have affected the Maltese islands in the historical past occurred mainly in the Sicily Channel and eastern Sicily, and even as far away as the Hellenic arc. Some of these earthquakes have also caused considerable damage to buildings. The Maghlaq Fault is believed to be one of the master faults of the Sicily Channel Rift, being parallel to the Malta graben, which passes around 20 km south of Malta and shows continuous seismic activity. Despite the relationship of this fault with the graben system, no seismic activity on the Maghlaq Fault had been documented prior to 2015. On 30 July 2015, an earthquake was widely felt in the southern half of Malta and was located approximately just offshore the southern coast. Since then, a swarm of seismic events lasting several days, as well as other isolated events, have occurred, indicating that the fault is seismically active. The nature of these seismic events, and of previous activity that may have been misclassified due to poor location capability, is investigated. Such results are of utmost importance in order to reveal the implications of this newly discovered activity for the seismic hazard of the Maltese islands and to improve understanding of the local geodynamics, highlighting the mechanisms that contribute to both the crustal deformation and the tectonics of the upper crust. The investigation is carried out using stations of the recently extended Malta Seismic Network and regional stations. The results are evaluated in the context of the role of the Maghlaq Fault in the extensional tectonics associated with the Sicily Channel Rift and the African continental margin.
NASA Technical Reports Server (NTRS)
Malin, Jane T.; Schreckenghost, Debra K.
2001-01-01
The Adjustable Autonomy Testbed (AAT) is a simulation-based testbed located in the Intelligent Systems Laboratory in the Automation, Robotics and Simulation Division at NASA Johnson Space Center. The purpose of the testbed is to support evaluation and validation of prototypes of adjustable autonomous agent software for control and fault management for complex systems. The AAT project has developed prototype adjustable autonomous agent software and human interfaces for cooperative fault management. This software builds on current autonomous agent technology by altering the architecture, components and interfaces for effective teamwork between autonomous systems and human experts. Autonomous agents include a planner, flexible executive, low-level control and deductive model-based fault isolation. Adjustable autonomy is intended to increase the flexibility and effectiveness of fault management with an autonomous system. The test domain for this work is control of advanced life support systems for habitats for planetary exploration. The CONFIG hybrid discrete event simulation environment provides flexible and dynamically reconfigurable models of the behavior of components and fluids in the life support systems. Both discrete event and continuous (discrete time) simulation are supported, and flows and pressures are computed globally. This provides fast dynamic simulations of interacting hardware systems in closed loops that can be reconfigured during operations scenarios, producing complex cascading effects of operations and failures. Current object-oriented model libraries support modeling of fluid systems, and models have been developed of physico-chemical and biological subsystems for processing advanced life support gases. In FY01, water recovery system models will be developed.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Daniel, C.G.; Karlstrom, K.E.
1993-04-01
Distinctive lithostratigraphic markers, metamorphic isobaric surfaces, major ductile thrusts, and overturned folds in Early Proterozoic rocks from four isolated uplifts in north-central NM provide relatively firm piercing points for restoration of over 50 km of right-lateral strike-slip movement along a network of N-S trending faults. In addition, the authors speculate that the Uncompahgre Group in the Needle Mts. of southern Colorado is correlative with the Hondo Group in northern NM, suggesting over 150 km of right-lateral strike-slip offset has occurred across a network of N-S trending faults that includes the Picuris-Pecos fault, the Borrego fault, the Nacimiento fault, and others. The tectonic implications of this reconstruction span geologic time from the Proterozoic to the Cenozoic. The restoration of slip provides new insights into the structure of the Proterozoic basement in NM. Volcanogenic basement (1.74-1.72 Ga) and overlying sedimentary cover (Hondo Group) are imbricated in an originally EW- to NW-trending ductile foreland thrust-and-fold belt that formed near the southern margin of the 1.74-1.72 Ga basement. The authors propose that the volcanogenic basement rocks correlate with rocks of the Yavapai Province in Arizona and that the Hondo Group correlates with foreland rocks of the Tonto Basin Supergroup. Rocks south of this belt are 1.65 Ga or younger and are interpreted to belong to a separate crustal province which correlates with the Mazatzal Province in Arizona. Proterozoic ductile fault geometries suggest that the Mazatzal Province was thrust northward, resulting in imbrication of the Yavapai Province basement and its siliciclastic cover sequence.
An alternative hypothesis for the mid-Paleozoic Antler orogeny in Nevada
Ketner, Keith B.
2012-01-01
A great volume of Mississippian orogenic deposits supports the concept of a mid-Paleozoic orogeny in Nevada, and the existence and timing of that event are not questioned here. The nature of the orogeny is problematic, however, and new ideas are called for. The cause of the Antler orogeny, long ascribed to plate convergence, is here attributed to left-lateral north-south strike-slip faulting in northwestern Nevada. The stratigraphic evidence originally provided in support of an associated regional thrust fault, the Roberts Mountains thrust, is now known to be invalid, and abundant, detailed map evidence testifies to post-Antler ages of virtually all large folds and thrust faults in the region. The Antler orogeny was not characterized by obduction of the Roberts Mountains allochthon; rocks composing the "allochthon" essentially were deposited in situ. Instead, the orogeny was characterized by appearance of an elongate north-northeast-trending uplift through central Nevada and by two parallel flanking depressions. The eastern depression was the Antler foreland trough, into which sediments flowed from both east and west in the Mississippian. The western depression was the Antler hinterland trough into which sediments also flowed from both east and west during the Mississippian. West of the hinterland trough, across a left-lateral strike-slip fault, an exotic landmass originally attached to the northwestern part of the North American continent was moved southward 1700 km along a strike-slip fault. An array of isolated blocks of shelf carbonate rocks, long thought to be autochthonous exposures in windows of the Roberts Mountains allochthon, is proposed here as an array of gravity-driven slide blocks dislodged from the shelf, probably initiated by the Late Devonian Alamo impact event.
Expert systems for real-time monitoring and fault diagnosis
NASA Technical Reports Server (NTRS)
Edwards, S. J.; Caglayan, A. K.
1989-01-01
Methods for building real-time onboard expert systems were investigated, and the use of expert systems technology was demonstrated in improving the performance of current real-time onboard monitoring and fault diagnosis applications. The potential applications of the proposed research include an expert system environment allowing the integration of expert systems into conventional time-critical application solutions, a grammar for describing the discrete event behavior of monitoring and fault diagnosis systems, and their applications to new real-time hardware fault diagnosis and monitoring systems for aircraft.
Identifiability of Additive, Time-Varying Actuator and Sensor Faults by State Augmentation
NASA Technical Reports Server (NTRS)
Upchurch, Jason M.; Gonzalez, Oscar R.; Joshi, Suresh M.
2014-01-01
Recent work has provided a set of necessary and sufficient conditions for identifiability of additive step faults (e.g., lock-in-place actuator faults, constant bias in the sensors) using state augmentation. This paper extends these results to an important class of faults which may affect linear, time-invariant systems. In particular, the faults under consideration are those which vary with time and affect the system dynamics additively. Such faults may manifest themselves in aircraft as, for example, control surface oscillations, control surface runaway, and sensor drift. The set of necessary and sufficient conditions presented in this paper are general, and apply when a class of time-varying faults affects arbitrary combinations of actuators and sensors. The results in the main theorems are illustrated by two case studies, which provide some insight into how the conditions may be used to check the theoretical identifiability of fault configurations of interest for a given system. It is shown that while state augmentation can be used to identify certain fault configurations, other fault configurations are theoretically impossible to identify using state augmentation, giving practitioners valuable insight into such situations. That is, the limitations of state augmentation for a given system and configuration of faults are made explicit. Another limitation of model-based methods is that there can be large numbers of fault configurations, thus making identification of all possible configurations impractical. However, the theoretical identifiability of known, credible fault configurations can be tested using the theorems presented in this paper, which can then assist the efforts of fault identification practitioners.
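The state-augmentation device itself is a textbook construction, written here for the constant-bias special case; the time-varying fault class of the paper replaces the identity fault dynamics with a known fault model. The notation below is assumed for illustration, not copied from the paper.

    % Plant with additive faults f_k entering through F (actuators) and
    % E (sensors); stacking f_k into the state gives an augmented system.
    \begin{aligned}
    x_{k+1} &= A x_k + B u_k + F f_k, & y_k &= C x_k + E f_k,\\
    \begin{bmatrix} x_{k+1}\\ f_{k+1} \end{bmatrix}
      &= \begin{bmatrix} A & F\\ 0 & I \end{bmatrix}
         \begin{bmatrix} x_k\\ f_k \end{bmatrix}
       + \begin{bmatrix} B\\ 0 \end{bmatrix} u_k,
      & y_k &= \begin{bmatrix} C & E \end{bmatrix}
         \begin{bmatrix} x_k\\ f_k \end{bmatrix}.
    \end{aligned}

Roughly speaking, identifiability questions for a fault configuration then become observability questions for the augmented pair, which is the kind of condition the paper's theorems make precise.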
Real-time inversions for finite fault slip models and rupture geometry based on high-rate GPS data
Minson, Sarah E.; Murray, Jessica R.; Langbein, John O.; Gomberg, Joan S.
2015-01-01
We present an inversion strategy capable of using real-time high-rate GPS data to simultaneously solve for a distributed slip model and fault geometry in real time as a rupture unfolds. We employ Bayesian inference to find the optimal fault geometry and the distribution of possible slip models for that geometry using a simple analytical solution. By adopting an analytical Bayesian approach, we can solve this complex inversion problem (including calculating the uncertainties on our results) in real time. Furthermore, since the joint inversion for distributed slip and fault geometry can be computed in real time, the time required to obtain a source model of the earthquake does not depend on the computational cost. Instead, the time required is controlled by the duration of the rupture and the time required for information to propagate from the source to the receivers. We apply our modeling approach, called Bayesian Evidence-based Fault Orientation and Real-time Earthquake Slip, to the 2011 Tohoku-oki earthquake, 2003 Tokachi-oki earthquake, and a simulated Hayward fault earthquake. In all three cases, the inversion recovers the magnitude, spatial distribution of slip, and fault geometry in real time. Since our inversion relies on static offsets estimated from real-time high-rate GPS data, we also present performance tests of various approaches to estimating quasi-static offsets in real time. We find that the raw high-rate time series are the best data to use for determining the moment magnitude of the event, but slightly smoothing the raw time series helps stabilize the inversion for fault geometry.
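The analytical Bayesian step has the flavor of a closed-form Gaussian linear inversion. Below is a minimal sketch under assumed independent Gaussian prior and noise; the Green's-function matrix G, the names, and the scalar variances are illustrative assumptions, not the actual implementation.

    import numpy as np

    def gaussian_slip_posterior(G, d, sigma_d, sigma_m):
        """G: Green's-function matrix (n_obs x n_slip); d: static offsets.
        Posterior mean and covariance for slip m with prior N(0, sigma_m^2 I)
        and observation noise N(0, sigma_d^2 I), for d = G m + noise."""
        n = G.shape[1]
        precision = G.T @ G / sigma_d**2 + np.eye(n) / sigma_m**2
        cov = np.linalg.inv(precision)
        mean = cov @ G.T @ d / sigma_d**2
        return mean, cov

Scanning a small set of candidate fault geometries and comparing their Bayesian evidence would then select the optimal geometry, which is the role the fault-geometry search plays in the strategy described above.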
NASA Technical Reports Server (NTRS)
Durkin, John; Schlegelmilch, Richard; Tallo, Donald
1992-01-01
LeRC has recently completed the design of a Ka-band satellite transponder system, as part of the Advanced Communication Technology Satellite (ACTS) System. To enhance the reliability of this satellite, NASA funded the University of Akron to explore the application of an expert system to provide the transponder with an autonomous diagnosis capability. The result of this research was the development of a prototype diagnosis expert system called FIDEX (fault isolation and diagnosis expert). FIDEX is a frame-based expert system that was developed in the NEXPERT Object development environment by Neuron Data, Inc. It is a Microsoft Windows version 3.0 application, and was designed to operate on an Intel i80386-based personal computer system.
Graph-based real-time fault diagnostics
NASA Technical Reports Server (NTRS)
Padalkar, S.; Karsai, G.; Sztipanovits, J.
1988-01-01
A real-time fault detection and diagnosis capability is absolutely crucial in the design of large-scale space systems. Some of the existing AI-based fault diagnostic techniques like expert systems and qualitative modelling are frequently ill-suited for this purpose. Expert systems are often inadequately structured, difficult to validate and suffer from knowledge acquisition bottlenecks. Qualitative modelling techniques sometimes generate a large number of failure source alternatives, thus hampering speedy diagnosis. In this paper we present a graph-based technique which is well suited for real-time fault diagnosis, structured knowledge representation and acquisition, and testing and validation. A Hierarchical Fault Model of the system to be diagnosed is developed. At each level of hierarchy, there exist fault propagation digraphs denoting causal relations between failure modes of subsystems. The edges of such a digraph are weighted with fault propagation time intervals. Efficient and restartable graph algorithms are used for on-line speedy identification of failure source components.
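The flavor of the digraph search can be sketched as follows: each edge carries a [tmin, tmax] propagation interval, and a candidate failure source is kept only if every observed alarm time falls inside the time window propagated from that source. This is a deliberate simplification of the hierarchical scheme (single level, first-found path bounds only), and all names are illustrative.

    def consistent_sources(graph, alarms):
        """graph: {node: [(successor, tmin, tmax), ...]};
        alarms: {node: observed alarm time}. Returns candidate sources whose
        propagation-time windows contain every observed alarm."""
        def windows(src, t0):
            win = {src: (t0, t0)}
            stack = [src]
            while stack:
                n = stack.pop()
                lo, hi = win[n]
                for succ, tmin, tmax in graph.get(n, []):
                    if succ not in win:        # first-found path bounds only
                        win[succ] = (lo + tmin, hi + tmax)
                        stack.append(succ)
            return win
        candidates = []
        for src in graph:
            if src not in alarms:
                continue
            win = windows(src, alarms[src])
            if all(n in win and win[n][0] <= t <= win[n][1]
                   for n, t in alarms.items()):
                candidates.append(src)
        return candidates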
NASA Astrophysics Data System (ADS)
Ai, Yan-Ting; Guan, Jiao-Yue; Fei, Cheng-Wei; Tian, Jing; Zhang, Feng-Ling
2017-05-01
To monitor the operating status of rolling bearings with casings efficiently and accurately in real time, a fusion method based on n-dimensional characteristic parameter distance (n-DCPD) is proposed for rolling bearing fault diagnosis using two types of signals: vibration and acoustic emission. The n-DCPD was investigated based on four information entropies (singular spectrum entropy in the time domain, power spectrum entropy in the frequency domain, and wavelet space characteristic spectrum entropy and wavelet energy spectrum entropy in the time-frequency domain), and the basic idea of the fusion information entropy fault diagnosis method with n-DCPD is given. Using a rotor simulation test rig, vibration and acoustic emission signals for six rolling bearing conditions (ball fault, inner race fault, outer race fault, inner-ball faults, inner-outer faults, and normal) were collected under different operating conditions, with emphasis on rotation speeds from 800 rpm to 2000 rpm. With the proposed fusion information entropy method with n-DCPD, diagnosis of the rolling bearing faults was completed. The fault diagnosis results show that the fusion entropy method achieves high precision in the recognition of rolling bearing faults. This study provides a novel and useful methodology for the fault diagnosis of aeroengine rolling bearings.
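Two of the four entropies, and the distance-based decision, can be sketched as follows; the wavelet-domain entropies are omitted for brevity, and the function names, window length, and plain Euclidean distance are assumptions for illustration rather than the paper's exact definitions.

    import numpy as np

    def spectrum_entropy(values):
        p = np.abs(values) / np.sum(np.abs(values))
        p = p[p > 0]
        return float(-np.sum(p * np.log(p)))

    def feature_vector(signal, window=32):
        # power-spectrum entropy (frequency domain) and singular-spectrum
        # entropy (time domain); signal must be longer than `window`
        psd = np.abs(np.fft.rfft(signal)) ** 2
        traj = np.lib.stride_tricks.sliding_window_view(signal, window)
        sv = np.linalg.svd(traj, compute_uv=False)
        return np.array([spectrum_entropy(psd), spectrum_entropy(sv)])

    def diagnose(signal, references):
        """references: {fault_label: mean feature vector from training data};
        the class at minimum characteristic-parameter distance wins."""
        f = feature_vector(signal)
        return min(references, key=lambda k: float(np.linalg.norm(f - references[k])))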
FDI and Accommodation Using NN Based Techniques
NASA Astrophysics Data System (ADS)
Garcia, Ramon Ferreiro; de Miguel Catoira, Alberto; Sanz, Beatriz Ferreiro
Dynamic backpropagation neural networks are applied extensively to closed-loop control FDI (fault detection and isolation) tasks. The process dynamics are mapped by a trained backpropagation NN, which is applied to residual generation. Process supervision is then applied to discriminate faults in process sensors and process plant parameters. A rule-based expert system implements the decision-making task and the corresponding solution in terms of fault accommodation and/or reconfiguration. Results show an efficient and robust FDI system that could be used as the core of a SCADA system, or alternatively as a complementary supervision tool operating in parallel with the SCADA, when applied to a heat exchanger.
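A minimal sketch of the residual-generation step: the trained network serves as a one-step-ahead process model, and residuals leaving a band flag a fault. The `model` callable and the threshold test are assumptions; the paper's rule-based decision making is reduced here to a single threshold.

    import numpy as np

    def residuals(model, u, y):
        """model(u_k, y_k) -> predicted y_{k+1}; u, y: measured input and
        output sequences of equal length. Residual r_k = measured - predicted."""
        r = np.empty(len(y) - 1)
        for k in range(len(y) - 1):
            r[k] = y[k + 1] - model(u[k], y[k])
        return r

    def alarm_indices(r, threshold):
        # time steps where the residual leaves the no-fault band
        return np.flatnonzero(np.abs(r) > threshold)

In the full scheme, the pattern of which residuals fire (and in which direction) is what lets the rule base separate sensor faults from plant-parameter faults.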
Using Decision Trees to Detect and Isolate Simulated Leaks in the J-2X Rocket Engine
NASA Technical Reports Server (NTRS)
Schwabacher, Mark A.; Aguilar, Robert; Figueroa, Fernando F.
2009-01-01
The goal of this work was to use data-driven methods to automatically detect and isolate faults in the J-2X rocket engine. It was decided to use decision trees, since they tend to be easier to interpret than other data-driven methods. The decision tree algorithm automatically "learns" a decision tree by performing a search through the space of possible decision trees to find one that fits the training data. The particular decision tree algorithm used is known as C4.5. Simulated J-2X data from a high-fidelity simulator developed at Pratt & Whitney Rocketdyne and known as the Detailed Real-Time Model (DRTM) was used to "train" and test the decision tree. Fifty-six DRTM simulations were performed for this purpose, with different leak sizes, different leak locations, and different times of leak onset. To make the simulations as realistic as possible, they included simulated sensor noise, and included a gradual degradation in both fuel and oxidizer turbine efficiency. A decision tree was trained using 11 of these simulations, and tested using the remaining 45 simulations. In the training phase, the C4.5 algorithm was provided with labeled examples of data from nominal operation and data including leaks in each leak location. From the data, it "learned" a decision tree that can classify unseen data as having no leak or having a leak in one of the five leak locations. In the test phase, the decision tree produced very low false alarm rates and low missed detection rates on the unseen data. It had very good fault isolation rates for three of the five simulated leak locations, but it tended to confuse the remaining two locations, perhaps because a large leak at one of these two locations can look very similar to a small leak at the other location.
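For flavor, the train/classify loop looks like the following. Note the hedges: scikit-learn's tree learner is CART rather than C4.5, and the arrays here are random stand-ins for DRTM sensor snapshots, so this shows the workflow only, not the paper's data or results.

    import numpy as np
    from sklearn.tree import DecisionTreeClassifier  # assumes scikit-learn installed

    rng = np.random.default_rng(0)
    X_train = rng.normal(size=(500, 8))     # rows: samples; cols: sensor channels
    y_train = rng.integers(0, 6, size=500)  # 0 = no leak, 1..5 = leak locations

    clf = DecisionTreeClassifier(max_depth=6).fit(X_train, y_train)
    print(clf.predict(rng.normal(size=(3, 8))))  # predicted leak locations

The interpretability advantage mentioned above comes from the learned tree itself: each path from root to leaf is a readable chain of sensor-threshold tests ending in a leak-location verdict.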
NASA Astrophysics Data System (ADS)
Courgeon, S.; Jorry, S. J.; Jouet, G.; Camoin, G.; BouDagher-Fadel, M. K.; Bachèlery, P.; Caline, B.; Boichard, R.; Révillon, S.; Thomas, Y.; Thereau, E.; Guérin, C.
2017-06-01
Understanding the impact of tectonic activity and volcanism on the long-term (i.e., millions of years) evolution of shallow-water carbonate platforms represents a major issue from both industrial and academic perspectives. The southern central Mozambique Channel is characterized by a 100 km-long volcanic ridge hosting two guyots (the Hall and Jaguar banks) and a modern atoll (Bassas da India) fringed by a large terrace. Dredge sampling, geophysical acquisitions and submarine videos carried out during recent oceanographic cruises revealed that these submarine flat-top seamounts correspond to karstified and drowned shallow-water carbonate platforms largely covered by volcanic material and structured by a dense network of normal faults. Microfacies and well-constrained stratigraphic data indicate that these carbonate platforms developed in shallow-water tropical environments during Miocene times and were characterized by biological assemblages dominated by corals, larger benthic foraminifera, and red and green algae. The drowning of these isolated carbonate platforms is revealed by the deposition of outer-shelf sediments during the Early Pliocene and seems closely linked to (1) volcanic activity typified by the establishment of wide lava flow complexes, and (2) extensional tectonic deformation associated with high-offset normal faults dividing the flat-top seamounts into distinctive structural blocks. Explosive volcanic activity also affected platform carbonates and was responsible for the formation of crater(s) and the deposition of tuff layers including carbonate fragments. Shallow-water carbonate sedimentation resumed during Late Neogene time with the colonization of topographic highs inherited from tectonic deformation and volcanic accretion. The latest carbonate developments ultimately led to the formation of the modern Bassas da India atoll. The geological history of isolated carbonate platforms from the southern Mozambique Channel represents a new case illustrating the major impact of tectonic and volcanic activity on the long-term evolution of shallow-water carbonate platforms.
Hardware fault insertion and instrumentation system: Mechanization and validation
NASA Technical Reports Server (NTRS)
Benson, J. W.
1987-01-01
Automated test capability for extensive low-level hardware fault insertion testing was developed. The test capability is used to calibrate fault detection coverage and associated latency times as relevant to projecting overall system reliability. Described are modifications made to the NASA Ames Reconfigurable Digital Flight Control System (RDFCS) Facility to fully automate the total test loop involving the Draper Laboratory Fault Injector Unit. The automated capability included the application of sequences of simulated low-level hardware faults, the precise measurement of fault latency times, the identification of fault symptoms, and bulk storage of test case results. A PDP-11/60 served as test coordinator, and a PDP-11/04 as an instrumentation device. The fault injector was controlled by application test software in the PDP-11/60, rather than by manual commands from a terminal keyboard. The time base was developed especially for this application to use a variety of signal sources in the system simulator.
NASA Technical Reports Server (NTRS)
Lee, Charles; Alena, Richard L.; Robinson, Peter
2004-01-01
We started from an ISS fault-tree example and migrated to decision trees, presenting a method to convert fault trees into decision trees. The method shows that visualizing the root cause of a fault becomes easier and that tree manipulation becomes more programmatic via available decision-tree programs. The visualization of decision trees for diagnostics is straightforward and easy to understand. For ISS real-time fault diagnosis, the status of the systems can be shown by running the signals through the trees and seeing where they stop. Another advantage of using decision trees is that the trees can learn fault patterns and predict future faults from historical data. The learning works not only on static data sets but can also be online: by accumulating real-time data sets, the decision trees can gain and store fault patterns and recognize them when they recur.
Advanced Ground Systems Maintenance Enterprise Architecture Project
NASA Technical Reports Server (NTRS)
Harp, Janicce Leshay
2014-01-01
The project implements an architecture for delivery of integrated health management capabilities for the 21st Century launch complex. Capabilities include anomaly detection, fault isolation, prognostics and physics-based diagnostics.
The SSM/PMAD automated test bed project
NASA Technical Reports Server (NTRS)
Lollar, Louis F.
1991-01-01
The Space Station Module/Power Management and Distribution (SSM/PMAD) autonomous subsystem project was initiated in 1984. The project's goal has been to design and develop an autonomous, user-supportive PMAD test bed simulating the SSF Hab/Lab module(s). An eighteen kilowatt SSM/PMAD test bed model with a high degree of automated operation has been developed. This advanced automation test bed contains three expert/knowledge based systems that interact with one another and with other more conventional software residing in up to eight distributed 386-based microcomputers to perform the necessary tasks of real-time and near real-time load scheduling, dynamic load prioritizing, and fault detection, isolation, and recovery (FDIR).
Calculating flux to predict future cave radon concentrations.
Rowberry, Matt D; Martí, Xavi; Frontera, Carlos; Van De Wiel, Marco J; Briestenský, Miloš
2016-06-01
Cave radon concentration measurements reflect the outcome of a perpetual competition which pitches flux against ventilation and radioactive decay. The mass balance equations used to model changes in radon concentration through time routinely treat flux as a constant. This mathematical simplification is acceptable as a first order approximation despite the fact that it sidesteps an intrinsic geological problem: the majority of radon entering a cavity is exhaled as a result of advection along crustal discontinuities whose motions are inhomogeneous in both time and space. In this paper the dynamic nature of flux is investigated and the results are used to predict cave radon concentration for successive iterations. The first part of our numerical modelling procedure focuses on calculating cave air flow velocity while the second part isolates flux in a mass balance equation to simulate real time dependence among the variables. It is then possible to use this information to deliver an expression for computing cave radon concentration for successive iterations. The dynamic variables in the numerical model are represented by the outer temperature, the inner temperature, and the radon concentration while the static variables are represented by the radioactive decay constant and a range of parameters related to geometry of the cavity. Input data were recorded at Driny Cave in the Little Carpathians Mountains of western Slovakia. Here the cave passages have developed along splays of the NE-SW striking Smolenice Fault and a series of transverse faults striking NW-SE. Independent experimental observations of fault slip are provided by three permanently installed mechanical extensometers. Our numerical modelling has revealed four important flux anomalies between January 2010 and August 2011. Each of these flux anomalies was preceded by conspicuous fault slip anomalies. The mathematical procedure outlined in this paper will help to improve our understanding of radon migration along crustal discontinuities and its subsequent exhalation into the atmosphere. Furthermore, as it is possible to supply the model with continuous data, future research will focus on establishing a series of underground monitoring sites with the aim of generating the first real time global radon flux maps.
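The mass balance being manipulated is, in its standard single-cavity form (notation assumed here, not copied from the paper):

    % C: radon concentration, \Phi: flux into the cavity, V: cavity volume,
    % \lambda: radioactive decay constant, k_v: ventilation rate.
    \frac{dC}{dt} = \frac{\Phi(t)}{V} - \bigl(\lambda + k_v(t)\bigr)\, C(t)
    \qquad\Longrightarrow\qquad
    \Phi_i \approx V \left( \frac{C_{i+1} - C_i}{\Delta t}
          + \bigl(\lambda + k_{v,i}\bigr)\, C_i \right)

Solving for the flux at each time step, rather than holding it constant, is what allows the model to track flux anomalies and then turn the expression around to predict the concentration for successive iterations.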
Real-time fault diagnosis for propulsion systems
NASA Technical Reports Server (NTRS)
Merrill, Walter C.; Guo, Ten-Huei; Delaat, John C.; Duyar, Ahmet
1991-01-01
Current research toward real time fault diagnosis for propulsion systems at NASA-Lewis is described. The research is being applied to both air breathing and rocket propulsion systems. Topics include fault detection methods including neural networks, system modeling, and real time implementations.
Dynamical Instability Produces Transform Faults at Mid-Ocean Ridges
NASA Astrophysics Data System (ADS)
Gerya, Taras
2010-08-01
Transform faults at mid-ocean ridges—one of the most striking, yet enigmatic features of terrestrial plate tectonics—are considered to be the inherited product of preexisting fault structures. Ridge offsets along these faults therefore should remain constant with time. Here, numerical models suggest that transform faults are actively developing and result from dynamical instability of constructive plate boundaries, irrespective of previous structure. Boundary instability from asymmetric plate growth can spontaneously start in alternate directions along successive ridge sections; the resultant curved ridges become transform faults within a few million years. Fracture-related rheological weakening stabilizes ridge-parallel detachment faults. Offsets along the transform faults change continuously with time by asymmetric plate growth and discontinuously by ridge jumps.
NASA Technical Reports Server (NTRS)
Ferrell, Bob A.; Lewis, Mark E.; Perotti, Jose M.; Brown, Barbara L.; Oostdyk, Rebecca L.; Goetz, Jesse W.
2010-01-01
This paper's main purpose is to detail issues and lessons learned regarding designing, integrating, and implementing Fault Detection, Isolation, and Recovery (FDIR) for Constellation Exploration Program (CxP) Ground Operations at Kennedy Space Center (KSC). As part of the overall implementation of the National Aeronautics and Space Administration's (NASA's) CxP, FDIR is being implemented in three main components of the program (Ares, Orion, and Ground Operations/Processing). While not initially part of the design baseline for CxP Ground Operations, NASA felt that FDIR was important enough to develop that NASA's Exploration Systems Mission Directorate's (ESMD's) Exploration Technology Development Program (ETDP) initiated a task for it under their Integrated System Health Management (ISHM) research area. This task, referred to as the FDIR project, is a multi-year, multi-center effort. The primary purpose of the FDIR project is to develop a prototype and pathway upon which Fault Detection and Isolation (FDI) may be transitioned into the Ground Operations baseline. Currently, the Qualtech Systems Inc. (QSI) Commercial Off The Shelf (COTS) software products Testability Engineering and Maintenance System (TEAMS) Designer and TEAMS RDS/RT are being utilized in the implementation of FDI within the FDIR project. The TEAMS Designer COTS software product is being utilized to model the system with Functional Fault Models (FFMs). A limited set of systems in Ground Operations is being modeled by the FDIR project, and the entire Ares launch vehicle is being modeled under the Functional Fault Analysis (FFA) project at Marshall Space Flight Center (MSFC). Integration of the Ares FFMs and the Ground Processing FFMs is also being done under the FDIR project, utilizing the TEAMS Designer COTS software product. One of the most significant challenges related to integration is to ensure that FFMs developed by different organizations can be integrated easily and without errors. Software Interface Control Documents (ICDs) for the FFMs and their usage will be addressed as the solution to this issue. In particular, the advantages and disadvantages of these ICDs across physically separate development groups will be delineated.
Software Testbed for Developing and Evaluating Integrated Autonomous Subsystems
NASA Technical Reports Server (NTRS)
Ong, James; Remolina, Emilio; Prompt, Axel; Robinson, Peter; Sweet, Adam; Nishikawa, David
2015-01-01
To implement fault tolerant autonomy in future space systems, it will be necessary to integrate planning, adaptive control, and state estimation subsystems. However, integrating these subsystems is difficult, time-consuming, and error-prone. This paper describes Intelliface/ADAPT, a software testbed that helps researchers develop and test alternative strategies for integrating planning, execution, and diagnosis subsystems more quickly and easily. The testbed's architecture, graphical data displays, and implementations of the integrated subsystems support easy plug and play of alternate components to support research and development in fault-tolerant control of autonomous vehicles and operations support systems. Intelliface/ADAPT controls NASA's Advanced Diagnostics and Prognostics Testbed (ADAPT), which comprises batteries, electrical loads (fans, pumps, and lights), relays, circuit breakers, inverters, and sensors. During plan execution, an experimenter can inject faults into the ADAPT testbed by tripping circuit breakers, changing fan speed settings, and closing valves to restrict fluid flow. The diagnostic subsystem, based on NASA's Hybrid Diagnosis Engine (HyDE), detects and isolates these faults to determine the new state of the plant, ADAPT. Intelliface/ADAPT then updates its model of the ADAPT system's resources and determines whether the current plan can be executed using the reduced resources. If not, the planning subsystem generates a new plan that reschedules tasks, reconfigures ADAPT, and reassigns the use of ADAPT resources as needed to work around the fault. The resource model, planning domain model, and planning goals are expressed using NASA's Action Notation Modeling Language (ANML). Parts of the ANML model are generated automatically, and other parts are constructed by hand using the Planning Model Integrated Development Environment, a visual Eclipse-based IDE that accelerates ANML model development. Because native ANML planners are currently under development and not yet sufficiently capable, the ANML model is translated into the New Domain Definition Language (NDDL) and sent to NASA's EUROPA planning system for plan generation. The adaptive controller executes the new plan, using augmented, hierarchical finite state machines to select and sequence actions based on the state of the ADAPT system. Real-time sensor data, commands, and plans are displayed in information-dense arrays of timelines and graphs that zoom and scroll in unison. A dynamic schematic display uses color to show the real-time fault state and utilization of the system components and resources. An execution manager coordinates the activities of the other subsystems. The subsystems are integrated using the Internet Communications Engine (ICE), an object-oriented toolkit for building distributed applications.
Fault detection in rotor bearing systems using time frequency techniques
NASA Astrophysics Data System (ADS)
Chandra, N. Harish; Sekhar, A. S.
2016-05-01
Faults such as misalignment, rotor cracks and rotor-to-stator rub can exist collectively in rotor bearing systems. It is an important task for rotor dynamic personnel to monitor and detect faults in rotating machinery. In this paper, the rotor startup vibrations are utilized to solve the fault identification problem using time-frequency techniques. Numerical simulations are performed through finite element analysis of the rotor bearing system with individual and collective combinations of the faults mentioned above. Three signal processing tools, namely the Short Time Fourier Transform (STFT), the Continuous Wavelet Transform (CWT) and the Hilbert Huang Transform (HHT), are compared to evaluate their detection performance. The effect of additive noise (varying the signal-to-noise ratio, SNR) on the three time-frequency techniques is presented. The comparative study focuses on detecting the lowest possible level of induced fault and on the computation time consumed. The computation time consumed by HHT is much less than that of CWT-based diagnosis. However, for noisy data CWT is preferred over HHT. To identify fault characteristics using wavelets, a procedure to adjust the resolution of the mother wavelet is presented in detail. Experiments were conducted to obtain run-up data from a rotor bearing setup for diagnosis of shaft misalignment and rotor-stator rubbing faults.
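A sketch of how the two linear time-frequency maps in such a comparison are typically computed in Python; PyWavelets is assumed available for the CWT, the chirp is a stand-in for run-up data, and HHT is omitted because it requires a third-party EMD implementation.

    import numpy as np
    from scipy import signal
    import pywt  # PyWavelets, assumed installed

    fs = 2000.0
    t = np.arange(0.0, 4.0, 1.0 / fs)
    x = np.sin(2 * np.pi * 10.0 * t**2)        # run-up: frequency ramps 0 -> 80 Hz
    x += 0.2 * np.random.default_rng(1).normal(size=t.size)  # sensor noise

    f, tau, Z = signal.stft(x, fs=fs, nperseg=256)                 # fixed resolution
    coefs, freqs = pywt.cwt(x, np.arange(4, 64), "morl", 1.0 / fs)  # multi-scale

The fixed STFT window trades time against frequency resolution globally, whereas the CWT adapts resolution with scale, which is consistent with the observation above that CWT copes better with noisy run-up data at the cost of computation time.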
Jiang, Ye; Hu, Qinglei; Ma, Guangfu
2010-01-01
In this paper, a robust adaptive fault-tolerant control approach to attitude tracking of flexible spacecraft is proposed for use in situations when there are reaction wheel/actuator failures, persistent bounded disturbances and unknown inertia parameter uncertainties. The controller is designed based on an adaptive backstepping sliding mode control scheme, and a sufficient condition under which this control law can render the system semi-globally input-to-state stable is also provided, such that the closed-loop system is robust with respect to any disturbance within a quantifiable restriction on the amplitude, as well as the set of initial conditions, if the control gains are designed appropriately. Moreover, in the design, the control law does not need a fault detection and isolation mechanism even if the failure time instants, patterns and values of the actuator failures are unknown to the designers, as motivated from a practical spacecraft control application. In addition to detailed derivations of the new controller design and a rigorous sketch of all the associated stability and attitude error convergence proofs, illustrative simulation results of an application to flexible spacecraft show that highly precise attitude control and vibration suppression are successfully achieved under various scenarios of control-effectiveness failures.
Volcanic passive margins: another way to break up continents
Geoffroy, L.; Burov, E. B.; Werner, P.
2015-01-01
Two major types of passive margins are recognized, i.e. volcanic and non-volcanic, without proposing distinctive mechanisms for their formation. Volcanic passive margins are associated with the extrusion and intrusion of large volumes of magma, predominantly mafic, and represent distinctive features of Large Igneous Provinces, in which regional fissural volcanism predates localized syn-magmatic break-up of the lithosphere. In contrast with non-volcanic margins, continentward-dipping detachment faults accommodate crustal necking at both conjugate volcanic margins. These faults root on a two-layer deformed ductile crust that appears to be partly of igneous nature. This lower crust is exhumed up to the bottom of the syn-extension extrusives at the outer parts of the margin. Our numerical modelling suggests that strengthening of deep continental crust during early magmatic stages provokes a divergent flow of the ductile lithosphere away from a central continental block, which becomes thinner with time due to the flow-induced mechanical erosion acting at its base. Crustal-scale faults dipping continentward are rooted over this flowing material, thus isolating micro-continents within the future oceanic domain. Pure-shear type deformation affects the bulk lithosphere at volcanic passive margins until continental breakup, and the geometry of the margin is closely related to the dynamics of an active and melting mantle.
NASA Technical Reports Server (NTRS)
Toms, David; Hadden, George D.; Harrington, Jim
1990-01-01
The Maintenance and Diagnostic System (MDS) that is being developed at Honeywell to enhance the Fault Detection, Isolation and Recovery system (FDIR) for the Attitude Determination and Control System on Space Station Freedom is described. The MDS demonstrates ways that AI-based techniques can be used to improve the maintainability and safety of the Station by helping to resolve fault anomalies that cannot be fully determined by built-in test, by providing predictive maintenance capabilities, and by providing expert maintenance assistance. The MDS will address the problems associated with reasoning about dynamic, continuous information versus only about static data, the concerns of porting software based on AI techniques to embedded targets, and the difficulties associated with real-time response. An initial prototype of the MDS was built. The prototype executes on Sun and IBM PS/2 hardware and is implemented in Common Lisp; further work will evaluate its functionality and develop mechanisms to port the code to Ada.
Multi-focus and multi-level techniques for visualization and analysis of networks with thematic data
NASA Astrophysics Data System (ADS)
Cossalter, Michele; Mengshoel, Ole J.; Selker, Ted
2013-01-01
Information-rich data sets bring several challenges in the areas of visualization and analysis, even when associated with node-link network visualizations. This paper presents an integration of multi-focus and multi-level techniques that enable interactive, multi-step comparisons in node-link networks. We describe NetEx, a visualization tool that enables users to simultaneously explore different parts of a network and its thematic data, such as time series or conditional probability tables. NetEx, implemented as a Cytoscape plug-in, has been applied to the analysis of electrical power networks, Bayesian networks, and the Enron e-mail repository. In this paper we briefly discuss visualization and analysis of the Enron social network, but focus on data from an electrical power network. Specifically, we demonstrate how NetEx supports the analytical task of electrical power system fault diagnosis. Results from a user study with 25 subjects suggest that NetEx enables more accurate isolation of complex faults compared to an especially designed software tool.
Measurement and analysis of workload effects on fault latency in real-time systems
NASA Technical Reports Server (NTRS)
Woodbury, Michael H.; Shin, Kang G.
1990-01-01
The authors demonstrate the need to address fault latency in highly reliable real-time control computer systems. It is noted that the effectiveness of all known recovery mechanisms is greatly reduced in the presence of multiple latent faults. The presence of multiple latent faults increases the possibility of multiple errors, which could result in coverage failure. The authors present experimental evidence indicating that the duration of fault latency is dependent on workload. A synthetic workload generator is used to vary the workload, and a hardware fault injector is applied to inject transient faults of varying durations. This method makes it possible to derive the distribution of fault latency duration. Experimental results obtained from the fault-tolerant multiprocessor at the NASA Airlab are presented and discussed.
The Design of a Fault-Tolerant COTS-Based Bus Architecture
NASA Technical Reports Server (NTRS)
Chau, Savio N.; Alkalai, Leon; Burt, John B.; Tai, Ann T.
1999-01-01
In this paper, we report our experiences and findings on the design of a fault-tolerant bus architecture comprised of two COTS buses, the IEEE 1394 and the I2C. This fault-tolerant bus is the backbone system bus for the avionics architecture of the X2000 program at the Jet Propulsion Laboratory. COTS buses are attractive because of the availability of low cost commercial products. However, they are not specifically designed for highly reliable applications such as long-life deep-space missions. The X2000 design team has devised a multi-level fault tolerance approach to compensate for this shortcoming of COTS buses. First, the approach enhances the fault tolerance capabilities of the IEEE 1394 and I2C buses by adding a layer of fault handling hardware and software. Second, algorithms are developed to enable the IEEE 1394 and I2C buses to assist each other in isolating and recovering from faults. Third, the set of IEEE 1394 and I2C buses is duplicated to further enhance system reliability. The X2000 design team has paid special attention to guarantee that all fault tolerance provisions will not cause the bus design to deviate from the commercial standard specifications. Otherwise, the economic attractiveness of using COTS would be diminished. The hardware and software design of the X2000 fault-tolerant bus is being implemented, and flight hardware will be delivered to the ST4 and Europa Orbiter missions.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bhatti, A.A.
1990-04-01
This paper examines the effects of primary and secondary fault quantities, as well as of mutual couplings of neighboring circuits, on the sensitivity of operation and threshold settings of a microcomputer-based differential protection of UHV lines under selective phase switching. Microcomputer-based selective phase switching allows the disconnection of the minimum number of phases involved in a fault and requires the autoreclosing of these phases immediately after the extinction of the secondary arc. During a primary fault, a heavy current contribution to the healthy phases tends to cause unwanted tripping. Faulty phases that are physically disconnected constitute an isolated fault which, being coupled to the system, affects the current and voltage levels of the healthy phases still retained in the system and may cause unwanted tripping. The microcomputer-based differential protection appears to have poor performance when applied to uncompensated lines employing selective pole switching.
Autonomous power expert system
NASA Technical Reports Server (NTRS)
Walters, Jerry L.; Petrik, Edward J.; Roth, Mary Ellen; Truong, Long Van; Quinn, Todd; Krawczonek, Walter M.
1990-01-01
The Autonomous Power Expert (APEX) system was designed to monitor and diagnose fault conditions that occur within the Space Station Freedom Electrical Power System (SSF/EPS) Testbed. APEX is designed to interface with SSF/EPS testbed power management controllers to provide enhanced autonomous operation and control capability. The APEX architecture consists of three components: (1) a rule-based expert system, (2) a testbed data acquisition interface, and (3) a power scheduler interface. Fault detection, fault isolation, justification of probable causes, recommended actions, and incipient fault analysis are the main functions of the expert system component. The data acquisition component requests and receives pertinent parametric values from the EPS testbed and asserts the values into a knowledge base. Power load profile information is obtained from a remote scheduler through the power scheduler interface component. The current APEX design and development work is discussed. Operation and use of APEX by way of the user interface screens is also covered.
1983-04-01
... diagnostic software based on a "fault tree" representation of the M65 ThS, developed to bridge the gap in diagnostics capability, was demonstrated in 1980 ... IFF (identification friend or foe) has much lower reliability than TSQ-73-peculiar hardware. Thus, as in other examples, reported readiness does not reflect ...
NASA Technical Reports Server (NTRS)
Duyar, A.; Guo, T.-H.; Merrill, W.; Musgrave, J.
1992-01-01
In a previous study, Guo, Merrill and Duyar, 1990, reported a conceptual development of a fault detection and diagnosis system for actuation faults of the space shuttle main engine. This study, which is a continuation of the previous work, implements the developed fault detection and diagnosis scheme for the real time actuation fault diagnosis of the space shuttle main engine. The scheme will be used as an integral part of an intelligent control system demonstration experiment at NASA Lewis. The diagnosis system utilizes a model based method with real time identification and hypothesis testing for actuation, sensor, and performance degradation faults.
An architecture for object-oriented intelligent control of power systems in space
NASA Technical Reports Server (NTRS)
Holmquist, Sven G.; Jayaram, Prakash; Jansen, Ben H.
1993-01-01
A control system for autonomous distribution and control of electrical power during space missions is being developed. This system should free the astronauts from localizing faults and reconfiguring loads if problems with the power distribution and generation components occur. The control system uses an object-oriented simulation model of the power system and first principle knowledge to detect, identify, and isolate faults. Each power system component is represented as a separate object with knowledge of its normal behavior. The reasoning process takes place at three different levels of abstraction: the Physical Component Model (PCM) level, the Electrical Equivalent Model (EEM) level, and the Functional System Model (FSM) level, with the PCM the lowest level of abstraction and the FSM the highest. At the EEM level the power system components are reasoned about as their electrical equivalents, e.g., a resistive load is thought of as a resistor. However, at the PCM level detailed knowledge about the component's specific characteristics is taken into account. The FSM level models the system at the subsystem level, a level appropriate for reconfiguration and scheduling. The control system operates in two modes, a reactive and a proactive mode, simultaneously. In the reactive mode the control system receives measurement data from the power system and compares these values with values determined through simulation to detect the existence of a fault. The nature of the fault is then identified through a model-based reasoning process using mainly the EEM. Compound component models are constructed at the EEM level and used in the fault identification process. In the proactive mode the reasoning takes place at the PCM level. Individual components determine their future health status using a physical model and measured historical data. If changes in the health status seem imminent, the component warns the control system about its impending failure. The fault isolation process uses the FSM level for its reasoning base.
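A toy rendition of the two lower abstraction levels: the same component object answers an EEM-level query as its ideal electrical equivalent and a PCM-level query with device-specific detail. The class and method names are invented for illustration, not taken from the system described above.

    class ResistiveLoad:
        """A load reasoned about as a plain resistor at the EEM level and
        with a temperature-dependent resistance at the PCM level."""
        def __init__(self, r_nominal, temp_coeff=0.004):
            self.r_nominal = r_nominal
            self.temp_coeff = temp_coeff

        def expected_current_eem(self, volts):
            return volts / self.r_nominal          # ideal electrical equivalent

        def expected_current_pcm(self, volts, temp_c):
            r = self.r_nominal * (1.0 + self.temp_coeff * (temp_c - 25.0))
            return volts / r                       # device-specific detail

    def fault_suspected(load, volts, measured_amps, tol=0.05):
        # reactive mode: compare a measurement against the EEM prediction
        expected = load.expected_current_eem(volts)
        return abs(measured_amps - expected) > tol * abs(expected)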
Numerical modeling of fluid flow in a fault zone: a case of study from Majella Mountain (Italy).
NASA Astrophysics Data System (ADS)
Romano, Valentina; Battaglia, Maurizio; Bigi, Sabina; De'Haven Hyman, Jeffrey; Valocchi, Albert J.
2017-04-01
The study of fluid flow in fractured rocks plays a key role in reservoir management, including CO2 sequestration and waste isolation. We present a numerical model of fluid flow in a fault zone, based on field data acquired on Majella Mountain in the Central Apennines (Italy). This fault zone is considered a good analogue because of the massive fluid migration, in the form of tar, evident there. Faults are mechanical features and cause permeability heterogeneities in the upper crust, so they strongly influence fluid flow. The distribution of the main components (core, damage zone) can lead the fault zone to act as a conduit, a barrier, or a combined conduit-barrier system. We integrated existing information and our own structural surveys of the area to better identify the major fault features (e.g., types of fractures, statistical properties, geometrical and petro-physical characteristics). In our model the damage zones of the fault are described as a discretely fractured medium, and the core of the fault as a porous one. Our model utilizes the dfnWorks code, a parallelized computational suite developed at Los Alamos National Laboratory (LANL), which generates a three-dimensional Discrete Fracture Network (DFN) of the damage zones of the fault and characterizes its hydraulic parameters. The challenge of the study is the coupling between the discrete domain of the damage zones and the continuum domain of the core. The field investigations and the basic computational workflow are described, along with preliminary results of fluid flow simulation at the scale of the fault.
Advanced Ground Systems Maintenance Enterprise Architecture Project
NASA Technical Reports Server (NTRS)
Perotti, Jose M. (Compiler)
2015-01-01
The project implements an architecture for delivery of integrated health management capabilities for the 21st Century launch complex. The delivered capabilities include anomaly detection, fault isolation, prognostics and physics based diagnostics.
Is there a "blind" strike-slip fault at the southern end of the San Jacinto Fault system?
NASA Astrophysics Data System (ADS)
Tymofyeyeva, E.; Fialko, Y. A.
2015-12-01
We have studied the interseismic deformation at the southern end of the San Jacinto fault system using Interferometric Synthetic Aperture Radar (InSAR) and Global Positioning System (GPS) data. To complement the continuous GPS measurements from the PBO network, we have conducted campaign-style GPS surveys of 19 benchmarks along Highway 78 in the years 2012, 2013, and 2014. We processed the campaign GPS data using GAMIT to obtain horizontal velocities. The data show high velocity gradients east of the surface trace of the Coyote Creek Fault. We also processed InSAR data from the ascending and descending tracks of the ENVISAT mission between the years 2003 and 2010. The InSAR data were corrected for atmospheric artifacts using an iterative common point stacking method. We combined average velocities from different look angles to isolate the fault-parallel velocity field, and used fault-parallel velocities to compute strain rate. We filtered the data over a range of wavelengths prior to numerical differentiation, to reduce the effects of noise and to investigate both shallow and deep sources of deformation. At spatial wavelengths less than 2 km the strain rate data show prominent anomalies along the San Andreas and Superstition Hills faults, where shallow creep has been documented by previous studies. Similar anomalies are also observed along parts of the Coyote Creek Fault, San Felipe Fault, and an unmapped southern continuation of the Clark strand of the San Jacinto Fault. At wavelengths on the order of 20 km, we observe elevated strain rates concentrated east of the Coyote Creek Fault. The long-wavelength strain anomaly east of the Coyote Creek Fault, and the localized shallow creep observed in the short-wavelength strain rate data over the same area, suggest that there may be a "blind" segment of the Clark Fault that accommodates a significant portion of the deformation at the southern end of the San Jacinto fault system.
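A minimal numerical sketch of the strain-rate step described above: smooth a synthetic fault-parallel velocity profile at two wavelengths, then differentiate. The arctangent profile, locking depth, and the mapping from wavelength to filter width are assumptions for illustration only.

```python
import numpy as np
from scipy.ndimage import gaussian_filter1d

x = np.linspace(-30e3, 30e3, 601)              # distance across the fault (m)
v = (4e-3 / np.pi) * np.arctan(x / 10e3)       # synthetic interseismic velocity (m/yr)

dx = x[1] - x[0]
for wavelength in (2e3, 20e3):                 # short and long wavelengths, as in the study
    sigma = wavelength / dx                    # filter width in samples (assumed mapping)
    v_smooth = gaussian_filter1d(v, sigma)
    strain_rate = np.gradient(v_smooth, dx)    # spatial derivative -> strain rate (1/yr)
    print(f"{wavelength / 1e3:.0f} km filter: peak strain rate {strain_rate.max():.2e} /yr")
```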
Ground Motion Synthetics For Spontaneous Versus Prescribed Rupture On A 45° Thrust Fault
NASA Astrophysics Data System (ADS)
Gottschämmer, E.; Olsen, K. B.
We have compared prescribed (kinematic) and spontaneous dynamic rupture propagation on a 45° dipping thrust fault buried up to 5 km in a half-space model, as well as ground motions on the free surface for frequencies less than 1 Hz. The computations are carried out using a 3D finite-difference method with rate-and-state friction on a planar, 20 km by 20 km fault. We use a slip-weakening distance of 15 cm and a slip-velocity weakening distance of 9.2 cm/s, similar to those for the dynamic study of the 1994 M6.7 Northridge earthquake by Nielsen and Olsen (2000), which generated satisfactory fits to selected strong motion data in the San Fernando Valley. The prescribed rupture propagation was designed to mimic that of the dynamic simulation at depth in order to isolate the dynamic free-surface effects. In this way, the results reflect the dynamic (normal-stress) interaction with the free surface for various depths of burial of the fault. We find that the moment, peak slip, and peak sliprate for the rupture breaking the surface are increased by up to 60%, 80%, and 10%, respectively, compared to the values for the scenario buried 5 km. The inclusion of these effects increases the peak displacements and velocities above the fault by factors of up to 3.4 and 2.9 including the increase in moment due to normal-stress effects at the free surface, and up to 2.1 and 2.0 when scaled to a Northridge-size event with surface rupture. Similar differences were found by Aagaard et al. (2001). Significant dynamic effects on the ground motions include earlier arrival times caused by super-shear rupture velocities (break-out phases), in agreement with the dynamic finite-element simulations by Oglesby et al. (1998, 2000). The presence of shallow low-velocity layers tends to increase the rupture time and the sliprate. In particular, they promote earlier transitions to super-shear velocities and decrease the rupture velocity within the layers. Our results suggest that dynamic interaction with the free surface can significantly affect the ground motion for faults buried less than 1-3 km. We therefore recommend that strong ground motion for these scenarios be computed including such dynamic rupture effects.
M ≥ 7.0 earthquake recurrence on the San Andreas fault from a stress renewal model
Parsons, Thomas E.
2006-01-01
Forecasting M ≥ 7.0 San Andreas fault earthquakes requires an assessment of their expected frequency. I used a three-dimensional finite element model of California to calculate volumetric static stress drops from scenario M ≥ 7.0 earthquakes on three San Andreas fault sections. The ratio of stress drop to tectonic stressing rate derived from geodetic displacements yielded recovery times at points throughout the model volume. Under a renewal model, stress recovery times on ruptured fault planes can be a proxy for earthquake recurrence. I show curves of magnitude versus stress recovery time for three San Andreas fault sections. When stress recovery times were converted to expected M ≥ 7.0 earthquake frequencies, they fit Gutenberg-Richter relationships well matched to observed regional rates of M ≤ 6.0 earthquakes. Thus a stress-balanced model permits large earthquake Gutenberg-Richter behavior on an individual fault segment, though it does not require it. Modeled slip magnitudes and their expected frequencies were consistent with those observed at the Wrightwood paleoseismic site if strict time predictability does not apply to the San Andreas fault.
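The renewal-model proxy above reduces to simple arithmetic: recovery time is the static stress drop divided by the tectonic stressing rate. A worked example with invented numbers:

```python
# Hedged arithmetic sketch; both values are illustrative, not from the paper.
stress_drop = 3.0          # scenario static stress drop (MPa), assumed
stressing_rate = 0.02      # geodetically derived stressing rate (MPa/yr), assumed

recovery_time = stress_drop / stressing_rate
print(f"stress recovery (proxy recurrence) time: {recovery_time:.0f} years")  # 150 years
```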
Van Noten, Koen; Lecocq, Thomas; Shah, Anjana K.; Camelbeeck, Thierry
2015-01-01
Between 12 July 2008 and 18 January 2010 a seismic swarm occurred close to the town of Court-Saint-Etienne, 20 km SE of Brussels (Belgium). The Belgian network and a temporary seismic network covering the epicentral area established a seismic catalogue in which magnitude varies between ML -0.7 and ML 3.2. Based on waveform cross-correlation of co-located earthquakes, the spatial distribution of the hypocentre locations was improved considerably and shows a dense cluster displaying a 200 m-wide, 1.5-km-long, NW-SE oriented fault structure at a depth range between 5 and 7 km, located in the Cambrian basement rocks of the Lower Palaeozoic Anglo-Brabant Massif. Waveform comparison of the largest events of the 2008–2010 swarm with an ML 4.0 event that occurred during swarm activity between 1953 and 1957 in the same region shows similar P- and S-wave arrivals at the Belgian Uccle seismic station. The geometry depicted by the hypocentral distribution is consistent with a nearly vertical, left-lateral strike-slip fault operating in the current WNW–ESE oriented local maximum horizontal stress field. To identify a relevant tectonic structure, a systematic matched filtering approach, which can approximately locate isolated anomalies associated with hypocentral depths, was applied to aeromagnetic data. Matched filtering shows that the 2008–2010 seismic swarm occurred along a limited-size fault situated in slaty, low-magnetic rocks of the Mousty Formation. The fault is bordered at both ends by obliquely oriented magnetic gradients. Whereas the NW end of the fault is structurally controlled, its SE end is controlled by a magnetic gradient representing an early-orogenic detachment fault separating the low-magnetic slaty Mousty Formation from the high-magnetic Tubize Formation. The seismic swarm is therefore interpreted as a sinistral reactivation of an inherited NW–SE oriented isolated fault in a weakened crust within the Cambrian core of the Brabant Massif.
Autonomous power expert system
NASA Technical Reports Server (NTRS)
Ringer, Mark J.; Quinn, Todd M.
1990-01-01
The goal of the Autonomous Power System (APS) program is to develop and apply intelligent problem solving and control technologies to the Space Station Freedom Electrical Power Systems (SSF/EPS). The objectives of the program are to establish artificial intelligence/expert system technology paths, to create knowledge based tools with advanced human-operator interfaces, and to integrate and interface knowledge-based and conventional control schemes. This program is being developed at the NASA-Lewis. The APS Brassboard represents a subset of a 20 KHz Space Station Power Management And Distribution (PMAD) testbed. A distributed control scheme is used to manage multiple levels of computers and switchgear. The brassboard is comprised of a set of intelligent switchgear used to effectively switch power from the sources to the loads. The Autonomous Power Expert System (APEX) portion of the APS program integrates a knowledge based fault diagnostic system, a power resource scheduler, and an interface to the APS Brassboard. The system includes knowledge bases for system diagnostics, fault detection and isolation, and recommended actions. The scheduler autonomously assigns start times to the attached loads based on temporal and power constraints. The scheduler is able to work in a near real time environment for both scheduling and dynamic replanning.
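As a toy illustration of the constraint-based load scheduling the APEX scheduler performs, the sketch below greedily assigns each load the earliest start time that keeps total power under a bus capacity. The loads, capacity, and horizon are invented; the abstract does not describe the actual algorithm.

```python
# Hypothetical greedy scheduler in the spirit of APEX's load scheduling.

def schedule(loads, capacity, horizon):
    """loads: list of (name, power_watts, duration_steps) -> {name: start}."""
    usage = [0.0] * horizon                        # committed power per time step
    starts = {}
    for name, power, duration in loads:
        for t in range(horizon - duration + 1):    # earliest feasible slot
            if all(usage[t + k] + power <= capacity for k in range(duration)):
                for k in range(duration):
                    usage[t + k] += power
                starts[name] = t
                break
    return starts

print(schedule([("heater", 300, 4), ("pump", 500, 2), ("experiment", 400, 3)],
               capacity=800, horizon=12))
# {'heater': 0, 'pump': 0, 'experiment': 2}
```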
A New Monte Carlo Filtering Method for the Diagnosis of Mission-Critical Failures
NASA Technical Reports Server (NTRS)
Gay, Gregory; Menzies, Tim; Davies, Misty; Gundy-Burlet, Karen
2009-01-01
Testing large-scale systems is expensive in terms of both time and money. Running simulations early in the process is a proven method of finding the design faults likely to lead to critical system failures, but determining the exact cause of those errors is still time-consuming and requires access to a limited number of domain experts. It is desirable to find an automated method that explores the large number of combinations and is able to isolate likely fault points. Treatment learning is a subset of minimal contrast-set learning that, rather than classifying data into distinct categories, focuses on finding the unique factors that lead to a particular classification. That is, it finds the smallest change to the data that causes the largest change in the class distribution. These treatments, when imposed, are able to identify the settings most likely to cause a mission-critical failure. This research benchmarks two treatment learning methods against standard optimization techniques across three complex systems, including two projects from the Robust Software Engineering (RSE) group within the National Aeronautics and Space Administration (NASA) Ames Research Center. It is shown that these treatment learners are both faster than traditional methods and produce demonstrably better results.
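A toy sketch of the treatment-learning idea described above: score each attribute=value constraint by how much selecting on it shifts the failure rate away from the baseline. The data, attribute names, and scoring are invented for illustration and are much simpler than the benchmarked learners.

```python
rows = [  # (settings, outcome) -- hypothetical test records
    ({"thrust": "high", "valve": "A"}, "fail"),
    ({"thrust": "high", "valve": "B"}, "fail"),
    ({"thrust": "low",  "valve": "A"}, "pass"),
    ({"thrust": "low",  "valve": "B"}, "pass"),
    ({"thrust": "high", "valve": "A"}, "pass"),
]

def fail_rate(subset):
    return sum(1 for _, out in subset if out == "fail") / len(subset)

baseline = fail_rate(rows)
candidates = {(a, v) for settings, _ in rows for a, v in settings.items()}
lift = {
    (a, v): fail_rate([(s, o) for s, o in rows if s[a] == v]) - baseline
    for a, v in candidates
}

best = max(lift, key=lift.get)  # the single constraint that most raises failures
print(f"baseline fail rate {baseline:.2f}; treatment {best} lifts it by {lift[best]:+.2f}")
```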
Computer-aided operations engineering with integrated models of systems and operations
NASA Technical Reports Server (NTRS)
Malin, Jane T.; Ryan, Dan; Fleming, Land
1994-01-01
CONFIG 3 is a prototype software tool that supports integrated conceptual design evaluation from early in the product life cycle by enabling isolated or integrated modeling, simulation, and analysis of the function, structure, behavior, failures, and operation of system designs. Integration and reuse of models is supported in an object-oriented environment providing capabilities for graph analysis and discrete event simulation. Integration is supported among diverse modeling approaches (component view, configuration or flow path view, and procedure view) and diverse simulation and analysis approaches. Support is provided for integrated engineering in diverse design domains, including mechanical and electro-mechanical systems, distributed computer systems, and chemical processing and transport systems. CONFIG supports abstracted qualitative and symbolic modeling, for early conceptual design. System models are component structure models with operating modes, with embedded time-related behavior models. CONFIG supports failure modeling and modeling of state or configuration changes that result in dynamic changes in dependencies among components. Operations and procedure models are activity structure models that interact with system models. CONFIG is designed to support evaluation of system operability, diagnosability and fault tolerance, and analysis of the development of system effects of problems over time, including faults, failures, and procedural or environmental difficulties.
Effects of shear load on frictional healing
NASA Astrophysics Data System (ADS)
Ryan, K. L.; Marone, C.
2014-12-01
During the seismic cycle of repeated earthquake failure, faults regain strength in a process known as frictional healing. Laboratory studies have played a central role in illuminating the processes of frictional healing and fault re-strengthening. These studies have also provided the foundation for laboratory-derived friction constitutive laws, which have been used extensively to model earthquake dynamics. We conducted laboratory experiments to assess the effect of shear load on frictional healing. Frictional healing is quantified during slide-hold-slide (SHS) tests, which serve as a simple laboratory analog for the seismic cycle in which earthquakes (slide) are followed by interseismic quiescence (hold). We studied bare surfaces of Westerly granite and layers of Westerly granite gouge (thickness of 3 mm) at normal stresses from 4-25 MPa, relative humidity of 40-60%, and loading and unloading velocities of 10-300 μm/s. During the hold period of SHS tests, shear stress on the sample was partially removed to investigate the effects of shear load on frictional healing and to isolate time- and slip-dependent effects on fault healing. Preliminary results are consistent with existing work and indicate that frictional healing increases with the logarithm of hold time and decreases with normalized shear stress τ/τf during the hold. During SHS tests with hold periods of 100 seconds, healing values ranged from (0.013-0.014) for τ/τf = 1 to (0.059-0.063) for τ/τf = 0, where τ is the shear stress during the hold period and τf is the shear stress during steady frictional sliding. Experiments on bare rock surfaces and with natural and synthetic fault gouge materials are in progress. Conventional SHS tests (i.e., τ/τf = 1) are adequately described by the rate and state friction laws. However, previous experiments in granular quartz suggest that zero-stress SHS tests are not well characterized by either the Dieterich or Ruina state evolution laws. We are investigating the processes that produce shear-stress-dependent frictional healing, examining alternate forms of the state evolution law, and comparing results for friction of bare rock surfaces and granular fault gouge.
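A hedged sketch of the empirical relation reported above: healing grows with the logarithm of hold time, roughly delta_mu ~ beta * log10(t_hold / t_cutoff). The healing rate beta, cutoff time, and hold times below are illustrative, not the measured values.

```python
import numpy as np

beta = 0.009                              # healing per decade of hold time (assumed)
t_cutoff = 1.0                            # reference hold time (s), assumed
holds = np.array([10.0, 100.0, 1000.0])   # hold times (s)

healing = beta * np.log10(holds / t_cutoff)   # friction gain delta_mu per hold
for t, d_mu in zip(holds, healing):
    print(f"hold {t:6.0f} s -> healing delta_mu ~ {d_mu:.3f}")
```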
NASA Astrophysics Data System (ADS)
Chen, Xiaowang; Feng, Zhipeng
2016-12-01
Planetary gearboxes are widely used in many sorts of machinery for their large transmission ratio and high load-bearing capacity in a compact structure. Their fault diagnosis relies on effective identification of fault characteristic frequencies. However, in addition to the vibration complexity caused by intricate mechanical kinematics, volatile external conditions result in time-varying running speed and/or load, and therefore nonstationary vibration signals. This usually leads to time-varying complex fault characteristics and adds difficulty to planetary gearbox fault diagnosis. Time-frequency analysis is an effective approach to extracting the frequency components of nonstationary signals and their variation in time. Nevertheless, the commonly used time-frequency analysis methods suffer from poor time-frequency resolution as well as outer and inner interferences, which hinder accurate identification of time-varying fault characteristic frequencies. Although time-frequency reassignment improves time-frequency readability, it is essentially subject to the constraints of mono-component signals and a time-frequency distribution symmetric about the true instantaneous frequency. Hence, it is still susceptible to erroneous energy reallocation, or even generates pseudo interferences, particularly for multi-component signals of highly nonlinear instantaneous frequency. In this paper, to overcome the limitations of time-frequency reassignment, we propose an improvement with fine time-frequency resolution, free from interferences, for highly nonstationary multi-component signals, by exploiting the merits of iterative generalized demodulation. The signal is first decomposed into mono-components of constant frequency by iterative generalized demodulation. Time-frequency reassignment is then applied to each generalized demodulated mono-component, obtaining a fine time-frequency distribution. Finally, the time-frequency distribution of each signal component is restored and superposed to obtain the time-frequency distribution of the original signal. The proposed method is validated using both numerically simulated and lab experimental planetary gearbox vibration signals. The time-varying gear fault symptoms are successfully extracted, showing the effectiveness of the proposed iterative generalized time-frequency reassignment method in planetary gearbox fault diagnosis under nonstationary conditions.
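A minimal sketch of the generalized-demodulation step at the heart of the method above: a linear chirp with instantaneous frequency f0 + c*t is multiplied by exp(-j*2*pi*(0.5*c*t^2)), collapsing its time-varying frequency to the constant f0, after which any time-frequency method (the paper uses reassignment) can resolve it sharply. The signal parameters are illustrative.

```python
import numpy as np

fs, T = 1000.0, 2.0
t = np.arange(0.0, T, 1.0 / fs)
f0, c = 50.0, 40.0                                    # start frequency (Hz), chirp rate (Hz/s)
x = np.cos(2 * np.pi * (f0 * t + 0.5 * c * t**2))     # mono-component chirp

demod = x * np.exp(-2j * np.pi * (0.5 * c * t**2))    # generalized demodulation operator

spectrum = np.abs(np.fft.fft(demod))
freqs = np.fft.fftfreq(len(t), 1.0 / fs)
print(f"dominant frequency after demodulation: {freqs[np.argmax(spectrum)]:.1f} Hz")  # ~50 Hz
```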
Hardware-in-the-loop grid simulator system and method
DOE Office of Scientific and Technical Information (OSTI.GOV)
Fox, John Curtiss; Collins, Edward Randolph; Rigas, Nikolaos
A hardware-in-the-loop (HIL) electrical grid simulation system and method that combines a reactive divider with a variable frequency converter to better mimic and control expected and unexpected parameters in an electrical grid. The invention provides grid simulation in a manner to allow improved testing of variable power generators, such as wind turbines, and their operation once interconnected with an electrical grid in multiple countries. The system further comprises an improved variable fault reactance (reactive divider) capable of providing a variable fault reactance power output to control a voltage profile, therein creating an arbitrary recovery voltage. The system further comprises an improved isolation transformer designed to isolate zero-sequence current from either a primary or secondary winding in a transformer or pass the zero-sequence current from a primary to a secondary winding.
Fault healing promotes high-frequency earthquakes in laboratory experiments and on natural faults
McLaskey, Gregory C.; Thomas, Amanda M.; Glaser, Steven D.; Nadeau, Robert M.
2012-01-01
Faults strengthen or heal with time in stationary contact, and this healing may be an essential ingredient for the generation of earthquakes. In the laboratory, healing is thought to be the result of thermally activated mechanisms that weld together micrometre-sized asperity contacts on the fault surface, but the relationship between laboratory measures of fault healing and the seismically observable properties of earthquakes is at present not well defined. Here we report on laboratory experiments and seismological observations that show how the spectral properties of earthquakes vary as a function of fault healing time. In the laboratory, we find that increased healing causes a disproportionately large amount of high-frequency seismic radiation to be produced during fault rupture. We observe a similar connection between earthquake spectra and recurrence time for repeating earthquake sequences on natural faults. Healing rates depend on pressure, temperature and mineralogy, so the connection between seismicity and healing may help to explain recent observations of large megathrust earthquakes which indicate that energetic, high-frequency seismic radiation originates from locations that are distinct from the geodetically inferred locations of large-amplitude fault slip.
Pollitz, F.F.; Schwartz, D.P.
2008-01-01
We construct a viscoelastic cycle model of plate boundary deformation that includes the effect of time-dependent interseismic strain accumulation, coseismic strain release, and viscoelastic relaxation of the substrate beneath the seismogenic crust. For a given fault system, time-averaged stress changes at any point (not on a fault) are constrained to zero; that is, kinematic consistency is enforced for the fault system. The dates of last rupture, mean recurrence times, and the slip distributions of the (assumed) repeating ruptures are key inputs into the viscoelastic cycle model. This simple formulation allows construction of stress evolution at all points in the plate boundary zone for purposes of probabilistic seismic hazard analysis (PSHA). Stress evolution is combined with a Coulomb failure stress threshold at representative points on the fault segments to estimate the times of their respective future ruptures. In our PSHA we consider uncertainties in a four-dimensional parameter space: the rupture periodicities, slip distributions, times of last earthquake (for prehistoric ruptures), and Coulomb failure stress thresholds. We apply this methodology to the San Francisco Bay region using a recently determined fault chronology of area faults. Assuming single-segment rupture scenarios, we find that future rupture probabilities of area faults in the coming decades are highest for the southern Hayward, Rodgers Creek, and northern Calaveras faults. This conclusion is qualitatively similar to that of the Working Group on California Earthquake Probabilities, but the probabilities derived here are significantly higher. Given that fault rupture probabilities are highly model-dependent, no single model should be used to assess time-dependent rupture probabilities. We suggest that several models, including the present one, be used in a comprehensive PSHA methodology, as was done by the Working Group on California Earthquake Probabilities.
NASA Technical Reports Server (NTRS)
Wilson, Edward; Sutter, David W.; Berkovitz, Dustin; Betts, Bradley J.; Kong, Edmund; delMundo, Rommel; Lages, Christopher R.; Mah, Robert W.; Papasin, Richard
2003-01-01
By analyzing the motions of a thruster-controlled spacecraft, it is possible to provide on-line (1) thruster fault detection and isolation (FDI), and (2) vehicle mass- and thruster-property identification (ID). Technologies developed recently at NASA Ames have significantly improved the speed and accuracy of these ID and FDI capabilities, making them feasible for application to a broad class of spacecraft. Since these technologies use existing sensors, the improved system robustness and performance that comes with thruster fault tolerance and system ID can be achieved through a software-only implementation. This contrasts with the added cost, mass, and hardware complexity commonly required by FDI. Originally developed in partnership with NASA - Johnson Space Center to provide thruster FDI capability for the X-38 during re-entry, these technologies are most recently being applied to the MIT SPHERES experimental spacecraft to fly on the International Space Station in 2004. The model-based FDI uses a maximum-likelihood calculation at its core, while the ID is based upon recursive least squares estimation. Flight test results from the SPHERES implementation, as flown aboard the NASA KC-135A 0-g simulator aircraft in November 2003, are presented.
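A generic recursive-least-squares sketch of the identification core named above, applied to a toy problem: estimating the reciprocal of vehicle mass from thrust commands and noisy acceleration measurements. The regressor choice and all values are assumptions; the abstract only names the RLS technique.

```python
import numpy as np

def rls_update(theta, P, h, y, lam=1.0):
    """One RLS step: h is the regressor vector, y the scalar measurement."""
    h = h.reshape(-1, 1)
    k = P @ h / (lam + float(h.T @ P @ h))       # gain vector
    theta = theta + k.flatten() * (y - float(h.T @ theta.reshape(-1, 1)))
    P = (P - k @ h.T @ P) / lam                  # covariance update
    return theta, P

rng = np.random.default_rng(0)
theta, P = np.zeros(1), np.eye(1) * 100.0        # estimate of 1/mass, initial covariance
for _ in range(200):
    thrust = rng.uniform(0.5, 2.0)                       # commanded force (N)
    accel = thrust / 50.0 + rng.normal(0.0, 0.01)        # true mass: 50 kg
    theta, P = rls_update(theta, P, np.array([thrust]), accel)
print(f"estimated mass: {1.0 / theta[0]:.1f} kg")        # ~50 kg
```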
A Cooperative Approach to Virtual Machine Based Fault Injection
DOE Office of Scientific and Technical Information (OSTI.GOV)
Naughton III, Thomas J; Engelmann, Christian; Vallee, Geoffroy R
Resilience investigations often employ fault injection (FI) tools to study the effects of simulated errors on a target system. It is important to keep the target system under test (SUT) isolated from the controlling environment in order to maintain control of the experiment. Virtual machines (VMs) have been used to aid these investigations due to the strong isolation properties of system-level virtualization. A key challenge in fault injection tools is to gain proper insight and context about the SUT. In VM-based FI tools, this challenge of target context is increased due to the separation between host and guest (VM). We discuss an approach to VM-based FI that leverages virtual machine introspection (VMI) methods to gain insight into the target's context running within the VM. The key to this environment is the ability to provide basic information to the FI system that can be used to create a map of the target environment. We describe a proof-of-concept implementation and a demonstration of its use to introduce simulated soft errors into an iterative solver benchmark running in user-space of a guest VM.
A real-time expert system for self-repairing flight control
NASA Technical Reports Server (NTRS)
Gaither, S. A.; Agarwal, A. K.; Shah, S. C.; Duke, E. L.
1989-01-01
An integrated environment for specifying, prototyping, and implementing a self-repairing flight-control (SRFC) strategy is described. At an interactive workstation, the user can select paradigms such as rule-based expert systems, state-transition diagrams, and signal-flow graphs and hierarchically nest them, assign timing and priority attributes, establish blackboard-type communication, and specify concurrent execution on single or multiple processors. High-fidelity nonlinear simulations of aircraft and SRFC systems can be performed off-line, with the possibility of changing SRFC rules, inference strategies, and other heuristics to correct for control deficiencies. Finally, the off-line-generated SRFC can be transformed into highly optimized application-specific real-time C-language code. An application of this environment to the design of aircraft fault detection, isolation, and accommodation algorithms is presented in detail.
NASA Technical Reports Server (NTRS)
Campbell, R. H.; Essick, Ray B.; Johnston, Gary; Kenny, Kevin; Russo, Vince
1987-01-01
Project EOS is studying the problems of building adaptable real-time embedded operating systems for the scientific missions of NASA. Choices (A Class Hierarchical Open Interface for Custom Embedded Systems) is an operating system designed and built by Project EOS to address the following specific issues: the software architecture for adaptable embedded parallel operating systems, the achievement of high-performance and real-time operation, the simplification of interprocess communications, the isolation of operating system mechanisms from one another, and the separation of mechanisms from policy decisions. Choices is written in C++ and runs on a ten processor Encore Multimax. The system is intended for use in constructing specialized computer applications and research on advanced operating system features including fault tolerance and parallelism.
Method and apparatus for transfer function simulator for testing complex systems
NASA Technical Reports Server (NTRS)
Kavaya, M. J. (Inventor)
1985-01-01
A method and apparatus for testing the operation of a complex stabilization circuit in a closed loop system is presented. The method comprises a programmed analog or digital computing system for implementing the transfer function of a load, thereby providing a predictable load. The digital computing system employs a table stored in a microprocessor in which precomputed values of the load transfer function are stored for values of the input signal from the stabilization circuit over the range of interest. This technique may be used not only for isolating faults in the stabilization circuit, but also for analyzing a fault in a faulty load by varying parameters of the computing system so as to simulate operation of the actual load with the fault.
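A plausible sketch of the table-lookup approach described above: precompute a load's transfer characteristic over the input range of interest, then serve values by nearest-entry lookup, as a microprocessor-resident table might. The quadratic "load" below is a stand-in, not the patented transfer function.

```python
N = 256
V_MIN, V_MAX = 0.0, 5.0
STEP = (V_MAX - V_MIN) / (N - 1)

# Precomputed offline and stored in memory (hypothetical load characteristic):
table = [0.4 * v * v + 0.1 * v for v in (V_MIN + i * STEP for i in range(N))]

def simulated_load(v_in):
    """Return the precomputed load response for an input from the circuit."""
    i = round((v_in - V_MIN) / STEP)              # nearest table entry
    return table[max(0, min(N - 1, i))]           # clamp to the table range

print(simulated_load(2.37))  # predictable stand-in for the physical load
```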
A new fault diagnosis algorithm for AUV cooperative localization system
NASA Astrophysics Data System (ADS)
Shi, Hongyang; Miao, Zhiyong; Zhang, Yi
2017-10-01
Cooperative localization of multiple AUVs, a new kind of underwater positioning technology, not only improves positioning accuracy but also has many advantages that a single AUV does not have. It is necessary to detect and isolate faults to increase the reliability and availability of the AUV cooperative localization system. In this paper, the Extended Multiple Model Adaptive Cubature Kalman Filter (EMMACKF) method is presented to detect faults. The sensor failures are simulated based on off-line experimental data. Experimental results have shown that faulty apparatus can be diagnosed effectively using the proposed method. Compared with the Multiple Model Adaptive Extended Kalman Filter and the Multi-Model Adaptive Unscented Kalman Filter, both accuracy and timeliness have been improved to some extent.
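A generic sketch of the multiple-model adaptive idea behind the filters compared above: run one filter per fault hypothesis, score each by the Gaussian likelihood of its innovation, and update hypothesis probabilities by Bayes' rule. The innovations and variances below are placeholders, not outputs of the EMMACKF.

```python
import math

def update_probs(priors, innovations, variances):
    """Bayes update of per-model probabilities from scalar filter innovations."""
    likelihoods = [
        math.exp(-0.5 * r * r / s) / math.sqrt(2.0 * math.pi * s)
        for r, s in zip(innovations, variances)
    ]
    posterior = [p * l for p, l in zip(priors, likelihoods)]
    total = sum(posterior)
    return [p / total for p in posterior]

# Hypotheses: healthy sensor vs. faulty sensor. A large innovation under the
# healthy model shifts probability toward the fault hypothesis:
probs = update_probs([0.5, 0.5], innovations=[3.0, 0.2], variances=[1.0, 1.0])
print(probs)  # fault model now dominates
```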
Kinematics of shallow backthrusts in the Seattle fault zone, Washington State
Pratt, Thomas L.; Troost, K.G.; Odum, Jackson K.; Stephenson, William J.
2015-01-01
Near-surface thrust fault splays and antithetic backthrusts at the tips of major thrust fault systems can distribute slip across multiple shallow fault strands, complicating earthquake hazard analyses based on studies of surface faulting. The shallow expression of the fault strands forming the Seattle fault zone of Washington State shows the structural relationships and interactions between such fault strands. Paleoseismic studies document an ∼7000 yr history of earthquakes on multiple faults within the Seattle fault zone, with some backthrusts inferred to rupture in small (M ∼5.5–6.0) earthquakes at times other than during earthquakes on the main thrust faults. We interpret seismic-reflection profiles to show three main thrust faults, one of which is a blind thrust fault directly beneath downtown Seattle, and four small backthrusts within the Seattle fault zone. We then model fault slip, constrained by shallow deformation, to show that the Seattle fault forms a fault propagation fold rather than the alternatively proposed roof thrust system. Fault slip modeling shows that back-thrust ruptures driven by moderate (M ∼6.5–6.7) earthquakes on the main thrust faults are consistent with the paleoseismic data. The results indicate that paleoseismic data from the back-thrust ruptures reveal the times of moderate earthquakes on the main fault system, rather than indicating smaller (M ∼5.5–6.0) earthquakes involving only the backthrusts. Estimates of cumulative shortening during known Seattle fault zone earthquakes support the inference that the Seattle fault has been the major seismic hazard in the northern Cascadia forearc in the late Holocene.
Taking apart the Big Pine fault: Redefining a major structural feature in southern California
Onderdonk, N.W.; Minor, S.A.; Kellogg, K.S.
2005-01-01
New mapping along the Big Pine fault trend in southern California indicates that this structural alignment is actually three separate faults, which exhibit different geometries, slip histories, and senses of offset since Miocene time. The easternmost fault, along the north side of Lockwood Valley, exhibits left-lateral reverse Quaternary displacement but was a north dipping normal fault in late Oligocene to early Miocene time. The eastern Big Pine fault that bounds the southern edge of the Cuyama Badlands is a south dipping reverse fault that is continuous with the San Guillermo fault. The western segment of the Big Pine fault trend is a north dipping thrust fault continuous with the Pine Mountain fault and delineates the northern boundary of the rotated western Transverse Ranges terrane. This redefinition of the Big Pine fault differs greatly from the previous interpretation and significantly alters regional tectonic models and seismic risk estimates. The outcome of this study also demonstrates that basic geologic mapping is still needed to support the development of geologic models. Copyright 2005 by the American Geophysical Union.
Study of fault tolerant software technology for dynamic systems
NASA Technical Reports Server (NTRS)
Caglayan, A. K.; Zacharias, G. L.
1985-01-01
The major aim of this study is to investigate the feasibility of using systems-based failure detection isolation and compensation (FDIC) techniques in building fault-tolerant software and extending them, whenever possible, to the domain of software fault tolerance. First, it is shown that systems-based FDIC methods can be extended to develop software error detection techniques by using system models for software modules. In particular, it is demonstrated that systems-based FDIC techniques can yield consistency checks that are easier to implement than acceptance tests based on software specifications. Next, it is shown that systems-based failure compensation techniques can be generalized to the domain of software fault tolerance in developing software error recovery procedures. Finally, the feasibility of using fault-tolerant software in flight software is investigated. In particular, possible system and version instabilities, and functional performance degradation that may occur in N-Version programming applications to flight software are illustrated. Finally, a comparative analysis of N-Version and recovery block techniques in the context of generic blocks in flight software is presented.
TDAS: The Thermal Expert System (TEXSYS) data acquisition system
NASA Technical Reports Server (NTRS)
Hack, Edmund C.; Healey, Kathleen J.
1987-01-01
As part of the NASA Systems Autonomy Demonstration Project, a thermal expert system (TEXSYS) is being developed. TEXSYS combines a fast real time control system, a sophisticated human interface for the user and several distinct artificial intelligence techniques in one system. TEXSYS is to provide real time control, operations advice and fault detection, isolation and recovery capabilities for the space station Thermal Test Bed (TTB). TEXSYS will be integrated with the TTB and act as an intelligent assistant to thermal engineers conducting TTB tests and experiments. The results are presented from connecting the real time controller to the knowledge based system thereby creating an integrated system. Special attention will be paid to the problem of filtering and interpreting the raw, real time data and placing the important values into the knowledge base of the expert system.
M≥7 Earthquake rupture forecast and time-dependent probability for the Sea of Marmara region, Turkey
Murru, Maura; Akinci, Aybige; Falcone, Guiseppe; Pucci, Stefano; Console, Rodolfo; Parsons, Thomas E.
2016-01-01
We forecast time-independent and time-dependent earthquake ruptures in the Marmara region of Turkey for the next 30 years using a new fault-segmentation model. We also augment time-dependent Brownian Passage Time (BPT) probability with static Coulomb stress changes (ΔCFF) from interacting faults. We calculate Mw > 6.5 probability from 26 individual fault sources in the Marmara region. We also consider a multisegment rupture model that allows higher-magnitude ruptures over some segments of the Northern branch of the North Anatolian Fault Zone (NNAF) beneath the Marmara Sea. A total of 10 different Mw=7.0 to Mw=8.0 multisegment ruptures are combined with the other regional faults at rates that balance the overall moment accumulation. We use Gaussian random distributions to treat parameter uncertainties (e.g., aperiodicity, maximum expected magnitude, slip rate, and consequently mean recurrence time) of the statistical distributions associated with each fault source. We then estimate uncertainties of the 30-year probability values for the next characteristic event obtained from three different models (Poisson, BPT, and BPT+ΔCFF) using a Monte Carlo procedure. The Gerede fault segment located at the eastern end of the Marmara region shows the highest 30-yr probability, with a Poisson value of 29% and a time-dependent interaction probability of 48%. We find an aggregated 30-yr Poisson probability of M > 7.3 earthquakes at Istanbul of 35%, which increases to 47% if time dependence and stress transfer are considered. We calculate a 2-fold probability gain (ratio of time-dependent to time-independent) on the southern strands of the North Anatolian Fault Zone.
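A hedged numerical sketch of the time-dependent probability used above: for a Brownian Passage Time renewal model with mean recurrence mu and aperiodicity alpha, compute the probability of rupture within the next dt years given the time elapsed since the last event. The parameter values are illustrative, not those of any Marmara fault source.

```python
import numpy as np

def bpt_pdf(t, mu, alpha):
    """Brownian Passage Time (inverse Gaussian) density."""
    return (np.sqrt(mu / (2.0 * np.pi * alpha**2 * t**3))
            * np.exp(-((t - mu) ** 2) / (2.0 * mu * alpha**2 * t)))

def conditional_prob(t_elapsed, dt, mu, alpha):
    """P(rupture in (t_elapsed, t_elapsed + dt] | no rupture by t_elapsed)."""
    grid = np.linspace(1e-3, 20.0 * mu, 20000)
    cdf = np.cumsum(bpt_pdf(grid, mu, alpha)) * (grid[1] - grid[0])
    F = lambda t: np.interp(t, grid, cdf)
    return (F(t_elapsed + dt) - F(t_elapsed)) / (1.0 - F(t_elapsed))

# Mean recurrence 250 yr, aperiodicity 0.5, 200 yr since the last rupture:
print(f"30-yr conditional probability: {conditional_prob(200.0, 30.0, 250.0, 0.5):.2f}")
```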
Fault diagnosis for diesel valve trains based on time frequency images
NASA Astrophysics Data System (ADS)
Wang, Chengdong; Zhang, Youyun; Zhong, Zhenyuan
2008-11-01
In this paper, the Wigner-Ville distributions (WVD) of vibration acceleration signals acquired from the cylinder head in eight different states of the valve train were calculated and displayed as grey-scale images, and probabilistic neural networks (PNN) were used directly to classify the time-frequency images after normalization. In this way, fault diagnosis of the valve train was transformed into classification of time-frequency images. As there is no need to extract further fault features (such as eigenvalues or symptom parameters) from the time-frequency distributions before classification, the fault diagnosis process is highly simplified. The experimental results show that faults of diesel valve trains can be classified accurately by the proposed methods.
NASA Technical Reports Server (NTRS)
Melcher, Kevin J.; Sowers, T. Shane; Maul, William A.
2005-01-01
The constraints of future Exploration Missions will require unique Integrated System Health Management (ISHM) capabilities throughout the mission. An ambitious launch schedule, human-rating requirements, long quiescent periods, limited human access for repair or replacement, and long communication delays all require an ISHM system that can span distinct yet interdependent vehicle subsystems, anticipate failure states, provide autonomous remediation, and support the Exploration Mission from beginning to end. NASA Glenn Research Center has developed and applied health management system technologies to aerospace propulsion systems for almost two decades. Lessons learned from past activities help define the approach to proper ISHM development: sensor selection identifies the sensor sets required for accurate health assessment; data qualification and validation ensures the integrity of measurement data from sensor to data system; fault detection and isolation uses measurements in a component/subsystem context to detect faults and identify their point of origin; information fusion and diagnostic decision criteria align data from similar and disparate sources in time and use that data to perform higher-level system diagnosis; and verification and validation uses data, real or simulated, to provide variable exposure to the diagnostic system for faults that may only manifest themselves in actual implementation, as well as faults that are detectable via hardware testing. This presentation describes a framework for developing health management systems and highlights the health management research activities performed by the Controls and Dynamics Branch at the NASA Glenn Research Center. It illustrates how those activities contribute to the development of solutions for Integrated System Health Management.
A System for Fault Management for NASA's Deep Space Habitat
NASA Technical Reports Server (NTRS)
Colombano, Silvano P.; Spirkovska, Liljana; Aaseng, Gordon B.; Mccann, Robert S.; Baskaran, Vijayakumar; Ossenfort, John P.; Smith, Irene Skupniewicz; Iverson, David L.; Schwabacher, Mark A.
2013-01-01
NASA's exploration program envisions the utilization of a Deep Space Habitat (DSH) for human exploration of the space environment in the vicinity of Mars and/or asteroids. Communication latencies with ground control of as long as 20+ minutes make it imperative that DSH operations be highly autonomous, as any telemetry-based detection of a systems problem on Earth could well occur too late to assist the crew with the problem. A DSH-based development program has been initiated to develop and test the automation technologies necessary to support highly autonomous DSH operations. One such technology is a fault management tool to support performance monitoring of vehicle systems operations and to assist with real-time decision making in connection with operational anomalies and failures. Toward that end, we are developing Advanced Caution and Warning System (ACAWS), a tool that combines dynamic and interactive graphical representations of spacecraft systems, systems modeling, automated diagnostic analysis and root cause identification, system and mission impact assessment, and mitigation procedure identification to help spacecraft operators (both flight controllers and crew) understand and respond to anomalies more effectively. In this paper, we describe four major architecture elements of ACAWS: Anomaly Detection, Fault Isolation, System Effects Analysis, and Graphic User Interface (GUI), and how these elements work in concert with each other and with other tools to provide fault management support to both the controllers and crew. We then describe recent evaluations and tests of ACAWS on the DSH testbed. The results of these tests support the feasibility and strength of our approach to failure management automation and enhanced operational autonomy.
NASA Astrophysics Data System (ADS)
Falcucci, E.; Gori, S.
2015-12-01
The 2009 L'Aquila earthquake (Mw 6.1), in central Italy, raised the issue of surface faulting hazard in Italy, since large urban areas were affected by surface displacement along the causative structure, the Paganica fault. Since then, guidelines for microzonation have been drawn up that take into consideration the problem of surface faulting in Italy, laying the groundwork for future regulations on the related hazard, similarly to other countries (e.g., USA). More specific guidelines on the management of areas affected by active and capable faults (i.e., faults able to produce surface faulting) are going to be released by the National Department of Civil Protection; these would define the zonation of areas affected by active and capable faults, with prescriptions for land use planning. As such, the guidelines raise the problem of the time interval and the general operational criteria to assess fault capability for the Italian territory. As for the chronology, a review of the international literature and regulations allowed Galadini et al. (2012) to propose different time intervals depending on the ongoing tectonic regime - compressive or extensional - which encompass the Quaternary. As for the operational criteria, detailed analysis of the large number of works dealing with active faulting in Italy shows that investigations based exclusively on surface morphological features (e.g., fault plane exposure) or on indirect investigations (geophysical data) are not sufficient, or are even unreliable, for establishing the presence of an active and capable fault; instead, more accurate geological information on the Quaternary space-time evolution of the areas affected by such tectonic structures is needed. The central Apennines can serve as a test area in which active and capable faults are first mapped using such a classical but still effective methodological approach. Reference: Galadini F., Falcucci E., Galli P., Giaccio B., Gori S., Messina P., Moro M., Saroli M., Scardia G., Sposato A. (2012). Time intervals to assess active and capable faults for engineering practices in Italy. Eng. Geol., 139/140, 50-65.
Irregular earthquake recurrence patterns and slip variability on a plate-boundary Fault
NASA Astrophysics Data System (ADS)
Wechsler, N.; Rockwell, T. K.; Klinger, Y.
2015-12-01
The Dead Sea fault in the Levant represents a simple, segmented plate boundary from the Gulf of Aqaba northward to the Sea of Galilee, where it changes its character into a complex plate boundary with multiple sub-parallel faults in northern Israel, Lebanon and Syria. The studied Jordan Gorge (JG) segment is the northernmost part of the simple section, before the fault becomes more complex. Seven fault-crossing buried paleo-channels, offset by the Dead Sea fault, were investigated using paleoseismic and geophysical methods. The mapped offsets capture the long-term rupture history and slip-rate behavior on the JG fault segment for the past 4000 years. The ~20 km long JG segment appears to be more active (in terms of the number of earthquakes) than its neighboring segments to the south and north. The rate of movement on this segment varies considerably over the studied period: the long-term slip-rate for the entire 4000 years is similar to previously observed rates (~4 mm/yr), yet over shorter time periods the rate varies from 3-8 mm/yr. Paleoseismic data on both timing and displacement indicate a high COV > 1 (clustered behavior), with displacement per event varying by nearly an order of magnitude. The rate of earthquake production does not produce a time-predictable pattern over a period of 2 kyr. We postulate that the seismic behavior of the JG fault is influenced by stress interactions with its neighboring faults to the north and south. Coulomb stress modelling demonstrates that an earthquake on any neighboring fault will increase the Coulomb stress on the JG fault and thus promote rupture. We conclude that deriving on-fault slip-rates and earthquake recurrence patterns from a single site and/or over a short time period can produce misleading results. The definition of an adequately long time period to resolve slip-rate is a question that needs to be addressed and requires further work.
Reclosing operation characteristics of the flux-coupling type SFCL in a single-line-to ground fault
NASA Astrophysics Data System (ADS)
Jung, B. I.; Cho, Y. S.; Choi, H. S.; Ha, K. H.; Choi, S. G.; Chul, D. C.; Sung, T. H.
2011-11-01
The recloser used in distribution systems is a relay system that behaves sequentially to protect power systems from transient and continuous faults. This reclosing operation can improve the reliability and stability of the power supply. For cooperation with the recloser, the superconducting fault current limiter (SFCL) must properly perform the reclosing operation. This paper analyzed the reclosing operation characteristics of the three-phase flux-coupling type SFCL in the event of a ground fault. The fault current limiting characteristics with a changing number of turns of the primary and secondary coils were examined. As the number of turns of the primary coil increased, the first maximum fault current decreased. Furthermore, the voltage of the quenched superconducting element also decreased. This means that the power burden of the superconducting element decreases as the number of turns of the primary coil increases. The fault current limiting characteristic of the SFCL with respect to the reclosing time limited the fault current within 0.5 cycles (8 ms), which is shorter than the closing time of the recloser. In other words, the superconducting element returned to the superconducting state before the second fault and normally performed the fault current limiting operation. If the SFCL did not recover before the recloser reclosing time, the normal current flowing in the transmission line after the SFCL's recovery from the fault would be limited, causing losses. Therefore, the fast recovery time of an SFCL is critical to its cooperation with the protection system.
Artificial neural network application for space station power system fault diagnosis
NASA Technical Reports Server (NTRS)
Momoh, James A.; Oliver, Walter E.; Dias, Lakshman G.
1995-01-01
This study presents a methodology for fault diagnosis using a Two-Stage Artificial Neural Network Clustering Algorithm. Previously, SPICE models of a 5-bus DC power distribution system with assumed constant output power during contingencies from the DDCU were used to evaluate the ANN's fault diagnosis capabilities. This on-going study uses EMTP models of the components (distribution lines, SPDU, TPDU, loads) and power sources (DDCU) of Space Station Alpha's electrical Power Distribution System as a basis for the ANN fault diagnostic tool. The results from the two studies are contrasted. In the event of a major fault, ground controllers need the ability to identify the type of fault, isolate the fault to the orbital replaceable unit level, and provide the necessary information for the power management expert system to optimally determine a degraded-mode load schedule. To accomplish these goals, the electrical power distribution system's architecture can be subdivided into three major classes: DC-DC converter to loads, DC Switching Unit (DCSU) to Main Bus Switching Unit (MBSU), and power sources to DCSU. Each class, which has its own electrical characteristics and operations, requires a unique fault analysis philosophy. This study identifies these philosophies as Riddles 1, 2, and 3, respectively. The results of the on-going study address Riddle 1. It is concluded that the combination of the EMTP models of the DDCU, distribution cables, and electrical loads yields a more accurate model of system behavior and, in addition, more accurate ANN fault diagnosis than the results obtained with the SPICE models.
Forecast model for great earthquakes at the Nankai Trough subduction zone
Stuart, W.D.
1988-01-01
An earthquake instability model is formulated for recurring great earthquakes at the Nankai Trough subduction zone in southwest Japan. The model is quasistatic, two-dimensional, and has a displacement and velocity dependent constitutive law applied at the fault plane. A constant rate of fault slip at depth represents forcing due to relative motion of the Philippine Sea and Eurasian plates. The model simulates fault slip and stress for all parts of repeated earthquake cycles, including post-, inter-, pre- and coseismic stages. Calculated ground uplift is in agreement with most of the main features of elevation changes observed before and after the M=8.1 1946 Nankaido earthquake. In model simulations, accelerating fault slip has two time-scales. The first time-scale is several years long and is interpreted as an intermediate-term precursor. The second time-scale is a few days long and is interpreted as a short-term precursor. Accelerating fault slip on both time-scales causes anomalous elevation changes of the ground surface over the fault plane of 100 mm or less within 50 km of the fault trace. © 1988 Birkhäuser Verlag.
Spatiotemporal Patterns of Fault Slip Rates Across the Central Sierra Nevada Frontal Fault Zone
NASA Astrophysics Data System (ADS)
Rood, D. H.; Burbank, D.; Finkel, R. C.
2010-12-01
We examine patterns in fault slip rates through time and space across the transition from the Sierra Nevada to the Eastern California Shear Zone-Walker Lane belt. At each of four sites along the eastern Sierra Nevada frontal fault zone between 38-39° N latitude, geomorphic markers, such as glacial moraines and outwash terraces, are displaced by a suite of range-front normal faults. Using geomorphic mapping, surveying, and Be-10 surface exposure dating, we define mean fault slip rates, and by utilizing markers of different ages (generally, ~20 ka and ~150 ka), we examine rates through time and interactions among multiple faults over 10-100 ky timescales. At each site for which data are available for the last ~150 ky, mean slip rates across the Sierra Nevada frontal fault zone have probably not varied by more than a factor of two over time spans equal to half of the total time interval (~20 ky and ~150 ky timescales): 0.3 ± 0.1 mm/yr (mode and 95% CI) at both Buckeye Creek in the Bridgeport basin and Sonora Junction; and 0.4 +0.3/-0.1 mm/yr along the West Fork of the Carson River at Woodfords. Our data permit that rates are relatively constant over the time scales examined. In contrast, slip rates are highly variable in space over the last ~20 ky. Slip rates decrease by a factor of 3-5 northward over a distance of ~20 km between the northern Mono Basin (1.3 +0.6/-0.3 mm/yr at Lundy Canyon site) and the Bridgeport Basin (0.3 ± 0.1 mm/yr). The 3-fold decrease in the slip rate on the Sierra Nevada frontal fault zone northward from Mono Basin reflects a change in the character of faulting north of the Mina Deflection as extension is transferred eastward onto normal faults between the Sierra Nevada and Walker Lane belt. A compilation of regional deformation rates reveals that the spatial pattern of extension rates changes along strike of the Eastern California Shear Zone-Walker Lane belt. South of the Mina Deflection, extension is accommodated within a diffuse zone of normal and oblique faults, with extension rates increasing northward on the Fish Lake Valley fault. Where faults of the Eastern California Shear Zone terminate northward into the Mina Deflection, extension rates increase northward along the Sierra Nevada frontal fault zone to ~0.7 mm/yr in northern Mono Basin. This spatial pattern suggests that extension is transferred from fault systems to the east (e.g. Fish Lake Valley fault) and localized on the Sierra Nevada frontal fault zone as Eastern California Shear Zone-Walker Lane belt faulting is transferred through the Mina Deflection.
Simulated fault injection - A methodology to evaluate fault tolerant microprocessor architectures
NASA Technical Reports Server (NTRS)
Choi, Gwan S.; Iyer, Ravishankar K.; Carreno, Victor A.
1990-01-01
A simulation-based fault-injection method for validating fault-tolerant microprocessor architectures is described. The approach uses mixed-mode simulation (electrical/logic analysis), and injects transient errors in run-time to assess the resulting fault impact. As an example, a fault-tolerant architecture which models the digital aspects of a dual-channel real-time jet-engine controller is used. The level of effectiveness of the dual configuration with respect to single and multiple transients is measured. The results indicate 100 percent coverage of single transients. Approximately 12 percent of the multiple transients affect both channels; none result in controller failure since two additional levels of redundancy exist.
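A toy sketch in the spirit of the study above: flip a random bit in one channel of a duplicated computation and check whether the dual-channel miscompare catches it. This is purely illustrative; the paper injects transients into a mixed-mode (electrical/logic) simulation of the controller, not into Python integers.

```python
import random

def inject_transient(word, bits=32):
    """Flip one randomly chosen bit of a machine word."""
    return word ^ (1 << random.randrange(bits))

def run_with_injection(x):
    a = x * 2 + 1                  # channel A computation
    b = x * 2 + 1                  # channel B (redundant) computation
    a = inject_transient(a)        # transient hits channel A only
    return a != b                  # miscompare -> transient detected

random.seed(1)
detected = sum(run_with_injection(7) for _ in range(1000))
print(f"{detected}/1000 single transients detected")  # 1000/1000: full coverage,
# consistent with the 100 percent single-transient coverage reported above
```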
Smith, Deborah K; Cann, Johnson R; Escartín, Javier
2006-07-27
Oceanic core complexes are massifs in which lower-crustal and upper-mantle rocks are exposed at the sea floor. They form at mid-ocean ridges through slip on detachment faults rooted below the spreading axis. To date, most studies of core complexes have been based on isolated inactive massifs that have spread away from ridge axes. Here we present a survey of the Mid-Atlantic Ridge near 13 degrees N containing a segment in which a number of linked detachment faults extend for 75 km along one flank of the spreading axis. The detachment faults are apparently all currently active and at various stages of development. A field of extinct core complexes extends away from the axis for at least 100 km. Our observations reveal the topographic characteristics of actively forming core complexes and their evolution from initiation within the axial valley floor to maturity and eventual inactivity. Within the surrounding region there is a strong correlation between detachment fault morphology at the ridge axis and high rates of hydroacoustically recorded earthquake seismicity. Preliminary examination of seismicity and seafloor morphology farther north along the Mid-Atlantic Ridge suggests that active detachment faulting is occurring in many segments and that detachment faulting is more important in the generation of ocean crust at this slow-spreading ridge than previously suspected.
NASA Astrophysics Data System (ADS)
Schuba, C. Nur; Gray, Gary G.; Morgan, Julia K.; Sawyer, Dale S.; Shillington, Donna J.; Reston, Tim J.; Bull, Jonathan M.; Jordan, Brian E.
2018-06-01
A new 3-D seismic reflection volume over the Galicia margin continent-ocean transition zone provides an unprecedented view of the prominent S-reflector detachment fault that underlies the outer part of the margin. This volume images the fault's structure from breakaway to termination. The filtered time-structure map of the S-reflector shows coherent corrugations parallel to the expected paleo-extension directions with an average azimuth of 107°. These corrugations maintain their orientations, wavelengths and amplitudes where overlying faults sole into the S-reflector, suggesting that the parts of the detachment fault containing multiple crustal blocks may have slipped as discrete units during its late stages. Another interface above the S-reflector, here named S′, is identified and interpreted as the upper boundary of the fault zone associated with the detachment fault. This layer, named the S-interval, thickens by tens of meters from SE to NW in the direction of transport. Localized thick accumulations also occur near overlying fault intersections, suggesting either non-uniform fault rock production, or redistribution of fault rock during slip. These observations have important implications for understanding how detachment faults form and evolve over time. 3-D seismic reflection imaging has enabled unique insights into fault slip history, fault rock production and redistribution.
A Novel Online Data-Driven Algorithm for Detecting UAV Navigation Sensor Faults.
Sun, Rui; Cheng, Qi; Wang, Guanyu; Ochieng, Washington Yotto
2017-09-29
The use of Unmanned Aerial Vehicles (UAVs) has increased significantly in recent years. On-board integrated navigation sensors are a key component of UAVs' flight control systems and are essential for flight safety. In order to ensure flight safety, timely and effective navigation sensor fault detection capability is required. In this paper, a novel data-driven Adaptive Neuro-Fuzzy Inference System (ANFIS)-based approach is presented for the detection of on-board navigation sensor faults in UAVs. Contrary to classic UAV sensor fault detection algorithms, which are based on predefined or modelled faults, the proposed algorithm combines an online data training mechanism with the ANFIS-based decision system. The main advantages of this algorithm are that it allows real-time, model-free residual analysis from Kalman Filter (KF) estimates and uses the ANFIS to build a reliable fault detection system. In addition, it allows fast and accurate detection of faults, which makes it suitable for real-time applications. Experimental results have demonstrated the effectiveness of the proposed fault detection method in terms of accuracy and misdetection rate.
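As a rough illustration of the residual-generation half of such a scheme (the ANFIS decision stage is replaced here by a plain sigma gate, and all tuning values are invented, not the paper's), a minimal sketch:

```python
import numpy as np

def kf_innovation_monitor(z, q=1e-4, r=0.01, gate=3.0):
    """Scalar random-walk Kalman filter over one sensor channel; flags
    samples whose normalized innovation exceeds `gate` sigma. This is a
    stand-in for the residual stage only; the decision stage (ANFIS in
    the paper) is simplified to a fixed threshold."""
    x, p = float(z[0]), 1.0
    flags = []
    for zk in z:
        p += q                      # predict (random-walk process noise)
        s = p + r                   # innovation variance
        nu = zk - x                 # innovation (the residual)
        flags.append(abs(nu) / np.sqrt(s) > gate)
        k = p / s                   # Kalman gain
        x += k * nu                 # measurement update
        p *= 1.0 - k
    return np.array(flags)

z = np.random.default_rng(0).normal(0.0, 0.1, 500)
z[300:] += 1.5                      # injected step fault at sample 300
print(np.flatnonzero(kf_innovation_monitor(z))[:5])  # first flagged samples
```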
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lumsdaine, Andrew
2013-03-08
The main purpose of the Coordinated Infrastructure for Fault Tolerance in Systems initiative has been to conduct research with a goal of providing end-to-end fault tolerance on a systemwide basis for applications and other system software. While fault tolerance has been an integral part of most high-performance computing (HPC) system software developed over the past decade, it has been treated mostly as a collection of isolated stovepipes. Visibility and response to faults has typically been limited to the particular hardware and software subsystems in which they are initially observed. Little fault information is shared across subsystems, allowing little flexibility or control on a system-wide basis, making it practically impossible to provide cohesive end-to-end fault tolerance in support of scientific applications. As an example, consider faults such as communication link failures that can be seen by a network library but are not directly visible to the job scheduler, or consider faults related to node failures that can be detected by system monitoring software but are not inherently visible to the resource manager. If information about such faults could be shared by the network libraries or monitoring software, then other system software, such as a resource manager or job scheduler, could ensure that failed nodes or failed network links were excluded from further job allocations and that further diagnosis could be performed. As a founding member and one of the lead developers of the Open MPI project, our efforts over the course of this project have been focused on making Open MPI more robust to failures by supporting various fault tolerance techniques, and using fault information exchange and coordination between MPI and the HPC system software stack from the application, numeric libraries, and programming language runtime to other common system components such as job schedulers, resource managers, and monitoring tools.
NASA Astrophysics Data System (ADS)
Trippetta, Fabio; Scuderi, Marco Maria; Collettini, Cristiano
2015-04-01
Physical properties of fault zones vary with time and space and, in particular, fluid flow and permeability variations are strictly related to fault zone processes. Here we investigate the physical properties of carbonate samples collected along the Monte Maggio normal Fault (MMF), a regional structure (length ~10 km and displacement ~500 m) located within the active system of the Apennines. In particular we have studied an exceptionally exposed outcrop of the fault within the Calcare Massiccio formation (massive limestone) that has been recently exposed by new roadworks. Large cores (100 mm in diameter and up to 20 cm long) drilled perpendicular to the fault plane have been used to: 1) characterize the damage zone adjacent to the fault plane and 2) obtain smaller cores, 38 mm in diameter, both parallel and perpendicular to the fault plane, for rock deformation experiments. At the mesoscale, two types of cataclastic damage zones can be identified in the footwall block: (i) a Cemented Cataclasite (CC) and (ii) a Fault Breccia (FB). Since in some portions of the fault the hangingwall (HW) is still preserved, we also collected HW samples. After preliminary porosity measurements at ambient pressure, we performed laboratory measurements of Vp, Vs, and permeability at effective confining pressures up to 100 MPa in order to simulate crustal conditions. The protolith has a primary porosity of about 7%, formed predominantly by isolated pores since the connected porosity is only 1%. FB samples are characterized by 10% and 5% of bulk and connected porosity respectively, whilst CC samples show lower bulk porosity (7%) and a connected porosity of 2%. From ambient pressure to 100 MPa, P-wave velocity is about 5.9-6.0 km/s for the protolith, ranges from 4.9 km/s to 5.9 km/s for FB samples, whereas it is constant at 5.9 km/s for CC samples and ranges from 5.4 to 5.7 km/s for HW samples. Vs shows the same behaviour, resulting in a constant Vp/Vs ratio from 0 to 100 MPa that ranges from 1.5 to 1.98, where the lower values are recorded for FB samples. Permeability of FB samples is pressure dependent, decreasing from 10^-17 m^2 at ambient pressure to 10^-18 m^2 at 100 MPa confining pressure. In contrast, for CC samples, permeability is about 10^-19 m^2 and is pressure independent. In conclusion, our dataset depicts a fault zone structure with heterogeneous static physical and transport properties that are controlled by the occurrence of different deformation mechanisms related to different protoliths. We are currently conducting experiments during loading/unloading stress cycles in order to characterize possible permeability and acoustic property evolution induced by differential stress.
Akinci, A.; Galadini, F.; Pantosti, D.; Petersen, M.; Malagnini, L.; Perkins, D.
2009-01-01
We produce probabilistic seismic-hazard assessments for the central Apennines, Italy, using time-dependent models that are characterized using a Brownian passage time recurrence model. Using aperiodicity parameters, α, of 0.3, 0.5, and 0.7, we examine the sensitivity of the probabilistic ground motion and its deaggregation to these parameters. For the seismic source model we incorporate both smoothed historical seismicity over the area and geological information on faults. We use the maximum magnitude model for the fault sources together with a uniform probability of rupture along the fault (floating fault model) to model fictitious faults to account for earthquakes that cannot be correlated with known geologic structural segmentation.
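For reference, the Brownian passage time density used in such time-dependent models is the inverse Gaussian with mean recurrence mu and aperiodicity alpha; the sketch below computes a conditional rupture probability from it, with parameter values chosen purely for illustration rather than taken from the study.

```python
import math

def bpt_pdf(t, mu, alpha):
    """Brownian passage time (inverse Gaussian) density with mean
    recurrence mu and aperiodicity alpha; the standard form used in
    time-dependent recurrence models."""
    return math.sqrt(mu / (2 * math.pi * alpha**2 * t**3)) * \
        math.exp(-((t - mu) ** 2) / (2 * mu * alpha**2 * t))

def conditional_prob(t_elapsed, dt, mu, alpha, step=1.0):
    """P(rupture in [t, t+dt] | quiescent through t), by midpoint-rule
    integration of the density (adequate for illustration)."""
    def integral(a, b):
        n = max(1, int((b - a) / step))
        h = (b - a) / n
        return sum(bpt_pdf(a + (i + 0.5) * h, mu, alpha) for i in range(n)) * h
    tail = integral(t_elapsed, 10 * mu)   # survival probability beyond t
    return integral(t_elapsed, t_elapsed + dt) / tail

# Illustrative values only: mean recurrence 1400 yr, alpha 0.5,
# 600 yr elapsed, probability of rupture within the next 50 yr.
print(conditional_prob(600.0, 50.0, 1400.0, 0.5))
```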
NASA Astrophysics Data System (ADS)
Wu, Kongyou; Pei, Yangwen; Li, Tianran; Wang, Xulong; Liu, Yin; Liu, Bo; Ma, Chao; Hong, Mei
2018-03-01
The Daerbute fault zone, located in the northwestern margin of the Junggar basin, in the Central Asian Orogenic Belt, is a regional strike-slip fault with a length of 400 km. The NE-SW trending Daerbute fault zone presents a distinct linear trend in plan view, cutting through both the Zair Mountain and the Hala'alate Mountain. Because of the intense contraction and shearing, the rocks within the fault zone experienced a high degree of cataclasis, schistosity, and mylonization, resulting in rocks that are easily eroded to form a valley with a width of 300-500 m and a depth of 50-100 m after weathering and erosion. The well-exposed outcrops along the Daerbute fault zone present sub-horizontal striations and sub-vertical fault steps, indicating sub-horizontal shearing along the observed fault planes. Flower structures and horizontal drag folds are also observed in both the well-exposed outcrops and high-resolution satellite images. The distribution of accommodating strike-slip splay faults, e.g., the 973-pluton fault and the Great Jurassic Trough fault, is in accordance with the Riedel model of simple shear. The seismic and time-frequency electromagnetic (TFEM) sections also demonstrate the typical strike-slip characteristics of the Daerbute fault zone. Based on detailed field observations of well-exposed outcrops and seismic sections, the Daerbute fault can be subdivided into two segments: the western segment presents multiple fault cores and damage zones, whereas the eastern segment only presents a single fault core, in which the rocks experienced a higher degree of cataclasis, schistosity, and mylonization. In the central overlapping portion between the two segments, the sediments within the fault zone are primarily reddish sandstones, conglomerates, and some mudstones, for which palynological tests suggest a middle Permian depositional age. The deformation timing of the Daerbute fault was estimated by integrating the depocenters' basinward migration and the initiation of the splay faults (e.g., the Great Jurassic Trough fault and the 973-pluton fault). These results indicate that there were probably two periods of faulting deformation for the Daerbute fault. By integrating our study with previous studies, we speculate that the Daerbute fault experienced two phases of strike-slip faulting, commencing with initial dextral strike-slip faulting in the mid-late Permian that was inverted to sinistral strike-slip faulting from the Triassic onward. The results of this study can provide useful insights for the regional tectonics and local hydrocarbon exploration.
Impact of coverage on the reliability of a fault tolerant computer
NASA Technical Reports Server (NTRS)
Bavuso, S. J.
1975-01-01
A mathematical reliability model is established for a reconfigurable fault tolerant avionic computer system utilizing state-of-the-art computers. System reliability is studied in light of the coverage probabilities associated with the first and second independent hardware failures. Coverage models are presented as a function of detection, isolation, and recovery probabilities. Upper and lower bounds are established for the coverage probabilities, and the method for computing values for the coverage probabilities is investigated. Further, an architectural variation is proposed which is shown to enhance coverage.
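A textbook illustration of why coverage dominates such models is the generic duplex formula sketched below; this is a standard simplification, not the report's specific model, and the rates are invented.

```python
import math

def duplex_reliability(t, lam=1e-4, c=0.99):
    """R(t) for a duplex system with per-unit failure rate lam and
    coverage c: either both units survive, or one fails and the failure
    is covered (detected, isolated, recovered) so the survivor carries
    on. A generic textbook model for illustration only."""
    r = math.exp(-lam * t)
    return r * r + 2.0 * c * r * (1.0 - r)

# Small drops in coverage cost far more reliability than they appear to.
for c in (1.0, 0.99, 0.95):
    print(c, duplex_reliability(10_000.0, c=c))
```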
NASA Technical Reports Server (NTRS)
Stiffler, J. J.; Bryant, L. A.; Guccione, L.
1979-01-01
A computer program to aid in assessing the reliability of fault tolerant avionics systems was developed. A simple mathematical expression was used to evaluate the reliability of any redundant configuration over any interval during which the failure rates and coverage parameters remained unaffected by configuration changes. Provision was made for convolving such expressions in order to evaluate the reliability of a dual mode system. A coverage model was also developed to determine the various relevant coverage coefficients as a function of the available hardware and software fault detector characteristics, and subsequent isolation and recovery delay statistics.
Development of a space-systems network testbed
NASA Technical Reports Server (NTRS)
Lala, Jaynarayan; Alger, Linda; Adams, Stuart; Burkhardt, Laura; Nagle, Gail; Murray, Nicholas
1988-01-01
This paper describes a communications network testbed which has been designed to allow the development of architectures and algorithms that meet the functional requirements of future NASA communication systems. The central hardware components of the Network Testbed are programmable circuit switching communication nodes which can be adapted by software or firmware changes to customize the testbed to particular architectures and algorithms. Fault detection, isolation, and reconfiguration have been implemented in the Network with a hybrid approach which utilizes features of both centralized and distributed techniques to provide efficient handling of faults within the Network.
Fault Detection/Isolation Verification,
1982-08-01
(Abstract garbled in the source scan; recoverable fragment: to test the performance of the algorithm on these networks, several different fault scenarios were designed for each network.)
NASA Astrophysics Data System (ADS)
Gürer, Derya; Darin, Michael H.; van Hinsbergen, Douwe J. J.; Umhoefer, Paul J.
2017-04-01
Because subduction is a destructive process, the surface record of subduction-dominated systems is naturally incomplete. Sedimentary basins may hold the most complete record of processes related to subduction, accretion, collision, and ocean closure, and thus provide key information for understanding the kinematic evolution of orogens. In central and eastern Anatolia, the Late Cretaceous-Paleogene stratigraphic record of the Ulukışla and Sivas basins supports the hypothesis that these once formed a contiguous basin. Importantly, their age and geographic positions relative to their very similar basement units and ahead of the Arabian indenter provide a critical record of pre-, syn- and post-collisional processes in the Anatolian Orogen. The Ulukışla-Sivas basin was dissected and translated along the major left-lateral Ecemiş fault zone. Since then, the basins on either side of the fault evolved independently, with considerably more plate convergence accommodated to the east in the Sivas region (eastern Anatolia) than in the Ulukışla region (central Anatolia). This led to the deformation of marine sediments and underlying ophiolites and structural growth of the Sivas Fold-and-Thrust Belt (SSFTB) since latest Eocene time, which played a major role in marine basin isolation and disconnection, along with a regionally important transition to continental conditions with evaporite deposition starting in the early Oligocene. We use geologic mapping, fault kinematic analysis, paleomagnetism, apatite fission track (AFT) thermochronology, and 40Ar/39Ar geochronology to characterize the architecture, deformation style, and structural evolution of the region. In the Ulukışla basin, dominantly E-W trending normal faults became folded or inverted due to N-S contraction since the Lutetian (middle Eocene). This was accompanied by significant counter-clockwise rotations, and post-Lutetian burial of the Niǧde Massif along the transpressional Ecemiş fault zone. Since Miocene time, the Ecemiş fault zone has been active as an extensional structure responsible for the re-exhumation of the Niǧde Massif in its footwall. To the east and in front of the Arabian indenter, the Sivas Basin evolved during Paleogene collision of the Tauride micro-continent (Africa) with the Pontides (Eurasia), but prior to Arabia collision. The thin-skinned SSFTB is a >300 km-long by 30 km-wide E-W-elongate, convex-north arcuate belt of compressional structures in Late Cretaceous to Miocene strata. It is characterized by NE- to E-trending upright folds with slight northward asymmetry, south-dipping thrust faults, and overturned folds in Paleogene strata indicating predominantly northward vergence. Several thrusts are south-vergent, typically displacing younger (Miocene) units. Structural relationships and AFT data reveal a sequence of initial crustal shortening and rapid exhumation in the late Eocene and Oligocene, an early-middle Miocene phase of relative tectonic quiescence and regional unconformity development, and a final episode of contraction during the late Miocene. Pliocene and younger units are only locally deformed by either halokinesis or transpression along diffuse and low-strain faults. Paleomagnetic data from the SSFTB reveal significant counter-clockwise rotations since Eocene time. Miocene strata north of the SSFTB consistently show moderate clockwise rotations. 
Our results indicate that collision-related growth of the orogen ended by the latest Miocene, coeval with or shortly after initiation of the North Anatolian fault zone.
Loading of the San Andreas fault by flood-induced rupture of faults beneath the Salton Sea
Brothers, Daniel; Kilb, Debi; Luttrell, Karen; Driscoll, Neal W.; Kent, Graham
2011-01-01
The southern San Andreas fault has not experienced a large earthquake for approximately 300 years, yet the previous five earthquakes occurred at ~180-year intervals. Large strike-slip faults are often segmented by lateral stepover zones. Movement on smaller faults within a stepover zone could perturb the main fault segments and potentially trigger a large earthquake. The southern San Andreas fault terminates in an extensional stepover zone beneath the Salton Sea—a lake that has experienced periodic flooding and desiccation since the late Holocene. Here we reconstruct the magnitude and timing of fault activity beneath the Salton Sea over several earthquake cycles. We observe coincident timing between flooding events, stepover fault displacement and ruptures on the San Andreas fault. Using Coulomb stress models, we show that the combined effect of lake loading, stepover fault movement and increased pore pressure could increase stress on the southern San Andreas fault to levels sufficient to induce failure. We conclude that rupture of the stepover faults, caused by periodic flooding of the palaeo-Salton Sea and by tectonic forcing, had the potential to trigger earthquake rupture on the southern San Andreas fault. Extensional stepover zones are highly susceptible to rapid stress loading and thus the Salton Sea may be a nucleation point for large ruptures on the southern San Andreas fault.
Rolling Bearing Fault Diagnosis Based on an Improved HTT Transform
Tang, Guiji; Tian, Tian; Zhou, Chong
2018-01-01
When rolling bearing failure occurs, vibration signals generally contain different signal components, such as impulsive fault feature signals, background noise and harmonic interference signals. One of the most challenging aspects of rolling bearing fault diagnosis is how to inhibit noise and harmonic interference signals, while enhancing impulsive fault feature signals. This paper presents a novel bearing fault diagnosis method, namely an improved Hilbert time–time (IHTT) transform, by combining a Hilbert time–time (HTT) transform with principal component analysis (PCA). Firstly, the HTT transform was performed on vibration signals to derive a HTT transform matrix. Then, PCA was employed to de-noise the HTT transform matrix in order to improve the robustness of the HTT transform. Finally, the diagonal time series of the de-noised HTT transform matrix was extracted as the enhanced impulsive fault feature signal, and the contained fault characteristic information was identified through further analyses of amplitude and envelope spectra. Both simulated and experimental analyses validated the superiority of the presented method for detecting bearing failures. PMID:29662013
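The PCA de-noising step can be illustrated with a truncated SVD; the sketch below assumes a time-time matrix is already available (the HTT transform itself is not reproduced), and all sizes and data are invented.

```python
import numpy as np

def pca_denoise_diagonal(M, k=5):
    """Keep the k leading principal components of a time-time matrix M
    (via truncated SVD) and return the diagonal of the reconstruction
    as the enhanced impulsive feature signal."""
    U, s, Vt = np.linalg.svd(M, full_matrices=False)
    return np.diag((U[:, :k] * s[:k]) @ Vt[:k, :])

rng = np.random.default_rng(0)
t = np.arange(256)
clean = np.outer(np.sin(0.3 * t), np.sin(0.3 * t))   # toy rank-1 matrix
noisy = clean + 0.5 * rng.standard_normal((256, 256))
print(pca_denoise_diagonal(noisy, k=3).shape)        # (256,)
```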
Reading a 400,000-year record of earthquake frequency for an intraplate fault
NASA Astrophysics Data System (ADS)
Williams, Randolph T.; Goodwin, Laurel B.; Sharp, Warren D.; Mozley, Peter S.
2017-05-01
Our understanding of the frequency of large earthquakes at timescales longer than instrumental and historical records is based mostly on paleoseismic studies of fast-moving plate-boundary faults. Similar study of intraplate faults has been limited until now, because intraplate earthquake recurrence intervals are generally long (10s to 100s of thousands of years) relative to conventional paleoseismic records determined by trenching. Long-term variations in the earthquake recurrence intervals of intraplate faults therefore are poorly understood. Longer paleoseismic records for intraplate faults are required both to better quantify their earthquake recurrence intervals and to test competing models of earthquake frequency (e.g., time-dependent, time-independent, and clustered). We present the results of U-Th dating of calcite veins in the Loma Blanca normal fault zone, Rio Grande rift, New Mexico, United States, that constrain earthquake recurrence intervals over much of the past ~550 ka, the longest direct record of seismic frequency documented for any fault to date. The 13 distinct seismic events delineated by this effort demonstrate that for >400 ka, the Loma Blanca fault produced periodic large earthquakes, consistent with a time-dependent model of earthquake recurrence. However, this time-dependent series was interrupted by a cluster of earthquakes at ~430 ka. The carbon isotope composition of calcite formed during this seismic cluster records rapid degassing of CO2, suggesting an interval of anomalous fluid source. In concert with U-Th dates recording decreased recurrence intervals, we infer that seismicity during this interval records fault-valve behavior. These data provide insight into the long-term seismic behavior of the Loma Blanca fault and, by inference, other intraplate faults.
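A minimal sketch of how recurrence statistics discriminate quasi-periodic from clustered behavior; the event ages below are invented for illustration and are not the Loma Blanca U-Th dates.

```python
import statistics

def recurrence_stats(event_ages_ka):
    """Recurrence intervals and their coefficient of variation (CV) from
    event ages in ka, oldest first. Quasi-periodic (time-dependent)
    behavior gives a low CV; a cluster of closely spaced events
    inflates it."""
    intervals = [a - b for a, b in zip(event_ages_ka, event_ages_ka[1:])]
    mean = statistics.mean(intervals)
    return intervals, mean, statistics.stdev(intervals) / mean

# Hypothetical ages with a cluster near 430-422 ka.
print(recurrence_stats([550, 510, 468, 430, 426, 422, 380, 340]))
```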
Investigating the creeping section of the San Andreas Fault using ALOS PALSAR interferometry
NASA Astrophysics Data System (ADS)
Agram, P. S.; Wortham, C.; Zebker, H. A.
2010-12-01
In recent years, time-series InSAR techniques have been used to study the temporal characteristics of various geophysical phenomena that produce surface deformation, including earthquakes and magma migration in volcanoes. Conventional InSAR and time-series InSAR techniques have also been successfully used to study aseismic creep across faults in urban areas like the Northern Hayward Fault in California [1-3]. However, application of these methods to studying the time-dependent creep across the Central San Andreas Fault using C-band ERS and Envisat radar satellites has resulted in limited success. While these techniques estimate the average long-term far-field deformation rates reliably, creep measurement close to the fault (< 3-4 km) is virtually impossible due to heavy decorrelation at C-band (6 cm wavelength). Shanker and Zebker (2009) [4] used the Persistent Scatterer (PS) time-series InSAR technique to estimate a time-dependent non-uniform creep signal across a section of the creeping segment of the San Andreas Fault. However, the identified PS network was spatially too sparse (1 per sq. km) to study temporal characteristics of deformation of areas close to the fault. In this work, we use L-band (24 cm wavelength) SAR data from the PALSAR instrument on-board the ALOS satellite, launched by the Japanese Aerospace Exploration Agency (JAXA) in 2006, to study the temporal characteristics of creep across the Central San Andreas Fault. The longer wavelength at L-band improves observed correlation over the entire scene, which significantly increases the ground area coverage of estimated deformation in each interferogram, but at the cost of decreased sensitivity of interferometric phase to surface deformation. However, noise levels in our deformation estimates can be decreased by combining information from multiple SAR acquisitions using time-series InSAR techniques. We analyze 13 SAR acquisitions spanning the time period from March 2007 to Dec 2009 using the Short Baseline Subset Analysis (SBAS) time-series InSAR technique [3]. We present detailed comparisons of the estimated time series of fault creep as a function of position along the fault, including the locked section around Parkfield, CA. We also present comparisons between the InSAR time series and GPS network observations in the Parkfield region. During these three years of observation, the average fault creep is estimated to be 35 mm/yr. References [1] Bürgmann, R., E. Fielding, and J. Sukhatme, Slip along the Hayward fault, California, estimated from space-based synthetic aperture radar interferometry, Geology, 26, 559-562, 1998. [2] Ferretti, A., C. Prati, and F. Rocca, Permanent Scatterers in SAR Interferometry, IEEE Trans. Geosci. Remote Sens., 39, 8-20, 2001. [3] Lanari, R., F. Casu, M. Manzo, and P. Lundgren, Application of the SBAS D-InSAR technique to fault creep: A case study of the Hayward Fault, California, Remote Sensing of Environment, 109(1), 20-28, 2007. [4] Shanker, A. P., and H. Zebker, Edgelist phase unwrapping algorithm for time-series InSAR, J. Opt. Soc. Am. A, 37(4), 2010.
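To make the SBAS idea concrete, a toy single-pixel inversion is sketched below with synthetic dates and pairs; unwrapping, viewing geometry, and atmospheric corrections are all ignored, so this is an illustration of the least-squares step only.

```python
import numpy as np

def sbas_velocity(pairs, dates, dphi):
    """Minimal SBAS-style inversion for one pixel: each small-baseline
    interferogram constrains the phase change between two acquisition
    dates; solve least squares for per-interval velocities (phase units
    per day)."""
    dates = np.asarray(dates, float)
    dt = np.diff(dates)
    A = np.zeros((len(pairs), len(dates) - 1))
    for row, (i, j) in enumerate(pairs):   # pair (i, j): phase_j - phase_i
        A[row, i:j] = dt[i:j]
    v, *_ = np.linalg.lstsq(A, np.asarray(dphi, float), rcond=None)
    return v

dates = [0, 46, 92, 138, 184]              # days; ALOS-like 46-day repeat
pairs = [(0, 1), (1, 2), (0, 2), (2, 3), (3, 4), (2, 4)]
true_v = 0.01
dphi = [true_v * (dates[j] - dates[i]) for i, j in pairs]  # synthetic data
print(sbas_velocity(pairs, dates, dphi))   # recovers ~0.01 per interval
```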
Fault Management Technology Maturation for NASA's Constellation Program
NASA Technical Reports Server (NTRS)
Waterman, Robert D.
2010-01-01
This slide presentation reviews the maturation of fault management technology in preparation for the Constellation Program. There is a review of the Space Shuttle Main Engine (SSME) and a discussion of a couple of incidents with the shuttle main engine and tanking that indicated the necessity for predictive maintenance. Included is a review of the planned Ares I-X Ground Diagnostic Prototype (GDP) and further information about detection and isolation of faults using the Testability Engineering and Maintenance System (TEAMS). Another system being readied for use, which detects anomalies, is the Inductive Monitoring System (IMS). The IMS automatically learns how the system behaves and alerts operations if the current behavior is anomalous. The comparison of STS-83 and STS-107 (i.e., the Columbia accident) is shown as an example of the anomaly detection capabilities.
The Hayward-Rodgers Creek Fault System: Learning from the Past to Forecast the Future
NASA Astrophysics Data System (ADS)
Schwartz, D. P.; Lienkaemper, J. J.; Hecker, S.
2007-12-01
The San Francisco Bay area is located within the Pacific-North American plate boundary. As a result, the region has the highest density of active faults per square kilometer of any urban center in the US. Between the Farallon Islands and Livermore, the faults of the San Andreas fault system are slipping at a rate of about 40 mm/yr. Approximately 25 percent of this rate is accommodated by the Hayward fault and its continuation to the north, the Rodgers Creek fault. The Hayward fault extends 88 km from Warm Springs on the south into San Pablo Bay on the north, traversing the most heavily urbanized part of the Bay Area. The Rodgers Creek fault extends another 63 km, passing through Santa Rosa and ending south of Healdsburg. Geologic, seismologic, and geodetic studies during the past ten years have significantly increased our knowledge of this system. In particular, paleoseismic studies of the timing of past earthquakes have provided critical new information for improving our understanding of how these faults may work in time and space, and for estimating the probability of future earthquakes. The most spectacular result is an 11-earthquake record on the southern Hayward fault that extends back to A.D. 170. It suggests an average time interval between large earthquakes of 170 years for this period, with a shorter interval of 140 years for the five most recent earthquakes. Paleoseismic investigations have also shown that prior to the most recent large earthquake on the southern Hayward fault in 1868, large earthquakes occurred on the southern Hayward fault between 1658 and 1786, on the northern Hayward fault between 1640 and 1776, and on the Rodgers Creek fault between 1690 and 1776. These could have been three separate earthquakes. However, the overlapping radiocarbon dates for these paleoearthquakes allow the possibility that these faults may have ruptured together in several different combinations: a combined southern and northern Hayward fault earthquake, a Rodgers Creek-northern Hayward fault earthquake, or a rupture of all three fault sections. Each of these rupture combinations would produce a magnitude larger than the 1868 event (M~6.9). In 2003, the Working Group on California Earthquake Probabilities released a new earthquake forecast for the Bay Area. Using the earthquake timing data and alternative fault rupture models, the Working Group estimated a 27 percent likelihood of a M≥6.7 earthquake along the Hayward-Rodgers Creek fault zone by the year 2031. This is the highest probability of any individual fault system in the Bay Area. New paleoseismic data will allow updating of this forecast.
Marín-Lechado, Carlos; Galindo-Zaldívar, Jesús; Gil, Antonio José; Borque, María Jesús; de Lacy, María Clara; Pedrera, Antonio; López-Garrido, Angel Carlos; Alfaro, Pedro; García-Tortosa, Francisco; Ramos, Maria Isabel; Rodríguez-Caderot, Gracia; Rodríguez-Fernández, José; Ruiz-Constán, Ana; de Galdeano-Equiza, Carlos Sanz
2010-01-01
The Campo de Dalias is an area with relevant seismicity associated with the active tectonic deformation of the southern boundary of the Betic Cordillera. A non-permanent GPS network was installed to monitor, for the first time, the fault- and fold-related activity. In addition, two high precision levelling profiles were measured twice over a one-year period across the Balanegra Fault, one of the most active faults recognized in the area. The absence of significant movement of the main fault surface suggests seismogenic behaviour. The recurrence interval may be between 100 and 300 years. Repeated GPS and high precision levelling monitoring of the fault surface over a long time period may help us to determine future fault behaviour with regard to the existence (or not) of a creep component, the accumulation of elastic deformation before faulting, and implications of the fold-fault relationship. PMID:22319309
A dual-processor multi-frequency implementation of the FINDS algorithm
NASA Technical Reports Server (NTRS)
Godiwala, Pankaj M.; Caglayan, Alper K.
1987-01-01
This report presents a parallel processing implementation of the FINDS (Fault Inferring Nonlinear Detection System) algorithm on a dual-processor configured target flight computer. First, a filter initialization scheme is presented which allows the no-fail filter (NFF) states to be initialized using the first iteration of the flight data. A modified failure isolation strategy, compatible with the new failure detection strategy reported earlier, is discussed, and the performance of the new FDI algorithm is analyzed using flight recorded data from the NASA ATOPS B-737 aircraft in a Microwave Landing System (MLS) environment. The results show that low-level MLS, IMU, and IAS sensor failures are detected and isolated instantaneously, while accelerometer and rate gyro failures continue to take comparatively longer to detect and isolate. The parallel implementation is accomplished by partitioning the FINDS algorithm into two parts: one based on the translational dynamics and the other based on the rotational kinematics. Finally, a multi-rate implementation of the algorithm is presented, yielding significantly lower execution times with acceptable estimation and FDI performance.
Amozegar, M; Khorasani, K
2016-04-01
In this paper, a new approach for Fault Detection and Isolation (FDI) of gas turbine engines is proposed by developing an ensemble of dynamic neural network identifiers. For health monitoring of the gas turbine engine, its dynamics is first identified by constructing three separate or individual dynamic neural network architectures. Specifically, a dynamic multi-layer perceptron (MLP), a dynamic radial-basis function (RBF) neural network, and a dynamic support vector machine (SVM) are trained to individually identify and represent the gas turbine engine dynamics. Next, three ensemble-based techniques are developed to represent the gas turbine engine dynamics, namely, two heterogeneous ensemble models and one homogeneous ensemble model. It is first shown that all ensemble approaches do significantly improve the overall performance and accuracy of the developed system identification scheme when compared to each of the stand-alone solutions. The best selected stand-alone model (i.e., the dynamic RBF network) and the best selected ensemble architecture (i.e., the heterogeneous ensemble) in terms of their performances in achieving an accurate system identification are then selected for solving the FDI task. The required residual signals are generated by using both a single model-based solution and an ensemble-based solution under various gas turbine engine health conditions. Our extensive simulation studies demonstrate that the fault detection and isolation task achieved by using the residuals that are obtained from the dynamic ensemble scheme results in a significantly more accurate and reliable performance as illustrated through detailed quantitative confusion matrix analysis and comparative studies. Copyright © 2016 Elsevier Ltd. All rights reserved.
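The ensemble residual idea can be sketched generically as a weighted average of identifier predictions; the dummy models below merely stand in for the trained MLP/RBF/SVM identifiers, so this is a schematic of the residual generation, not the paper's implementation.

```python
class EnsembleResidual:
    """Average the one-step-ahead predictions of several already-trained
    identifiers of the healthy engine; the prediction error (residual)
    against the measured output drives detection. Models are any objects
    with a .predict(x) method; training is out of scope here."""
    def __init__(self, models, weights=None):
        self.models = models
        self.weights = weights or [1.0 / len(models)] * len(models)

    def residual(self, x, y_measured):
        y_hat = sum(w * m.predict(x) for w, m in zip(self.weights, self.models))
        return y_measured - y_hat

class Dummy:                       # stand-in for an MLP / RBF / SVM identifier
    def __init__(self, bias):
        self.bias = bias
    def predict(self, x):
        return x + self.bias

ens = EnsembleResidual([Dummy(0.01), Dummy(-0.02), Dummy(0.0)])
print(ens.residual(1.0, 1.05))     # residual ~0.053 flags a deviation
```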
NASA Astrophysics Data System (ADS)
Luebbert, D.; Arthur, J.; Sztucki, M.; Metzger, T. H.; Griffin, P. B.; Patel, J. R.
2002-10-01
Stacking faults in boron-implanted silicon give rise to streaks or rods of scattered x-ray intensity normal to the stacking fault plane. We have used the diffuse scattering rods to follow the growth of faults as a function of time when boron-implanted silicon is annealed in the range of 925 to 1025 °C. From the growth kinetics we obtain an activation energy for interstitial migration in silicon: E_I = 1.98 ± 0.06 eV. Fault intensity and size versus time results indicate that faults do not shrink and disappear, but rather are annihilated by a dislocation reaction mechanism.
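The activation energy follows from an Arrhenius fit of growth rate against inverse temperature; the sketch below uses fabricated rates constructed to reproduce the quoted 1.98 eV, purely to show the fitting step.

```python
import numpy as np

K_B = 8.617e-5  # Boltzmann constant, eV/K

def arrhenius_energy(temps_c, rates):
    """Fit ln(rate) = ln(A) - E/(kT) and return the activation energy E
    in eV. Rates and prefactor are fabricated for illustration, not
    measured data from the experiment."""
    T = np.asarray(temps_c, float) + 273.15
    slope, _ = np.polyfit(1.0 / (K_B * T), np.log(rates), 1)
    return -slope

temps = [925, 975, 1025]            # anneal temperatures, deg C
E_true = 1.98
rates = [1e13 * np.exp(-E_true / (K_B * (t + 273.15))) for t in temps]
print(arrhenius_energy(temps, rates))   # recovers ~1.98 eV
```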
New constraints on the active tectonic deformation of the Aegean
Nyst, M.; Thatcher, W.
2004-01-01
Site velocities from six separate Global Positioning System (GPS) networks comprising 374 stations have been referred to a single common Eurasia-fixed reference frame to map the velocity distribution over the entire Aegean. We use the GPS velocity field to identify deforming regions, rigid elements, and potential microplate boundaries, and build upon previous work by others to initially specify rigid elements in central Greece, the South Aegean, Anatolia, and the Sea of Marmara. We apply an iterative approach, tentatively defining microplate boundaries, determining best fit rigid rotations, examining misfit patterns, and revising the boundaries to achieve a better match between model and data. Short-term seismic cycle effects are minor contaminants of the data that we remove when necessary to isolate the long-term kinematics. We find that present day Aegean deformation is due to the relative motions of four microplates and straining in several isolated zones internal to them. The RMS misfit of model to data is about 2-sigma, very good when compared to the typical match between coseismic fault models and GPS data. The simplicity of the microplate description of the deformation and its good fit to the GPS data are surprising and were not anticipated by previous work, which had suggested either many rigid elements or broad deforming zones that comprise much of the Aegean region. The isolated deforming zones are also unexpected and cannot be explained by the kinematics of the microplate motions. Strain rates within internally deforming zones are extensional and range from 30 to 50 nanostrain/year (nstrain/year, 10^-9/year), 1 to 2 orders of magnitude lower than rates observed across the major microplate boundaries. Lower strain rates may exist elsewhere within the microplates but are only resolved in Anatolia, where extension of 13 ± 4 nstrain/year is required by the data. Our results suggest that despite the detailed complexity of active continental deformation revealed by seismicity, active faulting, fault geomorphology, and earthquake fault plane solutions, continental tectonics, at least in the Aegean, is to first order very similar to global plate tectonics and obeys the same simple kinematic rules. Although the widespread distribution of Aegean seismicity and active faulting might suggest a rather spatially homogeneous seismic hazard, the focusing of deformation near microplate boundaries implies the highest hazard is comparably localized.
Active, capable, and potentially active faults - a paleoseismic perspective
Machette, M.N.
2000-01-01
Maps of faults (geologically defined source zones) may portray seismic hazards in a wide range of completeness depending on which types of faults are shown. Three fault terms - active, capable, and potential - are used in a variety of ways for different reasons or applications. Nevertheless, to be useful for seismic-hazards analysis, fault maps should encompass a time interval that includes several earthquake cycles. For example, if the common recurrence in an area is 20,000-50,000 years, then maps should include faults that are 50,000-100,000 years old (two to five typical earthquake cycles), thus allowing for temporal variability in slip rate and recurrence intervals. Conversely, in more active areas such as plate boundaries, maps showing faults that are <10,000 years old should include those with at least 2 to as many as 20 paleoearthquakes. For the International Lithosphere Programs' Task Group II-2 Project on Major Active Faults of the World our maps and database will show five age categories and four slip rate categories that allow one to select differing time spans and activity rates for seismic-hazard analysis depending on tectonic regime. The maps are accompanied by a database that describes evidence for Quaternary faulting, geomorphic expression, and paleoseismic parameters (slip rate, recurrence interval and time of most recent surface faulting). These maps and databases provide an inventory of faults that would be defined as active, capable, and potentially active for seismic-hazard assessments.
McLaughlin, R.J.; Langenheim, V.E.; Schmidt, K.M.; Jachens, R.C.; Stanley, R.G.; Jayko, A.S.; McDougall, K.A.; Tinsley, J.C.; Valin, Z.C.
1999-01-01
In the southern San Francisco Bay region of California, oblique dextral reverse faults that verge northeastward from the San Andreas fault experienced triggered slip during the 1989 M7.1 Loma Prieta earthquake. The role of these range-front thrusts in the evolution of the San Andreas fault system and the future seismic hazard that they may pose to the urban Santa Clara Valley are poorly understood. Based on recent geologic mapping and geophysical investigations, we propose that the range-front thrust system evolved in conjunction with development of the San Andreas fault system. In the early Miocene, the region was dominated by a system of northwestwardly propagating, basin-bounding, transtensional faults. Beginning as early as middle Miocene time, however, the transtensional faulting was superseded by transpressional NE-stepping thrust and reverse faults of the range-front thrust system. Age constraints on the thrust faults indicate that the locus of contraction has focused on the Monte Vista, Shannon, and Berrocal faults since about 4.8 Ma. Fault slip and fold reconstructions suggest that crustal shortening between the San Andreas fault and the Santa Clara Valley within this time frame is ~21%, amounting to as much as 3.2 km at a rate of 0.6 mm/yr. Rates probably have not remained constant; average rates appear to have been much lower in the past few 100 ka. The distribution of coseismic surface contraction during the Loma Prieta earthquake, active seismicity, late Pleistocene to Holocene fluvial terrace warping, and geodetic data further suggest that the active range-front thrust system includes blind thrusts. Critical unresolved issues include information on the near-surface locations of buried thrusts, the timing of recent thrust earthquake events, and their recurrence in relation to earthquakes on the San Andreas fault.
NASA Astrophysics Data System (ADS)
Viola, Giulio
2017-04-01
Faulting accommodates momentous deformation and its style reflects the complex interplay of often transient processes such as friction, fluid flow and rheological changes within generally dilatant systems. Brittle faults are thus unique archives of the stress state and the physical and chemical conditions at the time of both initial strain localization and subsequent slip(s) during structural reactivation. Opening those archives, however, may be challenging due to the commonly convoluted (if not even chaotic) nature of brittle fault architectures and fault rocks. This is because, once formed, faults are extremely sensitive to variations in stress field and environmental conditions and are prone to readily slip in a variety of conditions, also in regions affected by only weak, far-field stresses. The detailed, multi-scalar structural analysis of faults and of fault rocks has to be the starting point for any study aiming at reconstructing the complex framework of brittle deformation. However, considering that present-day exposures of faults only represent the end result of the faults' often protracted and heterogeneous histories, the obtained structural and mechanical results have to be integrated over the life span of the studied fault system. Dating of synkinematic illite/muscovite to constrain the time-integrated evolution of faults is therefore the natural addition to detailed structural studies. By means of selected examples it will be demonstrated how careful structural analysis integrated with illite characterization and K-Ar dating allows the high-resolution reconstruction of brittle deformation histories and, in turn, multiple constraints to be placed on strain localization, deformation mechanisms, fluid flow, mineral alteration and authigenesis within actively deforming brittle fault rocks. Complex and long brittle histories can thus be reconstructed and untangled in any tectonic setting.
NASA Astrophysics Data System (ADS)
Vadman, M.; Bemis, S. P.
2017-12-01
Even at high tectonic rates, detection of possible off-fault plastic/aseismic deformation and variability in far-field strain accumulation requires high spatial resolution data and likely decades of measurements. Due to the influence that variability in interseismic deformation could have on the timing, size, and location of future earthquakes and the calculation of modern geodetic estimates of strain, we attempt to use historical aerial photographs to constrain deformation through time across a locked fault. Modern photo-based 3D reconstruction techniques facilitate the creation of dense point clouds from historical aerial photograph collections. We use these tools to generate a time series of high-resolution point clouds that span 10-20 km across the Carrizo Plain segment of the San Andreas fault. We chose this location due to the high tectonic rates along the San Andreas fault and lack of vegetation, which may obscure tectonic signals. We use ground control points collected with differential GPS to establish scale and georeference the aerial photograph-derived point clouds. With a locked fault assumption, point clouds can be co-registered (to one another and/or the 1.7 km wide B4 airborne lidar dataset) along the fault trace to calculate relative displacements away from the fault. We use CloudCompare to compute 3D surface displacements, which reflect the interseismic strain accumulation that occurred in the time interval between photo collections. As expected, we do not observe clear surface displacements along the primary fault trace in our comparisons of the B4 lidar data against the aerial photograph-derived point clouds. However, there may be small scale variations within the lidar swath area that represent near-fault plastic deformation. With large-scale historical photographs available for the Carrizo Plain extending back to at least the 1940s, we can potentially sample nearly half the interseismic period since the last major earthquake on this portion of this fault (1857). Where sufficient aerial photograph coverage is available, this approach has the potential to illuminate complex fault zone processes for this and other major strike-slip faults.
Spatiotemporal patterns of fault slip rates across the Central Sierra Nevada frontal fault zone
NASA Astrophysics Data System (ADS)
Rood, Dylan H.; Burbank, Douglas W.; Finkel, Robert C.
2011-01-01
Patterns in fault slip rates through time and space are examined across the transition from the Sierra Nevada to the Eastern California Shear Zone-Walker Lane belt. At each of four sites along the eastern Sierra Nevada frontal fault zone between 38 and 39° N latitude, geomorphic markers, such as glacial moraines and outwash terraces, are displaced by a suite of range-front normal faults. Using geomorphic mapping, surveying, and 10Be surface exposure dating, mean fault slip rates are defined, and by utilizing markers of different ages (generally, ~20 ka and ~150 ka), rates through time and interactions among multiple faults are examined over 10^4-10^5 year timescales. At each site for which data are available for the last ~150 ky, mean slip rates across the Sierra Nevada frontal fault zone have probably not varied by more than a factor of two over time spans equal to half of the total time interval (~20 ky and ~150 ky timescales): 0.3 ± 0.1 mm/yr (mode and 95% CI) at both Buckeye Creek in the Bridgeport basin and Sonora Junction; and 0.4 +0.3/-0.1 mm/yr along the West Fork of the Carson River at Woodfords. Data permit rates that are relatively constant over the time scales examined. In contrast, slip rates are highly variable in space over the last ~20 ky. Slip rates decrease by a factor of 3-5 northward over a distance of ~20 km between the northern Mono Basin (1.3 +0.6/-0.3 mm/yr at the Lundy Canyon site) and the Bridgeport Basin (0.3 ± 0.1 mm/yr). The 3-fold decrease in the slip rate on the Sierra Nevada frontal fault zone northward from Mono Basin is indicative of a change in the character of faulting north of the Mina Deflection as extension is transferred eastward onto normal faults between the Sierra Nevada and Walker Lane belt. A compilation of regional deformation rates reveals that the spatial pattern of extension rates changes along strike of the Eastern California Shear Zone-Walker Lane belt. South of the Mina Deflection, extension is accommodated within a diffuse zone of normal and oblique faults, with extension rates increasing northward on the Fish Lake Valley fault. Where faults of the Eastern California Shear Zone terminate northward into the Mina Deflection, extension rates increase northward along the Sierra Nevada frontal fault zone to ~0.7 mm/yr in northern Mono Basin. This spatial pattern suggests that extension is transferred from more easterly fault systems (e.g., the Fish Lake Valley fault) and localized on the Sierra Nevada frontal fault zone as the Eastern California Shear Zone-Walker Lane belt faulting is transferred through the Mina Deflection.
Real-Time Fault Classification for Plasma Processes
Yang, Ryan; Chen, Rongshun
2011-01-01
Plasma process tools, which usually cost several millions of US dollars, are often used in the semiconductor fabrication etching process. If the plasma process is halted due to some process fault, productivity will be reduced and the cost will increase. In order to maximize the product/wafer yield and tool productivity, timely and effective fault detection is required in a plasma reactor. The classification of fault events can help users to quickly identify fault processes, and thus can save downtime of the plasma tool. In this work, optical emission spectroscopy (OES) is employed as the metrology sensor for in-situ process monitoring. Splitting into twelve different match rates by spectrum bands, the matching rate indicator in our previous work (Yang, R.; Chen, R.S. Sensors 2010, 10, 5703–5723) is used to detect the fault process. Based on the match data, a real-time classification of plasma faults is achieved by a novel method developed in this study. Experiments were conducted to validate the novel fault classification. From the experimental results, we may conclude that the proposed method is feasible inasmuch as the overall accuracy rate of the classification for fault event shifts is 27 out of 28, or about 96.4%. PMID:22164001
A Game Theoretic Fault Detection Filter
NASA Technical Reports Server (NTRS)
Chung, Walter H.; Speyer, Jason L.
1995-01-01
The fault detection process is modelled as a disturbance attenuation problem. The solution to this problem is found via differential game theory, leading to an H(sub infinity) filter which bounds the transmission of all exogenous signals save the fault to be detected. For a general class of linear systems which includes some time-varying systems, it is shown that this transmission bound can be taken to zero by simultaneously bringing the sensor noise weighting to zero. Thus, in the limit, a complete transmission block can be achieved, making the game filter into a fault detection filter. When we specialize this result to time-invariant systems, it is found that the detection filter attained in the limit is identical to the well known Beard-Jones Fault Detection Filter. That is, all fault inputs other than the one to be detected (the "nuisance faults") are restricted to an invariant subspace which is unobservable to a projection on the output. For time-invariant systems, it is also shown that in the limit, the order of the state-space and the game filter can be reduced by factoring out the invariant subspace. The result is a lower dimensional filter which can observe only the fault to be detected. A reduced-order filter can also be generated for time-varying systems, though the computational overhead may be intensive. An example given at the end of the paper demonstrates the effectiveness of the filter as a tool for fault detection and identification.
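A generic statement of the disturbance attenuation criterion behind such filters is sketched below; the notation (weights Q, V, projector H, bound gamma) is illustrative and not necessarily that of the paper.

```latex
% Disturbance attenuation bound: transmission from the nuisance inputs
% (process noise w, sensor noise v, nuisance faults) to the projected
% estimation error is bounded by gamma; notation is illustrative.
\[
  \sup_{w,\,v \ne 0}
  \frac{\int_0^T \lVert H\,(y - \hat{y}) \rVert^{2}\,\mathrm{d}t}
       {\int_0^T \big( \lVert w \rVert_{Q^{-1}}^{2}
                     + \lVert v \rVert_{V^{-1}}^{2} \big)\,\mathrm{d}t}
  \;\le\; \gamma^{2}
\]
% Taking gamma (with the sensor-noise weighting) to zero blocks all
% transmission except the target fault, recovering a detection filter.
```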
Ste. Genevieve Fault Zone, Missouri and Illinois. Final report
DOE Office of Scientific and Technical Information (OSTI.GOV)
Nelson, W.J.; Lumm, D.K.
1985-07-01
The Ste. Genevieve Fault Zone is a major structural feature which strikes NW-SE for about 190 km on the NE flank of the Ozark Dome. There is up to 900 m of vertical displacement on high angle normal and reverse faults in the fault zone. At both ends the Ste. Genevieve Fault Zone dies out into a monocline. Two periods of faulting occurred. The first was in late Middle Devonian time and the second from latest Mississippian through early Pennsylvanian time, with possible minor post-Pennsylvanian movement. No evidence was found to support the hypothesis that the Ste. Genevieve Fault Zone is part of a northwestward extension of the late Precambrian-early Cambrian Reelfoot Rift. The magnetic and gravity anomalies cited in support of the "St. Louis arm" of the Reelfoot Rift possibly reflect deep crustal features underlying and older than the volcanic terrain of the St. Francois Mountains (1.2 to 1.5 billion years old). In regard to neotectonics, no displacements of Quaternary sediments have been detected, but small earthquakes occur from time to time along the Ste. Genevieve Fault Zone. Many faults in the zone appear capable of slipping under the current stress regime of east-northeast to west-southwest horizontal compression. We conclude that the zone may continue to experience small earth movements, but catastrophic quakes similar to those at New Madrid in 1811-12 are unlikely. 32 figs., 1 tab.
Real-time diagnostics of the reusable rocket engine using on-line system identification
NASA Technical Reports Server (NTRS)
Guo, T.-H.; Merrill, W.; Duyar, A.
1990-01-01
A model-based failure diagnosis system has been proposed for real-time diagnosis of SSME failures. Actuation, sensor, and system degradation failure modes are all considered by the proposed system. In the case of SSME actuation failures, it was shown that real-time identification can effectively be used for failure diagnosis purposes. It is a direct approach since it reduces the detection, isolation, and the estimation of the extent of the failures to the comparison of parameter values before and after the failure. As with any model-based failure detection system, the proposed approach requires a fault model that embodies the essential characteristics of the failure process. The proposed diagnosis approach has the added advantage that it can be used as part of an intelligent control system for failure accommodation purposes.
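Recursive least squares with a forgetting factor is one standard way to realize such on-line identification; the sketch below uses a toy scalar actuator model (not the SSME model) to show how post-fault drift of an estimated parameter exposes both the failure and its extent.

```python
import numpy as np

def rls(phi, y, lam=0.98):
    """Recursive least squares with forgetting factor lam: a generic
    stand-in for the on-line identification stage. Drift of the
    estimated parameters away from nominal indicates a failure and
    its extent. phi: (N, p) regressors; y: (N,) measured outputs."""
    n, p = phi.shape
    theta = np.zeros(p)
    P = np.eye(p) * 1e3
    history = []
    for k in range(n):
        x = phi[k]
        g = P @ x / (lam + x @ P @ x)            # gain vector
        theta = theta + g * (y[k] - x @ theta)   # parameter update
        P = (P - np.outer(g, x @ P)) / lam       # covariance update
        history.append(theta.copy())
    return np.array(history)

# Toy actuator gain y = a*u, with 'a' degrading mid-run (simulated fault).
u = np.ones(200)
a = np.where(np.arange(200) < 100, 1.0, 0.7)
est = rls(u[:, None], a * u)
print(est[95, 0], est[150, 0])   # ~1.0 before; drifting toward 0.7 after
```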
Software fault tolerance for real-time avionics systems
NASA Technical Reports Server (NTRS)
Anderson, T.; Knight, J. C.
1983-01-01
Avionics systems have very high reliability requirements and are therefore prime candidates for the inclusion of fault tolerance techniques. In order to provide tolerance to software faults, some form of state restoration is usually advocated as a means of recovery. State restoration can be very expensive for systems which utilize concurrent processes. The concurrency present in most avionics systems and the further difficulties introduced by timing constraints imply that providing tolerance for software faults may be inordinately expensive or complex. A straightforward pragmatic approach to software fault tolerance which is believed to be applicable to many real-time avionics systems is proposed. A classification system for software errors is presented together with approaches to recovery and continued service for each error type.
NASA Astrophysics Data System (ADS)
Ushaq, Muhammad; Fang, Jiancheng
2013-10-01
Integrated navigation systems for various applications generally employ the centralized Kalman filter (CKF), wherein all measured sensor data are communicated to a single central Kalman filter. The advantage of the CKF is that there is minimal loss of information and high precision under benign conditions. But the CKF may suffer from computational overloading and poor fault tolerance. The alternative is the federated Kalman filter (FKF), wherein local estimates can deliver optimal or suboptimal state estimates as per a certain information fusion criterion. The FKF has enhanced throughput and multiple-level fault detection capability. The standard CKF and FKF require that the system noise and the measurement noise are zero-mean and Gaussian. Moreover, it is assumed that the covariances of the system and measurement noises remain constant. But if the theoretical and actual statistical features employed in the Kalman filter are not compatible, the Kalman filter does not render satisfactory solutions and divergence problems also occur. To resolve such problems, in this paper, an adaptive Kalman filter scheme strengthened with a fuzzy inference system (FIS) is employed to adapt the statistical features of contributing sensors, online, in the light of real system dynamics and varying measurement noises. Excessive faults are detected and isolated by employing the chi-square test method. As a case study, the presented scheme has been implemented on a Strapdown Inertial Navigation System (SINS) integrated with the Celestial Navigation System (CNS), GPS and Doppler radar using the FKF. Collectively the overall system can be termed the SINS/CNS/GPS/Doppler integrated navigation system. The simulation results have validated the effectiveness of the presented scheme with significantly enhanced precision, reliability and fault tolerance. Effectiveness of the scheme has been tested against simulated abnormal errors/noises during different time segments of flight. It is believed that the presented scheme can be applied to the navigation system of an aircraft or unmanned aerial vehicle (UAV).
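The chi-square residual test mentioned above is standard; a minimal sketch with an assumed innovation covariance and an illustrative threshold follows.

```python
import numpy as np

def chi_square_fault_test(nu, S, threshold=16.27):
    """Chi-square residual test commonly used for sensor fault detection
    in (federated) Kalman filters: lambda = nu^T S^-1 nu follows a
    chi-square distribution with dim(nu) degrees of freedom under the
    no-fault hypothesis. threshold=16.27 is the 0.999 quantile for 3
    degrees of freedom (an illustrative choice)."""
    lam = float(nu @ np.linalg.solve(S, nu))
    return lam, lam > threshold

S = np.diag([0.04, 0.04, 0.09])   # assumed innovation covariance
print(chi_square_fault_test(np.array([0.01, -0.02, 0.03]), S))  # healthy
print(chi_square_fault_test(np.array([0.90, -0.70, 1.10]), S))  # faulty
```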
Novel Directional Protection Scheme for the FREEDM Smart Grid System
NASA Astrophysics Data System (ADS)
Sharma, Nitish
This research primarily deals with the design and validation of the protection system for a large-scale meshed distribution system. The large scale system simulation (LSSS) is a system-level PSCAD model which is used to validate component models for different time-scale platforms, to provide a virtual testing platform for the Future Renewable Electric Energy Delivery and Management (FREEDM) system. It is also used to validate the cases of power system protection, renewable energy integration and storage, and load profiles. The protection of the FREEDM system against any abnormal condition is one of the important tasks. The addition of distributed generation and the power-electronic-based solid state transformer adds to the complexity of the protection. The FREEDM loop system has a fault current limiter and, in addition, the Solid State Transformer (SST) limits the fault current to 2.0 per unit. Former students at ASU developed a protection scheme using fiber-optic cable; however, during the NSF-FREEDM site visit, the National Science Foundation (NSF) team regarded the system as incompatible with the long distances involved. Hence, a new wireless protection scheme is presented in this thesis. The use of wireless communication is extended to protect the large-scale meshed distributed generation from any fault. The trip signal generated by the pilot protection system is used to trigger the FID (fault isolation device), an electronic circuit breaker, to open. The trip signal must also be received and accepted by the SST, which must block its operation immediately. A comprehensive protection system for the large-scale meshed distribution system has been developed in PSCAD with the ability to quickly detect faults. The validation of the protection system is performed by building a hardware model using commercial relays at the ASU power laboratory.
Distributed Fault Detection Based on Credibility and Cooperation for WSNs in Smart Grids.
Shao, Sujie; Guo, Shaoyong; Qiu, Xuesong
2017-04-28
Due to the increasingly important role in monitoring and data collection that sensors play, accurate and timely fault detection is a key issue for wireless sensor networks (WSNs) in smart grids. This paper presents a novel distributed fault detection mechanism for WSNs based on credibility and cooperation. Firstly, a reasonable credibility model of a sensor is established to identify any suspicious status of the sensor according to its own temporal data correlation. Based on the credibility model, the suspicious sensor is then chosen to launch fault diagnosis requests. Secondly, the sending time of fault diagnosis request is discussed to avoid the transmission overhead brought about by unnecessary diagnosis requests and improve the efficiency of fault detection based on neighbor cooperation. The diagnosis reply of a neighbor sensor is analyzed according to its own status. Finally, to further improve the accuracy of fault detection, the diagnosis results of neighbors are divided into several classifications to judge the fault status of the sensors which launch the fault diagnosis requests. Simulation results show that this novel mechanism can achieve high fault detection ratio with a small number of fault diagnoses and low data congestion probability.
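A minimal sketch of the credibility-and-cooperation idea described above, with illustrative names and thresholds (the paper's exact credibility model and diagnosis classifications are not reproduced): a node first checks its reading against its own temporal trend, and only a suspicious node asks its neighbors to vote.

```python
# Hedged sketch: self-credibility from temporal correlation, then
# neighbor cooperation only when the node looks suspicious.
import numpy as np

def self_credibility(history, reading, k=3.0):
    """Credibility drops when the new reading departs from the node's own
    short-term prediction (here a simple linear extrapolation)."""
    t = np.arange(len(history))
    slope, intercept = np.polyfit(t, history, 1)
    predicted = slope * len(history) + intercept
    sigma = max(np.std(history), 1e-6)
    return abs(reading - predicted) <= k * sigma   # True -> credible

def neighbor_diagnosis(reading, neighbor_readings, tol=2.0):
    """Each neighbor votes on whether the suspicious reading is consistent
    with its own observation; the majority decides."""
    votes = [abs(reading - r) <= tol for r in neighbor_readings]
    return sum(votes) >= len(votes) / 2.0          # True -> event, not fault

history = np.array([20.1, 20.3, 20.2, 20.4, 20.5])
reading = 35.0
if not self_credibility(history, reading):         # suspicious -> cooperate
    ok = neighbor_diagnosis(reading, [20.6, 20.4, 20.7])
    print("sensor fault" if not ok else "real event, not a fault")
```

Deferring the diagnosis request until the self-check fails is what keeps the transmission overhead low, as the abstract emphasizes.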
Spacelab Life Sciences-1 electrical diagnostic expert system
NASA Technical Reports Server (NTRS)
Kao, C. Y.; Morris, W. S.
1989-01-01
The Spacelab Life Sciences-1 (SLS-1) Electrical Diagnostic (SLED) expert system is a continuous, real-time knowledge-based system to monitor and diagnose electrical system problems in the Spacelab. After fault isolation, the SLED system provides corrective procedures and advice to the ground-based console operator. The SLED system updates its knowledge about the status of Spacelab every 3 seconds. The system supports multiprocessing of malfunctions and allows multiple failures to be handled simultaneously. Information readily available via a mouse click includes: general information about the system and each component, the electrical schematics, the recovery procedures for each malfunction, and an explanation of the diagnosis.
On the design of fault-tolerant robotic manipulator systems
NASA Technical Reports Server (NTRS)
Tesar, Delbert
1993-01-01
Robotic systems are finding increasing use in space applications. Many of these devices are going to be operational on board the Space Station Freedom. Fault tolerance has been deemed necessary because of the criticality of the tasks and the inaccessibility of the systems to maintenance and repair. Design for fault tolerance in manipulator systems is an area within robotics that is without precedence in the literature. In this paper, we will attempt to lay down the foundations for such a technology. Design for fault tolerance demands new and special approaches to design, often at considerable variance from established design practices. These design aspects, together with reliability evaluation and modeling tools, are presented. Mechanical architectures that employ protective redundancies at many levels and have a modular architecture are then studied in detail. Once a mechanical architecture for fault tolerance has been derived, the chronological stages of operational fault tolerance are investigated. Failure detection, isolation, and estimation methods are surveyed, and such methods for robot sensors and actuators are derived. Failure recovery methods are also presented for each of the protective layers of redundancy. Failure recovery tactics often span all of the layers of a control hierarchy. Thus, a unified framework for decision-making and control, which orchestrates both the nominal redundancy management tasks and the failure management tasks, has been derived. The well-developed field of fault-tolerant computers is studied next, and some design principles relevant to the design of fault-tolerant robot controllers are abstracted. Conclusions are drawn, and a road map for the design of fault-tolerant manipulator systems is laid out with recommendations for a 10 DOF arm with dual actuators at each joint.
Time and Space Partitioning the EagleEye Reference Mission
NASA Astrophysics Data System (ADS)
Bos, Victor; Mendham, Peter; Kauppinen, Panu; Holsti, Niklas; Crespo, Alfons; Masmano, Miguel; de la Puente, Juan A.; Zamorano, Juan
2013-08-01
We discuss experiences gained by porting a Software Validation Facility (SVF) and a satellite Central Software (CSW) to a platform with support for Time and Space Partitioning (TSP). The SVF and CSW are part of the EagleEye Reference mission of the European Space Agency (ESA). As a reference mission, EagleEye is a perfect candidate to evaluate practical aspects of developing satellite CSW for and on TSP platforms. The specific TSP platform we used consists of a simulated LEON3 CPU controlled by the XtratuM separation micro-kernel. On top of this, we run five separate partitions. Each partition runs its own real-time operating system or Ada run-time kernel, which in turn are running the application software of the CSW. We describe issues related to partitioning; inter-partition communication; scheduling; I/O; and fault-detection, isolation, and recovery (FDIR).
Timing of late Holocene surface rupture of the Wairau Fault, Marlborough, New Zealand
Zachariasen, J.; Berryman, K.; Langridge, Rob; Prentice, C.; Rymer, M.; Stirling, M.; Villamor, P.
2006-01-01
Three trenches excavated across the central portion of the right-lateral strike-slip Wairau Fault in South Island, New Zealand, exposed a complex set of fault strands that have displaced a sequence of late Holocene alluvial and colluvial deposits. Abundant charcoal fragments provide age control for various stratigraphic horizons dating back to c. 5610 yr ago. Faulting relations from the Wadsworth trench show that the most recent surface rupture event occurred at least 1290 yr and at most 2740 yr ago. Drowned trees in landslide-dammed Lake Chalice, in combination with charcoal from the base of an unfaulted colluvial wedge at Wadsworth trench, suggest a narrower time bracket for this event of 1811-2301 cal. yr BP. The penultimate faulting event occurred between c. 2370 and 3380 yr, and possibly near 2680 ± 60 cal. yr BP, when data from both the Wadsworth and Dillon trenches are combined. Two older events have been recognised from Dillon trench but remain poorly dated. A probable elapsed time of at least 1811 yr since the last surface rupture, and an average slip rate estimate for the Wairau Fault of 3-5 mm/yr, suggest that at least 5.4 m and up to 11.5 m of elastic shear strain has accumulated since the last rupture. This is near to or greater than the single-event displacement estimates of 5-7 m. The average recurrence interval for surface rupture of the fault determined from the trench data is 1150-1400 yr. Although the uncertainties in the timing of faulting events and variability in inter-event times remain high, the time elapsed since the last event is on the order of 1-2 times the average recurrence interval, implying that the Wairau Fault is near the end of its interseismic period. © The Royal Society of New Zealand 2006.
NASA Astrophysics Data System (ADS)
Lin, Jinshan; Chen, Qian
2013-07-01
Vibration data of faulty rolling bearings are usually nonstationary and nonlinear, and contain fairly weak fault features. As a result, feature extraction of rolling bearing fault data is always an intractable problem and has attracted considerable attention for a long time. This paper introduces multifractal detrended fluctuation analysis (MF-DFA) to analyze bearing vibration data and proposes a novel method for fault diagnosis of rolling bearings based on MF-DFA and Mahalanobis distance criterion (MDC). MF-DFA, an extension of monofractal DFA, is a powerful tool for uncovering the nonlinear dynamical characteristics buried in nonstationary time series and can capture minor changes of complex system conditions. To begin with, by MF-DFA, multifractality of bearing fault data was quantified with the generalized Hurst exponent, the scaling exponent and the multifractal spectrum. Consequently, controlled by essentially different dynamical mechanisms, the multifractality of four heterogeneous bearing fault data is significantly different; by contrast, controlled by slightly different dynamical mechanisms, the multifractality of homogeneous bearing fault data with different fault diameters is significantly or slightly different depending on different types of bearing faults. Therefore, the multifractal spectrum, as a set of parameters describing multifractality of time series, can be employed to characterize different types and severity of bearing faults. Subsequently, five characteristic parameters sensitive to changes of bearing fault conditions were extracted from the multifractal spectrum and utilized to construct fault features of bearing fault data. Moreover, Hilbert transform based envelope analysis, empirical mode decomposition (EMD) and wavelet transform (WT) were utilized to study the same bearing fault data. Also, the kurtosis and the peak levels of the EMD or the WT component corresponding to the bearing tones in the frequency domain were carefully checked and used as the bearing fault features. Next, MDC was used to classify the bearing fault features extracted by EMD, WT and MF-DFA in the time domain and assess the abilities of the three methods to extract fault features from bearing fault data. The results show that MF-DFA seems to outperform each of envelope analysis, statistical parameters, EMD and WT in feature extraction of bearing fault data and then the proposed method in this paper delivers satisfactory performances in distinguishing different types and severity of bearing faults. Furthermore, to further ascertain the nature causing the multifractality of bearing vibration data, the generalized Hurst exponents of the original bearing vibration data were compared with those of the shuffled and the surrogated data. Consequently, the long-range correlations for small and large fluctuations of data seem to be chiefly responsible for the multifractality of bearing vibration data.
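Since MF-DFA is the core tool here, a minimal NumPy sketch of the generalized Hurst exponent computation may help; the scales, detrending order, and q grid below are assumptions for the example, and production use would follow the published MF-DFA formulation.

```python
# Hedged sketch of multifractal detrended fluctuation analysis (MF-DFA):
# compute the generalized Hurst exponent h(q) from the scaling of the
# q-th order fluctuation function.
import numpy as np

def mfdfa_hq(x, scales, q_list, order=1):
    profile = np.cumsum(x - np.mean(x))            # integrated profile
    hq = {}
    for q in q_list:
        logF = []
        for s in scales:
            n_seg = len(profile) // s
            F2 = []
            for v in range(n_seg):                 # detrend each segment
                seg = profile[v * s:(v + 1) * s]
                t = np.arange(s)
                trend = np.polyval(np.polyfit(t, seg, order), t)
                F2.append(np.mean((seg - trend) ** 2))
            F2 = np.asarray(F2)
            if q == 0:                             # limiting logarithmic average
                Fq = np.exp(0.5 * np.mean(np.log(F2)))
            else:
                Fq = np.mean(F2 ** (q / 2.0)) ** (1.0 / q)
            logF.append(np.log(Fq))
        hq[q] = np.polyfit(np.log(scales), logF, 1)[0]   # slope = h(q)
    return hq

# Toy usage on white noise: h(2) should come out near 0.5.
rng = np.random.default_rng(0)
h = mfdfa_hq(rng.standard_normal(4096), scales=[16, 32, 64, 128, 256], q_list=[-2, 0, 2])
print(h)
```

A multifractal signal shows h(q) varying with q; the spread of h(q) is what feeds the multifractal spectrum parameters the paper uses as fault features.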
NASA Astrophysics Data System (ADS)
Roberts, Gerald P.; Ganas, Athanassios
2000-10-01
Fault-slip directions recorded by outcropping striated and corrugated fault planes in central and southern Greece have been measured for comparison with extension directions derived from focal mechanism and Global Positioning System (GPS) data for the last ˜100 years to test how far back in time velocity fields and deformation dynamics derived from the latter data sets can be extrapolated. The fault-slip data have been collected from the basin-bounding faults to Plio-Pleistocene to recent extensional basins and include data from arrays of footwall faults formed during the early stages of fault growth. We show that the orientation of the inferred stress field varies along faults and earthquake ruptures, so we use only slip-directions from the centers of faults, where dip-slip motion occurs, to constrain regionally significant extension directions. The fault-slip directions for the Peloponnese and Gulfs of Evia and Corinth are statistically different at the 99% confidence level but statistically the same as those implied by earthquake focal mechanisms for each region at the 99% confidence level; they are also qualitatively similar to the principal strain axes derived from GPS studies. Extension directions derived from fault-slip data are 043-047° for the southern Peloponnese, 353° for the Gulf of Corinth, and 015-014° for the Gulf of Evia. Extension on active normal faults in the two latter areas appears to grade into strike-slip along the North Anatolian Fault through a gradual change in fault-slip directions and fault strikes. To reconcile the above with 5° Myr-1 clockwise rotations suggested for the area, we suggest that the faults considered formed during a single phase of extension. The deformation and formation of the normal fault systems examined must have been sufficiently rapid and recent for rotations about vertical axes to have been unable to disperse the fault-slip directions from the extension directions implied by focal mechanisms and GPS data. Thus, in central and southern Greece the velocity fields derived from focal mechanism and GPS data may help explain the dynamics of the deformation over longer time periods than the ˜100 years over which they were measured; this may include the entire deformation history of the fault systems considered, a time period that may exceed 1-2 Myr.
Geometric-kinematic characteristics of the main faults in the W-SW of the Lut Block (SE Iran)
NASA Astrophysics Data System (ADS)
Rashidi Boshrabadi, Ahmad; Khatib, Mohamad Mahdi; Raeesi, Mohamad; Mousavi, Seyed Morteza; Djamour, Yahya
2018-03-01
The area to the W-SW of the Lut Block in Iran has experienced numerous historical and recent destructive earthquakes. We examined a number of faults in this area that have high potential for generating destructive earthquakes. In this study a number of faults are introduced and named for the first time. These new faults are Takdar, Dehno, Suru, Hojat Abad, North Faryab, North Kahnoj, Heydarabad, Khatun Abad and South Faryab. For a group of previously known faults, their mechanism and geological offsets are investigated for the first time. This group of faults include East Nayband, West Nayband, Sardueiyeh, Dalfard, Khordum, South Jabal-e-Barez, and North Jabal-e-Barez. The N-S fault systems of Sabzevaran, Gowk, and Nayband induce slip on the E-W, NE-SW and NW-SE fault systems. The faulting patterns appear to preserve different stages of fault development. We investigated the distribution of active faults and the role that they play in accommodating tectonic strain in the SW-Lut. In the study area, the fault systems with en-echelon arrangement create structures such as restraining and releasing stepover, fault bend and pullapart basin. The main mechanism for fault growth in the region seems to be 'segment linkage of preexisting weaknesses' and also for a limited area through 'process zone'. Estimations are made for the likely magnitudes of separate or combined failure of the fault segments. Such magnitudes are used in hazard analysis of the region.
Neural Networks and other Techniques for Fault Identification and Isolation of Aircraft Systems
NASA Technical Reports Server (NTRS)
Innocenti, M.; Napolitano, M.
2003-01-01
Fault identification, isolation, and accommodation have become critical issues in the overall performance of advanced aircraft systems. Neural networks have been shown to be a very attractive alternative to classic adaptation methods for identification and control of non-linear dynamic systems. The purpose of this paper is to show the improvements in neural network applications achievable through the use of learning algorithms more efficient than the classic back-propagation, and through the implementation of the neural schemes in parallel hardware. The results of the analysis of a scheme for Sensor Failure, Detection, Identification and Accommodation (SFDIA) using experimental flight data of a research aircraft model are presented. Conventional approaches to the problem are based on observers and Kalman filters, while more recent methods are based on neural approximators. The work described in this paper is based on the use of neural networks (NNs) as on-line learning non-linear approximators. The performances of two different neural architectures were compared. The first architecture is based on a Multi Layer Perceptron (MLP) NN trained with the Extended Back Propagation algorithm (EBPA). The second architecture is based on a Radial Basis Function (RBF) NN trained with the Extended-MRAN (EMRAN) algorithms. In addition, alternative methods for communications-link fault detection and accommodation are presented, relative to multiple unmanned aircraft applications.
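A hedged sketch of the SFDIA loop described above, with a simple linear LMS learner standing in for the MLP/RBF networks (the threshold, warm-up length, and injected fault are illustrative assumptions, not the paper's setup):

```python
# Hedged sketch: an online approximator predicts one sensor from the
# others; a residual threshold declares the fault, and the estimate
# replaces the failed reading (accommodation).
import numpy as np

class OnlineSensorEstimator:
    """Linear-in-features online approximator trained by LMS; a toy
    stand-in for the paper's MLP/RBF networks."""
    def __init__(self, n_inputs, lr=0.05):
        self.w = np.zeros(n_inputs + 1)           # weights plus bias term
        self.lr = lr

    def _features(self, u):
        return np.append(u, 1.0)

    def predict(self, u):
        return float(self.w @ self._features(u))

    def update(self, u, target):                  # LMS weight update
        phi = self._features(u)
        self.w += self.lr * (target - self.w @ phi) * phi

rng = np.random.default_rng(1)
inputs = rng.standard_normal((300, 2))
stream = [(u, float(u.sum())) for u in inputs]    # healthy sensor tracks u1+u2
est, threshold = OnlineSensorEstimator(n_inputs=2), 0.5
for k, (u, y) in enumerate(stream):
    if k == 250:
        y = 10.0                                  # injected hard-over sensor fault
    y_hat = est.predict(u)
    if k > 100 and abs(y - y_hat) > threshold:    # monitor after a warm-up phase
        y = y_hat                                 # accommodate with the estimate
    else:
        est.update(u, y)                          # learn only on healthy data
```

Learning only on data judged healthy is the essential design choice: it keeps the approximator from absorbing the fault it is supposed to detect.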
Evaluating the Effect of Integrated System Health Management on Mission Effectiveness
2013-03-01
Health Status, Fault Detection, IMS Commands «Needline» ... [OV-5a activity-diagram residue omitted] ... UAS to self-detect, isolate, and diagnose system health problems. Current flight avionics architectures may include lower-level sub-system health monitoring or may isolate health monitoring functions to a black-box configuration, but a vehicle-wide health monitoring information system has ...
Gaining Insight Into Femtosecond-scale CMOS Effects using FPGAs
2015-03-24
... paths or detecting gross path delay faults, but for characterizing subtle aging effects, there is a need to isolate very short paths and detect very ... data using COTS FPGAs and novel self-test. Hardware experiments using a 28 nm FPGA demonstrate isolation of small sets of transistors, detection of ... hold the static configuration data specifying the LUT function. A set of inverters drives the SRAM contents into a pass-gate multiplexor tree; we ...
Constraining slip rates and spacings for active normal faults
NASA Astrophysics Data System (ADS)
Cowie, Patience A.; Roberts, Gerald P.
2001-12-01
Numerous observations of extensional provinces indicate that neighbouring faults commonly slip at different rates and, moreover, may be active over different time intervals. These published observations include variations in slip rate measured along-strike of a fault array or fault zone, as well as significant across-strike differences in the timing and rates of movement on faults that have a similar orientation with respect to the regional stress field. Here we review published examples from the western USA, the North Sea, and central Greece, and present new data from the Italian Apennines that support the idea that such variations are systematic and thus to some extent predictable. The basis for the prediction is that: (1) the way in which a fault grows is fundamentally controlled by the ratio of maximum displacement to length, and (2) the regional strain rate must remain approximately constant through time. We show how data on fault lengths and displacements can be used to model the observed patterns of long-term slip rate where measured values are sparse. Specifically, we estimate the magnitude of spatial variation in slip rate along-strike and relate it to the across-strike spacing between active faults.
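A minimal sketch of the scaling argument that underpins the prediction, using assumed symbols (γ for the displacement-length ratio, T for the duration of fault activity) rather than the authors' notation:

```latex
% If maximum displacement scales with fault length,
d_{\max} = \gamma L ,
% and a fault has been active for a time T, its long-term average slip rate is
\dot{u} \;\approx\; \frac{d_{\max}}{T} \;=\; \frac{\gamma L}{T} ,
% so, under a regional strain rate that stays approximately constant through
% time, longer faults must slip proportionally faster, while along-strike the
% rate tapers with the displacement profile toward the fault tips.
```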
Distributed processing of a GPS receiver network for a regional ionosphere map
NASA Astrophysics Data System (ADS)
Choi, Kwang Ho; Hoo Lim, Joon; Yoo, Won Jae; Lee, Hyung Keun
2018-01-01
This paper proposes a distributed processing method applicable to GPS receivers in a network to generate a regional ionosphere map accurately and reliably. For accuracy, the proposed method is operated by multiple local Kalman filters and Kriging estimators. Each local Kalman filter is applied to a dual-frequency receiver to estimate the receiver’s differential code bias and vertical ionospheric delays (VIDs) at different ionospheric pierce points. The Kriging estimator selects and combines several VID estimates provided by the local Kalman filters to generate the VID estimate at each ionospheric grid point. For reliability, the proposed method uses receiver fault detectors and satellite fault detectors. Each receiver fault detector compares the VID estimates of the same local area provided by different local Kalman filters. Each satellite fault detector compares the VID estimate of each local area with that projected from the other local areas. Compared with the traditional centralized processing method, the proposed method is advantageous in that it considerably reduces the computational burden of each single Kalman filter and enables flexible fault detection, isolation, and reconfiguration capability. To evaluate the performance of the proposed method, several experiments with field collected measurements were performed.
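A hedged sketch of the receiver fault detector described above: estimates of the same vertical ionospheric delay (VID) from different local filters are cross-checked against their consensus. The robust statistic and threshold here are assumptions, not the paper's exact test.

```python
# Hedged sketch: flag receivers whose VID estimate for one local area
# departs from the consensus of the other local Kalman filters.
import numpy as np

def receiver_fault_check(vid_estimates, k=3.0):
    """vid_estimates: {receiver_id: VID estimate for one area, meters}.
    Returns receivers whose estimate departs from the median by more
    than k robust standard deviations (assumed MAD-based test)."""
    ids = list(vid_estimates)
    vals = np.array([vid_estimates[i] for i in ids])
    med = np.median(vals)
    mad = np.median(np.abs(vals - med))
    sigma = 1.4826 * max(mad, 1e-9)        # MAD -> Gaussian-equivalent sigma
    return {i for i, v in zip(ids, vals) if abs(v - med) > k * sigma}

faulty = receiver_fault_check({"rcv_a": 3.1, "rcv_b": 3.0, "rcv_c": 3.2, "rcv_d": 7.9})
print(faulty)                               # {'rcv_d'}
```

The satellite fault detector in the abstract works the same way, except the comparison is between one local area's estimate and the values projected from neighboring areas.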
Hao, Li-Ying; Yang, Guang-Hong
2013-09-01
This paper is concerned with the problem of robust fault-tolerant compensation control for uncertain linear systems subject to both state and input signal quantization. By incorporating a novel matrix full-rank factorization technique with sliding surface design, the total failure of certain actuators can be coped with, under a special actuator redundancy assumption. In order to compensate for quantization errors, an adjustment range of quantization sensitivity for a dynamic uniform quantizer is given through the flexible choices of design parameters. Compared with existing results, the derived inequality condition yields stronger fault tolerance and a much wider scope of applicability. With a static adjustment policy of quantization sensitivity, an adaptive sliding mode controller is then designed to maintain the sliding mode, where the gain of the nonlinear unit vector term is updated automatically to compensate for the effects of actuator faults, quantization errors, exogenous disturbances and parameter uncertainties without the need for a fault detection and isolation (FDI) mechanism. Finally, the effectiveness of the proposed design method is illustrated via a rocket fairing structural-acoustic model. Copyright © 2013 ISA. Published by Elsevier Ltd. All rights reserved.
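A minimal sketch of a dynamic uniform quantizer with adjustable sensitivity, the device whose sensitivity the abstract adjusts to bound quantization errors; the construction and level count here are generic assumptions, not the paper's design:

```python
# Hedged sketch: uniform quantizer with sensitivity mu. The per-step
# quantization error is bounded by mu/2, so shrinking mu near the
# sliding surface keeps the error from destroying the sliding mode.
import numpy as np

def uniform_quantize(v, mu, levels=2**12):
    """Quantize v with sensitivity mu: coarse when mu is large, fine when
    mu is small; output saturates at +/- levels * mu."""
    q = mu * np.round(np.asarray(v, dtype=float) / mu)
    return np.clip(q, -levels * mu, levels * mu)

# Illustrative static adjustment policy: decrease mu in stages.
for mu in (1.0, 0.1, 0.01):
    print(mu, uniform_quantize([0.731, -2.417], mu))
```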
NASA Astrophysics Data System (ADS)
Cooke, M. L.; Fattaruso, L.; Dorsey, R. J.; Housen, B. A.
2015-12-01
Between ~1.5 and 1.1 Ma, the southern San Andreas fault system underwent a major reorganization that included initiation of the San Jacinto fault and termination of slip on the extensional West Salton detachment fault. The southern San Andreas fault itself has also evolved since this time, with several shifts in activity among fault strands within San Gorgonio Pass. We use three-dimensional mechanical Boundary Element Method models to investigate the impact of these changes to the fault network on deformation patterns. A series of snapshot models of the succession of active fault geometries explore the role of fault interaction and tectonic loading in abandonment of the West Salton detachment fault, initiation of the San Jacinto fault, and shifts in activity of the San Andreas fault. Interpreted changes to uplift patterns are well matched by model results. These results support the idea that growth of the San Jacinto fault led to increased uplift rates in the San Gabriel Mountains and decreased uplift rates in the San Bernardino Mountains. Comparison of model results for vertical axis rotation to data from paleomagnetic studies reveals a good match to local rotation patterns in the Mecca Hills and Borrego Badlands. We explore the mechanical efficiency at each step in the evolution, and find an overall trend toward increased efficiency through time. Strain energy density patterns are used to identify regions of off-fault deformation and potential incipient faulting. These patterns support the notion of north-to-south propagation of the San Jacinto fault during its initiation. The results of the present-day model are compared with microseismicity focal mechanisms to provide additional insight into the patterns of off-fault deformation within the southern San Andreas fault system.
The role of bed-parallel slip in the development of complex normal fault zones
NASA Astrophysics Data System (ADS)
Delogkos, Efstratios; Childs, Conrad; Manzocchi, Tom; Walsh, John J.; Pavlides, Spyros
2017-04-01
Normal faults exposed in Kardia lignite mine, Ptolemais Basin, NW Greece formed at the same time as bed-parallel slip-surfaces, so that while the normal faults grew they were intermittently offset by bed-parallel slip. Following offset by a bed-parallel slip-surface, further fault growth is accommodated by reactivation on one or both of the offset fault segments. Where one fault is reactivated the site of bed-parallel slip is a bypassed asperity. Where both faults are reactivated, they propagate past each other to form a volume between overlapping fault segments that displays many of the characteristics of relay zones, including elevated strains and transfer of displacement between segments. Unlike conventional relay zones, however, these structures contain either a repeated or a missing section of stratigraphy which has a thickness equal to the throw of the fault at the time of the bed-parallel slip event, and the displacement profiles along the relay-bounding fault segments have discrete steps at their intersections with bed-parallel slip-surfaces. With further increase in displacement, the overlapping fault segments connect to form a fault-bound lens. Conventional relay zones form during initial fault propagation, but with coeval bed-parallel slip, relay-like structures can form later in the growth of a fault. Geometrical restoration of cross-sections through selected faults shows that repeated bed-parallel slip events during fault growth can lead to complex internal fault zone structure that masks its origin. Bed-parallel slip, in this case, is attributed to flexural-slip arising from hanging-wall rollover associated with a basin-bounding fault outside the study area.
NASA Astrophysics Data System (ADS)
Clark, Dan; McPherson, Andrew
2017-04-01
Continental intraplate Australia can be divided according to crustal type in terms of seismogenic potential and fault characteristics. Three 'superdomains' are recognized, representing cratonic, non-cratonic and extended crust. In the Australian context, cratonic crust is Archaean to Proterozoic in age and has not been significantly tectonically reactivated during the Phanerozoic Eon. Non-cratonic crust includes Phanerozoic accretionary terranes and older crust significantly deformed during Phanerozoic tectonic events. Extended crust includes any crustal type that has been significantly extended during the Mesozoic and Cenozoic, and often to a lesser degree in the Paleozoic. Aulacogens and passive margins fit into this category. Cratonic crust is characterized by the thickest lithosphere and has the lowest seismogenic potential, despite all eight documented historic surface ruptures in Australia having occurred within this category. Little strain accumulation is observed on individual faults and isolated single-rupture scarps are common. Where recurrence has been demonstrated, average slip rates of only a few metres per million years are indicated. In contrast, extended crust is associated with thinner lithosphere, better connection between faults, and strain localization on faults which can result in regional relief-building. The most active faults have accumulated several hundred metres of slip under the current crustal stress regime at rates of several tens of metres per million years. Non-cratonic crust is typically intermediate in lithospheric thickness and seismogenic character. The more active faults have accumulated tens to a couple of hundreds of metres of slip, at rates of a few to a few tens of metres per million years. Across all superdomains paleoseismological data suggest that the largest credible earthquakes are likely to exceed those experienced in historic times. In general, the concept of large earthquake recurrence might only be meaningful in relation to individual faults in non-cratonic and extended superdomains. However, large earthquake recurrence and slip are demonstrably not evenly distributed in time. Within the limitations of the sparse paleoseismological data, temporal clustering of large events appears to be a common (perhaps ubiquitous?) characteristic. Over the last few decades, permanent and campaign GPS studies have failed to detect a tectonic deformation signal from which a strain budget could be calculated. Recent studies have used these observations, amongst others, to propose an orders of magnitude difference in the timescales of strain accumulation and seismogenic strain release in intraplate environments - i.e. clusters of large events deplete long-lived pools of lithospheric strain. The recognition of a relationship between crustal type/lithospheric thickness and seismogenic potential in Australia provides a framework for assessing whether ergodic substitution (i.e. global analogue studies) might be warranted as a tool to better understand intraplate seismicity worldwide. Further research is required to assess how variation in crustal stress regime may influence faulting characteristics within different superdomains.
NASA Astrophysics Data System (ADS)
Kovacs, A.; Gorman, A. R.; Lay, V.; Buske, S.
2013-12-01
Paleoseismic evidence from the vicinity of the plate-bounding Alpine Fault on New Zealand's South Island suggests that earthquakes of magnitude 7.9 occur every 200-400 years, with the last earthquake occurring in AD 1717. No human observations of this event are recorded. Therefore, the Deep Fault Drilling Project 2 (DFDP-2) drill hole, which is planned for 2014 on the hanging wall of the Alpine Fault in the Whataroa Valley, provides a critical opportunity to study the behavior of this transpressive plate boundary late in its seismogenic cycle. New seismic and gravity data collected since 2011 have been analyzed to assist with the positioning of the drill hole in this glacial valley that provides rare low-elevation access to the hanging wall of the Alpine Fault. The WhataDUSIE controlled-source seismic project, led by researchers from the University of Otago (New Zealand), TU Bergakademie Freiberg (Germany) and the University of Alberta (Canada), provided relatively high-resolution coverage (4-8 m geophone spacing, 25-100 m shot spacing) along a 5-km-long profile across the Alpine Fault in the Whataroa Valley. This work has been supplemented by focused hammer-seismic studies and gravity data collection in the valley. The former targets surface layer properties, whereas the latter targets the depth to the base of the glacially carved paleovalley. In positioning DFDP-2, an understanding of the nature of overburden and valley-fill sediments is critical for drilling design. A velocity model has been developed for the valley based on refraction analysis of the WhataDUSIE and hammer-seismic data combined with a ray-theoretical travel-time tomography (RAYINVR) image of the shallow (uppermost 1 km or so) part of the hanging wall of the Alpine Fault. The model shows that the glacial valley, which presumably was last scoured by ice at the Last Glacial Maximum, has been filled with 200-350 m of post-glacial sediments and outwash gravels. The hanging-wall rocks into which the valley was cut are presumed to be mylonitized Alpine Schist. Considering uplift rates of 6-10 mm/a on the hanging wall of the fault and a glacial withdrawal date of 10,000 years before present (i.e., 60-100 m of uplift since the ice vacated the valley), the floor of the valley would have been as deep as about 350 m below sea level at the time that the ice withdrew (given the current elevation of ~100 m on the valley floor). Basal sediments in the valley could therefore be either marine (if the valley was open to the ocean) or lacustrine (if the valley was isolated from the open ocean by elevated footwall rocks along the west coast of the South Island). Once the original water body in the valley was filled, sediments would accumulate as outwash gravels above sea level.
Seismic variability and structural controls on fluid migration in Northern Oklahoma
NASA Astrophysics Data System (ADS)
Lambert, C.; Keranen, K. M.; Stevens, N. T.
2016-12-01
The broad region of seismicity in northern Oklahoma encompasses distinct structural settings; notably, the area contains both high-length, high-offset faults bounding a major structural uplift (the Nemaha uplift), and also encompasses regions of distributed, low-length, low-offset faults on either side of the uplift. Seismicity differs between these structural settings in mode of migration, rate, magnitude, and mechanism. Here we use our catalog from 2015-2016, acquired using a dense network of 55 temporary broadband seismometers, complemented by data from 40+ regional stations, including the IRIS Wavefields stations. We compare seismicity between these structural settings using precise earthquake locations, focal mechanism solutions, and body-wave tomography. Within and along the dominant Nemaha uplift, earthquakes rarely occur on one of the primary uplift-bounding faults. Earthquakes instead occur within the uplift on isolated, discrete faults, and migrate gradually along these faults at 20-30 m/day. The regions peripheral to the uplift hosted the majority of earthquakes within the year, on multiple series of frequently unmapped, densely-spaced, subparallel faults. We did not detect a similar slow migration along these faults. Earthquakes instead occurred via progressive failure of individual segments along a fault, or jumped abruptly from one fault to another nearby. Mechanisms in both regions are dominantly strike-slip, with the interpreted dominant fault plane orientation rotating from N100E in the Wavefields area (west of the uplift) to N50E (within the uplift). We interpret that the distinct variation in seismicity may result from the variation in fault density and length between the uplift and the surrounding regions. Seismic velocity within the upper basement of the uplift is lower than the velocity on either side, possibly indicative of enhanced fracturing within the uplift, as seen in the Nemaha uplift to the north. The fracturing, along with the large faults, may create fluid pathways that facilitate pressure diffusion. Conversely, outside of the uplift, the numerous small-offset faults that are reactivated appear to be less efficient fluid pathways, inhibiting pressure diffusion and resulting in a higher seismicity rate.
NASA Astrophysics Data System (ADS)
Zhang, Shangbin; Lu, Siliang; He, Qingbo; Kong, Fanrang
2016-09-01
For rotating machines, the faults of defective bearings generally appear as periodic transient impulses in acquired signals. The extraction of transient features from signals has been a key issue for fault diagnosis. However, background noise reduces identification performance of periodic faults in practice. This paper proposes a time-varying singular value decomposition (TSVD) method to enhance the identification of periodic faults. The proposed method is inspired by the sliding window method. By applying singular value decomposition (SVD) to the signal under a sliding window, we can obtain a time-varying singular value matrix (TSVM). Each column in the TSVM is occupied by the singular values of the corresponding sliding window, and each row represents the intrinsic structure of the raw signal, namely the time-singular-value-sequence (TSVS). Theoretical and experimental analyses show that the frequency of the TSVS is exactly twice that of the corresponding intrinsic structure. Moreover, the signal-to-noise ratio (SNR) of the TSVS is improved significantly in comparison with the raw signal. The proposed method takes advantage of the TSVS in noise suppression and feature extraction to enhance fault frequency for diagnosis. The effectiveness of the TSVD is verified by means of simulation studies and applications to diagnosis of bearing faults. Results indicate that the proposed method is superior to traditional methods for bearing fault diagnosis.
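A minimal sketch of the TSVD construction as the abstract describes it: slide a window over the signal, take the singular values of each window, and stack them into the TSVM whose rows are the TSVS. The Hankel embedding depth and window length are assumptions for the example.

```python
# Hedged sketch: build a time-varying singular value matrix (TSVM) by
# sliding a window, Hankel-embedding each segment (embedding depth is an
# assumption), and collecting its singular values as one column.
import numpy as np

def tsvd_matrix(x, window=64, embed=8, hop=1):
    cols = []
    n_cols = window - embed + 1
    for start in range(0, len(x) - window + 1, hop):
        seg = x[start:start + window]
        H = np.array([seg[i:i + n_cols] for i in range(embed)])
        s = np.linalg.svd(H, compute_uv=False)     # singular values only
        cols.append(s)
    return np.array(cols).T        # rows = time-singular-value-sequences (TSVS)

rng = np.random.default_rng(2)
t = np.arange(2048) / 2048.0
signal = np.sin(2 * np.pi * 30 * t) + 0.5 * rng.standard_normal(t.size)
tsvm = tsvd_matrix(signal)
tsvs1 = tsvm[0]                    # leading TSVS; its spectrum carries the tones
```

Per the abstract, the spectrum of a TSVS would show fault-related components at twice the frequency of the underlying structure, with improved SNR relative to the raw signal.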
Development of an On-board Failure Diagnostics and Prognostics System for Solid Rocket Booster
NASA Technical Reports Server (NTRS)
Smelyanskiy, Vadim N.; Luchinsky, Dmitry G.; Osipov, Vyatcheslav V.; Timucin, Dogan A.; Uckun, Serdar
2009-01-01
We develop a case breach model for the on-board fault diagnostics and prognostics system for subscale solid-rocket boosters (SRBs). The model development was motivated by recent ground firing tests, in which a deviation of measured time-traces from the predicted time-series was observed. A modified model takes into account the nozzle ablation, including the effect of roughness of the nozzle surface, the geometry of the fault, and erosion and burning of the walls of the hole in the metal case. The derived low-dimensional performance model (LDPM) of the fault can reproduce the observed time-series data very well. To verify the performance of the LDPM we build a FLUENT model of the case breach fault and demonstrate good agreement between theoretical predictions based on the analytical solution of the model equations and the results of the FLUENT simulations. We then incorporate the derived LDPM into an inferential Bayesian framework and verify performance of the Bayesian algorithm for the diagnostics and prognostics of the case breach fault. It is shown that the obtained LDPM allows one to track parameters of the SRB in real time during flight, to diagnose a case breach fault, and to predict its evolution. The application of the method to fault diagnostics and prognostics (FD&P) of other SRB fault modes is discussed.
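A hedged sketch of the inferential Bayesian step: tracking a scalar fault parameter on a grid by weighing candidate values against incoming measurements. The toy "LDPM" and noise level below are stand-ins, not the paper's model equations.

```python
# Hedged sketch: grid-based Bayesian tracking of a scalar fault
# parameter (e.g., an effective breach size) from noisy measurements.
import numpy as np

def bayes_update(grid, prior, predict, z, sigma):
    """One Bayes step: posterior proportional to prior times a Gaussian
    likelihood of measurement z under each candidate parameter value."""
    lik = np.exp(-0.5 * ((z - predict(grid)) / sigma) ** 2)
    post = prior * lik
    return post / post.sum()

grid = np.linspace(0.0, 5.0, 201)          # candidate fault magnitudes
prior = np.ones_like(grid) / grid.size     # flat prior
predict = lambda a: 100.0 - 4.0 * a        # toy model: pressure drops with fault size
for z in (96.2, 95.9, 96.1):               # incoming pressure measurements
    prior = bayes_update(grid, prior, predict, z, sigma=0.5)
estimate = grid[np.argmax(prior)]          # MAP fault-size estimate, near 1.0
print(round(estimate, 2))
```

Rerunning the update as each new measurement arrives is what lets the parameter be tracked in flight and extrapolated forward for prognosis.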
Post-rift deformation of the Red Sea Arabian margin
NASA Astrophysics Data System (ADS)
Zanoni, Davide; Schettino, Antonio; Pierantoni, Pietro Paolo; Rasul, Najeeb
2017-04-01
Starting from the Oligocene, the Red Sea rift nucleated within the composite Neoproterozoic Arabian-Nubian shield. After an approximately 30 Myr-long history of continental lithosphere thinning and magmatism, the first pulse of oceanic spreading occurred at around 4.6 Ma at the triple junction of the Africa, Arabia, and Danakil plate boundaries and propagated southward, separating the Danakil and Arabia plates. Ocean floor spreading between Arabia and Africa started later, at about 3 Ma, and propagated northward (Schettino et al., 2016). Nowadays the northern part of the Red Sea is characterised by isolated oceanic deeps or a thinned continental lithosphere. Here we investigate the deformation of thinned continental margins that develops as a consequence of the continental lithosphere break-up induced by the progressive oceanisation. This deformation consists of a system of transcurrent and reverse faults that accommodate the anelastic relaxation of the extended margins. Inversion and shortening tectonics along the rifted margins as a consequence of the formation of a new segment of ocean ridge was already documented on the Atlantic margin of North America (e.g. Schlische et al. 2003). We present preliminary structural data obtained along the north-central portion of the Arabian rifted margin of the Red Sea. We explored NE-SW trending lineaments within the Arabian margin that are the inland continuation of transform boundaries between segments of the oceanic ridge. We found brittle fault zones whose kinematics is consistent with a post-rift inversion. Along the southernmost transcurrent fault (Ad Damm fault) of the central portion of the Red Sea we found evidence of dextral movement. Along the northernmost transcurrent fault, which intersects the Harrat Lunayyir, structures indicate dextral movement. At the inland termination of this fault the evidence of dextral movement is weaker and NW-SE trending reverse faults outcrop. Between these two faults we found other dextral transcurrent systems that locally are associated with metre-thick reverse fault zones. Along the analysed faults there is evidence of tectonic reworking. Relict kinematic indicators or the sense of asymmetry of sigmoidal Miocene dykes may suggest that a former sinistral movement was locally accommodated by these faults. This evidence of inversion of strike-slip movement associated with reverse structures, mostly found at the inland endings of these lineaments, suggests an inversion tectonics that could be related to the progressive and recent oceanisation of rift segments. Schettino A., Macchiavelli C., Pierantoni P.P., Zanoni D. & Rasul N. 2016. Recent kinematics of the tectonic plates surrounding the Red Sea and Gulf of Aden. Geophysical Journal International, 207, 457-480. Schlische R.W., Withjack M.O. & Olsen P.E., 2003. Relative timing of CAMP, rifting, continental breakup, and basin inversion: tectonic significance, in The Central Atlantic Magmatic Province: Insights from Fragments of Pangea, eds Hames W., Mchone J.G., Renne P. & Ruppel C., American Geophysical Union, 33-59.
NASA Astrophysics Data System (ADS)
Fattaruso, Laura A.; Cooke, Michele L.; Dorsey, Rebecca J.; Housen, Bernard A.
2016-12-01
Between 1.5 and 1.1 Ma, the southern San Andreas fault system underwent a major reorganization that included initiation of the San Jacinto fault zone and termination of slip on the extensional West Salton detachment fault. The southern San Andreas fault itself has also evolved since this time, with several shifts in activity among fault strands within San Gorgonio Pass. We use three-dimensional mechanical Boundary Element Method models to investigate the impact of these changes to the fault network on deformation patterns. A series of snapshot models of the succession of active fault geometries explore the role of fault interaction and tectonic loading in abandonment of the West Salton detachment fault, initiation of the San Jacinto fault zone, and shifts in activity of the San Andreas fault. Interpreted changes to uplift patterns are well matched by model results. These results support the idea that initiation and growth of the San Jacinto fault zone led to increased uplift rates in the San Gabriel Mountains and decreased uplift rates in the San Bernardino Mountains. Comparison of model results for vertical-axis rotation to data from paleomagnetic studies reveals a good match to local rotation patterns in the Mecca Hills and Borrego Badlands. We explore the mechanical efficiency at each step in the modeled fault evolution, and find an overall trend toward increased efficiency through time. Strain energy density patterns are used to identify regions of incipient faulting, and support the notion of north-to-south propagation of the San Jacinto fault during its initiation.
Wetland losses related to fault movement and hydrocarbon production, southeastern Texas coast
White, William A.; Morton, Robert A.
1997-01-01
Time series analyses of surface fault activity and nearby hydrocarbon production from the southeastern Texas coast show a high correlation among volume of produced fluids, timing of fault activation, rates of subsidence, and rates of wetland loss. Greater subsidence on the downthrown sides of faults contributes to more frequent flooding and generally wetter conditions, which are commonly reflected by changes in plant communities (e.g., Spartina patens to Spartina alterniflora) or progressive transformation of emergent vegetation to open water. Since the 1930s and 1950s, approximately 5,000 hectares of marsh habitat has been lost as a result of subsidence associated with faulting. Marshes have expanded locally along faults where hydrophytic vegetation has spread into former upland areas. Fault traces are linear to curvilinear and are visible because elevation differences across faults alter soil hydrology and vegetation. Fault lengths range from 1 to 13.4 km and average 3.8 km. Seventy-five percent of the faults visible on recent aerial photographs are not visible on photographs taken in the 1930s, indicating relatively recent fault movement. At least 80% of the surface faults correlate with extrapolated subsurface faults; the correlation increases to more than 90% when certain assumptions are made to compensate for mismatches in direction of displacement. Coastal wetland loss in Texas associated with hydrocarbon extraction will likely increase where production in mature fields is prolonged without fluid reinjection.
Dipping San Andreas and Hayward faults revealed beneath San Francisco Bay, California
Parsons, T.; Hart, P.E.
1999-01-01
The San Francisco Bay area is crossed by several right-lateral strike-slip faults of the San Andreas fault zone. Fault-plane reflections reveal that two of these faults, the San Andreas and Hayward, dip toward each other below seismogenic depths at 60° and 70°, respectively, and persist to the base of the crust. Previously, a horizontal detachment linking the two faults in the lower crust beneath San Francisco Bay was proposed. The only near-vertical-incidence reflection data available prior to the most recent experiment in 1997 were recorded parallel to the major fault structures. When the new reflection data recorded orthogonal to the faults are compared with the older data, the highest-amplitude reflections show clear variations in moveout with recording azimuth. In addition, reflection times consistently increase with distance from the faults. If the reflectors were horizontal, reflection moveout would be independent of azimuth, and reflection times would be independent of distance from the faults. The best-fit solution from three-dimensional traveltime modeling is a pair of high-angle dipping surfaces. The close correspondence of these dipping structures with the San Andreas and Hayward faults leads us to conclude that they are the faults beneath seismogenic depths. If the faults retain their observed dips, they would converge into a single zone in the upper mantle ~45 km beneath the surface, although we can only observe them in the crust.
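The dip inference rests on simple geometry: for a plane reflector, zero-offset reflection time grows with distance from the surface trace in the dip direction but stays flat along strike. A minimal constant-velocity sketch (the velocity and dip values are assumptions, not the paper's):

```python
# Hedged sketch: two-way normal-incidence time to a plane dipping at
# dip_deg whose surface trace is at x = 0; x is horizontal distance
# from the trace, measured perpendicular to strike.
import numpy as np

def zero_offset_time(x_km, dip_deg, v_km_s=6.0):
    d = x_km * np.sin(np.radians(dip_deg))    # perpendicular distance to plane
    return 2.0 * d / v_km_s                   # two-way time, seconds

for x in (2.0, 5.0, 10.0):                    # times increase away from the fault
    print(x, round(zero_offset_time(x, dip_deg=60.0), 2))
```

Along strike the perpendicular distance is unchanged, so azimuth-dependent moveout of the kind reported above is the signature of a dipping, not horizontal, reflector.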
Timing of activity of two fault systems on Mercury
NASA Astrophysics Data System (ADS)
Galluzzi, V.; Guzzetta, L.; Giacomini, L.; Ferranti, L.; Massironi, M.; Palumbo, P.
2015-10-01
Here we discuss two fault systems found in the Victoria and Shakespeare quadrangles of Mercury. The two fault sets intersect each other and show probable evidence for two stages of deformation. The most prominent system is N-S oriented and encompasses easily recognizable fault segments several tens to hundreds of kilometers long. The other system strikes NE-SW and encompasses mostly degraded and short fault segments. The structural framework of the studied area and the morphological appearance of the faults suggest that the second system is older than the first. We intend to apply the buffered crater counting technique to both systems to make a quantitative study of their timing of activity that could confirm the already clear morphological evidence.
NASA Astrophysics Data System (ADS)
Madden, E. H.; McBeck, J.; Cooke, M. L.
2013-12-01
Over multiple earthquake cycles, strike-slip faults link to form through-going structures, as demonstrated by the continuous nature of the mature San Andreas fault system in California relative to the younger and more segmented San Jacinto fault system nearby. Despite its immaturity, the San Jacinto system accommodates between one third and one half of the slip along the boundary between the North American and Pacific plates. It therefore poses a significant seismic threat to southern California. Better understanding of how the San Jacinto system has evolved over geologic time and of current interactions between faults within the system is critical to assessing this seismic hazard accurately. Numerical models are well suited to simulating kilometer-scale processes, but models of fault system development are challenged by the multiple physical mechanisms involved. For example, laboratory experiments on brittle materials show that faults propagate and eventually join (hard-linkage) by both opening-mode and shear failure. In addition, faults interact prior to linkage through stress transfer (soft-linkage). The new algorithm GROW (GRowth by Optimization of Work) accounts for this complex array of behaviors by taking a global approach to fault propagation while adhering to the principles of linear elastic fracture mechanics. This makes GROW a powerful tool for studying fault interactions and fault system development over geologic time. In GROW, faults evolve to minimize the work (or energy) expended during deformation, thereby maximizing the mechanical efficiency of the entire system. Furthermore, the incorporation of both static and dynamic friction allows GROW models to capture fault slip and fault propagation in single earthquakes as well as over consecutive earthquake cycles. GROW models with idealized faults reveal that the initial fault spacing and the applied stress orientation control fault linkage propensity and linkage patterns. These models allow the gains in efficiency provided by both hard-linkage and soft-linkage to be quantified and compared. Specialized models of interactions over the past 1 Ma between the Clark and Coyote Creek faults within the San Jacinto system reveal increasing mechanical efficiency as these fault structures change over time. Alongside this increasing efficiency is an increasing likelihood for single, larger earthquakes that rupture multiple fault segments. These models reinforce the sensitivity of mechanical efficiency to both fault structure and the regional tectonic stress orientation controlled by plate motions and provide insight into how slip may have been partitioned between the San Andreas and San Jacinto systems over the past 1 Ma.
Fault-Sensitivity and Wear-Out Analysis of VLSI Systems.
1995-06-01
[Figure/table residue: mixed-mode hierarchical fault description and fault simulation; fault types transient and stuck-at, specified by location and time; automatic fault injection with tracing.] ... 4219-4224, December 1985. [15] J. Sosnowski, "Evaluation of transient hazards in microprocessor controllers," Digest, FTCS-16, The Sixteenth ...
GPS Imaging of Time-Variable Earthquake Hazard: The Hilton Creek Fault, Long Valley California
NASA Astrophysics Data System (ADS)
Hammond, W. C.; Blewitt, G.
2016-12-01
The Hilton Creek Fault, in Long Valley, California is a down-to-the-east normal fault that bounds the eastern edge of the Sierra Nevada/Great Valley microplate, and lies half inside and half outside the magmatically active caldera. Despite the dense coverage with GPS networks, the rapid and time-variable surface deformation attributable to sporadic magmatic inflation beneath the resurgent dome makes it difficult to use traditional geodetic methods to estimate the slip rate of the fault. While geologic studies identify cumulative offset, constrain timing of past earthquakes, and constrain a Quaternary slip rate to within 1-5 mm/yr, it is not currently possible to use geologic data to evaluate how the potential for slip correlates with transient caldera inflation. To estimate time-variable seismic hazard of the fault we estimate its instantaneous slip rate from GPS data using a new set of algorithms for robust estimation of velocity and strain rate fields and fault slip rates. From the GPS time series, we use the robust MIDAS algorithm to obtain time series of velocity that are highly insensitive to the effects of seasonality, outliers and steps in the data. We then use robust imaging of the velocity field to estimate a gridded time variable velocity field. Then we estimate fault slip rate at each time using a new technique that forms ad-hoc block representations that honor fault geometries, network complexity, connectivity, but does not require labor-intensive drawing of block boundaries. The results are compared to other slip rate estimates that have implications for hazard over different time scales. Time invariant long term seismic hazard is proportional to the long term slip rate accessible from geologic data. Contemporary time-invariant hazard, however, may differ from the long term rate, and is estimated from the geodetic velocity field that has been corrected for the effects of magmatic inflation in the caldera using a published model of a dipping ellipsoidal magma chamber. Contemporary time-variable hazard can be estimated from the time variable slip rate estimated from the evolving GPS velocity field.
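A sketch in the spirit of the MIDAS estimator cited above: slopes between data pairs separated by one year cancel seasonal terms, and a MAD-trimmed median makes the rate insensitive to steps and outliers. This is a simplification under stated assumptions, not a reimplementation of the published algorithm:

```python
# Hedged sketch of a MIDAS-like robust velocity: median of one-year
# pair slopes, with a MAD-based outlier trim and a second median.
import numpy as np

def midas_like_velocity(t_yr, pos, pair_sep=1.0, tol=0.01):
    t_yr, pos = np.asarray(t_yr), np.asarray(pos)
    slopes = []
    for i, ti in enumerate(t_yr):
        j = np.argmin(np.abs(t_yr - (ti + pair_sep)))
        if abs(t_yr[j] - ti - pair_sep) < tol:             # genuine 1-yr pair
            slopes.append((pos[j] - pos[i]) / (t_yr[j] - t_yr[i]))
    slopes = np.array(slopes)
    med = np.median(slopes)
    sigma = 1.4826 * np.median(np.abs(slopes - med))       # robust scatter
    kept = slopes[np.abs(slopes - med) <= 2.0 * sigma]     # trim outliers
    return np.median(kept)

t = np.arange(0, 6, 1 / 365.25)                            # six years, daily
series = 2.0 * t + 1.5 * np.sin(2 * np.pi * t)             # mm: rate + seasonal
noise = 0.3 * np.random.default_rng(3).standard_normal(t.size)
print(round(midas_like_velocity(t, series + noise), 2))    # near 2.0 mm/yr
```

Because each slope spans exactly one seasonal cycle, the annual signal drops out of every pair, which is what makes the estimator insensitive to seasonality without explicit modeling.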
Intelligent fault management for the Space Station active thermal control system
NASA Technical Reports Server (NTRS)
Hill, Tim; Faltisco, Robert M.
1992-01-01
The Thermal Advanced Automation Project (TAAP) approach and architecture is described for automating the Space Station Freedom (SSF) Active Thermal Control System (ATCS). The baseline functionality and advanced automation techniques for Fault Detection, Isolation, and Recovery (FDIR) will be compared and contrasted. Advanced automation techniques such as rule-based systems and model-based reasoning should be utilized to efficiently control, monitor, and diagnose this extremely complex physical system. TAAP is developing advanced FDIR software for use on the SSF thermal control system. The goal of TAAP is to join Knowledge-Based System (KBS) technology, using a combination of rules and model-based reasoning, with conventional monitoring and control software in order to maximize autonomy of the ATCS. TAAP's predecessor was NASA's Thermal Expert System (TEXSYS) project, which was the first large real-time expert system to use both extensive rules and model-based reasoning to control and perform FDIR on a large, complex physical system. TEXSYS showed that a method is needed for safely and inexpensively testing all possible faults of the ATCS, particularly those potentially damaging to the hardware, in order to develop a fully capable FDIR system. TAAP therefore includes the development of a high-fidelity simulation of the thermal control system. The simulation provides realistic, dynamic ATCS behavior and fault insertion capability for software testing without hardware-related risks or expense. In addition, thermal engineers will gain greater confidence in the KBS FDIR software than was possible prior to this kind of simulation testing. The TAAP KBS will initially be a ground-based extension of the baseline ATCS monitoring and control software and could be migrated on-board as additional computation resources are made available.
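A toy sketch of the rules-plus-model-based pattern the abstract describes (telemetry names and thresholds are invented for illustration; TEXSYS/TAAP internals are not reproduced here): rules catch known fault signatures first, and a model-based residual check isolates faults no rule covers.

```python
# Hedged toy sketch of combining rule-based and model-based FDIR.
def rule_based(telemetry):
    # Known-signature rules (assumed thresholds, illustrative only).
    if telemetry["pump_rpm"] < 100 and telemetry["flow_lpm"] < 1.0:
        return "pump failure: switch to backup pump"
    if telemetry["loop_temp_c"] > 45.0:
        return "over-temperature: increase radiator flow"
    return None

def model_based(telemetry, expected_flow):
    # Compare measured flow to a model prediction to isolate the fault.
    residual = telemetry["flow_lpm"] - expected_flow
    if abs(residual) > 2.0:
        return "flow anomaly: suspect valve or sensor upstream of pump"
    return "no fault isolated"

telemetry = {"pump_rpm": 2950, "flow_lpm": 9.1, "loop_temp_c": 38.2}
advice = rule_based(telemetry) or model_based(telemetry, expected_flow=15.0)
print(advice)
```

The division of labor mirrors the abstract's rationale: rules are fast and cheap for anticipated faults, while the model catches the combinations no rule was written for.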
NASA Astrophysics Data System (ADS)
Gregory, Laura; Roberts, Gerald; Cowie, Patience; Wedmore, Luke; McCaffrey, Ken; Shanks, Richard; Zijerveld, Leo; Phillips, Richard
2017-04-01
In zones of distributed continental faulting, it is critical to understand how slip is partitioned onto brittle structures over both long-term millennial time scales and shorter-term individual earthquake cycles. Measuring earthquake slip histories on different timescales is challenging because earthquake repeat times are longer than, or comparable to, the span of historical earthquake records, and because detailed data on fault activity at millennial to Quaternary scales are scarce. Cosmogenic isotope analyses from bedrock fault scarps have the potential to bridge the gap, as these datasets track the exposure of fault planes due to earthquakes with millennial resolution. In this presentation, we present new 36Cl data combined with historical earthquake records to document orogen-wide changes in the distribution of seismicity on millennial timescales in Abruzzo, central Italy. Seismic activity due to extensional faulting was concentrated on the northwest side of the mountain range during the historical period, that is, since approximately the 14th century. Seismicity was more limited on the southwest side of Abruzzo during historical times. This pattern has led some to suggest that faults on the southwest side of Abruzzo are not active; however, clear fault scarps cutting Holocene-aged slopes are well preserved across the whole of the orogen. These scarps preserve an excellent record of Late Pleistocene to Holocene earthquake activity, which can be quantified using cosmogenic isotopes that track the exposure of the bedrock fault scarps. 36Cl accumulates in the fault scarp as the plane is progressively exhumed by earthquakes, and the concentration of 36Cl measured up the fault plane reflects the rate and pattern of slip. We utilise Bayesian modelling techniques to estimate slip histories based on the cosmogenic data. Each sampling site is carefully characterised using LiDAR and GPR to ensure that fault plane exposure is due to slip during earthquakes and not to sediment transport processes. In this presentation we will focus on new data from faults located across-strike in Abruzzo. Many faults in Abruzzo demonstrate slip rate variability on millennial timescales, with relatively fast slip interspersed between quiescent periods. We show that heightened activity is co-located and spatially migrates across Abruzzo over time. We highlight the importance of understanding this dynamic behaviour of migrating seismic activity, and in particular how our research is relevant to the 2016 Amatrice-Vettore seismic sequence in central Italy.
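Editorial illustration: the inversion rests on a simple forward model in which each earthquake exposes a fresh strip of the fault plane that then accumulates 36Cl. A toy Python version, ignoring shielding, inheritance, and sample geometry, with a placeholder production rate:

```python
# Toy 36Cl buildup on a normal-fault scarp: strips exposed by older events
# sit higher on the free face and carry higher concentrations. Production
# rate P is a placeholder; real models include shielding and geometry terms.
import numpy as np

LAMBDA = np.log(2) / 3.01e5     # 36Cl decay constant (1/yr), half-life ~301 kyr
P = 20.0                        # assumed production rate (atoms/g/yr)

def cl36_profile(event_ages_yr, slips_m, dz=0.25):
    """Concentration vs. height for earthquakes listed oldest first."""
    top = sum(slips_m)          # total scarp height
    heights, conc = [], []
    for age, slip in zip(event_ages_yr, slips_m):
        for z in np.arange(top - slip, top, dz):
            heights.append(z)   # strip exposed 'age' years ago
            conc.append(P / LAMBDA * (1.0 - np.exp(-LAMBDA * age)))
        top -= slip
    return np.array(heights), np.array(conc)

h, c = cl36_profile([12000, 6000, 1000], [1.0, 1.5, 0.8])
for z, n in zip(h, c):
    print(f"{z:4.2f} m  {n:12.0f} atoms/g")   # steps in the profile mark events
```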
NASA Astrophysics Data System (ADS)
Zhai, Ding; Lu, Anyang; Li, Jinghao; Zhang, Qingling
2016-10-01
This paper deals with the problem of fault detection (FD) for continuous-time singular switched linear systems with multiple time-varying delays. The actuator fault is considered, and the system faults and unknown disturbances are assumed to lie in known frequency domains. Finite frequency performance indices are first introduced to design the switched FD filters, which ensure that the filtering augmented systems under a switching signal with average dwell time are exponentially admissible and guarantee fault input sensitivity and disturbance robustness. By developing the generalised Kalman-Yakubovich-Popov lemma and using Parseval's theorem and the Fourier transform, finite frequency delay-dependent sufficient conditions for the existence of such a filter, which guarantees the finite-frequency H- and H∞ performance, are derived and formulated in terms of linear matrix inequalities. Four examples are provided to illustrate the effectiveness of the proposed finite frequency method.
Fault detection of gearbox using time-frequency method
NASA Astrophysics Data System (ADS)
Widodo, A.; Satrijo, Dj.; Prahasto, T.; Haryanto, I.
2017-04-01
This research deals with fault detection and diagnosis of a gearbox using its vibration signature. Fault detection and diagnosis are approached by employing time-frequency methods, and the results are compared with cepstrum analysis. Experimental work was conducted to acquire vibration signals from a self-designed gearbox test rig. This test rig is able to demonstrate normal and faulty gearboxes, i.e., with wear and tooth breakage. Three accelerometers were used to acquire vibration signals from the gearbox, and an optical tachometer was used to measure shaft rotation speed. The results show that frequency domain analysis using the fast Fourier transform was less sensitive to the wear and tooth breakage conditions. However, the short-time Fourier transform was able to monitor the faults in the gearbox. The Wavelet Transform (WT) method also showed good performance in gearbox fault detection from the vibration signal after employing time synchronous averaging (TSA).
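Editorial illustration: a minimal short-time Fourier transform of a synthetic gearbox signal, with one impact burst per revolution standing in for a broken tooth. All frequencies are invented, not the test-rig values.

```python
# STFT of a simulated gearbox signal: a steady mesh tone plus a short
# high-frequency burst each revolution (a broken tooth re-entering mesh).
import numpy as np
from scipy import signal

fs = 10_000
f_shaft, f_mesh = 20.0, 400.0                  # assumed shaft / mesh freqs
t = np.arange(0, 2.0, 1 / fs)
x = np.sin(2 * np.pi * f_mesh * t)             # healthy mesh component
bursts = (np.sin(2 * np.pi * f_shaft * t) > 0.995).astype(float)
x += 3.0 * bursts * np.sin(2 * np.pi * 2000 * t)   # fault impacts
x += 0.5 * np.random.default_rng(1).normal(size=t.size)

f, frames, Z = signal.stft(x, fs=fs, nperseg=256)
band = (f > 1500) & (f < 2500)                 # energy near the burst frequency
e = np.abs(Z[band]).sum(axis=0)
# Periodic spikes in e betray the once-per-revolution fault; a plain FFT of
# the whole record smears this timing information away.
print("peak/mean band energy:", round(float(e.max() / e.mean()), 1))
```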
78 FR 49736 - Notice of Intent To Grant a Partially Exclusive License; Ridgetop Group, Inc.
Federal Register 2010, 2011, 2012, 2013, 2014
2013-08-15
... States to practice for all fields of use the Government-Owned invention described in U.S. Patent No. 7,626,398: System for Isolating Faults Between Electrical Equipment, Navy Case Number 97027, inventors...
NASA Astrophysics Data System (ADS)
Homberg, C.; Bergerat, F.; Angelier, J.; Garcia, S.
2010-02-01
Transform motion along oceanic transforms generally occurs along narrow fault zones. Another class of oceanic transforms exists where the plate boundary is quite wide (˜100 km) and includes several subparallel faults. Using 2-D numerical modeling, we simulate the slip distribution and the crustal stress field geometry within such broad oceanic transforms (BOTs). We examine the possible configurations and evolution of such BOTs, where the plate boundary includes one, two, or three faults. Our experiments show that at any time during the development of the plate boundary, the plate motion is not distributed among all of the plate boundary faults but mainly occurs along a single master fault. The finite width of a BOT results from slip transfer through time with locking of early faults, not from a permanent distribution of deformation over a wide area. Because of fault interaction, the stress field geometry within BOTs is more complex than that along classical oceanic transforms and includes stress deflections close to but also away from the major faults. Application of this modeling to the 100 km wide Tjörnes Fracture Zone (TFZ) in North Iceland, a major BOT of the Mid-Atlantic Ridge that includes three main faults, suggests that the Dalvik Fault and the Husavik-Flatey Fault developed first, the Grimsey Fault being the latest active structure. Since initiation of the TFZ, the Husavik-Flatey Fault has accommodated most of the plate motion and probably persists to the present as the main plate boundary structure.
Development of Hydrologic Characterization Technology of Fault Zones (in Japanese; English)
DOE Office of Scientific and Technical Information (OSTI.GOV)
Karasaki, Kenzi; Onishi, Tiemi; Wu, Yu-Shu
2008-03-31
Through an extensive literature survey we find that there is a very limited amount of work on fault zone hydrology, particularly field studies using borehole testing. The common elements of a fault include a core and damage zones. The core usually acts as a barrier to flow across it, whereas the damage zone controls flow parallel to either the strike or the dip of a fault. In most cases the damage zone is the one controlling flow in the fault zone and its surroundings. The permeability of the damage zone is typically two to three orders of magnitude higher than that of the protolith, while the fault core can have permeability up to seven orders of magnitude lower than the damage zone. The fault types (normal, reverse, and strike-slip) by themselves do not appear to be a clear classifier of the hydrology of fault zones. However, there remains a possibility that other geologic attributes and scaling relationships can be used to predict or bracket the range of hydrologic behavior of fault zones. AMT (audio-frequency magnetotelluric) and seismic reflection techniques are often used to locate faults. Geochemical signatures and temperature distributions are often used to identify flow domains and/or directions. The ALSM (Airborne Laser Swath Mapping) or LIDAR (Light Detection and Ranging) method may prove to be a powerful tool for identifying lineaments in place of traditional photogrammetry. Nonetheless, not much work has been done to characterize the hydrologic properties of faults by directly testing them with pump tests. Some uncertainties are involved in analyzing the pressure transients of pump tests: both low-permeability and high-permeability faults exhibit similar pressure responses. A physically based conceptual and numerical model is presented for simulating fluid and heat flow and solute transport through fractured fault zones using a multiple-continuum medium approach. Data from the Horonobe URL site are analyzed to demonstrate the proposed approach and to examine the flow direction and magnitude on both sides of a suspected fault. We describe a strategy for effective characterization of fault zone hydrology. We recommend conducting a long-term pump test followed by a long-term buildup test, and ensuring durability and redundancy for long-term monitoring; we do not recommend isolating the borehole into too many intervals.
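Editorial illustration: the pressure-transient analyses discussed above are normally judged against the classical Theis drawdown solution for a pumped well, which a few lines of Python can evaluate (all parameter values arbitrary):

```python
# Theis (1935) drawdown around a well pumped at constant rate Q, the usual
# baseline for interpreting long-term pump tests. Parameters are arbitrary.
import numpy as np
from scipy.special import exp1        # E1, the Theis well function W(u)

def theis_drawdown(r_m, t_s, Q_m3s, T_m2s, S):
    """Drawdown (m) at radius r and time t; T transmissivity, S storativity."""
    u = r_m**2 * S / (4.0 * T_m2s * t_s)
    return Q_m3s / (4.0 * np.pi * T_m2s) * exp1(u)

for t in (1e3, 1e4, 1e5, 1e6):        # ~17 minutes out to ~12 days
    s = theis_drawdown(r_m=50.0, t_s=t, Q_m3s=1e-3, T_m2s=1e-4, S=1e-3)
    print(f"t = {t:8.0f} s   drawdown = {s:5.2f} m")
```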
NASA Astrophysics Data System (ADS)
Meschis, M.; Roberts, G.; Robertson, J.; Houghton, S.; Briant, R. M.
2017-12-01
Whether slip-rates on active faults, accumulated over multiple seismic events, are constant or vary over timescales of tens to hundreds of millennia is an open question that can be addressed through the study of deformed Quaternary palaeoshorelines. The answer matters because it determines whether shorter-timescale measurements (e.g. Holocene palaeoseismology or decadal geodesy) are suitable for determining earthquake recurrence intervals for Probabilistic Seismic Hazard Assessment, or better suited to studying temporal earthquake clustering. We present results from the Vibo Fault and the Capo D'Orlando Fault, which lie within the deforming Calabrian Arc, a region that has experienced damaging seismic events such as the 1908 Messina Strait earthquake (Mw 7) and the 1905 Capo Vaticano earthquake (Mw 7). These normal faults deform uplifted Late Quaternary palaeoshorelines, which outcrop mainly within their hangingwalls but also partially in their footwalls, showing that regional subduction- and mantle-related uplift outpaces local fault-related subsidence. Through (1) field and DEM-based mapping of palaeoshorelines, both up flights of successively higher, older inner edges and along the strike of the faults, and (2) synchronous correlation of non-uniformly-spaced inner edge elevations with non-uniformly-spaced sea-level highstand ages, we show that slip-rates decrease towards fault tips and that slip-rates have remained constant since 340 ka (within the time resolution we obtain). The slip-rates for the Capo D'Orlando Fault and Vibo Fault are 0.61 mm/yr and 1 mm/yr respectively. We show that the along-strike gradients in slip-rate towards the fault tips differ for the two faults, hinting at fault interaction, and we discuss this in terms of other regions of extension, such as the Gulf of Corinth, Greece, where slip-rate has been shown to change through the Quaternary. We make the point that slip-rates may change through time as fault systems grow and as fault interaction changes due to geometrical effects.
Analysis of Even Harmonics Generation in an Isolated Electric Power System
NASA Astrophysics Data System (ADS)
Kanao, Norikazu; Hayashi, Yasuhiro; Matsuki, Junya
Harmonics generated by loads are mainly of odd order because the current waveform has half-wave symmetry. Since even harmonics are negligibly small, they are not generally measured in electric power systems. However, even harmonics were measured at a 500/275/154 kV substation of Hokuriku Electric Power Company after removal of a transmission line fault. The even harmonics caused malfunctions of protective digital relays because the relays used the 4th harmonic at the input filter as an automatic supervisory signal. This paper describes the mechanism of generation of the even harmonics by comparing measured waveforms with ATP-EMTP simulation results. The analysis makes clear that even harmonics are generated by three causes. The first cause is the magnetizing current of transformers due to flux deviation caused by the DC component of a fault current. The second is harmonic conversion in a synchronous machine, which generates even harmonics when a direct current component or even harmonic current flows into the machine. The third is that the increase of harmonic impedance in an isolated power system produces harmonic voltages. The design of the input filter of protective digital relays should therefore consider even harmonics generation in an isolated power system.
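Editorial illustration: the first mechanism is easy to reproduce numerically. Passing a flux with a decaying DC offset through a saturating magnetization characteristic yields a half-wave-asymmetric current and hence even harmonics; the characteristic and numbers below are arbitrary.

```python
# A DC flux offset (left by the DC component of a fault current) pushed
# through a saturating core characteristic breaks half-wave symmetry of the
# magnetizing current, producing even harmonics. Values are illustrative.
import numpy as np

fs, f0 = 12_800, 50.0
t = np.arange(0, 0.1, 1 / fs)
flux = np.sin(2 * np.pi * f0 * t) + 0.4 * np.exp(-t / 0.1)  # AC + decaying DC
i_mag = np.sinh(3.0 * flux)          # crude saturating magnetization curve

n = int(fs / f0) * 5                 # analyze an integer 5 cycles
spec = np.abs(np.fft.rfft(i_mag[:n])) / n
freqs = np.fft.rfftfreq(n, 1 / fs)
for h in range(1, 6):
    k = np.argmin(np.abs(freqs - h * f0))
    print(f"harmonic {h}: {spec[k]:.3f}")
# Setting the 0.4 offset to zero collapses the even-order lines to ~0.
```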
Vibration signal models for fault diagnosis of planet bearings
NASA Astrophysics Data System (ADS)
Feng, Zhipeng; Ma, Haoqun; Zuo, Ming J.
2016-05-01
Rolling element bearings are key components of planetary gearboxes. Among them, the motion of planet bearings is very complex, encompassing both spinning and revolution. Planet bearing vibrations are therefore highly intricate, and their fault characteristics are completely different from those of the fixed-axis case, making planet bearing fault diagnosis a difficult topic. To address this issue, we derive explicit equations for calculating the characteristic frequencies of outer race, rolling element, and inner race faults, considering the complex motion of planet bearings. We also develop a planet bearing vibration signal model for each fault case, considering the modulation effects of load zone passing, the time-varying angle between the gear pair mesh and the fault-induced impact force, and the time-varying vibration transfer path. Based on the developed signal models, we derive explicit equations for the Fourier spectrum in each fault case and summarize the vibration spectral characteristics respectively. The theoretical derivations are illustrated by numerical simulation and further validated experimentally, and all three fault cases (i.e. outer race, rolling element, and inner race localized faults) are diagnosed.
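Editorial illustration: the paper's planet-bearing expressions are derived for the compound spin-plus-revolution motion; as a reference point, the classical fixed-axis characteristic fault frequencies are sketched below (a common first approximation is to evaluate them with the race rotation rate measured relative to the carrier).

```python
# Classical rolling-bearing fault frequencies (fixed-axis formulas). For a
# planet bearing these must be modified as in the paper; using the race
# rotation rate relative to the carrier is only a first approximation.
import math

def bearing_fault_freqs(fr, n_rollers, d_roller, d_pitch, contact_deg=0.0):
    """fr: relative race rotation (Hz); returns characteristic freqs (Hz)."""
    r = d_roller / d_pitch * math.cos(math.radians(contact_deg))
    return {
        "BPFO": n_rollers / 2 * fr * (1 - r),              # outer race defect
        "BPFI": n_rollers / 2 * fr * (1 + r),              # inner race defect
        "BSF": d_pitch / (2 * d_roller) * fr * (1 - r**2), # rolling element
        "FTF": fr / 2 * (1 - r),                           # cage frequency
    }

print(bearing_fault_freqs(fr=25.0, n_rollers=8, d_roller=7.9, d_pitch=34.5))
```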
Satellite control of electric power distribution
NASA Technical Reports Server (NTRS)
Bergen, L.
1981-01-01
A satellite link at L-band frequencies providing the medium for direct control of electrical loads at individual customer sites from remote central locations is described. All loads supplied under interruptible-service contracts are likely candidates for such control; they can be cycled or switched off to reduce system loads. For every kW of load eliminated or deferred to off-peak hours, the power company reduces its need for additional generating capacity. In addition, the satellite could switch meter registers so that their readings automatically reflect the time of consumption. The system would perform load-shedding operations during emergencies, disconnecting large blocks of load according to predetermined priorities. Among the distribution operations conducted by the satellite in real time would be load reconfiguration, voltage regulation, fault isolation, and capacitor and feeder load control.
NASA Technical Reports Server (NTRS)
Happell, Nadine; Miksell, Steve; Carlisle, Candace
1989-01-01
A major barrier in taking expert systems from prototype to operational status is instilling end-user confidence in the operational system. Different software life cycle models are examined, and the advantages and disadvantages of each when applied to expert system development are explored. The Fault Isolation Expert System for Tracking and data relay satellite system Applications (FIESTA) is presented as a case study of expert system development. The end-user confidence necessary for operational use of this system is accentuated by the fact that it will handle real-time data in a secure environment, allowing little tolerance for errors. How FIESTA is dealing with transition problems as it moves from an off-line standalone prototype to an on-line real-time system is discussed.
Constraints on Slow Slip from Landsliding and Faulting
NASA Astrophysics Data System (ADS)
Delbridge, Brent Gregory
The discovery of slow slip has radically changed the way we understand the relative movement of Earth's tectonic plates and the accumulation of stress in fault zones that fail in large earthquakes. Prior to the discovery of slow slip, faults were thought to relieve stress either through continuous aseismic sliding, as is the case for continental creeping faults, or in near-instantaneous failure. Aseismic deformation reflects fault slip that is slow enough that both inertial forces and seismic radiation are negligible. The durations of observed aseismic slip events range from days to years, with displacements of up to tens of centimeters. These events are not unique to a specific depth range and occur on faults in a variety of tectonic settings. This aseismic slip can sometimes also trigger more rapid slip somewhere else on the fault, such as on small embedded asperities; this is thought to be the mechanism generating observed Low Frequency Earthquakes (LFEs) and small repeating earthquakes. I have performed a series of studies, compiled here, to better understand the nature of tectonic faulting. The first is entitled "3D surface deformation derived from airborne interferometric UAVSAR: Application to the Slumgullion Landslide", and was originally published in the Journal of Geophysical Research in 2016. In order to understand how landslides respond to environmental forcing, we quantify how the hydro-mechanical forces controlling the Slumgullion Landslide express themselves kinematically in response to the infiltration of seasonal snowmelt. The well-studied Slumgullion Landslide, which is 3.9 km long and moves persistently at rates up to 2 cm/day, is an ideal natural laboratory due to its large spatial extent and rapid deformation rates. The lateral boundaries of the landslide consist of strike-slip fault features, which over time have built up large flank ridges. The second study compiled here is entitled "Temporal variation of intermediate-depth earthquakes around the time of the M9.0 Tohoku-oki earthquake" and was originally published in Geophysical Research Letters in 2017. The temporal evolution of intermediate-depth seismicity before and after the 2011 M 9.0 Tohoku-oki earthquake reveals interactions between plate interface slip and deformation in the subducting slab. I investigate seismicity rate changes in the upper and lower planes of the double seismic zone beneath northeast Japan. The average ratio of upper plane to lower plane activity and the mean deep aseismic slip rate both increased by a factor of two. An increase of down-dip compression in the slab resulting from coseismic and postseismic deformation enhanced seismicity in the upper plane, which is dominated by events accommodating down-dip shortening from plate unbending. In the third and final study included here, I use geodetic measurements to place a quantitative upper bound on the size of the slow slip accompanying large bursts of quasi-periodic tremors and LFEs on the Parkfield section of the SAF. We use a host of analysis methods to try to isolate the small signal due to the slow slip and to characterize noise properties. We find that in addition to subduction zones, transform faults are also capable of producing episodic tremor and slip (ETS) events. However, given the upper bounds from our analysis, surface geodetic measurements of this slow slip are likely to remain highly challenging.
Faults Discovery By Using Mined Data
NASA Technical Reports Server (NTRS)
Lee, Charles
2005-01-01
Fault discovery in complex systems involves model-based reasoning, fault tree analysis, rule-based inference methods, and other approaches. Model-based reasoning builds models of systems either from mathematical formulations or from experimental models. Fault tree analysis shows the possible causes of a system malfunction by enumerating the suspect components and their respective failure modes that may have induced the problem. Rule-based inference builds the model from expert knowledge. These models and methods have one thing in common: they presume some prior conditions. Complex systems often use fault trees to analyze faults. Fault diagnosis, when an error occurs, is performed by engineers and analysts through extensive examination of all data gathered during the mission. The International Space Station (ISS) control center operates on the data fed back from the system, and decisions are made based on threshold values using fault trees. Since these decision-making tasks are safety critical and must be done promptly, the engineers who manually analyze the data face a time challenge. To automate this process, this paper presents an approach that uses decision trees to discover faults from data in real time and captures the contents of fault trees as the initial state of the trees.
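Editorial illustration: a minimal sketch of the idea, using scikit-learn's decision tree as a stand-in for the paper's learner; the telemetry features, thresholds, and fault labels are hypothetical.

```python
# Learn a decision tree from telemetry so its threshold splits play the
# role of hand-built fault-tree branches. Features and labels are made up.
import numpy as np
from sklearn.tree import DecisionTreeClassifier, export_text

rng = np.random.default_rng(42)
n = 2000
pressure = rng.normal(100, 5, n)
temperature = rng.normal(50, 3, n)
labels = np.where(pressure > 110, "pressure_fault",
         np.where(temperature > 56, "thermal_fault", "nominal"))

X = np.column_stack([pressure, temperature])
tree = DecisionTreeClassifier(max_depth=3).fit(X, labels)

# The learned splits can seed the initial state of a fault tree
print(export_text(tree, feature_names=["pressure", "temperature"]))
print(tree.predict([[115.0, 50.0]]))       # -> ['pressure_fault']
```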
Smart intimation and location of faults in distribution system
NASA Astrophysics Data System (ADS)
Hari Krishna, K.; Srinivasa Rao, B.
2018-04-01
Locating faults in the distribution system is one of the most complicated problems we face today. Identifying the location and severity of a fault within a short time is required to provide continuous power supply, but fault identification and information transfer to the operator is the biggest challenge in the distribution network, especially under varying conditions; well-operated transmission and distribution systems play a key role in uninterrupted power supply. This paper proposes a fault location method for the distribution system based on an Arduino Nano and a GSM module with a flame sensor. The main idea is to locate a fault in the distribution transformer by sensing the arc coming out of the fuse element. Whenever a fault occurs in the distribution system, the time taken to locate and eliminate the fault has to be reduced. The proposed design was realized with a flame sensor and a GSM module. Under faulty conditions, the system automatically sends an alert message to the operator in the distribution system about the abnormal conditions near the transformer, the site code, and its exact location for possible power restoration.
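Editorial illustration: the alert path can be sketched for a single-board computer with a serial-attached GSM modem (the paper itself uses an Arduino Nano). The serial port, phone number, site code, and sensor stub are placeholders.

```python
# Poll a flame sensor and text the operator over a GSM modem using standard
# AT commands. Port, number, and site code are hypothetical placeholders.
import time
import serial          # pip install pyserial

PORT, OPERATOR = "/dev/ttyUSB0", "+10000000000"

def send_sms(number, text):
    with serial.Serial(PORT, 9600, timeout=2) as gsm:
        gsm.write(b"AT+CMGF=1\r")                       # SMS text mode
        time.sleep(0.5)
        gsm.write(f'AT+CMGS="{number}"\r'.encode())
        time.sleep(0.5)
        gsm.write(text.encode() + b"\x1a")              # Ctrl-Z sends message

def read_flame_sensor():
    """Placeholder: True when the sensor sees the fuse-element arc."""
    return False

def monitor(poll_s=0.1):
    while True:
        if read_flame_sensor():
            send_sms(OPERATOR, "Arc detected at transformer site TX-07")
            time.sleep(60)                              # rate-limit alerts
        time.sleep(poll_s)

if __name__ == "__main__":
    monitor()
```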
Vibration Sensor Data Denoising Using a Time-Frequency Manifold for Machinery Fault Diagnosis
He, Qingbo; Wang, Xiangxiang; Zhou, Qiang
2014-01-01
Vibration sensor data from a mechanical system often carry measurement information useful for machinery fault diagnosis. In practice, however, background noise makes it difficult to identify the fault signature in the sensed data. This paper introduces the time-frequency manifold (TFM) concept into sensor data denoising and proposes a novel denoising method for reliable machinery fault diagnosis. The TFM signature reflects the intrinsic time-frequency structure of a non-stationary signal. The proposed method realizes data denoising by synthesizing the TFM using time-frequency synthesis and phase space reconstruction (PSR) synthesis. Owing to the merits of the TFM in noise suppression and resolution enhancement, the denoised signal shows satisfactory denoising effects while keeping the inherent time-frequency structure. Moreover, this paper presents a clustering-based statistical parameter to evaluate the proposed method, and also presents a new diagnostic approach, called frequency probability time series (FPTS) spectral analysis, to show its effectiveness in fault diagnosis. The proposed TFM-based data denoising method has been employed on a set of vibration sensor data from defective bearings, and the results verify that for machinery fault diagnosis the method is superior to two traditional denoising methods. PMID:24379045
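Editorial illustration: the TFM synthesis itself is too involved for a short example, but the generic ingredient it refines, suppressing weak time-frequency cells and resynthesizing the signal, can be sketched with plain STFT thresholding. This is explicitly not the authors' method, only the family it belongs to.

```python
# Generic time-frequency denoising by STFT hard-thresholding (a simple
# stand-in for the paper's TFM synthesis, not a reimplementation of it).
import numpy as np
from scipy import signal

fs = 2000
t = np.arange(0, 1.0, 1 / fs)
clean = np.sin(2 * np.pi * 60 * t) * (signal.square(2 * np.pi * 5 * t) > 0)
noisy = clean + 0.8 * np.random.default_rng(3).normal(size=t.size)

f, frames, Z = signal.stft(noisy, fs=fs, nperseg=128)
mask = np.abs(Z) > 3 * np.median(np.abs(Z))   # keep only strong TF cells
_, den = signal.istft(Z * mask, fs=fs, nperseg=128)
den = den[: t.size]

print("MSE before:", round(float(np.mean((noisy - clean) ** 2)), 3))
print("MSE after: ", round(float(np.mean((den - clean) ** 2)), 3))
```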
Comparative analysis of techniques for evaluating the effectiveness of aircraft computing systems
NASA Technical Reports Server (NTRS)
Hitt, E. F.; Bridgman, M. S.; Robinson, A. C.
1981-01-01
Performability analysis is a technique developed for evaluating the effectiveness of fault-tolerant computing systems in multiphase missions. Performability was evaluated for its accuracy, practical usefulness, and relative cost. The evaluation was performed by applying performability and the fault tree method to a set of sample problems ranging from simple to moderately complex. The problems involved as many as five outcomes, two to five mission phases, permanent faults, and some functional dependencies. Transient faults and software errors were not considered. A different analyst was responsible for each technique. Significantly more time and effort were required to learn performability analysis than the fault tree method. Performability is inherently as accurate as fault tree analysis. For the sample problems, fault trees were more practical and less time consuming to apply, while performability required less ingenuity and was more checkable. Performability offers some advantages for evaluating very complex problems.
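Editorial illustration: the fault tree baseline the study compares against reduces, for independent basic events, to probability arithmetic over AND/OR gates. A tiny worked example with hypothetical failure probabilities:

```python
# Fault tree arithmetic for independent basic events: AND multiplies
# probabilities, OR complements the product of survivals. Numbers invented.
def and_gate(*p):
    out = 1.0
    for pi in p:
        out *= pi
    return out

def or_gate(*p):
    surviving = 1.0
    for pi in p:
        surviving *= (1.0 - pi)
    return 1.0 - surviving

# Loss of computation needs both redundant channels down; each channel
# fails on CPU failure or power failure.
p_cpu, p_pwr = 1e-4, 5e-5
p_channel = or_gate(p_cpu, p_pwr)
print(f"P(channel)={p_channel:.2e}  P(top)={and_gate(p_channel, p_channel):.2e}")
```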
Optimal design and use of retry in fault tolerant real-time computer systems
NASA Technical Reports Server (NTRS)
Lee, Y. H.; Shin, K. G.
1983-01-01
A new method to determine an optimal retry policy, and to use retry for fault characterization, is presented. An optimal retry policy for a given fault characteristic, which determines the maximum allowable retry durations that minimize the total task completion time, was derived. The combined fault characterization and retry decision, in which the characteristics of the fault are estimated simultaneously with the determination of the optimal retry policy, was carried out. Two solution approaches were developed, one based on point estimation and the other on the Bayes sequential decision. Maximum likelihood estimators are used for the first approach, and backward induction for testing hypotheses in the second. Numerical examples in which all the durations associated with faults have monotone hazard functions (e.g., exponential, Weibull, and gamma distributions) are presented; these are standard distributions commonly used for fault modeling and analysis.
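Editorial illustration: the trade-off behind the optimal retry duration can be seen in a toy model where a transient fault clears after an exponentially distributed time: retrying longer raises the chance the fault clears, but delays the fallback to reconfiguration. This is a simplification with invented parameters, not the paper's formulation.

```python
# Expected time penalty as a function of retry duration r: if the transient
# clears within r we pay only the wait; otherwise we pay r plus a fixed
# reconfiguration cost. The minimum gives the optimal retry policy.
import numpy as np

q, mu, C = 0.7, 0.2, 5.0   # P(transient), mean clearing time (s), fallback cost (s)

def expected_penalty(r):
    p_clear = 1.0 - np.exp(-r / mu)
    # E[T | T < r] for an exponential with mean mu:
    wait = mu - r * np.exp(-r / mu) / p_clear if r > 0 else 0.0
    return q * p_clear * wait + (1.0 - q * p_clear) * (r + C)

rs = np.linspace(0.0, 2.0, 2001)
best = rs[int(np.argmin([expected_penalty(r) for r in rs]))]
print(f"optimal retry duration ~ {best:.2f} s")
```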
Fault tolerance of artificial neural networks with applications in critical systems
NASA Technical Reports Server (NTRS)
Protzel, Peter W.; Palumbo, Daniel L.; Arras, Michael K.
1992-01-01
This paper investigates the fault tolerance characteristics of time-continuous recurrent artificial neural networks (ANN) that can be used to solve optimization problems. The principle of operation and performance of these networks are first illustrated using well-known model problems like the traveling salesman problem and the assignment problem. The ANNs are then subjected to 13 simultaneous 'stuck at 1' or 'stuck at 0' faults for network sizes of up to 900 'neurons'. The effects of these faults are demonstrated and the cause of the observed fault tolerance is discussed. An application is presented in which a network performs a critical task for a real-time distributed processing system by generating new task allocations during the reconfiguration of the system. The performance degradation of the ANN in the presence of faults is investigated by large-scale simulations, and the potential benefits of delegating a critical task to a fault tolerant network are discussed.
NASA Astrophysics Data System (ADS)
Wang, Xiaohua; Rong, Mingzhe; Qiu, Juan; Liu, Dingxin; Su, Biao; Wu, Yi
A new type of algorithm for predicting the mechanical faults of a vacuum circuit breaker (VCB) based on an artificial neural network (ANN) is proposed in this paper. There are two types of mechanical faults in a VCB: operation mechanism faults and tripping circuit faults. An angle displacement sensor is used to measure the main axle angle displacement which reflects the displacement of the moving contact, to obtain the state of the operation mechanism in the VCB, while a Hall current sensor is used to measure the trip coil current, which reflects the operation state of the tripping circuit. Then an ANN prediction algorithm based on a sliding time window is proposed in this paper and successfully used to predict mechanical faults in a VCB. The research results in this paper provide a theoretical basis for the realization of online monitoring and fault diagnosis of a VCB.
NASA Astrophysics Data System (ADS)
Gable, C. W.; Fialko, Y.; Hager, B. H.; Plesch, A.; Williams, C. A.
2006-12-01
More realistic models of crustal deformation are possible due to advances in measurements and modeling capabilities. This study integrates various data to constrain a finite element model of stress and strain in the vicinity of the 1992 Landers earthquake and the 1999 Hector Mine earthquake. The geometry of the model is designed to incorporate the Southern California Earthquake Center (SCEC), Community Fault Model (CFM) to define fault geometry. The Hector Mine fault is represented by a single surface that follows the trace of the Hector Mine fault, is vertical and has variable depth. The fault associated with the Landers earthquake is a set of seven surfaces that capture the geometry of the splays and echelon offsets of the fault. A three dimensional finite element mesh of tetrahedral elements is built that closely maintains the geometry of these fault surfaces. The spatially variable coseismic slip on faults is prescribed based on an inversion of geodetic (Synthetic Aperture Radar and Global Positioning System) data. Time integration of stress and strain is modeled with the finite element code Pylith. As a first step the methodology of incorporating all these data is described. Results of the time history of the stress and strain transfer between 1992 and 1999 are analyzed as well as the time history of deformation from 1999 to the present.
NASA Astrophysics Data System (ADS)
Goupil, Ph.; Puyou, G.
2013-12-01
This paper presents a high-fidelity generic twin engine civil aircraft model developed by Airbus for advanced flight control system research. The main features of this benchmark are described to make the reader aware of the model complexity and representativeness. It is a complete representation including the nonlinear rigid-body aircraft model with a full set of control surfaces, actuator models, sensor models, flight control laws (FCL), and pilot inputs. Two applications of this benchmark in the framework of European projects are presented: FCL clearance using optimization and advanced fault detection and diagnosis (FDD).
SIRU development. Volume 3: Software description and program documentation
NASA Technical Reports Server (NTRS)
Oehrle, J.
1973-01-01
The development and initial evaluation of a strapdown inertial reference unit (SIRU) system are discussed. The SIRU configuration is a modular inertial subsystem with hardware and software features that achieve fault-tolerant operational capabilities. The SIRU redundant hardware design is formulated around a six-gyro and six-accelerometer instrument module package. The six-axis array provides redundant independent sensing, and the symmetry enables the formulation of an optimal redundant software data processing structure with self-contained fault detection and isolation (FDI) capabilities. The basic SIRU software coding system used in the DDP-516 computer is documented.
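Editorial illustration: the six-axis symmetry is what makes parity-space fault detection and isolation work; with six skewed axes sensing a 3-vector rate, the residual left after projecting out the true rate carries a distinct signature for each instrument. The sketch below uses a commonly cited dodecahedral axis arrangement and is illustrative, not the SIRU flight software.

```python
# Parity-space FDI for a six-gyro skewed array: residual p = V @ y is blind
# to the true rate (V @ H = 0) but matches a distinct column signature when
# one instrument is faulty. Dodecahedral geometry shown for illustration.
import numpy as np
from scipy.linalg import null_space

a, b = np.sqrt(1 / 5), 2 / np.sqrt(5)        # unit-vector direction cosines
H = np.array([[ b, 0,  a], [-b, 0,  a], [ a,  b, 0],
              [ a, -b, 0], [ 0,  a, b], [ 0,  a, -b]])   # 6 sense axes

V = null_space(H.T).T                        # 3x6 parity matrix, V @ H ~= 0
omega = np.array([0.1, -0.05, 0.3])          # true body rate (unknown to FDI)
y = H @ omega
y[3] += 0.2                                  # bias fault injected on gyro 3

p = V @ y                                    # residual: zero unless faulted
scores = np.abs(V.T @ p) / np.linalg.norm(V, axis=0)
print("residual norm:", round(float(np.linalg.norm(p)), 4))
print("isolated gyro:", int(np.argmax(scores)))          # -> 3
```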
Steady, modest slip over multiple earthquake cycles on the Owens Valley and Little Lake fault zones
NASA Astrophysics Data System (ADS)
Amos, C. B.; Haddon, E. K.; Burgmann, R.; Zielke, O.; Jayko, A. S.
2015-12-01
A comprehensive picture of current plate-boundary deformation requires integration of short-term geodetic records with longer-term geologic strain. Comparing rates of deformation across these time intervals highlights potential time-dependencies in both geodetic and geologic records and yields critical insight into the earthquake deformation process. The southern Walker Lane Belt in eastern California represents one location where short-term strain recorded by geodesy apparently outpaces longer-term geologic fault slip measured from displaced rocks and landforms. This discrepancy persists both for individual structures and across the width of the deforming zone, where ~1 cm/yr of current dextral shear exceeds Quaternary slip rates summed across individual faults. The Owens Valley and Little Lake fault systems form the western boundary of the southern Walker Lane and host a range of published slip rate estimates from ~1 - 7 mm/yr over varying time intervals based on both geodetic and geologic measurements. New analysis of offset geomorphic piercing lines from airborne lidar and field measurements along the Owens Valley fault provides a snapshot of deformation during individual earthquakes and over many seismic cycles. Viewed in context of previously reported ages from pluvial and other landforms in Owens Valley, these offsets suggest slip rates of ~0.6 - 1.6 mm/yr over the past 10^3 - 10^5 years. Such rates agree with similar estimates immediately to the south on the Little Lake fault, where lidar measurements indicate dextral slip averaging ~0.6 - 1.3 mm/yr over comparable time intervals. Taken together, these results suggest steady, modest slip in the absence of significant variations over the Mid-to-Late Quaternary for a ~200 km span of the southwestern Walker Lane. Our findings argue against the presence of long-range fault interactions and slip-rate variations for this portion of the larger, regional fault network. This result also suggests that faster slip-rate estimates from geodetic measurements reflect transients over much shorter time scales. Additionally, the persistence of relatively faster geodetic shear in comparison with time-averaged fault slip leaves open the possibility of significant off-fault deformation or slip on subsidiary structures across the Owens Valley.
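Editorial illustration: slip rates of this kind are offset/age ratios, and a quick Monte Carlo propagates the two uncertainties. The offset and age values below are placeholders, not measurements from the study.

```python
# Monte Carlo slip rate from a displaced landform: rate = offset / age.
# Offset and age distributions here are invented for illustration.
import numpy as np

rng = np.random.default_rng(11)
offset_m = rng.normal(25.0, 3.0, 100_000)    # dextral offset (m)
age_kyr = rng.normal(25.0, 4.0, 100_000)     # landform age (kyr)

rate = offset_m / age_kyr                    # m/kyr is numerically mm/yr
lo, mid, hi = np.percentile(rate, [2.5, 50, 97.5])
print(f"slip rate {mid:.2f} mm/yr (95% range {lo:.2f}-{hi:.2f})")
```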
Antecedent rivers and early rifting: a case study from the Plio-Pleistocene Corinth rift, Greece
NASA Astrophysics Data System (ADS)
Hemelsdaël, Romain; Ford, Mary; Malartre, Fabrice
2016-04-01
Models of early rifting present syn-rift sedimentation as the direct response to the development of normal fault systems where footwall-derived drainage supplies alluvial to lacustrine sediments into hangingwall depocentres. These models often include antecedent rivers, diverted into active depocentres and with little impact on facies distributions. However, antecedent rivers can supply a high volume of sediment from the onset of rifting. What are the interactions between major antecedent rivers and a growing normal fault system? What are the implications for alluvial stratigraphy and facies distributions in early rifts? These questions are investigated by studying a Plio-Pleistocene fluvial succession on the southern margin of the Corinth rift (Greece). In the northern Peloponnese, early syn-rift deposits are preserved in a series of uplifted E-W normal fault blocks (10-15 km long, 3-7 km wide). Detailed sedimentary logging and high resolution mapping of the syn-rift succession (400 to 1300 m thick) define the architecture of the early rift alluvial system. Magnetostratigraphy and biostratigraphic markers are used to date and correlate the fluvial succession within and between fault blocks. The age of the succession is between 4.0 and 1.8 Ma. We present a new tectonostratigraphic model for early rift basins based on our reconstructions. The early rift depositional system was established across a series of narrow normal fault blocks. Palaeocurrent data show that the alluvial basin was supplied by one major sediment entry point. A low sinuosity braided river system flowed over 15 to 30 km to the NE. Facies evolved downstream from coarse conglomerates to fine-grained fluvial deposits. Other minor sediment entry points supplied linked and isolated depocentres. The main river system terminated eastward where it built stacked small deltas into a shallow lake (5 to 15 m deep) that occupied the central Corinth rift. The main fluvial axis remained constant and controlled facies distribution throughout the early rift evolution. We show that the length scale of fluvial facies transitions is greater than and therefore not related to fault spacing. First order facies variations instead occur at the scale of the full antecedent fluvial system. Strike-parallel subsidence variations in individual fault blocks represent a second order controlling factor on stratigraphic architecture. As depocentres enlarged through time, sediments progressively filled palaeorelief and formed a continuous alluvial plain above active faults. There was limited creation of footwall relief and thus no significant consequent drainage system developed. Here, instead of being diverted toward subsiding zones, the drainage system overfilled the whole rift from the onset of faulting. Moreover, the zones of maximum subsidence on individual faults are aligned across strike, parallel to the persistent fluvial axis. This implies that long-term sediment loading influenced the growth of normal faults. We conclude that a major antecedent drainage system inherited from the Hellenide mountain belt supplied high volumes of coarse sediment from the onset of faulting in the western Corinth rift (around 4 Ma). These observations demonstrate that antecedent drainage systems can be important in the tectono-sedimentary evolution of rift basins.
Li, Yunji; Wu, QingE; Peng, Li
2018-01-23
In this paper, a synthesized design of a fault-detection filter and fault estimator is considered for a class of discrete-time stochastic systems in the framework of an event-triggered transmission scheme subject to unknown disturbances and deception attacks. A random variable obeying the Bernoulli distribution is employed to characterize the phenomena of randomly occurring deception attacks. To achieve a fault-detection residual that is sensitive only to faults while robust to disturbances, a coordinate transformation approach is exploited. This approach transforms the considered system into two subsystems, with the unknown disturbances removed from one of them. The gain of the fault-detection filter is derived by minimizing an upper bound on the filter error covariance. Meanwhile, system faults can be reconstructed by the remote fault estimator. A recursive approach is developed to obtain the fault estimator gains and to guarantee the fault estimator performance. Furthermore, a corresponding event-triggered sensor data transmission scheme is presented to improve the working life of the wireless sensor node when measurement information is transmitted aperiodically. Finally, a scaled version of an industrial system consisting of a local PC, a remote estimator, and a wireless sensor node is used to experimentally evaluate the proposed theoretical results. In particular, a novel fault-alarming strategy is proposed so that the real-time capacity of fault detection is guaranteed when the event condition is triggered.
46 CFR 111.97-9 - Overcurrent protection.
Code of Federal Regulations, 2010 CFR
2010-10-01
... Shipping COAST GUARD, DEPARTMENT OF HOMELAND SECURITY (CONTINUED) ELECTRICAL ENGINEERING ELECTRIC SYSTEMS-GENERAL REQUIREMENTS Electric Power-Operated Watertight Door Systems § 111.97-9 Overcurrent protection. Overcurrent devices must be arranged to isolate a fault with as little disruption of the system as possible...
Grimes, Craig B.; Cheadle, Michael J.; John, Barbara E.; Reiners, P.W.; Wooden, J.L.
2011-01-01
Oceanic detachment faulting represents a distinct mode of seafloor spreading at slow spreading mid-ocean ridges, but many questions persist about the thermal evolution and depth of faulting. We present new Pb/U and (U-Th)/He zircon ages and combine them with magnetic anomaly ages to define the cooling histories of gabbroic crust exposed by oceanic detachment faults at three sites along the Mid-Atlantic Ridge (Ocean Drilling Program (ODP) holes 1270D and 1275D near the 15°20′N Transform, and Atlantis Massif at 30°N). Closure temperatures for the Pb/U (~800°C-850°C) and (U-Th)/He (~210°C) isotopic systems in zircon bracket acquisition of magnetic remanence, collectively providing a temperature-time history during faulting. Results indicate cooling to ~200°C in 0.3-0.5 Myr after zircon crystallization, recording time-averaged cooling rates of ~1000°C-2000°C/Myr. Assuming the footwalls were denuded along single continuous faults, differences in Pb/U and (U-Th)/He zircon ages together with independently determined slip rates allow the distance between the ~850°C and ~200°C isotherms along the fault plane to be estimated. Calculated distances are 8.4 ± 4.2 km and 5.0 ± 2.1 km from holes 1275D and 1270D and 8.4 ± 1.4 km at Atlantis Massif. Estimating an initial subsurface fault dip of 50° and a depth of 1.5 km to the 200°C isotherm leads to the prediction that the ~850°C isotherm lies ~5-7 km below seafloor at the time of faulting. These depth estimates for active fault systems are consistent with depths of microseismicity observed beneath the hypothesized detachment fault at the TAG hydrothermal field and high-temperature fault rocks recovered from many oceanic detachment faults. Copyright 2011 by the American Geophysical Union.
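Editorial illustration: the isotherm-separation estimate is simple kinematics, the Pb/U-to-(U-Th)/He age difference multiplied by the fault slip rate, then projected to depth with the fault dip. The values below are stand-ins chosen to echo the abstract.

```python
# Distance between the ~850 degC and ~210 degC isotherms along the fault:
# age difference between the two zircon systems times the slip rate. The
# slip rate and age difference here are illustrative stand-ins.
import math

dt_myr = 0.4        # Pb/U minus (U-Th)/He age (Myr), cf. 0.3-0.5 above
slip_rate = 15.0    # assumed slip rate (km/Myr, i.e., mm/yr)

d = slip_rate * dt_myr                        # separation along fault plane
depth = 1.5 + d * math.sin(math.radians(50))  # project with 50 deg dip,
print(f"separation ~{d:.1f} km")              # 200 degC isotherm at 1.5 km
print(f"~850 degC isotherm ~{depth:.1f} km below seafloor")
```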
Grauch, V.J.S.; Bauer, Paul W.; Drenth, Benjamin J.; Kelson, Keith I.
2017-01-01
We present a detailed example of how a subbasin develops adjacent to a transfer zone in the Rio Grande rift. The Embudo transfer zone in the Rio Grande rift is considered one of the classic examples and has been used as the inspiration for several theoretical models. Despite this attention, the history of its development into a major rift structure is poorly known along its northern extent near Taos, New Mexico. Geologic evidence for all but its young rift history is concealed under Quaternary cover. We focus on understanding the pre-Quaternary evidence that is in the subsurface by integrating diverse pieces of geologic and geophysical information. As a result, we present a substantively new understanding of the tectonic configuration and evolution of the northern extent of the Embudo fault and its adjacent subbasin. We integrate geophysical, borehole, and geologic information to interpret the subsurface configuration of the rift margins formed by the Embudo and Sangre de Cristo faults and the geometry of the subbasin within the Taos embayment. Key features interpreted include (1) an imperfect D-shaped subbasin that slopes to the east and southeast, with the deepest point ∼2 km below the valley floor located northwest of Taos at ∼36° 26′N latitude and 105° 37′W longitude; (2) a concealed Embudo fault system that extends as much as 7 km wider than is mapped at the surface, wherein fault strands disrupt or truncate flows of Pliocene Servilleta Basalt and step down into the subbasin with a minimum of 1.8 km of vertical displacement; and (3) a similar, wider than expected (5–7 km) zone of stepped, west-down normal faults associated with the Sangre de Cristo range front fault. From the geophysical interpretations and subsurface models, we infer relations between faulting and flows of Pliocene Servilleta Basalt and older, buried basaltic rocks that, combined with geologic mapping, suggest a revised rift history involving shifts in the locus of fault activity as the Taos subbasin developed. We speculate that faults related to north-striking grabens at the end of Laramide time formed the first west-down master faults. The Embudo fault may have initiated in early Miocene southwest of the Taos region. Normal-oblique slip on these early fault strands likely transitioned in space and time to dominantly left-lateral slip as the Embudo fault propagated to the northeast. During and shortly after eruption of Servilleta Basalt, proto-Embudo fault strands were active along and parallel to the modern, NE-aligned Rio Pueblo de Taos, ∼4–7 km basinward of the modern, mapped Embudo fault zone. Faults along the northeastern subbasin margin had northwest strikes for most of the period of subbasin formation and were located ∼5–7 km basinward of the modern Sangre de Cristo fault. The locus of fault activity shifted to more northerly striking faults within 2 km of the modern range front sometime after Servilleta volcanism had ceased. The northerly faults may have linked with the northeasterly proto-Embudo faults at this time, concurrent with the development of N-striking Los Cordovas normal faults within the interior of the subbasin. By middle Pleistocene(?) time, the Los Cordovas faults had become inactive, and the linked Embudo–Sangre de Cristo fault system migrated to the south, to the modern range front.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Konstantinovskaya, E.; Rutqvist, J.; Malo, M.
2014-01-21
In this paper, coupled reservoir-geomechanical (TOUGH-FLAC) modeling is applied for the first time to the St. Lawrence Lowlands region to evaluate the potential for shear failure along pre-existing high-angle normal faults, as well as the potential for tensile failure in the caprock units (Utica Shale and Lorraine Group). This activity is part of a general assessment of the potential for safe CO2 injection into a sandstone reservoir (the Covey Hill Formation) within an Early Paleozoic sedimentary basin. Field and subsurface data are used to estimate the sealing properties of two reservoir-bounding faults (Yamaska and Champlain faults). The spatial variations in fluid pressure, effective minimum horizontal stress, and shear strain are calculated for different injection rates, using a simplified 2D geological model of the Becancour area, located ~110 km southwest of Quebec City. The simulation results show that initial fault permeability affects the timing, localization, rate, and length of fault shear slip. Contrary to the conventional view, our results suggest that shear failure may start earlier for a permeable fault than for a sealing fault, depending on the site-specific geologic setting. In simulations of a permeable fault, shear slip is nucleated along a 60 m long fault segment in a thin and brittle caprock unit (Utica Shale) trapped below a thicker and more ductile caprock unit (Lorraine Group), and then subsequently progresses up to the surface. In the case of a sealing fault, shear failure occurs later in time and is localized along a fault segment (300 m) below the caprock units. The presence of the inclined low-permeability Yamaska Fault close to the injection well causes asymmetric fluid-pressure buildup and lateral migration of the CO2 plume away from the fault, reducing the overall risk of CO2 leakage along faults. Finally, fluid-pressure-induced tensile fracturing occurs only under extremely high injection rates and is localized below the caprock units, which remain intact, preventing upward CO2 migration.
Fault diagnosis for analog circuits utilizing time-frequency features and improved VVRKFA
NASA Astrophysics Data System (ADS)
He, Wei; He, Yigang; Luo, Qiwu; Zhang, Chaolong
2018-04-01
This paper proposes a novel scheme for analog circuit fault diagnosis utilizing features extracted from the time-frequency representations of signals and an improved vector-valued regularized kernel function approximation (VVRKFA). First, the cross-wavelet transform is employed to yield the energy-phase distribution of the fault signals over the time and frequency domain. Since the distribution is high-dimensional, a supervised dimensionality reduction technique—the bilateral 2D linear discriminant analysis—is applied to build a concise feature set from the distributions. Finally, VVRKFA is utilized to locate the fault. In order to improve the classification performance, the quantum-behaved particle swarm optimization technique is employed to gradually tune the learning parameter of the VVRKFA classifier. The experimental results for the analog circuit faults classification have demonstrated that the proposed diagnosis scheme has an advantage over other approaches.
Fault Tolerant Real-Time Systems
1993-09-30
The ART (Advanced Real-Time Technology) Project of Carnegie Mellon University is engaged in wide-ranging research on hard real-time systems. ... including hardware and software fault tolerance using temporal redundancy and analytic redundancy to permit the construction of real-time systems whose ...
Dating faults by quantifying shear heating
NASA Astrophysics Data System (ADS)
Maino, Matteo; Casini, Leonardo; Langone, Antonio; Oggiano, Giacomo; Seno, Silvio; Stuart, Finlay
2017-04-01
Dating brittle and brittle-ductile faults is crucial for developing seismic models and for understanding the geological evolution of a region. Improving the geochronological approaches to absolute fault dating, and their accuracy, is therefore a key objective for the geological community. Direct dating of ancient faults may be attained by exploiting the thermal effects associated with deformation. Heat generated during faulting - i.e. shear heating - is perhaps the best signal linking time to the activity of a fault. However, other mechanisms not instantaneously related to fault motion can generate heating (advection, upwelling of hot fluids), making it difficult to determine whether the thermal signal corresponds to the timing of fault movement. Recognizing the contribution of shear heating is a fundamental prerequisite for dating fault motion with thermochronometric techniques; therefore, a comprehensive thermal characterization of the fault zone is needed. Several methods have been proposed to assess radiometric ages of faulting from either newly grown crystals on fault gouges or surfaces (e.g. Ar/Ar dating) or the thermochronometric reset of existing minerals (e.g. zircon and apatite fission tracks). In this contribution we show two cases of brittle and brittle-ductile faulting: one shallow thrust from the SW Alps and one high-temperature, pseudotachylite-bearing fault zone in Sardinia. In both examples we applied a multidisciplinary approach that integrates field and micro-structural observations, petrographic characterization, geochemical and mineralogical analyses, fluid inclusion microthermometry, and numerical modeling with thermochronometric dating of the two fault zones. We used zircon (U-Th)/He thermochronometry to estimate the temperatures experienced by the shallow Alpine thrust. The ZHe thermochronometer has a closure temperature (Tc) of 180°C; consequently, it is ideally suited to dating large heat-producing faults that were active at shallow depths (<6-7 km), where the wall-rock temperature does not exceed Tc. On the other hand, the retrogressed pseudotachylites from the Variscan basement of Sardinia developed at deeper crustal levels and record considerably higher temperatures (>800°C). They have been dated using laser ablation ICP-MS on monazites and zircons. This large dataset provides the necessary constraints to explore the potential causes of heating, its timing, and how it is eventually related to fault motion.
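Editorial illustration: the signal these methods exploit can be bounded with back-of-envelope physics. If heat cannot diffuse away during slip, frictional work raises the temperature of a zone of width w by dT = tau*V*t/(rho*c*w); the numbers below are generic crustal values, not the study's.

```python
# Adiabatic shear-heating bound: temperature rise from frictional work
# tau * V * t dissipated in a slip zone of width w. Values are generic.
def shear_heating_dT(tau_mpa, v_m_s, t_s, w_m, rho=2700.0, c=1000.0):
    """Temperature rise (degC), ignoring heat diffusion during slip."""
    return tau_mpa * 1e6 * v_m_s * t_s / (rho * c * w_m)

# A seismic pulse: 50 MPa shear stress, 1 m/s slip for 0.5 s on a 1 cm zone
print(f"{shear_heating_dT(50.0, 1.0, 0.5, 0.01):.0f} degC")
# ~900 degC: enough to reset low-Tc systems and, on thin zones, approach
# melting (pseudotachylite); slower or thicker slip gives far smaller rises.
```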
Statistical mechanics and scaling of fault populations with increasing strain in the Corinth Rift
NASA Astrophysics Data System (ADS)
Michas, Georgios; Vallianatos, Filippos; Sammonds, Peter
2015-12-01
Scaling properties of fracture/fault systems are studied in order to characterize the mechanical properties of rocks and to provide insight into the mechanisms that govern fault growth. A comprehensive image of the fault network in the Corinth Rift, Greece, obtained through numerous field studies and marine geophysical surveys, allows for the first time such a study over the entire area of the Rift. We compile a detailed fault map of the area and analyze the scaling properties of fault trace-lengths by using a statistical mechanics model, derived in the framework of generalized statistical mechanics and associated maximum entropy principle. By using this framework, a range of asymptotic power-law to exponential-like distributions are derived that can well describe the observed scaling patterns of fault trace-lengths in the Rift. Systematic variations and in particular a transition from asymptotic power-law to exponential-like scaling are observed to be a function of increasing strain in distinct strain regimes in the Rift, providing quantitative evidence for such crustal processes in a single tectonic setting. These results indicate the organization of the fault system as a function of brittle strain in the Earth's crust and suggest there are different mechanisms for fault growth in the distinct parts of the Rift. In addition, other factors such as fault interactions and the thickness of the brittle layer affect how the fault system evolves in time. The results suggest that regional strain, fault interactions and the boundary condition of the brittle layer may control fault growth and the fault network evolution in the Corinth Rift.
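Editorial illustration: the generalized (Tsallis-type) statistical mechanics referred to here is usually written with the q-exponential function, which interpolates between a power-law tail (q > 1) and an ordinary exponential (q -> 1). A small evaluation with illustrative parameters:

```python
# q-exponential survival function for fault trace-lengths: a power-law tail
# for q > 1, reducing to exp(-x) as q -> 1. Parameters are illustrative.
import numpy as np

def q_exp(x, q):
    if abs(q - 1.0) < 1e-9:
        return np.exp(-x)
    base = 1.0 + (q - 1.0) * x
    return np.where(base > 0, base ** (-1.0 / (q - 1.0)), 0.0)

L = np.logspace(-1, 2, 7)            # fault lengths (km)
for q in (1.05, 1.3):                # near-exponential vs. heavy-tailed
    print(f"q={q}:", np.round(q_exp(L / 5.0, q), 4))   # scale ~5 km
```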
NASA Astrophysics Data System (ADS)
Wedmore, L. N. J.; Faure Walker, J. P.; Roberts, G. P.; Sammonds, P. R.; McCaffrey, K. J. W.; Cowie, P. A.
2017-07-01
Current studies of fault interaction lack sufficiently long earthquake records and measurements of fault slip rates over multiple seismic cycles to fully investigate the effects of interseismic loading and coseismic stress changes on the surrounding fault network. We model elastic interactions between 97 faults from 30 earthquakes since 1349 A.D. in central Italy to investigate the relative importance of coseismic stress changes versus interseismic stress accumulation for earthquake occurrence and fault interaction. This region has an exceptionally long, 667 year record of historical earthquakes and detailed constraints on the locations and slip rates of its active normal faults. Of 21 earthquakes since 1654, 20 events occurred on faults where combined coseismic and interseismic loading stresses were positive even though 20% of all faults are in "stress shadows" at any one time. Furthermore, the Coulomb stress on the faults that experience earthquakes is statistically different from a random sequence of earthquakes in the region. We show how coseismic Coulomb stress changes can alter earthquake interevent times by 10^3 years, and fault length controls the intensity of this effect. Static Coulomb stress changes cause greater interevent perturbations on shorter faults in areas characterized by lower strain (or slip) rates. The exceptional duration and number of earthquakes we model enable us to demonstrate the importance of combining long earthquake records with detailed knowledge of fault geometries, slip rates, and kinematics to understand the impact of stress changes in complex networks of active faults.
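Editorial illustration: the stress metric underlying such models is the Coulomb failure stress change, dCFS = dtau + mu' * dsigma_n, with shear stress resolved in the receiver fault's slip direction and normal stress positive in extension (unclamping). A one-line worked example with arbitrary numbers:

```python
# Coulomb stress change on a receiver fault; positive values move it toward
# failure. Sign convention: normal stress positive in extension.
def coulomb_stress_change(d_tau_mpa, d_sigma_n_mpa, mu_eff=0.4):
    return d_tau_mpa + mu_eff * d_sigma_n_mpa

# Gains 0.1 MPa of shear load but is clamped by 0.05 MPa:
print(coulomb_stress_change(0.1, -0.05))   # -> 0.08 MPa toward failure
```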
NASA Astrophysics Data System (ADS)
Salditch, L.; Brooks, E. M.; Stein, S.; Spencer, B. D.; Campbell, M. R.
2017-12-01
A challenge for earthquake hazard assessment is that geologic records often show large earthquakes occurring in temporal clusters separated by periods of quiescence. For example, in Cascadia, a paleoseismic record going back 10,000 years shows four to five clusters separated by approximately 1,000 year gaps. If we are still in the cluster that began 1700 years ago, a large earthquake is likely to happen soon. If the cluster has ended, a great earthquake is less likely. For a Gaussian distribution of recurrence times, the probability of an earthquake in the next 50 years is six times larger if we are still in the most recent cluster. Earthquake hazard assessments typically employ one of two recurrence models, neither of which directly incorporate clustering. In one, earthquake probability is time-independent and modeled as Poissonian, so an earthquake is equally likely at any time. The fault has no "memory" because when a prior earthquake occurred has no bearing on when the next will occur. The other common model is a time-dependent earthquake cycle in which the probability of an earthquake increases with time until one happens, after which the probability resets to zero. Because the probability is reset after each earthquake, the fault "remembers" only the last earthquake. This approach can be used with any assumed probability density function for recurrence times. We propose an alternative, Long-Term Fault Memory (LTFM), a modified earthquake cycle model where the probability of an earthquake increases with time until one happens, after which it decreases, but not necessarily to zero. Hence the probability of the next earthquake depends on the fault's history over multiple cycles, giving "long-term memory". Physically, this reflects an earthquake releasing only part of the elastic strain stored on the fault. We use the LTFM to simulate earthquake clustering along the San Andreas Fault and Cascadia. In some portions of the simulated earthquake history, events would appear quasiperiodic, while at other times, the events can appear more Poissonian. Hence a given paleoseismic or instrumental record may not reflect the long-term seismicity of a fault, which has important implications for hazard assessment.
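Editorial illustration: LTFM as described above is straightforward to simulate, with hazard growing with stored strain and each event removing only a fraction of it. A minimal sketch with invented parameters:

```python
# Long-Term Fault Memory toy simulation: probability of an event rises with
# accumulated strain; an event releases only part of it, so clusters emerge.
import numpy as np

rng = np.random.default_rng(7)
loading = 1.0       # strain accumulated per year (arbitrary units)
release = 0.6       # fraction of stored strain released per earthquake
gain = 2e-4         # converts stored strain to annual event probability

strain, events = 0.0, []
for year in range(20_000):
    strain += loading
    if rng.random() < gain * strain:
        events.append(year)
        strain *= (1.0 - release)    # partial, not total, release

gaps = np.diff(events)
print(f"{len(events)} events; gaps: mean {gaps.mean():.0f} yr, "
      f"min {gaps.min()} yr, max {gaps.max()} yr")   # clusters + quiescence
```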
Mechanisms and rates of strength recovery in laboratory fault zones
NASA Astrophysics Data System (ADS)
Muhuri, Sankar Kumar
2001-07-01
The life cycle of a typical fault zone consists of repeated catastrophic seismic events, during which much of the slip is accommodated, interspersed with creep during the interseismic period. Fault strength is regenerated during this period as a result of several time-dependent, fluid-assisted deformation mechanisms that are favored by high stresses along active fault zones. The strengthening is thought to be a function of the sum total of the rates of recovery due to these multiple creep processes as well as the rate of tectonic loading. Mechanisms and rates of strength recovery in laboratory fault zones were investigated in this research with the aid of several experimental designs. It was observed that wet faults recover strength in a time-dependent manner after slip due to operative creep processes. Subsequent loading results in unstable failure of a cohesive gouge zone with large associated stress drops. The failure process is similar to that observed for intact rocks. Dry laboratory faults, in contrast, do not recover strength, and slip along them is always stable with no observable drop in stress. Strengthening in laboratory faults proceeds as a logarithmic function of time. The recovery is attributable to fluid-mediated mechanisms such as pressure solution, crack sealing and Ostwald ripening that collectively cause a reduction in porosity and enhance lithification of an unconsolidated gouge. Rates for the individual deformation mechanisms, investigated in separate experimental setups, were also observed to be a non-linear function of time. Pressure solution and Ostwald ripening are especially enhanced due to the significant volume fraction of fine particles within the gouge created by cataclasis during slip. The results of this investigation may be applied to explain observations of rapid strengthening along large, active crustal fault zones such as parts of the San Andreas Fault system in California and the Nojima fault in Japan. The presence of fault seals in clean hydrocarbon reservoirs with minor clay content, as in several North Sea fields, may also be a manifestation of similar deformation processes.
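Log-time healing of the kind reported here is commonly parameterized as a strength gain proportional to the logarithm of hold time. The sketch below fits that form to invented hold-time data purely to show the parameterization; neither the data points nor the fitted rate come from this study.

```python
import math

# Sketch of the log-time healing law suggested above: strength recovery
# proportional to log10 of hold time. The data points are invented for
# illustration; beta comes from an ordinary least-squares slope.

hold_times = [10, 100, 1000, 10000]        # seconds (illustrative)
recovery   = [0.011, 0.022, 0.030, 0.041]  # friction gain (illustrative)

x = [math.log10(t) for t in hold_times]
n = len(x)
xbar, ybar = sum(x) / n, sum(recovery) / n
beta = sum((xi - xbar) * (yi - ybar) for xi, yi in zip(x, recovery)) \
       / sum((xi - xbar) ** 2 for xi in x)
print(f"healing rate beta ~ {beta:.4f} per decade of hold time")
```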
Bunch, Richard H.
1986-01-01
A fault finder for locating faults along a high voltage electrical transmission line. Real time monitoring of background noise and improved filtering of input signals is used to identify the occurrence of a fault. A fault is detected at both a master and remote unit spaced along the line. A master clock synchronizes operation of a similar clock at the remote unit. Both units include modulator and demodulator circuits for transmission of clock signals and data. All data is received at the master unit for processing to determine an accurate fault distance calculation.
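With the master and remote clocks synchronized as the abstract describes, the fault position follows from the difference in arrival times of the fault transient at the two ends. The sketch below shows that standard two-ended calculation; the line length, propagation speed, and timing values are illustrative assumptions, not figures from the patent.

```python
# Sketch of two-ended fault location implied by the abstract: with the
# master and remote clocks synchronized, the fault's distance from the
# master follows from the difference in arrival times of the fault
# transient at the two units. Values below are illustrative.

LINE_LENGTH = 120e3  # meters (illustrative)
WAVE_SPEED = 2.9e8   # surge propagation speed, m/s (illustrative)

def fault_distance(t_master, t_remote):
    """Distance from the master unit to the fault (meters)."""
    dt = t_remote - t_master
    return (LINE_LENGTH - WAVE_SPEED * dt) / 2.0

# Fault 40 km from the master: arrivals at 40e3/v and 80e3/v seconds.
t_m, t_r = 40e3 / WAVE_SPEED, 80e3 / WAVE_SPEED
print(f"estimated fault distance: {fault_distance(t_m, t_r) / 1e3:.1f} km")
```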
Photomosaics and logs of trenches on the San Andreas Fault, Thousand Palms Oasis, California
Fumal, Thomas E.; Frost, William T.; Garvin, Christopher; Hamilton, John C.; Jaasma, Monique; Rymer, Michael J.
2004-01-01
We present photomosaics and logs of the walls of trenches excavated for a paleoseismic study at Thousand Palms Oasis (Fig. 1). The site is located on the Mission Creek strand of the San Andreas fault zone, one of two major active strands of the fault in the Indio Hills along the northeast margin of the Coachella Valley (Fig. 2). The Coachella Valley section is the most poorly understood major part of the San Andreas fault with regard to slip rate and timing of past large-magnitude earthquakes, and therefore earthquake hazard. No large earthquakes have occurred for more than three centuries, the longest elapsed time for any part of the southern San Andreas fault. In spite of this, the Working Group on California Earthquake Probabilities (1995) assigned the lowest 30-year conditional probability on the southern San Andreas fault to the Coachella Valley. Models of the behavior of this part of the fault, however, have been based on very limited geologic data. The Thousand Palms Oasis is an attractive location for paleoseismic study primarily because of the well-bedded late Holocene sedimentary deposits with abundant layers of organic matter for radiocarbon dating necessary to constrain the timing of large prehistoric earthquakes. Previous attempts to develop a chronology of paleoearthquakes for the region have been hindered by the scarcity of in-situ 14C-dateable material for age control in this desert environment. Also, the fault in the vicinity of Thousand Palms Oasis consists of a single trace that is well expressed, both geomorphically and as a vegetation lineament (Figs. 2, 3). Results of our investigations are discussed in Fumal et al. (2002) and indicate that four and probably five surface-rupturing earthquakes occurred along this part of the fault during the past 1200 years. The average recurrence time for these earthquakes is 215 ± 25 years, although interevent times may have been as short as a few decades or as long as 400 years. Thus, although the elapsed time since the most recent earthquake, about 320 years, is about 50% longer than the average recurrence time, it is not necessarily unprecedented.
Energy-efficient fault tolerance in multiprocessor real-time systems
NASA Astrophysics Data System (ADS)
Guo, Yifeng
The recent progress in multiprocessor/multicore systems has important implications for real-time system design and operation. From vehicle navigation to space applications as well as industrial control systems, the trend is to deploy multiple processors in real-time systems: systems with 4-8 processors are common, and it is expected that many-core systems with dozens of processing cores will be available in the near future. For such systems, in addition to the general temporal requirements common to all real-time systems, two additional operational objectives are seen as critical: energy efficiency and fault tolerance. An intriguing dimension of the problem is that energy efficiency and fault tolerance are typically conflicting objectives, due to the fact that tolerating faults (e.g., permanent/transient) often requires extra resources with high energy consumption potential. In this dissertation, various techniques for energy-efficient fault tolerance in multiprocessor real-time systems have been investigated. First, the Reliability-Aware Power Management (RAPM) framework, which can preserve the system reliability with respect to transient faults when Dynamic Voltage Scaling (DVS) is applied for energy savings, is extended to support parallel real-time applications with precedence constraints. Next, the traditional Standby-Sparing (SS) technique for dual-processor systems, which takes both transient and permanent faults into consideration while saving energy, is generalized to support multiprocessor systems with an arbitrary number of identical processors. Observing the inefficient usage of slack time in the SS technique, a Preference-Oriented Scheduling Framework is designed to address the problem where tasks are given preferences for being executed as soon as possible (ASAP) or as late as possible (ALAP). A preference-oriented earliest deadline (POED) scheduler is proposed and its application in multiprocessor systems for energy-efficient fault tolerance is investigated, where tasks' main copies are executed ASAP while backup copies are executed ALAP to reduce the overlapped execution of the main and backup copies of the same task and thus reduce energy consumption. All proposed techniques are evaluated through extensive simulations and compared with other state-of-the-art approaches. The simulation results confirm that the proposed schemes can preserve the system reliability while still achieving substantial energy savings. Finally, for both the SS and POED based Energy-Efficient Fault-Tolerant (EEFT) schemes, a series of recovery strategies are designed for when more than one fault (transient or permanent) needs to be tolerated.
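The energy argument behind the ASAP/ALAP split can be made concrete with a tiny window calculation. The sketch below is a hedged illustration of the idea, not the POED scheduler itself: the main copy starts at release, the backup is pushed to the latest start that still meets the deadline, and any non-overlapping portion of the backup can be cancelled if the main copy succeeds. Task parameters are invented.

```python
# Sketch of the ASAP/ALAP idea behind the POED scheme described above:
# main copy as soon as possible, backup as late as possible, so a
# fault-free run cancels most (often all) of the backup's execution.
# The task parameters are illustrative.

def backup_overlap(release, deadline, wcet):
    """Return (main window, backup window, unavoidable overlap)."""
    main = (release, release + wcet)            # as soon as possible
    backup = (deadline - wcet, deadline)        # as late as possible
    overlap = max(0.0, main[1] - backup[0])     # work wasted if no fault
    return main, backup, overlap

main, backup, overlap = backup_overlap(release=0.0, deadline=10.0, wcet=4.0)
print(f"main {main}, backup {backup}, overlapped execution {overlap}")
# With enough slack (deadline >= release + 2 * wcet) the overlap is zero,
# so a successful main copy lets the scheduler cancel the backup entirely.
```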
Waldhauser, F.; Ellsworth, W.L.
2002-01-01
The relationship between small-magnitude seismicity and large-scale crustal faulting along the Hayward Fault, California, is investigated using a double-difference (DD) earthquake location algorithm. We used the DD method to determine high-resolution hypocenter locations of the seismicity that occurred between 1967 and 1998. The DD technique incorporates catalog travel time data and relative P and S wave arrival time measurements from waveform cross correlation to solve for the hypocentral separation between events. The relocated seismicity reveals a narrow, near-vertical fault zone at most locations. This zone follows the Hayward Fault along its northern half and then diverges from it to the east near San Leandro, forming the Mission trend. The relocated seismicity is consistent with the idea that slip from the Calaveras Fault is transferred over the Mission trend onto the northern Hayward Fault. The Mission trend is not clearly associated with any mapped active fault as it continues to the south and joins the Calaveras Fault at Calaveras Reservoir. In some locations, discrete structures adjacent to the main trace are seen, features that were previously hidden in the uncertainty of the network locations. The fine structure of the seismicity suggests that the fault surface on the northern Hayward Fault is curved or that the events occur on several substructures. Near San Leandro, where the more westerly striking trend of the Mission seismicity intersects the surface trace of the (aseismic) southern Hayward Fault, the seismicity remains diffuse after relocation, with strong variation in focal mechanisms between adjacent events indicating a highly fractured zone of deformation. The seismicity is highly organized in space, especially on the northern Hayward Fault, where it forms horizontal, slip-parallel streaks of hypocenters only a few tens of meters wide, bounded by areas almost devoid of seismic activity. During the interval from 1984 to 1998, when digital waveforms are available, we find that fewer than 6.5% of the earthquakes can be classified as repeating earthquakes, events that rupture the same fault patch more than once. These are most commonly located in the shallow creeping part of the fault, or within the streaks at greater depth. The slow repeat rate of 2-3 times within the 15-year observation period for events with magnitudes around M = 1.5 is indicative of a low slip rate or a high stress drop. The absence of microearthquakes over large, contiguous areas of the northern Hayward Fault plane in the depth interval from ~5 to 10 km and the concentrations of seismicity at these depths suggest that the aseismic regions are either locked or retarded and are storing strain energy for release in future large-magnitude earthquakes.
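The data the DD method inverts are differential travel times for event pairs at common stations, which cancel most of the shared path and station terms. The sketch below shows only that residual, with invented pick times; the full relocation (the inversion of many such residuals for hypocentral separations) is beyond a few lines.

```python
# Sketch of the double-difference residual at the heart of the method
# described above: for two nearby events i and j observed at the same
# station, the datum is a differential travel time, which cancels most
# of the common path and station terms. The times are illustrative.

def dd_residual(t_obs_i, t_obs_j, t_calc_i, t_calc_j):
    """dr_ij = (t_i - t_j)^obs - (t_i - t_j)^calc at one station."""
    return (t_obs_i - t_obs_j) - (t_calc_i - t_calc_j)

# Observed picks (or cross-correlation delays) vs. model travel times:
r = dd_residual(t_obs_i=12.431, t_obs_j=12.118,
                t_calc_i=12.402, t_calc_j=12.131)
print(f"double-difference residual: {r * 1e3:.1f} ms")
# The relocation inverts many such residuals for changes in the
# hypocentral separation between event pairs.
```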
Ergodicity and Phase Transitions and Their Implications for Earthquake Forecasting.
NASA Astrophysics Data System (ADS)
Klein, W.
2017-12-01
Forecasting earthquakes or even predicting the statistical distribution of events on a given fault is extremely difficult. One reason for this difficulty is the large number of fault characteristics that can affect the distribution and timing of events. The range of stress transfer, the level of noise, and the nature of the friction force all influence the type of the events, and the values of these parameters can vary from fault to fault and also vary with time. In addition, the geometrical structure of the faults and the correlation of events on different faults play an important role in determining the event size and their distribution. Another reason for the difficulty is that the important fault characteristics are not easily measured. The noise level, fault structure, stress transfer range, and the nature of the friction force are extremely difficult, if not impossible, to ascertain. Given this lack of information, one of the most useful approaches to understanding the effect of fault characteristics and the way they interact is to develop and investigate models of faults and fault systems. In this talk I will present results obtained from a series of models of varying abstraction and compare them with data from actual faults. We are able to provide a physical basis for several observed phenomena such as the earthquake cycle, the fact that some faults display Gutenberg-Richter scaling and others do not, and that some faults exhibit quasi-periodic characteristic events and others do not. I will also discuss some surprising results, such as the fact that some faults are in thermodynamic equilibrium depending on the stress transfer range and the noise level. An example of an important conclusion that can be drawn from this work is that the statistical distribution of earthquake events can vary from fault to fault and that an indication of an impending large event, such as accelerating moment release, may be relevant on some faults but not on others.
NASA Astrophysics Data System (ADS)
Edwards, John L.; Beekman, Randy M.; Buchanan, David B.; Farner, Scott; Gershzohn, Gary R.; Khuzadi, Mbuyi; Mikula, D. F.; Nissen, Gerry; Peck, James; Taylor, Shaun
2007-04-01
Human space travel is inherently dangerous. Hazardous conditions will exist. Real-time health monitoring of critical subsystems is essential for providing a safe abort timeline in the event of a catastrophic subsystem failure. In this paper, we discuss a practical and cost-effective process for developing critical subsystem failure detection, diagnosis and response (FDDR). We also present the results of a real-time health monitoring simulation of a propellant ullage pressurization subsystem failure. The health monitoring development process identifies hazards, isolates hazard causes, defines software partitioning requirements and quantifies software algorithm development. The process provides a means to establish the number and placement of sensors necessary to provide real-time health monitoring. We discuss how health monitoring software tracks subsystem control commands, interprets off-nominal operational sensor data, predicts failure propagation timelines, corroborates failure predictions and formats failure protocols.
Research on criticality analysis method of CNC machine tools components under fault rate correlation
NASA Astrophysics Data System (ADS)
Gui-xiang, Shen; Xian-zhuo, Zhao; Zhang, Ying-zhi; Chen-yu, Han
2018-02-01
In order to determine the key components of CNC machine tools under fault rate correlation, a system component criticality analysis method is proposed. Based on fault mechanism analysis, the component fault relations are determined, and an adjacency matrix is introduced to describe them. Then, the fault structure relations are organized into a hierarchy using the interpretive structure model (ISM). Assuming that the impact of a fault obeys a Markov process, the fault association matrix is described and transformed, and the Pagerank algorithm is used to determine the relative influence values; combined with the component fault rates under time correlation, a comprehensive fault rate can be obtained. Based on the fault mode frequency and fault influence, the criticality of the components under fault rate correlation is determined, and the key components are identified to provide a sound basis for formulating reliability assurance measures. Finally, taking machining centers as an example, the effectiveness of the method is verified.
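The PageRank step can be illustrated on a tiny fault-propagation graph. The sketch below is a generic power-iteration PageRank over an invented four-component adjacency matrix, shown only to make the "relative influence value" idea concrete; it is not the paper's transformed fault association matrix or its parameter choices.

```python
# Sketch of the PageRank step described above: given an adjacency matrix
# encoding which component's fault propagates to which, iterate the
# PageRank recurrence to get relative influence values. The 4-component
# matrix and damping factor below are illustrative.

DAMPING = 0.85

def pagerank(adj, iters=100):
    n = len(adj)
    out_deg = [sum(row) for row in adj]
    rank = [1.0 / n] * n
    for _ in range(iters):
        rank = [(1 - DAMPING) / n
                + DAMPING * sum(rank[i] * adj[i][j] / out_deg[i]
                                for i in range(n) if adj[i][j])
                for j in range(n)]
    return rank

# adj[i][j] = 1 if a fault in component i influences component j
adj = [[0, 1, 1, 0],
       [0, 0, 1, 0],
       [0, 0, 0, 1],
       [1, 0, 0, 0]]
for comp, score in enumerate(pagerank(adj)):
    print(f"component {comp}: influence {score:.3f}")
```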
NASA Astrophysics Data System (ADS)
Partono, Windu; Pardoyo, Bambang; Atmanto, Indrastono Dwi; Azizah, Lisa; Chintami, Rouli Dian
2017-11-01
Fault is one of the dangerous earthquake sources that can cause building failure. Many buildings collapsed in the Yogyakarta (2006) and Pidie (2016) fault-source earthquakes, which had maximum magnitudes of 6.4 Mw. Following the research conducted by the Team for Revision of Seismic Hazard Maps of Indonesia 2010 and 2016, the Lasem, Demak and Semarang faults are the three closest earthquake sources surrounding Semarang. The ground motion from those three earthquake sources should be taken into account for structural design and evaluation. Most tall buildings in Semarang, with a minimum height of 40 meters, were designed and constructed following the 2002 and 2012 Indonesian Seismic Codes. This paper presents the results of sensitivity analysis research with emphasis on the prediction of deformation and inter-story drift of existing tall buildings within the city against fault earthquakes. The analysis was performed by conducting dynamic structural analysis of 8 (eight) tall buildings using modified acceleration time histories. The modified acceleration time histories were calculated for three fault earthquakes with magnitudes from 6 Mw to 7 Mw, and were used because of inadequate recorded time history data for those three fault sources. The sensitivity of a building to an earthquake can be predicted by comparing the surface response spectra calculated using the seismic code with the surface response spectra calculated from the acceleration time histories of a specific earthquake event. If the surface response spectra calculated using the seismic code are greater than the surface response spectra calculated from the acceleration time histories, the structure should be stable enough to resist the earthquake force.
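The screening criterion in the last sentence is a period-by-period envelope check. The sketch below states it directly; the spectral ordinates are invented for illustration and the period grid is arbitrary.

```python
# Sketch of the screening criterion stated above: a building is treated
# as adequate if the code surface response spectrum envelopes the
# spectrum computed from the event's acceleration time histories.
# Spectral values below are invented for illustration.

periods  = [0.2, 0.5, 1.0, 2.0]      # seconds
sa_code  = [0.90, 0.85, 0.55, 0.30]  # code spectrum, g (illustrative)
sa_event = [0.75, 0.80, 0.60, 0.25]  # event-based spectrum, g

exceeded = [t for t, c, e in zip(periods, sa_code, sa_event) if e > c]
if exceeded:
    print(f"event spectrum exceeds code at periods {exceeded} s: re-evaluate")
else:
    print("code spectrum envelopes the event spectrum: deemed adequate")
```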
An Intelligent Actuator Fault Reconstruction Scheme for Robotic Manipulators.
Xiao, Bing; Yin, Shen
2018-02-01
This paper investigates the difficult problem of reconstructing actuator faults for robotic manipulators. An intelligent approach with a fast reconstruction property is developed. This is achieved by using an observer technique. The scheme is capable of precisely reconstructing the actual actuator fault. It is shown by Lyapunov stability analysis that the reconstruction error converges to zero after finite time. A reconstruction performance that is both precise and fast can thus be provided for the actuator fault. The most important feature of the scheme is that it does not depend on the control law, the dynamic model of the actuator, the fault type, or its time profile. This strong reconstruction performance and the capability of the proposed approach are further validated by simulation and experimental results.
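A generic (not the paper's finite-time) observer-based fault estimator can convey the reconstruction idea: treat the additive actuator fault as an extra, slowly varying state and estimate it from the measured velocity. The single-joint model, gains, and fault value below are all illustrative assumptions.

```python
# Hedged sketch of observer-based actuator fault reconstruction, shown
# only to illustrate the idea above (not the paper's finite-time scheme).
# One joint with inertia J: w_dot = (u + f) / J, with the unknown fault f
# treated as an extra, slowly varying state. Gains/values illustrative.

J = 0.5              # joint inertia (illustrative)
L1, L2 = 20.0, 50.0  # observer gains, both > 0 (illustrative)
DT = 1e-3

w_true, f_true = 0.0, 0.8   # true velocity and actuator fault
w_hat, f_hat = 0.0, 0.0     # observer states

for _ in range(5000):       # 5 s of simulated time
    u = 1.0                 # commanded torque (arbitrary)
    w_true += DT * (u + f_true) / J            # plant
    err = w_true - w_hat                       # measured-minus-estimated
    w_hat += DT * ((u + f_hat) / J + L1 * err) # observer
    f_hat += DT * (L2 * err)                   # fault estimate update

print(f"true fault {f_true:.3f}, reconstructed {f_hat:.3f}")
```

With these gains the error dynamics have characteristic polynomial s² + L1·s + L2/J, so the estimate converges smoothly; the paper's contribution is making such convergence exact in finite time.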
Anomaly Resolution in the International Space Station
NASA Technical Reports Server (NTRS)
Evans, William A.
2000-01-01
Topics include post-flight 2A status, ground rules, anomaly resolution, the Early Communications Subsystem anomaly and its resolution, the Logistics and Maintenance plan, the case for obscuration, the case for an electrical short, manual fault isolation, and post-mission analysis. Photographs from flight 2A.1 are used to illustrate anomalies.
Manufacturing Methods and Technology for Digital Fault Isolation for Printed Circuit Boards.
1979-08-25
microprocessors and support chips, ROMs, RAMs, UARTs, etc. They also include rules for busses and memory testing. The special rules for test points emphasize...
NASA Technical Reports Server (NTRS)
Rushby, John; Crow, Judith
1990-01-01
The authors explore issues in the specification, verification, and validation of artificial intelligence (AI) based software, using a prototype fault detection, isolation and recovery (FDIR) system for the Manned Maneuvering Unit (MMU). They use this system as a vehicle for exploring issues in the semantics of C-Language Integrated Production System (CLIPS)-style rule-based languages, the verification of properties relating to safety and reliability, and the static and dynamic analysis of knowledge-based systems. This analysis reveals errors and shortcomings in the MMU FDIR system and raises a number of issues concerning software engineering in CLIPS. The authors came to realize that the MMU FDIR system does not conform to conventional definitions of AI software, despite the fact that it was intended and indeed presented as an AI system. The authors discuss this apparent disparity and related questions, such as the role of AI techniques in space and aircraft operations and the suitability of CLIPS for critical applications.
Fault isolation through no-overhead link level CRC
Chen, Dong; Coteus, Paul W.; Gara, Alan G.
2007-04-24
A fault isolation technique for checking the accuracy of data packets transmitted between nodes of a parallel processor. An independent CRC is kept of all data sent from one processor to another, and of all data received by one processor from another. At the end of each checkpoint, the CRCs are compared. If they do not match, there was an error. The CRCs may be cleared and restarted at each checkpoint. In the preferred embodiment, the basic functionality is to calculate a CRC of all packet data that has been successfully transmitted across a given link. This CRC is done on both ends of the link, thereby allowing an independent check on all data believed to have been correctly transmitted. Preferably, all links have this CRC coverage, and the CRC used in this link-level check is different from that used in the packet transfer protocol. This independent check, if successfully passed, virtually eliminates the possibility that any data errors were missed during the previous transfer period.
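The mechanism is easy to sketch in software: each end keeps a running CRC over every payload it believes crossed the link, and the two accumulators are compared at checkpoints. The sketch below uses CRC-32 from Python's zlib as a stand-in for whatever polynomial the hardware uses; class and packet names are invented.

```python
import zlib

# Minimal sketch of the link-level check described above: each end keeps
# a running CRC over all packet payloads it believes were transferred,
# and the two accumulators are compared at every checkpoint. zlib's
# CRC-32 stands in for the hardware's polynomial.

class LinkCrc:
    def __init__(self):
        self.crc = 0

    def update(self, payload: bytes):
        self.crc = zlib.crc32(payload, self.crc)

    def checkpoint(self):
        value, self.crc = self.crc, 0   # report and restart per checkpoint
        return value

sender, receiver = LinkCrc(), LinkCrc()
for pkt in [b"alpha", b"beta", b"gamma"]:
    sender.update(pkt)
    receiver.update(pkt)   # corrupt pkt here to see a mismatch

assert sender.checkpoint() == receiver.checkpoint(), "link error detected"
print("checkpoint passed: transferred data verified independently")
```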
NASA Astrophysics Data System (ADS)
Zheng, Jinde; Pan, Haiyang; Cheng, Junsheng
2017-02-01
To detect the incipient failure of rolling bearings in a timely manner and find the accurate fault location, a novel rolling bearing fault diagnosis method is proposed based on composite multiscale fuzzy entropy (CMFE) and ensemble support vector machines (ESVMs). Fuzzy entropy (FuzzyEn), as an improvement of sample entropy (SampEn), is a new nonlinear method for measuring the complexity of time series. Since FuzzyEn (or SampEn) at a single scale cannot reflect the complexity effectively, multiscale fuzzy entropy (MFE) is developed by defining the FuzzyEns of coarse-grained time series, which represent the system dynamics at different scales. However, the MFE values will be affected by the data length, especially when the data are not long enough. By combining the information of multiple coarse-grained time series at the same scale, the CMFE algorithm is proposed in this paper to enhance MFE, as well as FuzzyEn. Compared with MFE, with increasing scale factor, CMFE obtains much more stable and consistent values for a short-term time series. In this paper CMFE is employed to measure the complexity of vibration signals of rolling bearings and is applied to extract the nonlinear features hidden in the vibration signals. The physical meaning of CMFE, and why it is suitable for rolling bearing fault diagnosis, is also explored. Based on these, to achieve automatic fault diagnosis, an ensemble-SVM-based multi-classifier is constructed for the intelligent classification of fault features. Finally, the proposed fault diagnosis method is applied to experimental data analysis, and the results indicate that the proposed method can effectively distinguish different fault categories and severities of rolling bearings.
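The composite step is the key difference from plain MFE: at scale tau there are tau possible coarse-grained series (one per starting offset), and CMFE averages the FuzzyEn of all of them. The sketch below implements one common formulation of FuzzyEn and that composite averaging; the parameters m, n, r are conventional choices and the random "signal" is a stand-in for a vibration record, not the paper's data or exact algorithm.

```python
import numpy as np

# Hedged sketch of composite multiscale fuzzy entropy (CMFE): at scale
# tau, FuzzyEn is computed on each of the tau coarse-grained series and
# the results are averaged, stabilizing the estimate for short records.
# Parameters m, n, r are conventional choices, values illustrative.

def fuzzy_entropy(x, m=2, r=0.15, n=2):
    x = np.asarray(x, dtype=float)
    r *= x.std()
    def phi(dim):
        # embedding vectors with their own mean removed (fuzzy variant)
        vecs = np.array([x[i:i + dim] for i in range(len(x) - dim)])
        vecs -= vecs.mean(axis=1, keepdims=True)
        d = np.max(np.abs(vecs[:, None, :] - vecs[None, :, :]), axis=2)
        sim = np.exp(-(d ** n) / r ** n)        # fuzzy similarity degree
        np.fill_diagonal(sim, 0.0)              # exclude self-matches
        return sim.sum() / (len(vecs) * (len(vecs) - 1))
    return np.log(phi(m)) - np.log(phi(m + 1))

def cmfe(x, scale, **kw):
    x = np.asarray(x, dtype=float)
    ents = []
    for offset in range(scale):                 # composite: every offset
        usable = (len(x) - offset) // scale * scale
        grains = x[offset:offset + usable].reshape(-1, scale).mean(axis=1)
        ents.append(fuzzy_entropy(grains, **kw))
    return float(np.mean(ents))

rng = np.random.default_rng(0)
signal = rng.standard_normal(600)   # stand-in for a vibration signal
print([round(cmfe(signal, s), 3) for s in (1, 2, 3)])
```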
Li, Dan; Hu, Xiaoguang
2017-03-01
Because of the high availability requirements of weapon equipment, an in-depth study has been conducted on the real-time fault tolerance of the widely applied Compact PCI (CPCI) bus measurement and control system. A redundancy design method that uses heartbeat detection to connect the primary and alternate devices has been developed. To address the low successful-execution rate and the relatively large waste of time slices in the primary version of the task software, an improved algorithm for real-time fault-tolerant scheduling is proposed based on the Basic Checking available time Elimination idle time (BCE) algorithm, applying a single-neuron self-adaptive proportion-sum-differential (PSD) controller. The experimental validation results indicate that this system has excellent redundancy and fault tolerance, and the newly developed method can effectively improve system availability.
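Heartbeat-based failover of the kind described reduces to a freshness check on a periodically updated timestamp. The sketch below is a generic illustration of that mechanism, not the paper's CPCI implementation; the period and timeout values are invented.

```python
import time

# Hedged sketch of the heartbeat mechanism described above: the primary
# stamps a heartbeat periodically, and the alternate takes over when the
# stamp goes stale. Period and timeout values are illustrative.

HEARTBEAT_PERIOD = 0.05   # seconds (illustrative)
TIMEOUT = 3 * HEARTBEAT_PERIOD

class HeartbeatMonitor:
    def __init__(self):
        self.last_beat = time.monotonic()

    def beat(self):                # called by the primary device
        self.last_beat = time.monotonic()

    def primary_alive(self):       # polled by the alternate device
        return time.monotonic() - self.last_beat < TIMEOUT

monitor = HeartbeatMonitor()
for _ in range(3):                 # primary healthy
    time.sleep(HEARTBEAT_PERIOD)
    monitor.beat()
print("primary alive:", monitor.primary_alive())

time.sleep(TIMEOUT + 0.01)         # primary goes silent
if not monitor.primary_alive():
    print("heartbeat lost: alternate takes over")
```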
The Honey Lake fault zone, northeastern California: Its nature, age, and displacement
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wagner, D.L.; Saucedo, G.J.; Grose, T.L.T.
The Honey Lake fault zone of northeastern California is composed of en echelon, northwest-trending faults that form the boundary between the Sierra Nevada and the Basin Ranges provinces. As such the Honey Lake fault zone can be considered part of the Sierra Nevada frontal fault system. It is also part of the Walker Lane of Nevada. Faults of the Honey Lake zone are vertical with right-lateral oblique displacements. The cumulative vertical component of displacement along the fault zone is on the order of 800 m and right-lateral displacement is at least 10 km (6 miles) but could be considerably more. Oligocene to Miocene (30 to 22 Ma) age rhyolite tuffs can be correlated across the zone, but mid-Miocene andesites do not appear to be correlative, indicating the faulting began in early to mid-Miocene time. Volcanic rocks intruded along faults of the zone, dated at 16 to 8 Ma, further suggest that faulting in the Honey Lake zone was initiated during mid-Miocene time. Late Quaternary to Holocene activity is indicated by offset of the 12,000-year-old Lake Lahontan high stand shoreline and the surface rupture associated with the 1950 Fort Sage earthquake.
GUI Type Fault Diagnostic Program for a Turboshaft Engine Using Fuzzy and Neural Networks
NASA Astrophysics Data System (ADS)
Kong, Changduk; Koo, Youngju
2011-04-01
The helicopter to be operated in a severe flight environment must have a very reliable propulsion system. On-line condition monitoring and fault detection of the engine can improve the reliability and availability of the helicopter propulsion system. A hybrid health monitoring program using Fuzzy Logic and Neural Network algorithms is proposed. In this hybrid method, the Fuzzy Logic readily identifies the faulted components from changes in the engine's measured parameters, and the Neural Networks accurately quantify the identified faults. In order to use the fault diagnostic system effectively, a GUI (Graphical User Interface) type program is newly proposed. This program is composed of the real-time monitoring part, the engine condition monitoring part and the fault diagnostic part. The real-time monitoring part can display measured parameters of the study turboshaft engine such as power turbine inlet temperature, exhaust gas temperature, fuel flow, torque and gas generator speed. The engine condition monitoring part can evaluate the engine condition through comparison between the monitored performance parameters and the base performance parameters analyzed by the base performance analysis program using look-up tables. The fault diagnostic part can identify and quantify the single faults and the multiple faults from the monitored parameters using the hybrid method.
Fault current limiter with shield and adjacent cores
Darmann, Francis Anthony; Moriconi, Franco; Hodge, Eoin Patrick
2013-10-22
In a fault current limiter (FCL) of a saturated core type having at least one coil wound around a high permeability material, a method of suppressing the time derivative of the fault current at the zero current point includes the following step: utilizing an electromagnetic screen or shield around the AC coil to suppress the time derivative current levels during zero current conditions.
NASA Astrophysics Data System (ADS)
Dou, Xinyu; Yin, Hongxi; Yue, Hehe; Jin, Yu; Shen, Jing; Li, Lin
2015-09-01
In this paper, a real-time online fault monitoring technique for chaos-based passive optical networks (PONs) is proposed and experimentally demonstrated. The fault monitoring is performed using the chaotic communication signal. Proof-of-concept experiments are demonstrated for two PON structures, i.e., wavelength-division-multiplexing (WDM) PON and Ethernet PON (EPON). For WDM PON, two monitoring approaches are investigated, one deploying a chaotic optical time domain reflectometry (OTDR) unit for each transmitter, and the other using only one tunable chaotic OTDR. The experimental results show that faults beyond 20 km from the OLT can be detected and located. The spatial resolution of the tunable chaotic OTDR is on the order of centimeters. Meanwhile, the monitoring process can operate in parallel with the chaotic optical secure communications. The proposed technique has the benefits of real-time, online, precise fault location and simple realization, which will significantly reduce the cost of operation, administration and maintenance (OAM) of PONs.
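The principle behind chaotic OTDR is correlation ranging: the fault reflection is a delayed, attenuated copy of the noise-like probe, so the lag of the cross-correlation peak gives the round-trip time and hence the fault distance. The sketch below demonstrates that principle on synthetic data; the sampling rate, fiber group index, amplitudes, and delay are illustrative assumptions, not the experimental parameters.

```python
import numpy as np

# Hedged sketch of the chaotic-OTDR principle described above: the fault
# echo is a delayed, attenuated copy of the noise-like probe, so the lag
# of the cross-correlation peak gives the round-trip time and hence the
# fault distance. All values are illustrative.

C = 3e8          # vacuum speed of light, m/s
N_GROUP = 1.468  # fiber group index (typical value)
FS = 1e9         # 1 GS/s sampling (illustrative)

rng = np.random.default_rng(2)
n = 400_000
probe = rng.standard_normal(n)                     # chaotic probe stand-in
true_delay = 195_733                               # round trip for ~20 km
echo = np.zeros(n)
echo[true_delay:] = 0.05 * probe[:n - true_delay]  # attenuated fault echo
echo += 0.5 * rng.standard_normal(n)               # receiver noise

# FFT-based cross-correlation (zero-padded to avoid circular wrap-around)
corr = np.fft.irfft(np.fft.rfft(echo, 2 * n)
                    * np.conj(np.fft.rfft(probe, 2 * n)))
lag = int(np.argmax(corr[:n]))                     # only causal lags

distance_m = (lag / FS) * C / (2 * N_GROUP)
print(f"fault located at {distance_m / 1e3:.2f} km from the OLT")
```

The centimeter-scale resolution quoted in the abstract corresponds to the very wide bandwidth of the chaotic probe; in this sketch the resolution is set by the assumed sampling rate.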
Seismic fault zone trapped noise
NASA Astrophysics Data System (ADS)
Hillers, G.; Campillo, M.; Ben-Zion, Y.; Roux, P.
2014-07-01
Systematic velocity contrasts across and within fault zones can lead to head and trapped waves that provide direct information on structural units that are important for many aspects of earthquake and fault mechanics. Here we construct trapped waves from the scattered seismic wavefield recorded by a fault zone array. The frequency-dependent interaction between the ambient wavefield and the fault zone environment is studied using properties of the noise correlation field. A critical frequency fc ≈ 0.5 Hz defines a threshold above which the in-fault scattered wavefield has increased isotropy and coherency compared to the ambient noise. The increased randomization of in-fault propagation directions produces a wavefield that is trapped in a waveguide/cavity-like structure associated with the low-velocity damage zone. Dense spatial sampling allows the resolution of a near-field focal spot, which emerges from the superposition of a collapsing, time-reversed wavefront. The shape of the focal spot depends on local medium properties, and a focal spot-based fault-normal distribution of wave speeds indicates a ~50% velocity reduction consistent with estimates from a far-field travel time inversion. The arrival time pattern of a synthetic correlation field can be tuned to match properties of an observed pattern, providing a noise-based imaging tool that can complement analyses of trapped ballistic waves. The results can have wide applicability for investigating the internal properties of fault damage zones, because the mechanisms controlling the emergence of trapped noise have fewer limitations than those of trapped ballistic waves.
The timing of strike-slip shear along the Ranong and Khlong Marui faults, Thailand
NASA Astrophysics Data System (ADS)
Watkinson, Ian; Elders, Chris; Batt, Geoff; Jourdan, Fred; Hall, Robert; McNaughton, Neal J.
2011-09-01
The timing of shear along many important strike-slip faults in Southeast Asia, such as the Ailao Shan-Red River, Mae Ping and Three Pagodas faults, is poorly understood. We present 40Ar/39Ar, U-Pb SHRIMP and microstructural data from the Ranong and Khlong Marui faults of Thailand to show that they experienced a major period of ductile dextral shear during the middle Eocene (48-40 Ma, centered on 44 Ma) which followed two phases of dextral shear along the Ranong Fault, before the Late Cretaceous (>81 Ma) and between the late Paleocene and early Eocene (59-49 Ma). Many of the sheared rocks were part of a pre-kinematic crystalline basement complex, which partially melted and was intruded by Late Cretaceous (81-71 Ma) and early Eocene (48 Ma) tin-bearing granites. Middle Eocene dextral shear at temperatures of ˜300-500°C formed extensive mylonite belts through these rocks and was synchronous with granitoid vein emplacement. Dextral shear along the Ranong and Khlong Marui faults occurred at the same time as sinistral shear along the Mae Ping and Three Pagodas faults of northern Thailand, a result of India-Burma coupling in advance of India-Asia collision. In the late Eocene (<37 Ma) the Ranong and Khlong Marui faults were reactivated as curved sinistral branches of the Mae Ping and Three Pagodas faults, which were accommodating lateral extrusion during India-Asia collision and Himalayan orogenesis.
NASA Astrophysics Data System (ADS)
Schwartz, D. P.; Haeussler, P. J.; Seitz, G. G.; Dawson, T. E.; Stenner, H. D.; Matmon, A.; Crone, A. J.; Personius, S.; Burns, P. B.; Cadena, A.; Thoms, E.
2005-12-01
Developing accurate rupture histories of long, high-slip-rate strike-slip faults is especially challenging where recurrence is relatively short (hundreds of years), adjacent segments may fail within decades of each other, and uncertainties in dating can be as large as, or larger than, the time between events. The Denali Fault system (DFS) is the major active structure of interior Alaska, but received little study since pioneering fault investigations in the early 1970s. Until the summer of 2003 essentially no data existed on the timing or spatial distribution of past ruptures on the DFS. This changed with the occurrence of the M7.9 2002 Denali fault earthquake, which has been a catalyst for present paleoseismic investigations. It provided a well-constrained rupture length and slip distribution. Strike-slip faulting occurred along 290 km of the Denali and Totschunda faults, leaving unruptured ~140 km of the eastern Denali fault, ~180 km of the western Denali fault, and ~70 km of the eastern Totschunda fault. The DFS presents us with a blank canvas on which to fill a chronology of past earthquakes using modern paleoseismic techniques. Aware of correlation issues with potentially closely timed earthquakes, we have a) investigated 11 paleoseismic sites that allow a variety of dating techniques, b) measured paleo offsets, which provide insight into the magnitude and rupture length of past events, at 18 locations, and c) developed late Pleistocene and Holocene slip rates using exposure age dating to constrain long-term fault behavior models. We are in the process of: 1) radiocarbon-dating peats involved in faulting and liquefaction, and especially short-lived forest floor vegetation that includes outer rings of trees, spruce needles, and blueberry leaves killed and buried during paleoearthquakes; 2) supporting development of a 700-900 year tree-ring time series for precise dating of trees used in event timing; 3) employing Pb-210 for constraining the youngest ruptures in sag ponds on the eastern and western Denali fault; and 4) using volcanic ashes in trenches for dating and correlation. Initial results are: 1) Large earthquakes occurred along the 2002 rupture section 350-700 yrb02 (2-sigma, calendar-corrected, years before 2002) with offsets about the same as 2002. The Denali penultimate rupture appears younger (350-570 yrb02) than the Totschunda (580-700 yrb02); 2) The western Denali fault is geomorphically fresh, its MRE likely occurred within the past 250 years, the penultimate event occurred 570-680 yrb02, and slip in each event was 4 m; 3) The eastern Denali MRE post-dates peat dated at 550-680 yrb02, is younger than the penultimate Totschunda event, and could be part of the penultimate Denali fault rupture or a separate earthquake; 4) A 120-km section of the Denali fault between the Nenana Glacier and the Delta River may be a zone of overlap for large events and/or capable of producing smaller earthquakes; its western part has fresh scarps with small (1 m) offsets. 2004/2005 field observations show there are longer datable records, with 4-5 events recorded in trenches on the eastern Denali fault and at the west end of the 2002 rupture, 2-3 events on the western part of the fault in Denali National Park, and 3-4 events on the Totschunda fault. These, together with extensive datable material, provide the basis to define the paleoseismic history of DFS earthquake ruptures through multiple and complete earthquake cycles.
Re-Evaluation of Event Correlations in Virtual California Using Statistical Analysis
NASA Astrophysics Data System (ADS)
Glasscoe, M. T.; Heflin, M. B.; Granat, R. A.; Yikilmaz, M. B.; Heien, E.; Rundle, J.; Donnellan, A.
2010-12-01
Fusing the results of simulation tools with statistical analysis methods has contributed to our better understanding of the earthquake process. In a previous study, we used a statistical method to investigate emergent phenomena in data produced by the Virtual California earthquake simulator. The analysis indicated that there were some interesting fault interactions and possible triggering and quiescence relationships between events. We have converted the original code from Matlab to python/C++ and are now evaluating data from the most recent version of Virtual California in order to analyze and compare any new behavior exhibited by the model. The Virtual California earthquake simulator can be used to study fault and stress interaction scenarios for realistic California earthquakes. The simulation generates a synthetic earthquake catalog of events with a minimum size of ~M 5.8 that can be evaluated using statistical analysis methods. Virtual California utilizes realistic fault geometries and a simple Amontons-Coulomb stick-slip friction law in order to drive the earthquake process by means of a back-slip model, where loading of each segment occurs due to the accumulation of a slip deficit at the prescribed slip rate of the segment. Like any complex system, Virtual California may generate emergent phenomena unexpected even by its designers. In order to investigate this, we have developed a statistical method that analyzes the interaction between Virtual California fault elements and thereby determines whether events on any given fault elements show correlated behavior. Our method examines events on one fault element and then determines whether there is an associated event within a specified time window on a second fault element. Note that an event in our analysis is defined as any time an element slips, rather than any particular “earthquake” along the entire fault length. Results are tabulated and then differenced with an expected correlation, calculated by assuming a uniform distribution of events in time. We generate a correlation score matrix, which indicates how weakly or strongly correlated each fault element is to every other over the course of the VC simulation. We calculate correlation scores by summing the difference between the actual and expected correlations over all time window lengths and normalizing by the time window size. The correlation score matrix can focus attention on the most interesting areas for more in-depth analysis of event correlation vs. time. The previous study included 59 faults (639 elements) in the model, which included all the faults save the creeping section of the San Andreas. The analysis spanned 40,000 years of Virtual California-generated earthquake data. The newly revised VC model includes 70 faults, 8720 fault elements, and spans 110,000 years. Due to computational considerations, we will evaluate the elements comprising the southern California region, which our previous study indicated showed interesting fault interaction and event triggering/quiescence relationships.
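The scoring described (observed minus expected co-occurrences, summed over window lengths and normalized by window size) can be sketched for a single element pair. The toy catalogs, window set, and uniform-rate expectation below are illustrative stand-ins, not the study's data or exact normalization.

```python
# Hedged sketch of the correlation scoring described above: for a pair of
# fault elements, count how often an event on element B falls within a
# time window after an event on element A, subtract the count expected
# for uniformly distributed events, and normalize by the window size.
# The toy catalogs below are invented for illustration.

def correlation_score(times_a, times_b, span, windows=(5, 10, 20, 50)):
    score = 0.0
    for w in windows:
        observed = sum(1 for ta in times_a
                       if any(0 < tb - ta <= w for tb in times_b))
        expected = len(times_a) * min(1.0, len(times_b) * w / span)
        score += (observed - expected) / w   # normalize by window size
    return score

span = 1000.0                       # years of simulated catalog
elem_a = [50, 210, 400, 610, 800]   # event years on element A
elem_b = [55, 214, 409, 617, 790]   # element B: tends to follow A
elem_c = [130, 340, 520, 730, 940]  # element C: unrelated timing

print(f"A->B score {correlation_score(elem_a, elem_b, span):+.3f}")
print(f"A->C score {correlation_score(elem_a, elem_c, span):+.3f}")
```

A strongly positive score flags possible triggering, a strongly negative one possible quiescence; assembling these over all element pairs gives the correlation score matrix.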
NASA Astrophysics Data System (ADS)
Liu, Z.; Lundgren, P.; Liang, C.; Farr, T. G.; Fielding, E. J.
2017-12-01
The improved spatiotemporal resolution of surface deformation from recent satellite and airborne InSAR measurements provides a great opportunity to improve our understanding of both tectonic and non-tectonic processes. In central California the primary plate boundary fault system (San Andreas fault) lies adjacent to the San Joaquin Valley (SJV), a vast structural trough that accounts for about one-sixth of the United States' irrigated land and one-fifth of its extracted groundwater. The central San Andreas fault (CSAF) displays a range of fault slip behavior, with creep in its central segment that decreases towards its northwest and southeast ends, where it transitions to being fully locked. Despite much progress, many questions regarding fault and anthropogenic processes in the region still remain. In this study, we combine satellite InSAR and NASA airborne UAVSAR data to image fault and anthropogenic deformation. The UAVSAR data cover fault-perpendicular swaths imaged from opposing look directions and fault-parallel swaths since 2009. The much finer spatial resolution and optimized viewing geometry provide important constraints on near-fault deformation and fault slip at very shallow depth. We performed a synoptic InSAR time series analysis using Sentinel-1, ALOS, and UAVSAR interferograms. We estimate azimuth mis-registration between single look complex (SLC) images of Sentinel-1 in a stack sense to achieve accurate azimuth co-registration between SLC images for low-coherence and/or long-interval interferometric pairs. We show that it is important to correct large-scale ionosphere features in ALOS-2 ScanSAR data for accurate deformation measurements. Joint analysis of UAVSAR and ALOS interferometry measurements shows clear variability in deformation along the fault strike, suggesting variable fault creep and locking at depth and along strike. In addition to fault creep, the L-band ALOS, and especially ALOS-2 ScanSAR interferometry, show large-scale ground subsidence in the SJV due to over-exploitation of groundwater. InSAR time series are compared to GPS and well-water hydraulic head in-situ time series to understand water storage processes and mass loading changes. We present model results to assess the influence of anthropogenic processes on surface deformation and fault mechanics.
NASA Technical Reports Server (NTRS)
Richard, Stephen M.
1992-01-01
A paleogeographic reconstruction of southeastern California and southwestern Arizona at 10 Ma was made based on available geologic and geophysical data. Clockwise rotation of 39 deg was reconstructed in the eastern Transverse Ranges, consistent with paleomagnetic data from late Miocene volcanic rocks, and with slip estimates for left-lateral faults within the eastern Transverse Ranges and NW-trending right lateral faults in the Mojave Desert. This domain of rotated rocks is bounded by the Pinto Mountain fault on the north. In the absence of evidence for rotation of the San Bernardino Mountains or for significant right slip faults within the San Bernardino Mountains, the model requires that the late Miocene Pinto Mountain fault become a thrust fault gaining displacement to the west. The Squaw Peak thrust system of Meisling and Weldon may be a western continuation of this fault system. The Sheep Hole fault bounds the rotating domain on the east. East of this fault an array of NW-trending right slip faults and south-trending extensional transfer zones has produced a basin and range physiography while accumulating up to 14 km of right slip. This maximum is significantly less than the 37.5 km of right slip required in this region by a recent reconstruction of the central Mojave Desert. Geologic relations along the southern boundary of the rotating domain are poorly known, but this boundary is interpreted to involve a series of curved strike slip faults and non-coaxial extension, bounded on the southeast by the Mammoth Wash and related faults in the eastern Chocolate Mountains. Available constraints on timing suggest that Quaternary movement on the Pinto Mountain and nearby faults is unrelated to the rotation of the eastern Transverse Ranges, and was preceded by a hiatus during part of Pliocene time which followed the deformation producing the rotation. The reconstructed Clemens Well fault in the Orocopia Mountains, proposed as a major early Miocene strand of the San Andreas fault, projects eastward towards Arizona, where early Miocene rocks and structures are continuous across its trace. The model predicts a 14 deg clockwise rotation and 55 km extension along the present trace of the San Andreas fault during late Miocene and early Pliocene time. Palinspastic reconstructions of the San Andreas system based on this proposed reconstruction may be significantly modified from current models.
Catchings, R.D.; Rymer, M.J.; Goldman, M.R.; Prentice, C.S.; Sickler, R.R.
2013-01-01
The San Francisco Public Utilities Commission is seismically retrofitting the water delivery system at San Andreas Lake, San Mateo County, California, where the reservoir intake system crosses the San Andreas Fault (SAF). The near-surface fault location and geometry are important considerations in the retrofit effort. Because the SAF trends through highly distorted Franciscan mélange and beneath much of the reservoir, the exact trace of the 1906 surface rupture is difficult to determine from surface mapping at San Andreas Lake. Based on surface mapping, it also is unclear if there are additional fault splays that extend northeast or southwest of the main surface rupture. To better understand the fault structure at San Andreas Lake, the U.S. Geological Survey acquired a series of seismic imaging profiles across the SAF at San Andreas Lake in 2008, 2009, and 2011, when the lake level was near historical lows and the surface traces of the SAF were exposed for the first time in decades. We used multiple seismic methods to locate the main 1906 rupture zone and fault splays within about 100 meters northeast of the main rupture zone. Our seismic observations are internally consistent, and our seismic indicators of faulting generally correlate with fault locations inferred from surface mapping. We also tested the accuracy of our seismic methods by comparing our seismically located faults with surface ruptures mapped by Schussler (1906) immediately after the April 18, 1906 San Francisco earthquake of approximate magnitude 7.9; our seismically determined fault locations were highly accurate. Near the reservoir intake facility at San Andreas Lake, our seismic data indicate the main 1906 surface rupture zone consists of at least three near-surface fault traces. Movement on multiple fault traces can have appreciable engineering significance because, unlike movement on a single strike-slip fault trace, differential movement on multiple fault traces may exert compressive and extensional stresses on built structures within the fault zone. Such differential movement and resulting distortion of built structures appear to have occurred between fault traces at the gatewell near the southern end of San Andreas Lake during the 1906 San Francisco earthquake (Schussler, 1906). In addition to the three fault traces within the main 1906 surface rupture zone, our data indicate at least one additional fault trace (or zone) about 80 meters northeast of the main 1906 surface rupture zone. Because ground shaking also can damage structures, we used fault-zone guided waves to investigate ground shaking within the fault zones relative to ground shaking outside the fault zones. Peak ground velocity (PGV) measurements from our guided-wave study indicate that ground shaking is greater at each of the surface fault traces, varying with the frequency of the seismic data and the wave type (P versus S). S-wave PGV increases by as much as 5–6 times at the fault traces relative to areas outside the fault zone, and P-wave PGV increases by as much as 3–10 times. Assuming shaking increases linearly with increasing earthquake magnitude, these data suggest strong shaking may pose a significant hazard to built structures that extend across the fault traces. Similarly complex fault structures likely underlie other strike-slip faults (such as the Hayward, Calaveras, and Silver Creek Faults) that intersect structures of the water delivery system, and these fault structures similarly should be investigated.
Previously unrecognized now-inactive strand of the North Anatolian fault in the Thrace basin
DOE Office of Scientific and Technical Information (OSTI.GOV)
Perincek, D.
1988-08-01
The North Anatolian fault is a major 1,200 km-long transform fault bounding the Anatolian plate to the north. It formed in late middle Miocene time as a broad shear zone with a number of strands splaying westward in a horsetail fashion. Later, movement became localized along the stem, and the southerly and northerly splays became inactive. One such right-lateral, now-inactive splay is the west-northwest-striking Thrace strike-slip fault system, consisting of three subparallel strike-slip faults. From north to south these are the Kirklareli, Lueleburgaz, and Babaeski fault zones, extending ±130 km along strike. The Thrace fault zone probably connected with the presently active northern strand of the North Anatolian fault in the Sea of Marmara in the southeast and may have joined the Plovdiv graben zone in Bulgaria in the northwest. The Thrace basin, in which the Thrace fault system is located, is Cenozoic, with a sedimentary basin fill from middle Eocene to Pliocene. The Thrace fault system formed in pre-Pliocene time and had become inactive by the Pliocene. Strike-slip fault zones with normal and reverse separation are detected by seismic reflection profiles and subsurface data. Releasing-bend extensional structures (e.g., near the town of Lueleburgaz) and restraining-bend compressional structures (near the Vakiflar-1 well) are abundant on the fault zones. The Umurca and Hamitabad fields are en echelon structures on the Lueleburgaz fault zone. The Thrace strike-slip fault system itself has a horsetail shape, the various strands of which become younger southward. The entire system died before the Pliocene, and motion on the North Anatolian fault zone began to be accommodated in the Sea of Marmara region. Thus the Thrace fault system represents the oldest strand of the North Anatolian fault in the west.
Preliminary results on earthquake triggered landslides for the Haiti earthquake (January 2010)
NASA Astrophysics Data System (ADS)
van Westen, Cees; Gorum, Tolga
2010-05-01
This study presents the first results of an analysis of the landslides triggered by the Ms 7.0 Haiti earthquake that occurred on January 12, 2010 in the boundary region of the Caribbean plate and the North American plate. The fault is a left-lateral strike-slip fault with a clear surface expression. According to the USGS earthquake information, the Enriquillo-Plantain Garden fault system has not produced any major earthquake in the last 100 years, and historical earthquakes are known from 1860, 1770, 1761, 1751, 1684, 1673, and 1618, though none of these has been confirmed in the field as associated with this fault. We used high-resolution satellite imagery available for the pre- and post-earthquake situations, which was made freely available for the response and rescue operations. We made an interpretation of all co-seismic landslides in the epicentral area. We conclude that the earthquake mainly triggered landslides on the northern slope of the fault-related valley and in a number of isolated areas. The earthquake apparently did not trigger many visible landslides within the slum areas on the slopes in the southern part of Port-au-Prince and Carrefour. We also used ASTER DEM information to relate the landslide occurrences to DEM derivatives.
Health management and controls for Earth-to-orbit propulsion systems
NASA Astrophysics Data System (ADS)
Bickford, R. L.
1995-03-01
Avionics and health management technologies increase the safety and reliability while decreasing the overall cost for Earth-to-orbit (ETO) propulsion systems. New ETO propulsion systems will depend on highly reliable fault tolerant flight avionics, advanced sensing systems and artificial intelligence aided software to ensure critical control, safety and maintenance requirements are met in a cost effective manner. Propulsion avionics consist of the engine controller, actuators, sensors, software and ground support elements. In addition to control and safety functions, these elements perform system monitoring for health management. Health management is enhanced by advanced sensing systems and algorithms which provide automated fault detection and enable adaptive control and/or maintenance approaches. Aerojet is developing advanced fault tolerant rocket engine controllers which provide very high levels of reliability. Smart sensors and software systems which significantly enhance fault coverage and enable automated operations are also under development. Smart sensing systems, such as flight capable plume spectrometers, have reached maturity in ground-based applications and are suitable for bridging to flight. Software to detect failed sensors has reached similar maturity. This paper will discuss fault detection and isolation for advanced rocket engine controllers as well as examples of advanced sensing systems and software which significantly improve component failure detection for engine system safety and health management.
Toda, S.; Stein, R.S.
2000-01-01
The 1998 Antarctic plate earthquake produced clusters of aftershocks (MW ≤ 6.4) up to 80 km from the fault rupture and up to 100 km beyond the end of the rupture. Because the mainshock occurred far from the nearest plate boundary and the nearest recorded earthquake, it is unusually isolated from the stress perturbations caused by other earthquakes, making it a good candidate for stress transfer analysis despite the absence of near-field observations. We tested whether the off-fault aftershocks lie in regions brought closer to Coulomb failure by the main rupture. We evaluated four published source models for the main rupture. In fourteen tests using different aftershock sets and allowing the rupture sources to be shifted within their uncertainties, 6 were significant at ≥ 99% confidence, 3 at > 95% confidence, and 5 were not significant (< 95% level). For the 9 successful tests, the stress at the site of the aftershocks was typically increased by 1-2 bars (0.1-0.2 MPa). Thus the Antarctic plate event, together with the 1992 MW=7.3 Landers earthquake and its MW=6.5 Big Bear aftershock 40 km from the main fault, supplies evidence that small stress changes might indeed trigger large earthquakes far from the main fault rupture.
NASA Astrophysics Data System (ADS)
Liu, Zhen; Lundgren, Paul
2016-07-01
The San Andreas Fault (SAF) system is the primary plate boundary in California, with the central SAF (CSAF) lying adjacent to the San Joaquin Valley (SJV), a vast structural trough that accounts for about one-sixth of the United States' irrigated land and one-fifth of its extracted groundwater. The CSAF displays a range of fault slip behavior, with creep in its central segment that decreases towards its northwest and southeast ends, where the fault transitions to being fully locked. At least six Mw ~6.0 events since 1857 have occurred near the Parkfield transition, most recently in 2004. Large earthquakes have also occurred on secondary faults parallel to the SAF, the result of distributed deformation across the plate boundary zone. Recent studies have revealed the complex interaction between anthropogenic groundwater depletion and seismic activity on adjacent faults through stress interaction. Despite recent progress, many questions regarding fault and anthropogenic processes in the region still remain. For example, how is the relative plate motion accommodated between the CSAF and off-fault deformation? What is the distribution of fault creep and slip deficit at shallow depths? What are the spatiotemporal variations of fault slip? What are the spatiotemporal characteristics of anthropogenic and lithospheric processes and how do they interact with each other? To address these, we combine satellite InSAR and NASA airborne UAVSAR data to image on- and off-fault deformation. The UAVSAR data cover fault-perpendicular swaths imaged from opposing look directions and fault-parallel swaths since 2009. The much finer spatial resolution and optimized viewing geometry provide important constraints on near-fault deformation and fault slip at very shallow depth. We performed a synoptic InSAR time series analysis using ERS-1/2, Envisat, ALOS and UAVSAR interferograms. The combined C-band ERS-1/2 and Envisat data provide a long time interval of SAR data over the region, but are subject to severe decorrelation. The L-band ALOS and UAVSAR sensors provide improved coherence compared to the shorter-wavelength radar data. Joint analysis of UAVSAR and ALOS interferometry measurements shows clear variability in deformation along the fault strike, suggesting variable fault creep and locking at depth and along strike. Modeling selected fault transects reveals a distinct change in surface creep and shallow slip deficit from the central creeping section towards the Parkfield transition. In addition to fault creep, the L-band ALOS, and especially ALOS-2 ScanSAR interferometry, show large-scale ground subsidence in the SJV due to over-exploitation of groundwater. Groundwater-related deformation is spatially and temporally variable and is composed of both recoverable elastic and non-recoverable inelastic components. InSAR time series are compared to GPS and well-water hydraulic head in-situ time series to understand water storage processes and mass loading changes. We are currently developing poroelastic finite element method models to assess the influence of anthropogenic processes on surface deformation and fault mechanics. Ongoing work is to better constrain both tectonic and non-tectonic processes and understand their interaction and implication for regional earthquake hazard.
Editing wild points in isolation - Fast agreement for reliable systems (Preliminary version)
NASA Technical Reports Server (NTRS)
Kearns, Phil; Evans, Carol
1989-01-01
Consideration is given to the intuitively appealing notion of discarding sensor values which are strongly suspected of being erroneous in a modified approximate agreement protocol. Approximate agreement with editing imposes a time bound upon the convergence of the protocol - no such bound was possible for the original approximate agreement protocol. This new approach is potentially useful in the construction of asynchronous fault tolerant systems. The main result is that a wild-point replacement technique called t-worst editing can be shown to guarantee convergence of the approximate agreement protocol to a valid agreement value. Results are presented for a four-processor synchronous system in which a single processor may be faulty.
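The editing step the abstract describes can be shown directly: each processor discards the values it most suspects before averaging, which bounds the influence of up to t faulty sensors. The sketch below shows one round of a t-worst-style trimmed average under the standard assumption that more than 2t values are available; the vote values are invented, and the full protocol's exchange rounds are omitted.

```python
# Hedged sketch of one round of approximate agreement with "t-worst"
# editing as described above: each processor discards the t smallest and
# t largest of the values it received before averaging, so up to t
# arbitrarily faulty values cannot drag the result outside the range of
# the correct ones. The vote values are invented for illustration.

def edited_mean(values, t):
    """Average after discarding the t lowest and t highest values."""
    if len(values) <= 2 * t:
        raise ValueError("need more than 2t values to tolerate t faults")
    trimmed = sorted(values)[t:len(values) - t]
    return sum(trimmed) / len(trimmed)

# Four-processor system, one faulty (matching the abstract's setting):
votes = [10.2, 10.4, 10.3, 999.0]   # 999.0 is a wild point
print(f"edited agreement value: {edited_mean(votes, t=1):.2f}")
```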
DOE Office of Scientific and Technical Information (OSTI.GOV)
Maunz, Peter; Wilhelm, Lukas
Qubits can be encoded in clock states of trapped ions. These states are well isolated from the environment, resulting in long coherence times [1] while enabling efficient high-fidelity qubit interactions mediated by the Coulomb-coupled motion of the ions in the trap. Quantum states can be prepared with high fidelity and measured efficiently using fluorescence detection. State preparation and detection with 99.93% fidelity have been realized in multiple systems [1,2]. Single-qubit gates have been demonstrated below rigorous fault-tolerance thresholds [1,3]. Two-qubit gates have been realized with more than 99.9% fidelity [4,5]. Quantum algorithms have been demonstrated on systems of 5 to 15 qubits [6-8].