Sample records for faults robustness evaluation

  1. Robust Fault Detection Using Robust Z1 Estimation and Fuzzy Logic

    NASA Technical Reports Server (NTRS)

    Curry, Tramone; Collins, Emmanuel G., Jr.; Selekwa, Majura; Guo, Ten-Huei (Technical Monitor)

    2001-01-01

    This research considers the application of robust Z(sub 1) estimation in conjunction with fuzzy logic to robust fault detection for an aircraft flight control system. It begins with the development of robust Z(sub 1) estimators based on multiplier theory and then develops a fixed threshold approach to fault detection (FD). It then considers the use of fuzzy logic for robust residual evaluation and FD. Due to modeling errors and unmeasurable disturbances, it is difficult to distinguish between the effects of an actual fault and those caused by uncertainty and disturbance. Hence, it is the aim of a robust FD system to be sensitive to faults while remaining insensitive to uncertainty and disturbances. While fixed thresholds only allow a decision on whether a fault has or has not occurred, it is more valuable to have the residual evaluation lead to a conclusion related to the degree of, or probability of, a fault. Fuzzy logic is a viable means of determining the degree of a fault and allows the introduction of human observations that may not be incorporated in the rigorous threshold theory. Hence, fuzzy logic can provide a more reliable and informative fault detection process. Using an aircraft flight control system, the results of FD using robust Z(sub 1) estimation with a fixed threshold are demonstrated. FD that combines robust Z(sub 1) estimation and fuzzy logic is also demonstrated. It is seen that combining the robust estimator with fuzzy logic proves to be advantageous in increasing the sensitivity to smaller faults while remaining insensitive to uncertainty and disturbances.
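
The contrast between a fixed threshold and a fuzzy degree-of-fault can be sketched in a few lines. The membership breakpoints and residual values below are hypothetical, not taken from the paper; this is only a minimal illustration of the idea that a fuzzy evaluation grades the residual rather than making a binary fault/no-fault call.

```python
# Illustrative sketch (hypothetical breakpoints, not the paper's estimator):
# a fixed residual threshold gives a binary decision, while a simple fuzzy
# evaluation maps the residual magnitude to a degree of fault in [0, 1].

def fixed_threshold(residual, threshold=1.0):
    """Binary decision: fault declared only when |residual| exceeds threshold."""
    return abs(residual) > threshold

def fuzzy_fault_degree(residual, no_fault_below=0.5, full_fault_above=1.5):
    """Piecewise-linear membership: 0 below the lower breakpoint,
    1 above the upper breakpoint, linear in between."""
    r = abs(residual)
    if r <= no_fault_below:
        return 0.0
    if r >= full_fault_above:
        return 1.0
    return (r - no_fault_below) / (full_fault_above - no_fault_below)

if __name__ == "__main__":
    # a residual of 0.9 is "no fault" to the fixed threshold,
    # but already a 0.4 degree of fault to the fuzzy evaluator
    for r in (0.2, 0.9, 2.0):
        print(r, fixed_threshold(r), round(fuzzy_fault_degree(r), 2))
```

A small fault that never crosses the fixed threshold still produces a nonzero fuzzy degree, which matches the sensitivity argument made in the abstract.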

  2. Robust Gain-Scheduled Fault Tolerant Control for a Transport Aircraft

    NASA Technical Reports Server (NTRS)

    Shin, Jong-Yeob; Gregory, Irene

    2007-01-01

    This paper presents an application of robust gain-scheduled control concepts using a linear parameter-varying (LPV) control synthesis method to design fault tolerant controllers for a civil transport aircraft. To apply the robust LPV control synthesis method, the nonlinear dynamics must be represented by an LPV model, which is developed using the function substitution method over the entire flight envelope. The developed LPV model associated with the aerodynamic coefficient uncertainties represents nonlinear dynamics including those outside the equilibrium manifold. Passive and active fault tolerant controllers (FTC) are designed for the longitudinal dynamics of the Boeing 747-100/200 aircraft in the presence of elevator failure. Both FTC laws are evaluated in the full nonlinear aircraft simulation in the presence of the elevator fault and the results are compared to show pros and cons of each control law.

  3. Evaluation of an Enhanced Bank of Kalman Filters for In-Flight Aircraft Engine Sensor Fault Diagnostics

    NASA Technical Reports Server (NTRS)

    Kobayashi, Takahisa; Simon, Donald L.

    2004-01-01

    In this paper, an approach for in-flight fault detection and isolation (FDI) of aircraft engine sensors based on a bank of Kalman filters is developed. This approach utilizes multiple Kalman filters, each of which is designed based on a specific fault hypothesis. When the propulsion system experiences a fault, only one Kalman filter with the correct hypothesis is able to maintain the nominal estimation performance. Based on this knowledge, the isolation of faults is achieved. Since the propulsion system may experience component and actuator faults as well, a sensor FDI system must be robust in terms of avoiding misclassifications of any anomalies. The proposed approach utilizes a bank of (m+1) Kalman filters where m is the number of sensors being monitored. One Kalman filter is used for the detection of component and actuator faults while each of the other m filters detects a fault in a specific sensor. With this setup, the overall robustness of the sensor FDI system to anomalies is enhanced. Moreover, numerous component fault events can be accounted for by the FDI system. The sensor FDI system is applied to a commercial aircraft engine simulation, and its performance is evaluated at multiple power settings at a cruise operating point using various fault scenarios.
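
The (m+1)-filter isolation logic described above can be sketched with a toy scalar example. The leave-one-out filters, fixed gain, and noise-free readings below are illustrative assumptions, not the paper's engine-model Kalman filters; the point is only that the filter whose fault hypothesis matches the biased sensor keeps the smallest residual.

```python
# Hypothetical sketch of filter-bank sensor fault isolation: filter k ignores
# sensor k (its fault hypothesis), so when sensor k is biased that filter's
# remaining measurements stay consistent and its residual stays small.

def run_filter(meas_history, ignore=None):
    """Scalar fixed-gain filter fusing all sensors except `ignore`
    (ignore=None would be the extra filter kept for component/actuator
    faults in the full (m+1)-filter scheme). Returns the mean absolute
    per-sensor innovation as a residual metric."""
    x, gain = 0.0, 0.5                 # state estimate and fixed gain (illustrative)
    total, count = 0.0, 0
    for z in meas_history:             # z: one reading per sensor
        used = [v for i, v in enumerate(z) if i != ignore]
        innovs = [v - x for v in used]
        x += gain * sum(innovs) / len(innovs)   # measurement update
        total += sum(abs(e) for e in innovs)
        count += len(innovs)
    return total / count

if __name__ == "__main__":
    m = 3
    # noise-free readings of a constant state, with a 0.8 bias on sensor 1
    history = [[1.0, 1.8, 1.0] for _ in range(50)]
    residuals = [run_filter(history, ignore=k) for k in range(m)]
    print("isolated sensor:", min(range(m), key=lambda k: residuals[k]))
```

Only the filter that excludes the faulty sensor converges to measurements that agree with each other; the wrong-hypothesis filters retain a persistent innovation, which is the basis of the isolation decision.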

  4. Robust Fault Detection and Isolation for Stochastic Systems

    NASA Technical Reports Server (NTRS)

    George, Jemin; Gregory, Irene M.

    2010-01-01

    This paper outlines the formulation of a robust fault detection and isolation scheme that can precisely detect and isolate simultaneous actuator and sensor faults for uncertain linear stochastic systems. The given robust fault detection scheme based on the discontinuous robust observer approach would be able to distinguish between model uncertainties and actuator failures and therefore eliminate the problem of false alarms. Since the proposed approach involves precise reconstruction of sensor faults, it can also be used for sensor fault identification and the reconstruction of true outputs from faulty sensor outputs. Simulation results presented here validate the effectiveness of the robust fault detection and isolation system.

  5. Closed-Loop Evaluation of an Integrated Failure Identification and Fault Tolerant Control System for a Transport Aircraft

    NASA Technical Reports Server (NTRS)

    Shin, Jong-Yeob; Belcastro, Christine; Khong, Thuan

    2006-01-01

    Formal robustness analysis of aircraft control upset prevention and recovery systems could play an important role in their validation and ultimate certification. Such systems developed for failure detection, identification, and reconfiguration, as well as upset recovery, need to be evaluated over broad regions of the flight envelope or under extreme flight conditions, and should include various sources of uncertainty. To apply formal robustness analysis, formulation of linear fractional transformation (LFT) models of complex parameter-dependent systems is required, which represent system uncertainty due to parameter uncertainty and actuator faults. This paper describes a detailed LFT model formulation procedure from the nonlinear model of a transport aircraft by using a preliminary LFT modeling software tool developed at the NASA Langley Research Center, which utilizes a matrix-based computational approach. The closed-loop system is evaluated over the entire flight envelope based on the generated LFT model which can cover nonlinear dynamics. The robustness analysis results of the closed-loop fault tolerant control system of a transport aircraft are presented. A reliable flight envelope (safe flight regime) is also calculated from the robust performance analysis results, over which the closed-loop system can achieve the desired performance of command tracking and failure detection.

  6. A robust detector for rolling element bearing condition monitoring based on the modulation signal bispectrum and its performance evaluation against the Kurtogram

    NASA Astrophysics Data System (ADS)

    Tian, Xiange; Xi Gu, James; Rehab, Ibrahim; Abdalla, Gaballa M.; Gu, Fengshou; Ball, A. D.

    2018-02-01

    Envelope analysis is a widely used method for rolling element bearing fault detection. To obtain high detection accuracy, it is critical to determine an optimal frequency narrowband for the envelope demodulation. However, many of the schemes which are used for the narrowband selection, such as the Kurtogram, can produce poor detection results because they are sensitive to random noise and aperiodic impulses which normally occur in practical applications. To achieve the purposes of denoising and frequency band optimisation, this paper proposes a novel modulation signal bispectrum (MSB) based robust detector for bearing fault detection. Because of its inherent noise suppression capability, the MSB allows effective suppression of both stationary random noise and discrete aperiodic noise. The high magnitude features that result from the use of the MSB also enhance the modulation effects of a bearing fault and can be used to provide optimal frequency bands for fault detection. The Kurtogram is generally accepted as a powerful means of selecting the most appropriate frequency band for envelope analysis, and as such it has been used as the benchmark comparator for performance evaluation in this paper. Both simulated and experimental data analysis results show that the proposed method produces more accurate and robust detection results than Kurtogram based approaches for common bearing faults under a range of representative scenarios.
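
For context, plain envelope analysis (the baseline technique the paper improves on) can be sketched as follows. The amplitude-modulated tone standing in for a fault-excited bearing resonance, and the candidate fault frequencies, are invented for illustration: rectification recovers the envelope, and a single-frequency DFT probe locates the fault repetition rate.

```python
# Hypothetical sketch of envelope analysis, not the MSB detector itself:
# impulses at the fault repetition rate modulate a resonance tone; the
# envelope spectrum then peaks at the fault frequency.
import math, cmath

def dft_magnitude(signal, freq_hz, fs):
    """Magnitude of the DFT of `signal` at a single frequency."""
    n = len(signal)
    acc = sum(signal[k] * cmath.exp(-2j * math.pi * freq_hz * k / fs)
              for k in range(n))
    return abs(acc) / n

if __name__ == "__main__":
    fs, n = 2000, 4000                        # 2 s of data at 2 kHz
    fault_hz, carrier_hz = 20.0, 200.0        # invented frequencies
    # fault impulses modulating the resonance: (1 + cos(2*pi*f_fault*t)) * carrier
    x = [(1.0 + math.cos(2 * math.pi * fault_hz * k / fs))
         * math.cos(2 * math.pi * carrier_hz * k / fs) for k in range(n)]
    envelope = [abs(v) for v in x]            # full-wave rectification
    candidates = [10.0, 20.0, 30.0, 50.0]     # hypothetical fault frequencies
    mags = {f: dft_magnitude(envelope, f, fs) for f in candidates}
    print("detected fault frequency:", max(mags, key=mags.get))
```

The raw signal has no spectral line at 20 Hz at all; it is the nonlinear rectification step that moves the modulation down to baseband, which is why selecting the right demodulation band matters so much in practice.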

  7. Robust In-Flight Sensor Fault Diagnostics for Aircraft Engine Based on Sliding Mode Observers

    PubMed Central

    Chang, Xiaodong; Huang, Jinquan; Lu, Feng

    2017-01-01

    For a sensor fault diagnostic system of an aircraft engine, health performance degradation is an inevitable interference that cannot be neglected. To address this issue, this paper investigates an integrated on-line sensor fault diagnostic scheme for a commercial aircraft engine based on sliding mode observers (SMOs). In this approach, one sliding mode observer is designed for engine health performance tracking, and another for sensor fault reconstruction. Both observers are employed in flight. The results of the former SMO are analyzed post-flight to update the baseline model of the latter. This idea is practical and feasible since the updating process does not require the algorithm to be retuned or redesigned, so ground-based intervention is avoided and the update is implemented economically and efficiently. With this setup, the robustness of the proposed scheme to health degradation is greatly enhanced, and the latter SMO is able to perform sensor fault reconstruction over the course of the engine's life. The proposed sensor fault diagnostic system is applied to a nonlinear simulation of a commercial aircraft engine, and its effectiveness is evaluated in several fault scenarios. PMID:28398255

  8. Robust In-Flight Sensor Fault Diagnostics for Aircraft Engine Based on Sliding Mode Observers.

    PubMed

    Chang, Xiaodong; Huang, Jinquan; Lu, Feng

    2017-04-11

    For a sensor fault diagnostic system of an aircraft engine, health performance degradation is an inevitable interference that cannot be neglected. To address this issue, this paper investigates an integrated on-line sensor fault diagnostic scheme for a commercial aircraft engine based on sliding mode observers (SMOs). In this approach, one sliding mode observer is designed for engine health performance tracking, and another for sensor fault reconstruction. Both observers are employed in flight. The results of the former SMO are analyzed post-flight to update the baseline model of the latter. This idea is practical and feasible since the updating process does not require the algorithm to be retuned or redesigned, so ground-based intervention is avoided and the update is implemented economically and efficiently. With this setup, the robustness of the proposed scheme to health degradation is greatly enhanced, and the latter SMO is able to perform sensor fault reconstruction over the course of the engine's life. The proposed sensor fault diagnostic system is applied to a nonlinear simulation of a commercial aircraft engine, and its effectiveness is evaluated in several fault scenarios.

  9. Robust operative diagnosis as problem solving in a hypothesis space

    NASA Technical Reports Server (NTRS)

    Abbott, Kathy H.

    1988-01-01

    This paper describes an approach that formulates diagnosis of physical systems in operation as problem solving in a hypothesis space. Such a formulation increases robustness by: (1) incremental hypothesis construction via dynamic inputs, (2) reasoning at a higher level of abstraction to construct hypotheses, and (3) partitioning the space by grouping fault hypotheses according to the type of physical system representation and problem solving techniques used in their construction. The approach was implemented for a turbofan engine and hydraulic subsystem. Evaluation of the implementation on eight actual aircraft accident cases involving engine faults provided very promising results.

  10. H∞ robust fault-tolerant controller design for an autonomous underwater vehicle's navigation control system

    NASA Astrophysics Data System (ADS)

    Cheng, Xiang-Qin; Qu, Jing-Yuan; Yan, Zhe-Ping; Bian, Xin-Qian

    2010-03-01

    In order to improve the security and reliability for autonomous underwater vehicle (AUV) navigation, an H∞ robust fault-tolerant controller was designed after analyzing variations in state-feedback gain. Operating conditions and the design method were then analyzed so that the control problem could be expressed as a mathematical optimization problem. This permitted the use of linear matrix inequalities (LMI) to solve for the H∞ controller for the system. When considering different actuator failures, these conditions were then also mathematically expressed, allowing the H∞ robust controller to solve for these events and thus be fault-tolerant. Finally, simulation results showed that the H∞ robust fault-tolerant controller could provide precise AUV navigation control with strong robustness.

  11. Rolling element bearing fault diagnosis based on Over-Complete rational dilation wavelet transform and auto-correlation of analytic energy operator

    NASA Astrophysics Data System (ADS)

    Singh, Jaskaran; Darpe, A. K.; Singh, S. P.

    2018-02-01

    Local damage in rolling element bearings usually generates periodic impulses in vibration signals. The severity, repetition frequency and the fault-excited resonance zone of these impulses are the key indicators for diagnosing bearing faults. In this paper, a methodology based on the over-complete rational dilation wavelet transform (ORDWT) is proposed, as it enjoys good shift invariance. ORDWT offers flexibility in partitioning the frequency spectrum to generate a number of subbands (filters) with diverse bandwidths. The selection of the optimal filter that perfectly overlaps with the bearing fault-excited resonance zone is based on the maximization of a proposed impulse detection measure, "Temporal energy operated auto correlated kurtosis". The proposed indicator is robust and consistent in evaluating the impulsiveness of fault signals in the presence of interfering vibration such as heavy background noise or sporadic shocks unrelated to the fault or normal operation. The structure of the proposed indicator enables it to be sensitive to fault severity. For enhanced fault classification, an autocorrelation of the energy time series of the signal filtered through the optimal subband is proposed. The application of the proposed methodology is validated on simulated and experimental data. The study shows that the performance of the proposed technique is more robust and consistent in comparison to the original fast kurtogram and wavelet kurtogram.

  12. Robust fault detection of turbofan engines subject to adaptive controllers via a Total Measurable Fault Information Residual (ToMFIR) technique.

    PubMed

    Chen, Wen; Chowdhury, Fahmida N; Djuric, Ana; Yeh, Chih-Ping

    2014-09-01

    This paper provides a new design of robust fault detection for turbofan engines with adaptive controllers. The critical issue is that the adaptive controllers can suppress the effects of faults so that the actual system outputs remain at their pre-specified values, making it difficult to detect faults/failures. To solve this problem, a Total Measurable Fault Information Residual (ToMFIR) technique with the aid of system transformation is adopted to detect faults in turbofan engines with adaptive controllers. This design is a ToMFIR-redundancy-based robust fault detection. The ToMFIR is first introduced and existing results are summarized. The detailed design process of the ToMFIRs is presented and a turbofan engine model is simulated to verify the effectiveness of the proposed ToMFIR-based fault-detection strategy. Copyright © 2013 ISA. Published by Elsevier Ltd. All rights reserved.

  13. A Robustness Testing Campaign for IMA-SP Partitioning Kernels

    NASA Astrophysics Data System (ADS)

    Grixti, Stephen; Lopez Trecastro, Jorge; Sammut, Nicholas; Zammit-Mangion, David

    2015-09-01

    With time and space partitioned architectures becoming increasingly appealing to the European space sector, the dependability of partitioning kernel technology is a key factor to its applicability in European Space Agency projects. This paper explores the potential of the data type fault model, which injects faults through the Application Program Interface, in partitioning kernel robustness testing. This fault injection methodology has been tailored to investigate its relevance in uncovering vulnerabilities within partitioning kernels and potentially contributing towards fault removal campaigns within this domain. This is demonstrated through a robustness testing case study of the XtratuM partitioning kernel for SPARC LEON3 processors. The robustness campaign exposed a number of vulnerabilities in XtratuM, exhibiting the potential benefits of using such a methodology for the robustness assessment of partitioning kernels.

  14. Robust Fault Detection for Aircraft Using Mixed Structured Singular Value Theory and Fuzzy Logic

    NASA Technical Reports Server (NTRS)

    Collins, Emmanuel G.

    2000-01-01

    The purpose of fault detection is to identify when a fault or failure has occurred in a system such as an aircraft or expendable launch vehicle. The faults may occur in sensors, actuators, structural components, etc. One of the primary approaches to model-based fault detection relies on analytical redundancy. That is, the output of a computer-based model (actually a state estimator) is compared with the sensor measurements of the actual system to determine when a fault has occurred. Unfortunately, the state estimator is based on an idealized mathematical description of the underlying plant that is never totally accurate. As a result of these modeling errors, false alarms can occur. This research uses mixed structured singular value theory, a relatively recent and powerful robustness analysis tool, to develop robust estimators and demonstrates the use of these estimators in fault detection. To allow qualitative human experience to be effectively incorporated into the detection process, fuzzy logic is used to predict the seriousness of the fault that has occurred.

  15. Control of large flexible space structures

    NASA Technical Reports Server (NTRS)

    Vandervelde, W. E.

    1986-01-01

    Progress in robust design of generalized parity relations, design of failure-sensitive observers using the geometric system theory of Wonham, computational techniques for evaluation of the performance of control systems with fault tolerance and redundancy management features, and the design and evaluation of control systems for structures having nonlinear joints is described.

  16. A hybrid robust fault tolerant control based on adaptive joint unscented Kalman filter.

    PubMed

    Shabbouei Hagh, Yashar; Mohammadi Asl, Reza; Cocquempot, Vincent

    2017-01-01

    In this paper, a new hybrid robust fault tolerant control scheme is proposed. A robust H∞ control law is used in the non-faulty situation, while a Non-Singular Terminal Sliding Mode (NTSM) controller is activated as soon as an actuator fault is detected. Since a linear robust controller is designed, the system is first linearized through the feedback linearization method. To switch from one controller to the other, a fuzzy based switching system is used. An Adaptive Joint Unscented Kalman Filter (AJUKF) is used for fault detection and diagnosis. The proposed method is based on the simultaneous estimation of the system states and parameters. In order to show the efficiency of the proposed scheme, a simulated 3-DOF robotic manipulator is used. Copyright © 2016 ISA. Published by Elsevier Ltd. All rights reserved.

  17. Reliability issues in active control of large flexible space structures

    NASA Technical Reports Server (NTRS)

    Vandervelde, W. E.

    1986-01-01

    Efforts in this reporting period were centered on four research tasks: design of failure detection filters for robust performance in the presence of modeling errors, design of generalized parity relations for robust performance in the presence of modeling errors, design of failure-sensitive observers using the geometric system theory of Wonham, and computational techniques for evaluation of the performance of control systems with fault tolerance and redundancy management features.

  18. A signal-based fault detection and classification method for heavy haul wagons

    NASA Astrophysics Data System (ADS)

    Li, Chunsheng; Luo, Shihui; Cole, Colin; Spiryagin, Maksym; Sun, Yanquan

    2017-12-01

    This paper proposes a signal-based fault detection and isolation (FDI) system for heavy haul wagons considering the special requirements of low cost and robustness. The sensor network of the proposed system consists of just two accelerometers mounted on the front left and rear right of the carbody. Seven fault indicators (FIs) are proposed based on the cross-correlation analyses of the sensor-collected acceleration signals. Bolster spring fault conditions are focused on in this paper, including two different levels (small faults and moderate faults) and two locations (faults in the left and right bolster springs of the first bogie). A fully detailed dynamic model of a typical 40t axle load heavy haul wagon is developed to evaluate the deterioration of dynamic behaviour under proposed fault conditions and demonstrate the detectability of the proposed FDI method. Even though the fault conditions considered in this paper did not deteriorate the wagon dynamic behaviour dramatically, the proposed FIs show great sensitivity to the bolster spring faults. The most effective and efficient FIs are chosen for fault detection and classification. Analysis results indicate that it is possible to detect changes in bolster stiffness of ±25% and identify the fault location.
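
A single cross-correlation-based fault indicator in the spirit of this scheme can be sketched as follows. The two synthetic corner signals (a shared bounce mode plus a roll component that enters the two corners with opposite sign) are hypothetical stand-ins for the accelerometer data, not the paper's seven FIs.

```python
# Illustrative sketch (invented signals): a bolster spring fault couples extra
# roll motion into the carbody corners with opposite sign, so the correlation
# between the front-left and rear-right accelerometers drops.
import math

def pearson(a, b):
    """Pearson correlation coefficient between two equal-length signals."""
    n = len(a)
    ma, mb = sum(a) / n, sum(b) / n
    cov = sum((x - ma) * (y - mb) for x, y in zip(a, b))
    va = math.sqrt(sum((x - ma) ** 2 for x in a))
    vb = math.sqrt(sum((y - mb) ** 2 for y in b))
    return cov / (va * vb)

def corner_signals(roll_amplitude):
    """Synthetic front-left / rear-right accelerations: shared bounce mode
    plus an antisymmetric roll component scaled by `roll_amplitude`."""
    t = [k / 200.0 for k in range(2000)]                          # 10 s at 200 Hz
    bounce = [math.sin(2 * math.pi * 1.5 * tk) for tk in t]       # shared mode
    roll = [roll_amplitude * math.sin(2 * math.pi * 0.7 * tk) for tk in t]
    front_left = [b + r for b, r in zip(bounce, roll)]
    rear_right = [b - r for b, r in zip(bounce, roll)]            # opposite sign
    return front_left, rear_right

if __name__ == "__main__":
    healthy = pearson(*corner_signals(roll_amplitude=0.0))
    faulty = pearson(*corner_signals(roll_amplitude=0.6))
    print(round(healthy, 3), round(faulty, 3))
```

Thresholding such an indicator against its healthy baseline is one way a low-cost two-sensor network can flag a suspension fault without any vehicle model.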

  19. Sliding Mode Approaches for Robust Control, State Estimation, Secure Communication, and Fault Diagnosis in Nuclear Systems

    NASA Astrophysics Data System (ADS)

    Ablay, Gunyaz

    Using traditional control methods for controller design, parameter estimation and fault diagnosis may lead to poor results with nuclear systems in practice because of approximations and uncertainties in the system models used, possibly resulting in unexpected plant unavailability. This experience has led to an interest in development of robust control, estimation and fault diagnosis methods. One particularly robust approach is the sliding mode control methodology. Sliding mode approaches have been of great interest and importance in industry and engineering in the recent decades due to their potential for producing economic, safe and reliable designs. In order to utilize these advantages, sliding mode approaches are implemented for robust control, state estimation, secure communication and fault diagnosis in nuclear plant systems. In addition, a sliding mode output observer is developed for fault diagnosis in dynamical systems. To validate the effectiveness of the methodologies, several nuclear plant system models are considered for applications, including point reactor kinetics, xenon concentration dynamics, an uncertain pressurizer model, a U-tube steam generator model and a coupled nonlinear nuclear reactor model.

  20. GPS Imaging of Time-Variable Earthquake Hazard: The Hilton Creek Fault, Long Valley California

    NASA Astrophysics Data System (ADS)

    Hammond, W. C.; Blewitt, G.

    2016-12-01

    The Hilton Creek Fault, in Long Valley, California is a down-to-the-east normal fault that bounds the eastern edge of the Sierra Nevada/Great Valley microplate, and lies half inside and half outside the magmatically active caldera. Despite the dense coverage with GPS networks, the rapid and time-variable surface deformation attributable to sporadic magmatic inflation beneath the resurgent dome makes it difficult to use traditional geodetic methods to estimate the slip rate of the fault. While geologic studies identify cumulative offset, constrain timing of past earthquakes, and constrain a Quaternary slip rate to within 1-5 mm/yr, it is not currently possible to use geologic data to evaluate how the potential for slip correlates with transient caldera inflation. To estimate time-variable seismic hazard of the fault we estimate its instantaneous slip rate from GPS data using a new set of algorithms for robust estimation of velocity and strain rate fields and fault slip rates. From the GPS time series, we use the robust MIDAS algorithm to obtain time series of velocity that are highly insensitive to the effects of seasonality, outliers and steps in the data. We then use robust imaging of the velocity field to estimate a gridded time-variable velocity field. Then we estimate fault slip rate at each time using a new technique that forms ad-hoc block representations that honor fault geometries, network complexity, and connectivity, but does not require labor-intensive drawing of block boundaries. The results are compared to other slip rate estimates that have implications for hazard over different time scales. Time-invariant long-term seismic hazard is proportional to the long-term slip rate accessible from geologic data. Contemporary time-invariant hazard, however, may differ from the long-term rate, and is estimated from the geodetic velocity field that has been corrected for the effects of magmatic inflation in the caldera using a published model of a dipping ellipsoidal magma chamber. Contemporary time-variable hazard can be estimated from the time-variable slip rate estimated from the evolving GPS velocity field.
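
The core of the MIDAS estimator mentioned above, the median of slopes between sample pairs separated by one year, can be sketched and compared with ordinary least squares. The monthly series below, with its seasonal cycle, step offset, and outlier magnitudes, is invented for illustration.

```python
# Hedged sketch of the MIDAS idea: because paired samples are exactly one
# year apart, the seasonal cycle cancels pair-by-pair, and taking the median
# of the pair slopes resists steps and outliers that bias least squares.
import math, statistics

def ols_slope(t, y):
    """Ordinary least-squares slope of y against t."""
    n = len(t)
    mt, my = sum(t) / n, sum(y) / n
    return (sum((a - mt) * (b - my) for a, b in zip(t, y))
            / sum((a - mt) ** 2 for a in t))

def midas_slope(t, y, separation=1.0):
    """Median of slopes between sample pairs exactly `separation` years apart
    (sampling is regular here, so pairing by index offset works)."""
    step = round(separation / (t[1] - t[0]))
    slopes = [(y[i + step] - y[i]) / (t[i + step] - t[i])
              for i in range(len(t) - step)]
    return statistics.median(slopes)

if __name__ == "__main__":
    true_rate = 3.0                        # mm/yr (invented)
    t = [k / 12.0 for k in range(72)]      # monthly samples over 6 years
    y = [true_rate * tk + 1.5 * math.sin(2 * math.pi * tk) for tk in t]
    y = [v + (4.0 if tk > 3.0 else 0.0) for v, tk in zip(y, t)]  # step at yr 3
    y[40] += 20.0                          # single outlier
    print(round(ols_slope(t, y), 2), round(midas_slope(t, y), 2))
```

Least squares absorbs the step into its trend estimate, while the median of one-year slopes recovers the underlying rate almost exactly, which is the insensitivity to "seasonality, outliers and steps" the abstract refers to.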

  1. Test plan. GCPS task 7, subtask 7.1: IHM development

    NASA Technical Reports Server (NTRS)

    Greenberg, H. S.

    1994-01-01

    The overall objective of Task 7 is to identify cost-effective life cycle integrated health management (IHM) approaches for a reusable launch vehicle's primary structure. Acceptable IHM approaches must: eliminate and accommodate faults through robust designs, identify optimum inspection/maintenance periods, automate ground and on-board test and check-out, and accommodate and detect structural faults by providing wide and localized area sensor and test coverage as required. These requirements are elements of our targeted primary structure low cost operations approach using airline-like maintenance by exception philosophies. This development plan will follow an evolutionary path paving the way to the ultimate development of flight-quality production, operations, and vehicle systems. This effort will be focused on maturing the recommended sensor technologies required for localized and wide area health monitoring to a technology readiness level (TRL) of 6 and to establish flight ready system design requirements. The following is a brief list of IHM program objectives: design out faults by analyzing material properties, structural geometry, and load and environment variables and identify failure modes and damage tolerance requirements; design in system robustness while meeting performance objectives (weight limitations) of the reusable launch vehicle primary structure; establish structural integrity margins to preclude the need for test and checkout and predict optimum inspection/maintenance periods through life prediction analysis; identify optimum fault protection system concept definitions combining system robustness and integrity margins established above with cost effective health monitoring technologies; and use coupons, panels, and integrated full scale primary structure test articles to identify, evaluate, and characterize the preferred NDE/NDI/IHM sensor technologies that will be a part of the fault protection system.

  2. Robustness analysis of elastoplastic structure subjected to double impulse

    NASA Astrophysics Data System (ADS)

    Kanno, Yoshihiro; Takewaki, Izuru

    2016-11-01

    The double impulse has extensively been used to evaluate the critical response of an elastoplastic structure against a pulse-type input, including near-fault earthquake ground motions. In this paper, we propose a robustness assessment method for elastoplastic single-degree-of-freedom structures subjected to the double impulse input. Uncertainties in the initial velocity of the input, as well as the natural frequency and the strength of the structure, are considered. As fundamental properties of the structural robustness, we show monotonicity of the robustness measure with respect to the natural frequency. In contrast, we show that robustness is not necessarily improved even if the structural strength is increased. Moreover, the robustness preference between two structures with different values of structural strength can possibly reverse when the performance requirement is changed.

  3. Robustness to Faults Promotes Evolvability: Insights from Evolving Digital Circuits

    PubMed Central

    Nolfi, Stefano

    2016-01-01

    We demonstrate how the need to cope with operational faults enables evolving circuits to find more fit solutions. The analysis of the results obtained in different experimental conditions indicates that, in the absence of faults, evolution tends to select circuits that are small and have low phenotypic variability and evolvability. The need to face operational faults, instead, drives evolution toward the selection of larger circuits that are truly robust with respect to genetic variations and that have a greater level of phenotypic variability and evolvability. Overall our results indicate that the need to cope with operational faults leads to the selection of circuits that have a greater probability to generate better circuits as a result of genetic variation with respect to a control condition in which circuits are not subjected to faults. PMID:27409589

  4. Generic, scalable and decentralized fault detection for robot swarms.

    PubMed

    Tarapore, Danesh; Christensen, Anders Lyhne; Timmis, Jon

    2017-01-01

    Robot swarms are large-scale multirobot systems with decentralized control, which means that each robot acts based only on local perception and on local coordination with neighboring robots. The decentralized approach to control confers a number of potential benefits. In particular, inherent scalability and robustness are often highlighted as key distinguishing features of robot swarms compared with systems that rely on traditional approaches to multirobot coordination. It has, however, been shown that swarm robotics systems are not always fault tolerant. To realize the robustness potential of robot swarms, it is thus essential to give systems the capacity to actively detect and accommodate faults. In this paper, we present a generic fault-detection system for robot swarms. We show how robots with limited and imperfect sensing capabilities are able to observe and classify the behavior of one another. In order to achieve this, the underlying classifier is an immune system-inspired algorithm that learns to distinguish between normal behavior and abnormal behavior online. Through a series of experiments, we systematically assess the performance of our approach in a detailed simulation environment. In particular, we analyze our system's capacity to correctly detect robots with faults, false positive rates, performance in a foraging task in which each robot exhibits a composite behavior, and performance under perturbations of the task environment. Results show that our generic fault-detection system is robust, that it is able to detect faults in a timely manner, and that it achieves a low false positive rate. The developed fault-detection system has the potential to enable long-term autonomy for robust multirobot systems, thus increasing the usefulness of robots for a diverse repertoire of upcoming applications in the area of distributed intelligent automation.

  5. Generic, scalable and decentralized fault detection for robot swarms

    PubMed Central

    Christensen, Anders Lyhne; Timmis, Jon

    2017-01-01

    Robot swarms are large-scale multirobot systems with decentralized control, which means that each robot acts based only on local perception and on local coordination with neighboring robots. The decentralized approach to control confers a number of potential benefits. In particular, inherent scalability and robustness are often highlighted as key distinguishing features of robot swarms compared with systems that rely on traditional approaches to multirobot coordination. It has, however, been shown that swarm robotics systems are not always fault tolerant. To realize the robustness potential of robot swarms, it is thus essential to give systems the capacity to actively detect and accommodate faults. In this paper, we present a generic fault-detection system for robot swarms. We show how robots with limited and imperfect sensing capabilities are able to observe and classify the behavior of one another. To achieve this, the underlying classifier is an immune system-inspired algorithm that learns to distinguish between normal behavior and abnormal behavior online. Through a series of experiments, we systematically assess the performance of our approach in a detailed simulation environment. In particular, we analyze our system's capacity to correctly detect robots with faults, false positive rates, performance in a foraging task in which each robot exhibits a composite behavior, and performance under perturbations of the task environment. Results show that our generic fault-detection system is robust, that it is able to detect faults in a timely manner, and that it achieves a low false positive rate. The developed fault-detection system has the potential to enable long-term autonomy for robust multirobot systems, thus increasing the usefulness of robots for a diverse repertoire of upcoming applications in the area of distributed intelligent automation. PMID:28806756

  6. Sliding Mode Fault Tolerant Control with Adaptive Diagnosis for Aircraft Engines

    NASA Astrophysics Data System (ADS)

    Xiao, Lingfei; Du, Yanbin; Hu, Jixiang; Jiang, Bin

    2018-03-01

    In this paper, a novel sliding mode fault tolerant control method is presented for aircraft engine systems with uncertainties and disturbances, on the basis of an adaptive diagnostic observer. By taking both sensor faults and actuator faults into account, the general model of aircraft engine control systems subjected to uncertainties and disturbances is considered. Then, the corresponding augmented dynamic model is established in order to facilitate the fault diagnosis and fault tolerant controller design. Next, a suitable detection observer is designed to detect the faults effectively. Through creating an adaptive diagnostic observer and based on a sliding mode strategy, the sliding mode fault tolerant controller is constructed. Robust stabilization is discussed, and it is shown that the closed-loop system can be stabilized robustly. It is also proven that the adaptive diagnostic observer output errors and the estimations of faults converge to a set exponentially, with a convergence rate greater than a value that can be adjusted by choosing the design parameters properly. The simulation on a twin-shaft aircraft engine verifies the applicability of the proposed fault tolerant control method.

  7. Robust Fault Detection for Switched Fuzzy Systems With Unknown Input.

    PubMed

    Han, Jian; Zhang, Huaguang; Wang, Yingchun; Sun, Xun

    2017-10-03

    This paper investigates the fault detection problem for a class of switched nonlinear systems in the T-S fuzzy framework. An unknown input is considered in the systems. A novel fault detection unknown input observer design method is proposed. Based on the proposed observer, the unknown input can be removed from the fault detection residual. The weighted H∞ performance level is considered to ensure robustness. In addition, the weighted H₋ performance level is introduced, which can increase the sensitivity of the proposed detection method. To verify the proposed scheme, a numerical simulation example and an electromechanical system simulation example are provided at the end of this paper.

  8. Robust adaptive fault-tolerant control for leader-follower flocking of uncertain multi-agent systems with actuator failure.

    PubMed

    Yazdani, Sahar; Haeri, Mohammad

    2017-11-01

    In this work, we study the flocking problem of multi-agent systems with uncertain dynamics subject to actuator failure and external disturbances. Under some standard assumptions, we propose a robust adaptive fault tolerant protocol that compensates for actuator bias faults, partial loss of actuator effectiveness, model uncertainties, and external disturbances. Under the designed protocol, convergence of the agents' velocities to that of the virtual leader is guaranteed, while connectivity preservation of the network and collision avoidance among agents are ensured as well. Copyright © 2017 ISA. Published by Elsevier Ltd. All rights reserved.

  9. Simultaneous Event-Triggered Fault Detection and Estimation for Stochastic Systems Subject to Deception Attacks.

    PubMed

    Li, Yunji; Wu, QingE; Peng, Li

    2018-01-23

    In this paper, a synthesized design of a fault-detection filter and a fault estimator is considered for a class of discrete-time stochastic systems in the framework of an event-triggered transmission scheme subject to unknown disturbances and deception attacks. A random variable obeying the Bernoulli distribution is employed to characterize the randomly occurring deception attacks. To ensure that the fault-detection residual is sensitive only to faults while remaining robust to disturbances, a coordinate transformation approach is exploited. This approach transforms the considered system into two subsystems, with the unknown disturbances removed from one of them. The gain of the fault-detection filter is derived by minimizing an upper bound on the filter error covariance. Meanwhile, system faults can be reconstructed by the remote fault estimator. A recursive approach is developed to obtain the fault estimator gains as well as to guarantee the fault estimator performance. Furthermore, the corresponding event-triggered sensor data transmission scheme is also presented for improving the working life of the wireless sensor node when measurement information is transmitted aperiodically. Finally, a scaled version of an industrial system consisting of a local PC, a remote estimator and a wireless sensor node is used to experimentally evaluate the proposed theoretical results. In particular, a novel fault-alarming strategy is proposed so that the real-time capacity of fault detection is guaranteed when the event condition is triggered.
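
    A minimal sketch of two of the ingredients named above, the event-triggered transmission rule and the Bernoulli-distributed deception attack, may help; the deviation threshold, the additive attack bias, and all names here are illustrative assumptions, not the paper's actual design:

```python
import random

def event_triggered_stream(measurements, delta, attack_prob, seed=0):
    """Transmit a sample only when it deviates from the last transmitted
    value by more than `delta`; each transmission is corrupted with
    probability `attack_prob` (a Bernoulli-distributed deception attack)."""
    rng = random.Random(seed)
    sent, last = [], None
    for y in measurements:
        if last is None or abs(y - last) > delta:   # event condition
            attacked = rng.random() < attack_prob   # Bernoulli variable
            value = y + 5.0 if attacked else y      # hypothetical bias attack
            sent.append((value, attacked))
            last = y
    return sent
```

    Only samples that cross the threshold are transmitted, which is the mechanism that extends the working life of the wireless sensor node.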

  10. Active Fault Tolerant Control for Ultrasonic Piezoelectric Motor

    NASA Astrophysics Data System (ADS)

    Boukhnifer, Moussa

    2012-07-01

    Ultrasonic piezoelectric motor technology is an important system component in integrated mechatronic devices working under extreme operating conditions. Due to these constraints, the robustness and performance of the control interfaces should be taken into account in the motor design. In this paper, we apply a new architecture for fault tolerant control using Youla parameterization to an ultrasonic piezoelectric motor. The distinguishing feature of the proposed controller architecture is that it shows structurally how the controller design for performance and robustness may be done separately, which has the potential to overcome the conflict between performance and robustness in the traditional feedback framework. The fault tolerant control architecture includes two parts: one part for performance and the other for robustness. The controller design works in such a way that the feedback control system is solely controlled by the proportional plus double-integral (PI2) performance controller for a nominal model without disturbances, and the H∞ robustification controller is only activated in the presence of uncertainties or external disturbances. The simulation results demonstrate the effectiveness of the proposed fault tolerant control architecture.

  11. Robust Fault Diagnosis in Electric Drives Using Machine Learning

    DTIC Science & Technology

    2004-09-08

    detection of fault conditions of the inverter. A machine learning framework is developed to systematically select torque-speed domain operation points...were used to generate various fault condition data for machine learning. The technique is viable for accurate, reliable and fast fault detection in electric drives.

  12. Partial and total actuator faults accommodation for input-affine nonlinear process plants.

    PubMed

    Mihankhah, Amin; Salmasi, Farzad R; Salahshoor, Karim

    2013-05-01

    In this paper, a new fault-tolerant control system is proposed for input-affine nonlinear plants based on the Model Reference Adaptive System (MRAS) structure. The proposed method has the capability to accommodate both partial and total actuator failures along with bounded external disturbances. In this methodology, the conventional MRAS control law is modified by augmenting two compensating terms. One of these terms is added to eliminate the nonlinear dynamics, while the other compensates for the disruptive effects of total actuator faults and external disturbances. In addition, no Fault Detection and Diagnosis (FDD) unit is needed in the proposed method. Moreover, the control structure has good robustness against parameter variations. The performance of this scheme is evaluated using a CSTR system, and the results were satisfactory. Copyright © 2013 ISA. Published by Elsevier Ltd. All rights reserved.

  13. Fault Diagnosis of Demountable Disk-Drum Aero-Engine Rotor Using Customized Multiwavelet Method.

    PubMed

    Chen, Jinglong; Wang, Yu; He, Zhengjia; Wang, Xiaodong

    2015-10-23

    The demountable disk-drum aero-engine rotor is an important piece of equipment that greatly impacts the safe operation of aircraft. However, assembly looseness or crack faults have led to several unscheduled breakdowns and serious accidents. Thus, condition monitoring and fault diagnosis techniques are required for identifying abnormal conditions. A customized ensemble multiwavelet method for aero-engine rotor condition identification, using measured vibration data, is developed in this paper. First, a customized multiwavelet basis function with strong adaptivity is constructed via a symmetric multiwavelet lifting scheme. Then the vibration signal is processed by the customized ensemble multiwavelet transform. Next, the normalized information entropy of the multiwavelet decomposition coefficients is computed to directly reflect and evaluate the condition. The proposed approach is first applied to fault detection of an experimental aero-engine rotor. Finally, the proposed approach is used in an engineering application, where it successfully identified the crack fault of a demountable disk-drum aero-engine rotor. The results show that the proposed method possesses excellent performance in fault detection of aero-engine rotors. Moreover, the robustness of the multiwavelet method against noise is also tested and verified by simulation and field experiments.
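
    The condition indicator named above, normalized information entropy of decomposition coefficients, is simple enough to sketch on its own; the multiwavelet decomposition is assumed to be supplied elsewhere, so only the entropy step is shown:

```python
import numpy as np

def normalized_entropy(coeffs):
    """Normalized information entropy of decomposition coefficients.
    Evenly spread energy gives a value near 1; energy concentrated in a
    few coefficients (an impulsive fault signature) gives a value near 0."""
    energy = np.abs(np.asarray(coeffs, dtype=float)) ** 2
    p = energy / energy.sum()
    h = -(p * np.log(p + 1e-12)).sum()
    return h / np.log(len(p))
```

    Tracking this scalar over time gives a single-number condition indicator per vibration record.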

  14. Online Fault Detection of Permanent Magnet Demagnetization for IPMSMs by Nonsingular Fast Terminal-Sliding-Mode Observer

    PubMed Central

    Zhao, Kai-Hui; Chen, Te-Fang; Zhang, Chang-Fan; He, Jing; Huang, Gang

    2014-01-01

    To prevent irreversible demagnetization of a permanent magnet (PM) for interior permanent magnet synchronous motors (IPMSMs) by flux-weakening control, a robust PM flux-linkage nonsingular fast terminal-sliding-mode observer (NFTSMO) is proposed to detect demagnetization faults. First, the IPMSM mathematical model of demagnetization is presented. Second, the construction of the NFTSMO to estimate PM demagnetization faults in IPMSM is described, and a proof of observer stability is given. The fault decision criteria and fault-processing method are also presented. Finally, the proposed scheme was simulated using MATLAB/Simulink and implemented on the RT-LAB platform. A number of robustness tests have been carried out. The scheme shows good performance in spite of speed fluctuations, torque ripples and the uncertainties of stator resistance. PMID:25490582

  15. Online fault detection of permanent magnet demagnetization for IPMSMs by nonsingular fast terminal-sliding-mode observer.

    PubMed

    Zhao, Kai-Hui; Chen, Te-Fang; Zhang, Chang-Fan; He, Jing; Huang, Gang

    2014-12-05

    To prevent irreversible demagnetization of a permanent magnet (PM) for interior permanent magnet synchronous motors (IPMSMs) by flux-weakening control, a robust PM flux-linkage nonsingular fast terminal-sliding-mode observer (NFTSMO) is proposed to detect demagnetization faults. First, the IPMSM mathematical model of demagnetization is presented. Second, the construction of the NFTSMO to estimate PM demagnetization faults in IPMSM is described, and a proof of observer stability is given. The fault decision criteria and fault-processing method are also presented. Finally, the proposed scheme was simulated using MATLAB/Simulink and implemented on the RT-LAB platform. A number of robustness tests have been carried out. The scheme shows good performance in spite of speed fluctuations, torque ripples and the uncertainties of stator resistance.

  16. Model-based design and experimental verification of a monitoring concept for an active-active electromechanical aileron actuation system

    NASA Astrophysics Data System (ADS)

    Arriola, David; Thielecke, Frank

    2017-09-01

    Electromechanical actuators have become a key technology for the onset of power-by-wire flight control systems in the next generation of commercial aircraft. The design of robust control and monitoring functions for these devices, capable of mitigating the effects of safety-critical faults, is essential in order to achieve the required level of fault tolerance. A primary flight control system comprising two electromechanical actuators nominally operating in active-active mode is considered. A set of five signal-based monitoring functions are designed using a detailed model of the system under consideration, which includes non-linear parasitic effects, measurement and data acquisition effects, and actuator faults. Robust detection thresholds are determined based on the analysis of parametric and input uncertainties. The designed monitoring functions are verified by simulation and experimentally through the injection of faults in the validated model and in a test-rig suited to the actuation system under consideration, respectively. They guarantee robust and efficient fault detection and isolation with a low risk of false alarms, additionally enabling the correct reconfiguration of the system for enhanced operational availability. In 98% of the performed experiments and simulations, the correct faults were detected and confirmed within the time objectives set.

  17. Assessment on the influence of resistive superconducting fault current limiter in VSC-HVDC system

    NASA Astrophysics Data System (ADS)

    Lee, Jong-Geon; Khan, Umer Amir; Hwang, Jae-Sang; Seong, Jae-Kyu; Shin, Woo-Ju; Park, Byung-Bae; Lee, Bang-Wook

    2014-09-01

    Due to a lower risk of commutation failures, harmonic occurrences and reactive power consumption, the Voltage Source Converter (VSC) based HVDC system is regarded as the optimal HVDC solution for the future power grid. However, the absence of suitable fault protection devices for HVDC systems hinders efficient VSC-HVDC power grid design. In order to enhance the reliability of the VSC-HVDC power grid against fault current problems, the application of resistive Superconducting Fault Current Limiters (SFCLs) could be considered. SFCLs could also be applied to VSC-HVDC systems with integrated AC power systems in order to enhance the transient response and the robustness of the system. In this paper, in order to evaluate the role of SFCLs in VSC-HVDC systems and to determine their suitable positions in VSC-HVDC power systems integrated with an AC power system, a simulation model based on the Korea Jeju-Haenam HVDC power system was designed in Matlab Simulink/SimPowerSystems. This model was composed of a VSC-HVDC system connected with an AC microgrid. Utilizing the designed VSC-HVDC system, the feasible locations of resistive SFCLs were evaluated when DC line-to-line, DC line-to-ground and three-phase AC faults occurred. Consequently, it was found that the simulation model was effective for evaluating the positive effects of resistive SFCLs on the suppression of fault currents in VSC-HVDC systems as well as in integrated AC systems. Finally, the optimum locations of SFCLs in VSC-HVDC transmission systems were suggested based on the simulation results.

  18. Robust sensor fault detection and isolation of gas turbine engines subjected to time-varying parameter uncertainties

    NASA Astrophysics Data System (ADS)

    Pourbabaee, Bahareh; Meskin, Nader; Khorasani, Khashayar

    2016-08-01

    In this paper, a novel robust sensor fault detection and isolation (FDI) strategy using the multiple-model (MM) approach is proposed that remains robust with respect to both time-varying parameter uncertainties and process and measurement noise in all the channels. The scheme is composed of robust Kalman filters (RKFs) constructed for multiple piecewise linear (PWL) models at various operating points of an uncertain nonlinear system. The parameter uncertainty is modeled using a time-varying norm-bounded admissible structure that affects all the PWL state space matrices. The robust Kalman filter gain matrices are designed by solving two algebraic Riccati equations (AREs) that are expressed as two linear matrix inequality (LMI) feasibility conditions. The proposed multiple RKF-based FDI scheme is simulated for a single spool gas turbine engine to diagnose various sensor faults despite the presence of parameter uncertainties and process and measurement noise. Our comparative studies confirm the superiority of the proposed FDI method over the methods available in the literature.
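
    A drastically simplified scalar illustration of the multiple-model idea (not the paper's LMI-based robust design): run one Kalman filter per fault hypothesis and select the hypothesis whose innovations carry the least energy. All dynamics and noise parameters below are invented for illustration:

```python
def mm_fdi(measurements, biases, a=0.9, q=1e-3, r=1e-2):
    """Bank of scalar Kalman filters, one per hypothesized sensor bias;
    returns the index of the hypothesis with minimal residual energy."""
    energies = []
    for b in biases:
        x, p, e = 0.0, 1.0, 0.0
        for y in measurements:
            x, p = a * x, a * a * p + q        # time update
            res = (y - b) - x                  # bias-corrected innovation
            k = p / (p + r)                    # Kalman gain
            x, p = x + k * res, (1 - k) * p    # measurement update
            e += res ** 2
        energies.append(e)
    return energies.index(min(energies))
```

    With a sensor stuck at a constant offset, the filter whose hypothesized bias matches the data produces near-zero innovations and is selected.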

  19. A soft computing scheme incorporating ANN and MOV energy in fault detection, classification and distance estimation of EHV transmission line with FSC.

    PubMed

    Khadke, Piyush; Patne, Nita; Singh, Arvind; Shinde, Gulab

    2016-01-01

    In this article, a novel and accurate scheme for fault detection, classification and fault distance estimation for a fixed series compensated transmission line is proposed. The proposed scheme is based on an artificial neural network (ANN) and metal oxide varistor (MOV) energy, employing the Levenberg-Marquardt training algorithm. The novelty of this scheme is the use of the MOV energy signals of the fixed series capacitors (FSC) as input to train the ANN; such an approach has not been used in earlier fault analysis algorithms in the last few decades. The proposed scheme uses only single-end measurements of the MOV energy signals in all three phases over one cycle from the occurrence of a fault. These MOV energy signals are then fed as input to the ANN for fault distance estimation. The feasibility and reliability of the proposed scheme have been evaluated for all ten fault types in a test power system model at different fault inception angles over numerous fault locations. Real transmission system parameters of the 3-phase 400 kV Wardha-Aurangabad transmission line (400 km) with 40% FSC at the Power Grid Wardha Substation, India are considered for this research. Extensive simulation experiments show that the proposed scheme provides quite accurate results, demonstrating a complete protection scheme with high accuracy, simplicity and robustness.

  20. Fault Diagnosis of Demountable Disk-Drum Aero-Engine Rotor Using Customized Multiwavelet Method

    PubMed Central

    Chen, Jinglong; Wang, Yu; He, Zhengjia; Wang, Xiaodong

    2015-01-01

    The demountable disk-drum aero-engine rotor is an important piece of equipment that greatly impacts the safe operation of aircraft. However, assembly looseness or crack faults have led to several unscheduled breakdowns and serious accidents. Thus, condition monitoring and fault diagnosis techniques are required for identifying abnormal conditions. A customized ensemble multiwavelet method for aero-engine rotor condition identification, using measured vibration data, is developed in this paper. First, a customized multiwavelet basis function with strong adaptivity is constructed via a symmetric multiwavelet lifting scheme. Then the vibration signal is processed by the customized ensemble multiwavelet transform. Next, the normalized information entropy of the multiwavelet decomposition coefficients is computed to directly reflect and evaluate the condition. The proposed approach is first applied to fault detection of an experimental aero-engine rotor. Finally, the proposed approach is used in an engineering application, where it successfully identified the crack fault of a demountable disk-drum aero-engine rotor. The results show that the proposed method possesses excellent performance in fault detection of aero-engine rotors. Moreover, the robustness of the multiwavelet method against noise is also tested and verified by simulation and field experiments. PMID:26512668

  1. The reflection of evolving bearing faults in the stator current's extended park vector approach for induction machines

    NASA Astrophysics Data System (ADS)

    Corne, Bram; Vervisch, Bram; Derammelaere, Stijn; Knockaert, Jos; Desmet, Jan

    2018-07-01

    Stator current analysis has the potential of becoming the most cost-effective condition monitoring technology for electric rotating machinery. Since both electrical and mechanical faults are detected by inexpensive and robust current sensors, measuring current is advantageous over other techniques such as vibration, acoustic or temperature analysis. However, this technology is struggling to break into the condition monitoring market, as the electrical interpretation of mechanical machine problems is highly complicated. Recently, the authors built a test-rig which facilitates the emulation of several representative mechanical faults on an 11 kW induction machine with high accuracy and reproducibility. Operating this test-rig, the stator current of the induction machine under test can be analyzed while mechanical faults are emulated. Furthermore, while emulating, the fault severity can be manipulated adaptively under controllable environmental conditions. This creates the opportunity to examine the relation between the magnitude of the well-known current fault components and the corresponding fault severity. This paper presents the emulation of evolving bearing faults and their reflection in the Extended Park Vector Approach for the 11 kW induction machine under test. The results confirm the strong relation between the bearing faults and the stator current fault components in both identification and fault severity. In conclusion, these results strengthen the case for stator current analysis as a complete, robust, on-line condition monitoring technology.
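
    The Extended Park Vector Approach itself is compact enough to sketch: transform the three phase currents into the Park vector and inspect the spectrum of its modulus, where fault-related amplitude modulation appears as low-frequency spectral lines. The transform constants are the standard power-invariant form; the signal parameters used in any test are invented:

```python
import numpy as np

def epva_spectrum(ia, ib, ic, fs):
    """Spectrum of the current Park vector modulus (Extended Park
    Vector Approach); amplitude modulation of the phase currents shows
    up directly as spectral lines in the modulus."""
    i_d = np.sqrt(2 / 3) * ia - ib / np.sqrt(6) - ic / np.sqrt(6)
    i_q = (ib - ic) / np.sqrt(2)
    modulus = np.hypot(i_d, i_q)
    modulus -= modulus.mean()                    # drop the DC component
    spectrum = np.abs(np.fft.rfft(modulus)) / len(modulus)
    freqs = np.fft.rfftfreq(len(modulus), 1 / fs)
    return freqs, spectrum
```

    For a balanced sinusoidal supply the modulus is constant, so any line in this spectrum is a direct fault indicator.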

  2. Rigorously modeling self-stabilizing fault-tolerant circuits: An ultra-robust clocking scheme for systems-on-chip.

    PubMed

    Dolev, Danny; Függer, Matthias; Posch, Markus; Schmid, Ulrich; Steininger, Andreas; Lenzen, Christoph

    2014-06-01

    We present the first implementation of a distributed clock generation scheme for Systems-on-Chip that recovers from an unbounded number of arbitrary transient faults despite a large number of arbitrary permanent faults. We devise self-stabilizing hardware building blocks and a hybrid synchronous/asynchronous state machine enabling metastability-free transitions of the algorithm's states. We provide a comprehensive modeling approach that permits proving, given correctness of the constructed low-level building blocks, the high-level properties of the synchronization algorithm (which have been established in a more abstract model). We believe this approach to be of interest in its own right, since it is the first technique that permits mathematically verifying, at manageable complexity, high-level properties of a fault-prone system in terms of its very basic components. We evaluate a prototype implementation, which has been designed in VHDL, using the Petrify tool in conjunction with some extensions, and synthesized for an Altera Cyclone FPGA.

  3. Rigorously modeling self-stabilizing fault-tolerant circuits: An ultra-robust clocking scheme for systems-on-chip

    PubMed Central

    Dolev, Danny; Függer, Matthias; Posch, Markus; Schmid, Ulrich; Steininger, Andreas; Lenzen, Christoph

    2014-01-01

    We present the first implementation of a distributed clock generation scheme for Systems-on-Chip that recovers from an unbounded number of arbitrary transient faults despite a large number of arbitrary permanent faults. We devise self-stabilizing hardware building blocks and a hybrid synchronous/asynchronous state machine enabling metastability-free transitions of the algorithm's states. We provide a comprehensive modeling approach that permits proving, given correctness of the constructed low-level building blocks, the high-level properties of the synchronization algorithm (which have been established in a more abstract model). We believe this approach to be of interest in its own right, since it is the first technique that permits mathematically verifying, at manageable complexity, high-level properties of a fault-prone system in terms of its very basic components. We evaluate a prototype implementation, which has been designed in VHDL, using the Petrify tool in conjunction with some extensions, and synthesized for an Altera Cyclone FPGA. PMID:26516290

  4. Second-order sliding mode control for DFIG-based wind turbines fault ride-through capability enhancement.

    PubMed

    Benbouzid, Mohamed; Beltran, Brice; Amirat, Yassine; Yao, Gang; Han, Jingang; Mangel, Hervé

    2014-05-01

    This paper deals with the fault ride-through capability assessment of a doubly fed induction generator-based wind turbine using high-order sliding mode control. It has recently been suggested that sliding mode control is a solution of choice for the fault ride-through problem. In this context, this paper proposes a second-order sliding mode as an improved solution that handles the classical sliding mode chattering problem. Indeed, the main and attractive features of high-order sliding modes are robustness against external disturbances, grid faults in particular, and chattering-free behavior (no extra mechanical stress on the wind turbine drive train). Simulations using the NREL FAST code on a 1.5-MW wind turbine are carried out to evaluate the ride-through performance of the proposed high-order sliding mode control strategy in case of grid frequency variations and unbalanced voltage sags. Copyright © 2014 ISA. Published by Elsevier Ltd. All rights reserved.
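
    The classic second-order sliding mode algorithm used in this role is the super-twisting law; a toy sketch on a first-order plant with a constant matched disturbance is below (the gains, disturbance, and step size are illustrative assumptions, not the paper's wind-turbine design):

```python
import math

def super_twisting(s0, d=0.5, lam=1.5, alpha=1.1, dt=1e-3, steps=20000):
    """Drive s' = u + d to zero with the super-twisting algorithm
    u = -lam*sqrt(|s|)*sign(s) + v,  v' = -alpha*sign(s).
    The integrated discontinuous term v absorbs the disturbance, so u
    itself stays continuous (chattering-free behavior)."""
    s, v = s0, 0.0
    for _ in range(steps):
        sgn = (s > 0) - (s < 0)
        u = -lam * math.sqrt(abs(s)) * sgn + v
        v -= alpha * sgn * dt       # integral of the discontinuous term
        s += (u + d) * dt           # Euler step of the perturbed plant
    return s
```

    Because only the integral of the sign function reaches the plant, the control signal is continuous, which is exactly the property that avoids extra mechanical stress on the drive train.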

  5. Late Quaternary offset of alluvial fan surfaces along the Central Sierra Madre Fault, southern California

    USGS Publications Warehouse

    Burgette, Reed J.; Hanson, Austin; Scharer, Katherine M.; Midttun, Nikolas

    2016-01-01

    The Sierra Madre Fault is a reverse fault system along the southern flank of the San Gabriel Mountains near Los Angeles, California. This study focuses on the Central Sierra Madre Fault (CSMF) in an effort to provide numeric dating on surfaces with ages previously estimated from soil development alone. We have refined previous geomorphic mapping conducted in the western portion of the CSMF near Pasadena, CA, with the aid of new lidar data. This progress report focuses on our geochronology strategy employed in collecting samples and interpreting data to determine a robust suite of terrace surface ages. Sample sites for terrestrial cosmogenic nuclide and luminescence dating techniques were selected to be redundant and to be validated through relative geomorphic relationships between inset terrace levels. Additional sample sites were selected to evaluate the post-abandonment histories of terrace surfaces. We will combine lidar-derived displacement data with surface ages to estimate slip rates for the CSMF.

  6. Topographically driven groundwater flow and the San Andreas heat flow paradox revisited

    USGS Publications Warehouse

    Saffer, D.M.; Bekins, B.A.; Hickman, S.

    2003-01-01

    Evidence for a weak San Andreas Fault includes (1) borehole heat flow measurements that show no evidence for a frictionally generated heat flow anomaly and (2) the inferred orientation of σ1 nearly perpendicular to the fault trace. Interpretations of the stress orientation data remain controversial, at least in close proximity to the fault, leading some researchers to hypothesize that the San Andreas Fault is, in fact, strong and that its thermal signature may be removed or redistributed by topographically driven groundwater flow in areas of rugged topography, such as typify the San Andreas Fault system. To evaluate this scenario, we use a steady state, two-dimensional model of coupled heat and fluid flow within cross sections oriented perpendicular to the fault and to the primary regional topography. Our results show that existing heat flow data near Parkfield, California, do not readily discriminate between the expected thermal signature of a strong fault and that of a weak fault. In contrast, for a wide range of groundwater flow scenarios in the Mojave Desert, models that include frictional heat generation along a strong fault are inconsistent with existing heat flow data, suggesting that the San Andreas Fault at this location is indeed weak. In both areas, comparison of modeling results and heat flow data suggest that advective redistribution of heat is minimal. The robust results for the Mojave region demonstrate that topographically driven groundwater flow, at least in two dimensions, is inadequate to obscure the frictionally generated heat flow anomaly from a strong fault. However, our results do not preclude the possibility of transient advective heat transport associated with earthquakes.

  7. Robust fault detection of wind energy conversion systems based on dynamic neural networks.

    PubMed

    Talebi, Nasser; Sadrnia, Mohammad Ali; Darabi, Ahmad

    2014-01-01

    The occurrence of faults in wind energy conversion systems (WECSs) is inevitable. In order to detect faults at the appropriate time, avoid heavy economic losses, ensure safe system operation, prevent damage to adjacent systems, and facilitate timely repair of failed components, a fault detection system (FDS) is required. Recurrent neural networks (RNNs) have gained a noticeable position in FDSs and have been widely used for modeling complex dynamical systems. One method for designing an FDS is to prepare a dynamic neural model emulating the normal system behavior. By comparing the outputs of the real system and the neural model, the incidence of faults can be identified. In this paper, by utilizing a comprehensive dynamic model which contains both the mechanical and electrical components of the WECS, an FDS based on dynamic RNNs is suggested. The presented FDS detects faults of the generator's angular velocity sensor, the pitch angle sensors, and the pitch actuators. Robustness of the FDS is achieved by employing an adaptive threshold. Simulation results show that the proposed scheme is capable of detecting faults quickly and has very low false and missed alarm rates.
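
    The adaptive threshold mentioned above can be sketched independently of the neural model: compare the plant/model residual against a bound that tracks recent residual variability, so the threshold widens under uncertainty instead of raising false alarms. The window length and gains below are illustrative assumptions:

```python
import statistics

def adaptive_threshold_fd(residuals, window=10, base=0.1, k=3.0):
    """Return one boolean alarm per residual sample; the threshold is a
    fixed floor plus k times the moving standard deviation of the most
    recent residuals."""
    alarms = []
    for i, r in enumerate(residuals):
        recent = residuals[max(0, i - window):i] or [0.0]
        threshold = base + k * statistics.pstdev(recent)
        alarms.append(abs(r) > threshold)
    return alarms
```

    A fixed threshold tuned for quiet conditions would trip on noisy-but-healthy data; tying the bound to recent variability is what keeps the false alarm rate low.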

  8. Robust Fault Detection of Wind Energy Conversion Systems Based on Dynamic Neural Networks

    PubMed Central

    Talebi, Nasser; Sadrnia, Mohammad Ali; Darabi, Ahmad

    2014-01-01

    Occurrence of faults in wind energy conversion systems (WECSs) is inevitable. To detect faults at the appropriate time, avoid heavy economic losses, ensure safe system operation, prevent damage to adjacent systems, and facilitate timely repair of failed components, a fault detection system (FDS) is required. Recurrent neural networks (RNNs) have gained a noticeable position in FDSs and have been widely used for modeling complex dynamical systems. One method for designing an FDS is to prepare a dynamic neural model that emulates the normal system behavior; by comparing the outputs of the real system and the neural model, the incidence of faults can be identified. In this paper, utilizing a comprehensive dynamic model that contains both the mechanical and electrical components of the WECS, an FDS based on dynamic RNNs is proposed. The presented FDS detects faults of the generator's angular velocity sensor, the pitch angle sensors, and the pitch actuators. Robustness of the FDS is achieved by employing an adaptive threshold. Simulation results show that the proposed scheme detects faults promptly and has very low false and missed alarm rates. PMID:24744774

  9. Robustness Analysis of Integrated LPV-FDI Filters and LTI-FTC System for a Transport Aircraft

    NASA Technical Reports Server (NTRS)

    Khong, Thuan H.; Shin, Jong-Yeob

    2007-01-01

    This paper proposes a framework for robustness analysis of a nonlinear dynamic system that can be represented by a polynomial linear parameter varying (PLPV) system with constant bounded uncertainty. The proposed analysis framework contains three key tools: 1) a function substitution method that can convert a nonlinear system in polynomial form into a PLPV system, 2) a matrix-based linear fractional transformation (LFT) modeling approach that can convert a PLPV system into an LFT system whose delta block includes the key uncertainty and scheduling parameters, and 3) mu-analysis, a well-known robustness analysis tool for linear systems. The proposed framework is applied to evaluating the performance of the LPV fault detection and isolation (FDI) filters of the closed-loop system of a transport aircraft in the presence of unmodeled actuator dynamics and sensor gain uncertainty. The robustness analysis results are compared with nonlinear time simulations.

  10. Robust fault-tolerant tracking control design for spacecraft under control input saturation.

    PubMed

    Bustan, Danyal; Pariz, Naser; Sani, Seyyed Kamal Hosseini

    2014-07-01

    In this paper, a continuous globally stable tracking control algorithm is proposed for a spacecraft in the presence of unknown actuator failures, control input saturation, uncertainty in the inertia matrix, and external disturbances. The design method is based on variable structure control and has the following properties: (1) fast and accurate response in the presence of bounded disturbances; (2) robustness to partial loss of actuator effectiveness; (3) explicit consideration of control input saturation; and (4) robustness to uncertainty in the inertia matrix. In contrast to traditional fault-tolerant control methods, the proposed controller does not require knowledge of the actuator faults and is implemented without explicit fault detection and isolation processes. In the proposed controller a single parameter is adjusted dynamically in such a way that both the attitude and angular velocity errors can be proven to tend to zero asymptotically. The stability proof is based on a Lyapunov analysis and the properties of the singularity-free quaternion representation of spacecraft dynamics. Numerical simulations show that the proposed controller achieves high attitude performance in the presence of external disturbances, actuator failures, and control input saturation. Copyright © 2014 ISA. Published by Elsevier Ltd. All rights reserved.

  11. Prescribed-performance fault-tolerant control for feedback linearisable systems with an aircraft application

    NASA Astrophysics Data System (ADS)

    Gao, Gang; Wang, Jinzhi; Wang, Xianghua

    2017-05-01

    This paper investigates fault-tolerant control (FTC) for feedback linearisable systems (FLSs) and its application to an aircraft. To ensure desired transient and steady-state behaviours of the tracking error under actuator faults, the dynamic effect caused by the actuator failures on the error dynamics of a transformed model is analysed, and three control strategies are designed. The first FTC strategy is proposed as a robust controller, which relies on the explicit information about several parameters of the actuator faults. To eliminate the need for these parameters and the input chattering phenomenon, the robust control law is later combined with the adaptive technique to generate the adaptive FTC law. Next, the adaptive control law is further improved to achieve the prescribed performance under more severe input disturbance. Finally, the proposed control laws are applied to an air-breathing hypersonic vehicle (AHV) subject to actuator failures, which confirms the effectiveness of the proposed strategies.

  12. ASCS online fault detection and isolation based on an improved MPCA

    NASA Astrophysics Data System (ADS)

    Peng, Jianxin; Liu, Haiou; Hu, Yuhui; Xi, Junqiang; Chen, Huiyan

    2014-09-01

    Multi-way principal component analysis (MPCA) has received considerable attention and been widely used in process monitoring. A traditional MPCA algorithm unfolds multiple batches of historical data into a two-dimensional matrix and cuts the matrix along the time axis to form subspaces. However, low subspace efficiency and difficult fault isolation are common disadvantages of the principal component model. This paper presents a new subspace construction method based on a kernel density estimation function that can effectively reduce the storage required for the subspace information. The MPCA model and the knowledge base are built on the new subspace. Fault detection and isolation with the squared prediction error (SPE) statistic and the Hotelling (T2) statistic are then realized in process monitoring. When a fault occurs, fault isolation based on the SPE statistic is achieved by residual contribution analysis of the different variables. For fault isolation based on the T2 statistic, the relationship between the statistic indicator and the state variables is constructed, and constraint conditions are presented to check the validity of the fault isolation. Then, to improve the robustness of fault isolation to unexpected disturbances, a statistical method is adopted to relate single subspaces to multiple subspaces and increase the rate of correct fault isolation. Finally, fault detection and isolation based on the improved MPCA is used to monitor the automatic shift control system (ASCS) to prove the correctness and effectiveness of the algorithm. The research proposes a new subspace construction method that reduces the required storage capacity and improves the robustness of the principal component model, and establishes the relationship between the state variables and the fault detection indicators for fault isolation.
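
    The two monitoring statistics can be illustrated on an ordinary (non-multi-way) PCA model; the batch unfolding, kernel-density subspace construction, and knowledge base of the paper are omitted, and the data here are synthetic.

```python
import numpy as np

rng = np.random.default_rng(1)
# Training data: two latent factors drive five correlated variables.
T_lat = rng.normal(size=(500, 2))
P_true = rng.normal(size=(2, 5))
X = T_lat @ P_true + 0.05 * rng.normal(size=(500, 5))

mu, sd = X.mean(axis=0), X.std(axis=0)
Xs = (X - mu) / sd
U, S, Vt = np.linalg.svd(Xs, full_matrices=False)
k = 2
P = Vt[:k].T                        # loadings (5 x 2)
lam = S[:k] ** 2 / (len(X) - 1)     # variances of the retained scores

def spe_t2(x):
    """Squared prediction error and Hotelling T^2 for one new sample."""
    xs = (x - mu) / sd
    t = xs @ P                      # projection onto the PC subspace
    spe = np.sum((xs - t @ P.T) ** 2)   # distance off the subspace
    t2 = np.sum(t ** 2 / lam)           # distance within the subspace
    return spe, t2

normal = T_lat[0] @ P_true + 0.05 * rng.normal(size=5)
faulty = normal.copy()
faulty[3] += 4.0                    # sensor bias breaks the correlation structure
print(spe_t2(normal), spe_t2(faulty))
```

    A bias on one sensor violates the learned correlation structure, so it loads mainly on the SPE; residual contribution analysis would then rank the per-variable residuals to point at the offending sensor.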

  13. Robust dead reckoning system for mobile robots based on particle filter and raw range scan.

    PubMed

    Duan, Zhuohua; Cai, Zixing; Min, Huaqing

    2014-09-04

    Robust dead reckoning is a complicated problem for wheeled mobile robots (WMRs) subject to faults such as sticking sensors or slipping wheels, because the discrete fault modes and the continuous states must be estimated simultaneously to achieve reliable fault diagnosis and accurate dead reckoning. Particle filters are one of the most promising approaches to hybrid system estimation problems, and they have been widely used in many WMR applications, such as pose tracking, SLAM, video tracking, and fault identification. In this paper, the readings of a laser range finder, which may also be corrupted by noise, are used to achieve accurate dead reckoning. The main contribution is a systematic method for implementing fault diagnosis and dead reckoning concurrently in a particle filter framework. First, the perception model of a laser range finder is given, where the raw scan may be faulty. Second, the kinematics of the normal model and the different fault models for WMRs are given. Third, the particle filter for fault diagnosis and dead reckoning is discussed. Finally, experiments and analyses are reported to show the accuracy and efficiency of the presented method.
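
    A toy version of the hybrid estimation idea, with hypothetical numbers throughout: a 1-D robot, a single "wheels stuck" fault mode, and a range measurement to a wall standing in for the raw laser scan. Each particle carries both a continuous pose and a discrete fault mode, so diagnosis and dead reckoning happen in the same filter.

```python
import numpy as np

rng = np.random.default_rng(2)
N, L, u = 1000, 20.0, 0.5   # particles, wall distance, commanded speed
# Mode 0: wheels normal (gain 1); mode 1: wheels stuck (gain 0).

px = np.zeros(N)            # continuous state: pose of each particle
pm = np.zeros(N, dtype=int) # discrete state: fault mode of each particle

x_true, mode_true = 0.0, 0
stuck_fraction = []
for t in range(30):
    if t == 15:
        mode_true = 1                        # wheels jam completely
    x_true += (u if mode_true == 0 else 0.0)
    z = L - x_true + rng.normal(0, 0.1)      # noisy range to the wall

    # Propagate: small chance of switching mode, then apply that mode's kinematics.
    switch = rng.random(N) < 0.05
    pm = np.where(switch, 1 - pm, pm)
    px = px + np.where(pm == 0, u, 0.0) + rng.normal(0, 0.05, N)

    # Weight by measurement likelihood and resample pose and mode jointly.
    w = np.exp(-0.5 * ((L - px - z) / 0.1) ** 2) + 1e-300
    idx = rng.choice(N, N, p=w / w.sum())
    px, pm = px[idx], pm[idx]
    stuck_fraction.append(pm.mean())         # posterior probability of "stuck"

print(stuck_fraction[10], stuck_fraction[-1])
```

    Because pose and fault mode are resampled together, particles whose hypothesized kinematics disagree with the range readings die out, and the surviving population both diagnoses the fault and keeps the dead-reckoned pose consistent.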

  14. Robust Dead Reckoning System for Mobile Robots Based on Particle Filter and Raw Range Scan

    PubMed Central

    Duan, Zhuohua; Cai, Zixing; Min, Huaqing

    2014-01-01

    Robust dead reckoning is a complicated problem for wheeled mobile robots (WMRs) subject to faults such as sticking sensors or slipping wheels, because the discrete fault modes and the continuous states must be estimated simultaneously to achieve reliable fault diagnosis and accurate dead reckoning. Particle filters are one of the most promising approaches to hybrid system estimation problems, and they have been widely used in many WMR applications, such as pose tracking, SLAM, video tracking, and fault identification. In this paper, the readings of a laser range finder, which may also be corrupted by noise, are used to achieve accurate dead reckoning. The main contribution is a systematic method for implementing fault diagnosis and dead reckoning concurrently in a particle filter framework. First, the perception model of a laser range finder is given, where the raw scan may be faulty. Second, the kinematics of the normal model and the different fault models for WMRs are given. Third, the particle filter for fault diagnosis and dead reckoning is discussed. Finally, experiments and analyses are reported to show the accuracy and efficiency of the presented method. PMID:25192318

  15. Intelligent Condition Diagnosis Method Based on Adaptive Statistic Test Filter and Diagnostic Bayesian Network

    PubMed Central

    Li, Ke; Zhang, Qiuju; Wang, Kun; Chen, Peng; Wang, Huaqing

    2016-01-01

    A new fault diagnosis method for rotating machinery based on an adaptive statistic test filter (ASTF) and a Diagnostic Bayesian Network (DBN) is presented in this paper. The ASTF is proposed to extract weak fault features under background noise; it uses statistical hypothesis testing in the frequency domain to evaluate the similarity between a reference (noise) signal and the original signal and removes the components of high similarity. The optimal level of significance α is obtained using particle swarm optimization (PSO). To evaluate the performance of the ASTF, an evaluation factor Ipq is also defined. In addition, a simulation experiment is designed to verify the effectiveness and robustness of the ASTF. A sensitivity evaluation method using principal component analysis (PCA) is proposed to evaluate the sensitivity of symptom parameters (SPs) for condition diagnosis. In this way, SPs with high sensitivity for condition diagnosis can be selected. A three-layer DBN is developed to identify the condition of rotating machinery based on Bayesian Belief Network (BBN) theory. A condition diagnosis experiment on rolling element bearings demonstrates the effectiveness of the proposed method. PMID:26761006

  16. Intelligent Condition Diagnosis Method Based on Adaptive Statistic Test Filter and Diagnostic Bayesian Network.

    PubMed

    Li, Ke; Zhang, Qiuju; Wang, Kun; Chen, Peng; Wang, Huaqing

    2016-01-08

    A new fault diagnosis method for rotating machinery based on an adaptive statistic test filter (ASTF) and a Diagnostic Bayesian Network (DBN) is presented in this paper. The ASTF is proposed to extract weak fault features under background noise; it uses statistical hypothesis testing in the frequency domain to evaluate the similarity between a reference (noise) signal and the original signal and removes the components of high similarity. The optimal level of significance α is obtained using particle swarm optimization (PSO). To evaluate the performance of the ASTF, an evaluation factor Ipq is also defined. In addition, a simulation experiment is designed to verify the effectiveness and robustness of the ASTF. A sensitivity evaluation method using principal component analysis (PCA) is proposed to evaluate the sensitivity of symptom parameters (SPs) for condition diagnosis. In this way, SPs with high sensitivity for condition diagnosis can be selected. A three-layer DBN is developed to identify the condition of rotating machinery based on Bayesian Belief Network (BBN) theory. A condition diagnosis experiment on rolling element bearings demonstrates the effectiveness of the proposed method.

  17. A computer-vision-based rotating speed estimation method for motor bearing fault diagnosis

    NASA Astrophysics Data System (ADS)

    Wang, Xiaoxian; Guo, Jie; Lu, Siliang; Shen, Changqing; He, Qingbo

    2017-06-01

    Diagnosis of motor bearing faults under variable speed remains challenging. In this study, a new computer-vision-based order tracking method is proposed to address this problem. First, a video recorded by a high-speed camera is analyzed with the speeded-up robust feature extraction and matching algorithm to obtain the instantaneous rotating speed (IRS) of the motor. Subsequently, an audio signal recorded by a microphone is equi-angle resampled for order tracking in accordance with the IRS curve, through which the time-domain signal is transferred to an angular-domain one. The envelope order spectrum is then calculated to determine the fault characteristic order, and finally the bearing fault pattern is determined. The effectiveness and robustness of the proposed method are verified with two brushless direct-current motor test rigs, in which two defective bearings and a healthy bearing are tested separately. This study provides a new noninvasive measurement approach that simultaneously avoids the installation of a tachometer and overcomes the disadvantages of tacholess order tracking methods for motor bearing fault diagnosis under variable speed.
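
    The equi-angle resampling step can be sketched with synthetic data. The camera-based IRS extraction and the envelope demodulation stage are omitted, and the speed profile and fault order (3.5x shaft speed) are made-up numbers.

```python
import numpy as np

fs, T = 5000, 4.0
t = np.arange(0, T, 1 / fs)
f_shaft = 10 + 2.5 * t                         # speed ramps from 10 to 20 Hz
phase = 2 * np.pi * np.cumsum(f_shaft) / fs    # shaft angle (rad), from the IRS
x = np.sin(3.5 * phase)                        # fault tone at order 3.5
x += 0.2 * np.random.default_rng(3).normal(size=len(t))

# Equi-angle resampling: interpolate the signal onto uniform angle steps,
# so a speed-dependent fault frequency becomes a fixed "order".
revs = phase / (2 * np.pi)
n_per_rev = 64
rev_grid = np.arange(0, revs[-1], 1 / n_per_rev)
x_ang = np.interp(rev_grid, revs, x)

spec = np.abs(np.fft.rfft(x_ang * np.hanning(len(x_ang))))
orders = np.fft.rfftfreq(len(x_ang), d=1 / n_per_rev)  # cycles per revolution
peak_order = orders[np.argmax(spec)]
print(peak_order)
```

    In an ordinary time-domain spectrum the same tone would smear from 35 to 70 Hz as the speed ramps; after resampling it collapses to a single spectral line near order 3.5, which is what makes the fault characteristic order identifiable.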

  18. Fault Detection of Rotating Machinery using the Spectral Distribution Function

    NASA Technical Reports Server (NTRS)

    Davis, Sanford S.

    1997-01-01

    The spectral distribution function is introduced to characterize the process leading to faults in rotating machinery. It is shown to be a more robust indicator than conventional power spectral density estimates, but requires only slightly more computational effort. The method is illustrated with examples from seeded gearbox transmission faults and an analytical model of a defective bearing. Procedures are suggested for implementation in realistic environments.
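
    The spectral distribution function is the normalized cumulative power spectrum: the fraction of total signal power at or below each frequency. A minimal sketch with hypothetical signals (a 50 Hz "healthy" tone plus an added 180 Hz fault tone):

```python
import numpy as np

def spectral_distribution(x, fs):
    """Normalized cumulative power spectrum: a monotone 0-to-1 curve giving
    the fraction of signal power at or below each frequency."""
    psd = np.abs(np.fft.rfft(x)) ** 2
    f = np.fft.rfftfreq(len(x), d=1 / fs)
    return f, np.cumsum(psd) / psd.sum()

fs = 1000
t = np.arange(0, 2, 1 / fs)
rng = np.random.default_rng(4)
healthy = np.sin(2 * np.pi * 50 * t) + 0.1 * rng.normal(size=len(t))
faulty = healthy + 0.8 * np.sin(2 * np.pi * 180 * t)   # fault tone at 180 Hz

f, sdf_h = spectral_distribution(healthy, fs)
_, sdf_f = spectral_distribution(faulty, fs)
# The faulty SDF rises later: less of its power lies below 100 Hz.
i100 = np.searchsorted(f, 100)
print(sdf_h[i100], sdf_f[i100])
```

    Because the SDF is a normalized monotone curve, a shift in where it rises reflects a redistribution of power between bands, which is less sensitive to noise-floor fluctuations than the height of any single PSD peak.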

  19. Development and Evaluation of Fault-Tolerant Flight Control Systems

    NASA Technical Reports Server (NTRS)

    Song, Yong D.; Gupta, Kajal (Technical Monitor)

    2004-01-01

    The research is concerned with developing a new approach to enhancing the fault tolerance of flight control systems. The original motivation for fault-tolerant control comes from the need for safe operation of control elements (e.g., actuators) in the event of hardware failures in high-reliability systems. One such example is a modern space vehicle subject to actuator/sensor impairments. A major task in flight control is to revise the control policy to balance impairment detectability and to achieve sufficient robustness. This involves careful selection of the types and parameters of the controllers and the impairment-detecting filters used. It also involves a decision, upon the identification of some failures, on whether and how a control reconfiguration should take place in order to maintain a certain system performance level. In this project a new flight dynamics model under uncertain flight conditions is considered, in which the effects of both ramp and jump faults are reflected. Stabilization algorithms based on neural networks and adaptive methods are derived. The control algorithms are shown to be effective in dealing with uncertain dynamics due to external disturbances and unpredictable faults. The overall strategy is easy to set up, and the computation involved is much less than with other strategies. Computer simulation software is developed. A series of simulation studies has been conducted under varying flight conditions.

  20. Fault Accommodation in Control of Flexible Systems

    NASA Technical Reports Server (NTRS)

    Maghami, Peiman G.; Sparks, Dean W., Jr.; Lim, Kyong B.

    1998-01-01

    New synthesis techniques for the design of fault accommodating controllers for flexible systems are developed. Three robust control design strategies, static dissipative, dynamic dissipative and mu-synthesis, are used in the approach. The approach provides techniques for designing controllers that maximize, in some sense, the tolerance of the closed-loop system against faults in actuators and sensors, while guaranteeing performance robustness at a specified performance level, measured in terms of the proximity of the closed-loop poles to the imaginary axis (the degree of stability). For dissipative control designs, nonlinear programming is employed to synthesize the controllers, whereas in mu-synthesis, the traditional D-K iteration is used. To demonstrate the feasibility of the proposed techniques, they are applied to the control design of a structural model of a flexible laboratory test structure.

  1. Adaptive robust fault-tolerant control for linear MIMO systems with unmatched uncertainties

    NASA Astrophysics Data System (ADS)

    Zhang, Kangkang; Jiang, Bin; Yan, Xing-Gang; Mao, Zehui

    2017-10-01

    In this paper, two novel fault-tolerant control design approaches are proposed for linear MIMO systems with actuator additive faults, multiplicative faults, and unmatched uncertainties. For time-varying multiplicative and additive faults, new adaptive laws and additive compensation functions are proposed. A set of conditions is developed such that the unmatched uncertainties are compensated by the actuators in control. On the other hand, for unmatched uncertainties whose projection onto the unmatched space is nonzero, additive functions based on a (vector) relative degree condition are designed to compensate for the uncertainties from the output channels in the presence of actuator faults. The developed fault-tolerant control schemes are applied to two aircraft systems to demonstrate the efficiency of the proposed approaches.

  2. Enhanced Bank of Kalman Filters Developed and Demonstrated for In-Flight Aircraft Engine Sensor Fault Diagnostics

    NASA Technical Reports Server (NTRS)

    Kobayashi, Takahisa; Simon, Donald L.

    2005-01-01

    In-flight sensor fault detection and isolation (FDI) is critical to maintaining reliable engine operation during flight. The aircraft engine control system, which computes control commands on the basis of sensor measurements, operates the propulsion systems at the demanded conditions. Any undetected sensor faults, therefore, may cause the control system to drive the engine into an undesirable operating condition. It is critical to detect and isolate failed sensors as soon as possible so that such scenarios can be avoided. A challenging issue in developing reliable sensor FDI systems is to make them robust to changes in engine operating characteristics due to degradation with usage and other faults that can occur during flight. A sensor FDI system that cannot appropriately account for such scenarios may result in false alarms, missed detections, or misclassifications when such faults do occur. To address this issue, an enhanced bank of Kalman filters was developed, and its performance and robustness were demonstrated in a simulation environment. The bank of filters is composed of m + 1 Kalman filters, where m is the number of sensors being used by the control system and, thus, in need of monitoring. Each Kalman filter is designed on the basis of a unique fault hypothesis so that it will be able to maintain its performance if a particular fault scenario, hypothesized by that particular filter, takes place.
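
    The m + 1 hypothesis structure can be sketched for a scalar random-walk "engine parameter" observed by m = 3 sensors (all numbers hypothetical; the real filters are built from an engine model and also account for degradation). Filter h ignores the sensor its hypothesis declares faulty, so the filter matching the actual fault keeps the smallest weighted residual sum.

```python
import numpy as np

rng = np.random.default_rng(5)
m, q, r = 3, 0.01, 0.04     # sensors, process noise var, measurement noise var

def run_bank(Y):
    """Run m + 1 scalar Kalman filters; filter h trusts all sensors except
    h - 1 (h = 0 is the no-fault hypothesis). Returns each filter's average
    weighted squared residual -- the fault-decision statistic."""
    wssr = np.zeros(m + 1)
    for h in range(m + 1):
        trusted = [i for i in range(m) if i != h - 1]
        x, P = 0.0, 1.0
        for k in range(Y.shape[0]):
            P += q                              # predict (random-walk state)
            for i in trusted:                   # sequential scalar updates
                innov = Y[k, i] - x
                wssr[h] += innov ** 2 / (P + r) # normalized residual energy
                K = P / (P + r)
                x += K * innov
                P *= 1 - K
        wssr[h] /= len(trusted) * Y.shape[0]    # ~1 for a correct hypothesis
    return wssr

T = 200
x_true = np.cumsum(rng.normal(0, np.sqrt(q), T))
Y = x_true[:, None] + rng.normal(0, np.sqrt(r), (T, m))
Y[100:, 1] += 1.0                               # sensor 1 develops a bias
wssr = run_bank(Y)
print(np.argmin(wssr))
```

    Here the filter that hypothesizes "sensor 1 faulty" excludes the biased measurement, so its residuals stay at the noise level while every other filter's grow; a practical FDI system would threshold the gap between the statistics rather than take a bare argmin.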

  3. Fractional-order active fault-tolerant force-position controller design for the legged robots using saturated actuator with unknown bias and gain degradation

    NASA Astrophysics Data System (ADS)

    Farid, Yousef; Majd, Vahid Johari; Ehsani-Seresht, Abbas

    2018-05-01

    In this paper, a novel fault accommodation strategy is proposed for legged robots subject to actuator faults, including actuation bias and effective gain degradation, as well as actuator saturation. First, the combined dynamics of two coupled subsystems, the legs subsystem and the body subsystem, are developed. Then, the interaction of the robot with the environment is formulated as a contact force optimization problem with equality and inequality constraints. The desired force is obtained by a dynamic model. A robust super-twisting fault estimator is proposed to precisely estimate the defective torque amplitude of the faulty actuator in finite time. Defining a novel fractional sliding surface, a fractional nonsingular terminal sliding mode control law is developed. Moreover, by introducing a suitable auxiliary system and using its state vector in the designed controller, the proposed fault-tolerant control (FTC) scheme guarantees the finite-time stability of the closed-loop control system. The robustness and finite-time convergence of the proposed control law are established using Lyapunov stability theory. Finally, numerical simulations are performed on a quadruped robot to demonstrate the stable walking of the robot with and without actuator faults and actuator saturation constraints, and the results are compared with those of an integer-order fault-tolerant controller.

  4. Signal processing and neural network toolbox and its application to failure diagnosis and prognosis

    NASA Astrophysics Data System (ADS)

    Tu, Fang; Wen, Fang; Willett, Peter K.; Pattipati, Krishna R.; Jordan, Eric H.

    2001-07-01

    Many systems are comprised of components equipped with self-testing capability; however, if the system is complex, involves feedback, and the self-testing itself may occasionally be faulty, tracing faults to a single or multiple causes is difficult. Moreover, many sensors are incapable of reliable decision-making on their own. In such cases, a signal processing front-end that can match inference needs will be very helpful. This work is concerned with providing an object-oriented simulation environment for signal processing and neural network-based fault diagnosis and prognosis. In the toolbox, we implemented a wide range of spectral and statistical manipulation methods, such as filters, harmonic analyzers, transient detectors, and multi-resolution decomposition, to extract features for failure events from data collected by sensors. We then evaluated multiple learning paradigms for general classification, diagnosis, and prognosis. The network models evaluated include Restricted Coulomb Energy (RCE) Neural Networks, Learning Vector Quantization (LVQ), Decision Trees (C4.5), Fuzzy Adaptive Resonance Theory (FuzzyArtmap), the Linear Discriminant Rule (LDR), the Quadratic Discriminant Rule (QDR), Radial Basis Functions (RBF), Multiple Layer Perceptrons (MLP), and Single Layer Perceptrons (SLP). Validation techniques, such as N-fold cross-validation and bootstrap techniques, are employed to evaluate the robustness of the network models. The trained networks are evaluated on test data in terms of percent error rates obtained via cross-validation, time efficiency, and generalization ability to unseen faults. Finally, the use of neural networks for the prediction of the residual life of turbine blades with thermal barrier coatings is described and the results are shown. The neural network toolbox has also been applied to fault diagnosis in mixed-signal circuits.

  5. Laboratory scale micro-seismic monitoring of rock faulting and injection-induced fault reactivation

    NASA Astrophysics Data System (ADS)

    Sarout, J.; Dautriat, J.; Esteban, L.; Lumley, D. E.; King, A.

    2017-12-01

    The South West Hub CCS project in Western Australia aims to evaluate the feasibility and impact of geosequestration of CO2 in the Lesueur sandstone formation. Part of this evaluation focuses on the feasibility and design of a robust passive seismic monitoring array. Micro-seismicity monitoring can be used to image the injected CO2 plume or any geomechanical fracture/fault activity, and thus serve as an early warning system by measuring low-level (unfelt) seismicity that may precede potentially larger (felt) earthquakes. This paper describes laboratory deformation experiments replicating typical field scenarios of fluid injection in faulted reservoirs. Two pairs of cylindrical core specimens were recovered from the Harvey-1 well at depths of 1924 m and 2508 m. In each specimen a fault is first generated at the in situ stress, pore pressure, and temperature by increasing the vertical stress beyond the peak in a triaxial stress vessel at CSIRO's Geomechanics & Geophysics Lab. The faulted specimen is then stabilized by decreasing the vertical stress. The freshly formed fault is subsequently reactivated by brine injection and an increase of the pore pressure until slip occurs again. This second slip event is then controlled in displacement and allowed to develop for a few millimeters. The micro-seismic (MS) response of the rock during the initial fracturing and subsequent reactivation is monitored using an array of 16 ultrasonic sensors attached to the specimen's surface. The recorded MS events are relocated in space and time, and correlate well with the 3D X-ray CT images of the specimen obtained post-mortem. The time evolution of the structural changes induced within the triaxial stress vessel is therefore reliably inferred. The recorded MS activity shows that, as expected, the increase of the vertical stress beyond the peak led to an inclined shear fault.
The injection of fluid and the resulting increase in pore pressure led first to a reactivation of the pre-existing fault. However, with increasing slip, a second conjugate fault progressively appeared, which ultimately accommodated all of the imposed vertical displacement. The inferred structural changes resemble fault branching and dynamic slip transfer processes seen in large-scale geology. This project was funded by the ANLEC R&D in partnership with the WA Government.

  6. High-Intensity Radiated Field Fault-Injection Experiment for a Fault-Tolerant Distributed Communication System

    NASA Technical Reports Server (NTRS)

    Yates, Amy M.; Torres-Pomales, Wilfredo; Malekpour, Mahyar R.; Gonzalez, Oscar R.; Gray, W. Steven

    2010-01-01

    Safety-critical distributed flight control systems require robustness in the presence of faults. In general, these systems consist of a number of input/output (I/O) and computation nodes interacting through a fault-tolerant data communication system. The communication system transfers sensor data and control commands and can handle most faults under typical operating conditions. However, the performance of the closed-loop system can be adversely affected as a result of operating in harsh environments. In particular, High-Intensity Radiated Field (HIRF) environments have the potential to cause random fault manifestations in individual avionic components and to generate simultaneous system-wide communication faults that overwhelm existing fault management mechanisms. This paper presents the design of an experiment conducted at the NASA Langley Research Center's HIRF Laboratory to statistically characterize the faults that a HIRF environment can trigger on a single node of a distributed flight control system.

  7. Robust fault tolerant control based on sliding mode method for uncertain linear systems with quantization.

    PubMed

    Hao, Li-Ying; Yang, Guang-Hong

    2013-09-01

    This paper is concerned with the problem of robust fault-tolerant compensation control for uncertain linear systems subject to both state and input signal quantization. By incorporating a novel matrix full-rank factorization technique into the sliding surface design, the total failure of certain actuators can be coped with under a special actuator redundancy assumption. To compensate for quantization errors, an adjustment range of the quantization sensitivity for a dynamic uniform quantizer is given through flexible choices of the design parameters. Compared with existing results, the derived inequality condition yields stronger fault tolerance and a much wider scope of applicability. With a static adjustment policy for the quantization sensitivity, an adaptive sliding mode controller is then designed to maintain the sliding mode, where the gain of the nonlinear unit vector term is updated automatically to compensate for the effects of actuator faults, quantization errors, exogenous disturbances, and parameter uncertainties without the need for a fault detection and isolation (FDI) mechanism. Finally, the effectiveness of the proposed design method is illustrated via a structural-acoustic model of a rocket fairing. Copyright © 2013 ISA. Published by Elsevier Ltd. All rights reserved.
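
    The role of the quantization sensitivity can be seen from a standalone uniform quantizer (all values hypothetical; the paper ties the sensitivity adjustment to the sliding-mode design parameters, which is not reproduced here):

```python
import numpy as np

def uniform_quantize(x, mu):
    """Uniform quantizer with sensitivity (step size) mu. The quantization
    error is bounded by mu / 2, so mu sets the precision/range trade-off."""
    return mu * np.round(x / mu)

rng = np.random.default_rng(6)
x = rng.uniform(-5, 5, 1000)            # signal samples to be quantized

for mu in (1.0, 0.25, 0.05):            # static "zooming-in" adjustment
    err = np.abs(uniform_quantize(x, mu) - x)
    print(mu, err.max())                # max error stays below mu / 2
```

    A controller that knows the mu/2 error bound can inflate its switching gain accordingly, which is the mechanism by which an adaptive sliding-mode gain can absorb quantization errors without an FDI stage.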

  8. The hydraulic structure of the Gole Larghe Fault Zone (Italian Southern Alps) through the seismic cycle

    NASA Astrophysics Data System (ADS)

    Bistacchi, A.; Mittempergher, S.; Di Toro, G.; Smith, S. A. F.; Garofalo, P. S.

    2017-12-01

    The 600 m-thick, strike-slip Gole Larghe Fault Zone (GLFZ) experienced several hundred seismic slip events at c. 8 km depth, well documented by numerous pseudotachylytes, and was then exhumed and is now exposed in beautiful and very continuous outcrops. The fault zone was also characterized by hydrous fluid flow during the seismic cycle, demonstrated by alteration halos and the precipitation of hydrothermal minerals in veins and cataclasites. We have characterized the GLFZ with > 2 km of scanlines and semi-automatic mapping of faults and fractures on several photogrammetric 3D Digital Outcrop Models (3D DOMs). This allowed us to obtain 3D Discrete Fracture Network (DFN) models, based on robust probability density functions for the parameters of fault and fracture sets, and to simulate the fault zone's hydraulic properties. In addition, the correlation between evidence of fluid flow and the fault/fracture network parameters has been studied with a geostatistical approach, allowing us to generate more realistic time-varying permeability models of the fault zone. Based on this dataset, we have developed a FEM hydraulic model of the GLFZ over a period of some tens of years, covering one seismic event and a postseismic period. The highest permeability is attained in the syn- to early post-seismic period, when fractures are (re)opened by off-fault deformation; permeability then decreases in the postseismic period due to fracture sealing. The flow model yields a flow pattern consistent with the observed alteration/mineralization pattern and a marked channelling of fluid flow in the inner part of the fault zone, due to permeability anisotropy related to the spatial arrangement of the different fracture sets. Among possible seismological applications of our study, we will discuss the possibility of evaluating the coseismic fracture intensity due to off-fault damage, and the heterogeneity and evolution of mechanical parameters due to fluid-rock interaction.

  9. Adaptive robust fault tolerant control design for a class of nonlinear uncertain MIMO systems with quantization.

    PubMed

    Ao, Wei; Song, Yongdong; Wen, Changyun

    2017-05-01

    In this paper, we investigate the adaptive control problem for a class of nonlinear uncertain MIMO systems with actuator faults and quantization effects. Under some mild conditions, an adaptive robust fault-tolerant control is developed to compensate for the effects of uncertainties, actuator failures and quantization errors, and a range of parameters for these quantizers is established. Furthermore, a Lyapunov-like approach is adopted to demonstrate that the controller guarantees a uniformly ultimately bounded output tracking error and that the signals of the closed-loop system remain bounded, even in the presence of up to m-q actuators being stuck or suffering outage. Finally, numerical simulations are provided to verify and illustrate the effectiveness of the proposed adaptive schemes. Copyright © 2017 ISA. Published by Elsevier Ltd. All rights reserved.
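Quantized-control designs of this kind rest on the bounded-error property of the quantizer. A minimal sketch may help; the uniform quantizer below is purely illustrative (the step size `delta` is our assumption, not a value from the paper) and satisfies |q(u) - u| <= delta/2, which is the kind of bound the adaptive controller compensates for:

```python
# Uniform quantizer sketch: illustrative, not the paper's quantizer.
# Key property exploited by quantized-control designs: the quantization
# error is bounded by half the step size delta.
def quantize(u, delta=0.1):
    return delta * round(u / delta)

# check the bound |q(u) - u| <= delta/2 over a grid of inputs
errs = [abs(quantize(0.01 * k) - 0.01 * k) for k in range(-300, 301)]
```

Establishing a parameter range for the quantizer, as the paper does, amounts to choosing `delta` small enough that this bounded error can be absorbed by the adaptive terms.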

  10. Experimental Investigation for Fault Diagnosis Based on a Hybrid Approach Using Wavelet Packet and Support Vector Classification

    PubMed Central

    Li, Pengfei; Jiang, Yongying; Xiang, Jiawei

    2014-01-01

    To deal with the difficulty of obtaining a large number of fault samples under practical conditions for mechanical fault diagnosis, a hybrid method that combines wavelet packet decomposition and support vector classification (SVC) is proposed. The wavelet packet is employed to decompose the vibration signal to obtain the energy ratio in each frequency band. Taking the energy ratios as feature vectors, the pattern recognition results are obtained by the SVC. Rolling bearing and gear fault diagnosis results on a typical experimental platform show that the present approach is robust to noise, achieves higher classification accuracy, and thus provides a better way to diagnose mechanical faults when only small fault samples are available. PMID:24688361
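The feature-extraction step can be sketched in a few lines. The fragment below is an illustration only: it uses a Haar filter and 3 decomposition levels (our assumptions, not the paper's settings) to compute wavelet-packet band energy ratios of the kind used as SVC feature vectors:

```python
import math

def haar_split(x):
    # one analysis level: orthonormal Haar approximation/detail halves
    s2 = math.sqrt(2.0)
    a = [(x[2*i] + x[2*i+1]) / s2 for i in range(len(x) // 2)]
    d = [(x[2*i] - x[2*i+1]) / s2 for i in range(len(x) // 2)]
    return a, d

def wp_energy_ratios(x, levels=3):
    # full wavelet-packet tree: split every node, keep leaf-band energies
    bands = [list(x)]
    for _ in range(levels):
        bands = [half for b in bands for half in haar_split(b)]
    energies = [sum(v * v for v in b) for b in bands]
    total = sum(energies)
    return [e / total for e in energies]  # 2**levels ratios, summing to 1

# a smooth tone concentrates energy in the low-pass band; an impulsive
# (fault-like) signal spreads its energy across the bands
n = 256
tone = [math.sin(2 * math.pi * 4 * i / n) for i in range(n)]
impulses = [1.0 if i % 32 == 0 else 0.0 for i in range(n)]
f_tone, f_imp = wp_energy_ratios(tone), wp_energy_ratios(impulses)
```

The resulting 8-element ratio vectors would then be fed to an SVC; with few fault samples, such compact energy features are what keeps the classifier trainable.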

  11. The seismogenic Gole Larghe Fault Zone (Italian Southern Alps): quantitative 3D characterization of the fault/fracture network, mapping of evidences of fluid-rock interaction, and modelling of the hydraulic structure through the seismic cycle

    NASA Astrophysics Data System (ADS)

    Bistacchi, A.; Mittempergher, S.; Di Toro, G.; Smith, S. A. F.; Garofalo, P. S.

    2016-12-01

    The Gole Larghe Fault Zone (GLFZ) was exhumed from 8 km depth, where it was characterized by seismic activity (pseudotachylytes) and hydrous fluid flow (alteration halos and precipitation of hydrothermal minerals in veins and cataclasites). Thanks to glacier-polished outcrops exposing the 400 m-thick fault zone over a continuous area > 1.5 km2, the fault zone architecture has been quantitatively described in unprecedented detail, providing a rich dataset to generate 3D Discrete Fracture Network (DFN) models and simulate the fault zone hydraulic properties. The fault and fracture network has been characterized by combining > 2 km of scanlines and semi-automatic mapping of faults and fractures on several photogrammetric 3D Digital Outcrop Models (3D DOMs). This allowed us to obtain robust probability density functions for the parameters of fault and fracture sets: orientation, fracture intensity and density, spacing, persistence, length, thickness/aperture, and termination. The spatial distribution of fractures (random, clustered, anticlustered…) has been characterized with geostatistics. Evidence of fluid/rock interaction (alteration halos, hydrothermal veins, etc.) has been mapped on the same outcrops, revealing sectors of the fault zone strongly impacted by fluid/rock interaction, and others completely unaffected, separated by convolute infiltration fronts. Field and microstructural evidence revealed that higher permeability was attained in the syn- to early post-seismic period, when fractures were (re)opened by off-fault deformation. We have developed a parametric hydraulic model of the GLFZ and calibrated it, varying the fraction of faults/fractures that were open in the post-seismic period, with the goal of obtaining realistic fluid flow and permeability values and a flow pattern consistent with the observed alteration/mineralization pattern.
The fraction of open fractures is very close to the percolation threshold of the DFN, and the permeability tensor is strongly anisotropic, resulting in a marked channelling of fluid flow in the inner part of the fault zone. Amongst possible seismological applications of our study, we will discuss the possibility to evaluate the coseismic fracture intensity due to off-fault damage, a fundamental mechanical parameter in the energy balance of earthquakes.
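The percolation-threshold remark can be illustrated with a toy model. The sketch below is not a DFN simulation: it is a 2D site-percolation lattice (sizes and probabilities are our own illustrative choices) showing how connectivity, and hence the possibility of through-going flow, switches on sharply as the fraction of open elements crosses a threshold:

```python
import random
from collections import deque

def spans(n, p, rng):
    # open each lattice site with probability p, then ask whether an open
    # path of nearest neighbours connects the top row to the bottom row
    open_site = [[rng.random() < p for _ in range(n)] for _ in range(n)]
    seen = [[False] * n for _ in range(n)]
    q = deque()
    for j in range(n):
        if open_site[0][j]:
            seen[0][j] = True
            q.append((0, j))
    while q:
        i, j = q.popleft()
        if i == n - 1:
            return True
        for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            a, b = i + di, j + dj
            if 0 <= a < n and 0 <= b < n and open_site[a][b] and not seen[a][b]:
                seen[a][b] = True
                q.append((a, b))
    return False

# spanning probability far below vs. above the 2D site threshold (~0.59)
rng = random.Random(3)
p_low = sum(spans(30, 0.35, rng) for _ in range(50)) / 50
p_high = sum(spans(30, 0.80, rng) for _ in range(50)) / 50
```

A calibrated fracture fraction sitting near the threshold, as reported for the GLFZ, therefore implies a strongly channelled, barely connected flow network rather than pervasive flow.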

  12. Fault Management Architectures and the Challenges of Providing Software Assurance

    NASA Technical Reports Server (NTRS)

    Savarino, Shirley; Fitz, Rhonda; Fesq, Lorraine; Whitman, Gerek

    2015-01-01

    Satellite system Fault Management (FM) is focused on safety, the preservation of assets, and maintaining the desired functionality of the system. How FM is implemented varies among missions. Common to most is system complexity due to a need to establish a multi-dimensional structure across hardware, software and operations. This structure is necessary to identify and respond to system faults, mitigate technical risks and ensure operational continuity. These architecture, implementation and software assurance efforts increase with mission complexity. Because FM is a systems engineering discipline with a distributed implementation, providing efficient and effective verification and validation (V&V) is challenging. A breakout session at the 2012 NASA Independent Verification & Validation (IV&V) Annual Workshop, titled "V&V of Fault Management: Challenges and Successes," exposed these issues in terms of V&V for a representative set of architectures. NASA's IV&V program is funded by NASA's Software Assurance Research Program (SARP) in partnership with NASA's Jet Propulsion Laboratory (JPL) to extend the work performed at the Workshop session. NASA IV&V will extract FM architectures across the IV&V portfolio and evaluate the data set for robustness, assess visibility for validation and test, and define software assurance methods that could be applied to the various architectures and designs. This work focuses efforts on FM architectures from critical and complex projects within NASA. The identification of particular FM architectures, visibility, and associated V&V/IV&V techniques provides a data set that can enable higher assurance that a satellite system will adequately detect and respond to adverse conditions. Ultimately, results from this activity will be incorporated into the NASA Fault Management Handbook, providing dissemination across NASA, other agencies and the satellite community.
This paper discusses the approach taken to perform the evaluations and preliminary findings from the research, including identification of FM architectures, visibility observations, and methods utilized for V&V/IV&V.

  13. Set-membership fault detection under noisy environment with application to the detection of abnormal aircraft control surface positions

    NASA Astrophysics Data System (ADS)

    El Houda Thabet, Rihab; Combastel, Christophe; Raïssi, Tarek; Zolghadri, Ali

    2015-09-01

    The paper develops a set-membership detection methodology which is applied to the detection of abnormal positions of aircraft control surfaces. Robust and early detection of such abnormal positions is an important issue for early system reconfiguration and overall optimisation of aircraft design. In order to improve fault sensitivity while ensuring a high level of robustness, the method combines a data-driven characterisation of noise with a model-driven approach based on interval prediction. The efficiency of the proposed methodology is illustrated through simulation results obtained using data recorded in several flight scenarios of a highly representative aircraft benchmark.
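The interval-prediction side of such a scheme can be sketched abstractly. The toy detector below uses a hypothetical scalar model with an uncertain parameter and bounded noise (none of the numbers come from the aircraft benchmark): it predicts an envelope [lo, hi] for the next measurement and raises a fault flag only when the measurement escapes it, which is what gives set-membership schemes their robustness guarantee:

```python
# Hypothetical scalar model x[k+1] = a*x[k] + b*u[k] + w[k] with
# a in [-0.6, -0.4] (uncertain) and |w| <= 0.05 (bounded noise).
def predict_interval(x_lo, x_hi, u, a_lo=-0.6, a_hi=-0.4, b=1.0, w_max=0.05):
    # interval arithmetic for a*x (the x interval may span zero)
    cands = [a_lo * x_lo, a_lo * x_hi, a_hi * x_lo, a_hi * x_hi]
    return min(cands) + b * u - w_max, max(cands) + b * u + w_max

def detect(meas, inputs, x0=0.0, v=0.05):
    x_lo = x_hi = x0
    alarms = []
    for y, u in zip(meas, inputs):
        x_lo, x_hi = predict_interval(x_lo, x_hi, u)
        ok = x_lo <= y <= x_hi
        alarms.append(not ok)        # fault iff y escapes the envelope
        if ok:                       # set-membership state update:
            x_lo = max(x_lo, y - v)  # intersect the prediction with the
            x_hi = min(x_hi, y + v)  # measurement's own uncertainty interval
    return alarms

# four consistent measurements, then an abnormal (runaway-like) jump
meas = [1.0, 0.5, 0.75, 0.625, 3.0]
alarms = detect(meas, [1.0] * 5)
```

Because the envelope contains every behaviour consistent with the model and noise bounds, an alarm cannot be triggered by admissible uncertainty, only by a genuine inconsistency.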

  14. Bearing Fault Diagnosis by a Robust Higher-Order Super-Twisting Sliding Mode Observer

    PubMed Central

    Piltan, Farzin; Kim, Jong-Myon

    2018-01-01

    An effective bearing fault detection and diagnosis (FDD) model is important for ensuring the normal and safe operation of machines. This paper presents a reliable model-reference observer technique for FDD based on modeling of a bearing’s vibration data by analyzing the dynamic properties of the bearing and a higher-order super-twisting sliding mode observation (HOSTSMO) technique for making diagnostic decisions using these data models. The HOSTSMO technique can adaptively improve the performance of estimating nonlinear failures in rolling element bearings (REBs) over a linear approach by modeling 5 degrees of freedom under normal and faulty conditions. The effectiveness of the proposed technique is evaluated using a vibration dataset provided by Case Western Reserve University, which consists of vibration acceleration signals recorded for REBs with inner, outer, ball, and no faults, i.e., normal. Experimental results indicate that the proposed technique outperforms the ARX-Laguerre proportional integral observation (ALPIO) technique, yielding 18.82%, 16.825%, and 17.44% performance improvements for three levels of crack severity of 0.007, 0.014, and 0.021 inches, respectively. PMID:29642459
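A minimal flavour of the super-twisting idea can be given in a few lines. The sketch below is a scalar double integrator with a constant disturbance; the gains and all numbers are illustrative assumptions, not the paper's 5-degree-of-freedom bearing model. It shows the key mechanism: once the sliding mode is reached, the average of the switching injection reconstructs the unknown fault term:

```python
import math

def sgn(v):
    return (v > 0) - (v < 0)

def run(T=4.0, dt=1e-3, k1=8.0, k2=60.0, d=1.0):
    # plant: x' = v, v' = d (d = unknown bounded fault/disturbance)
    # observer: super-twisting correction driven by the position error e
    x = v = 0.0
    x_hat = v_hat = 0.0
    inj = []
    for _ in range(int(T / dt)):
        x += dt * v
        v += dt * d
        e = x - x_hat
        x_hat += dt * (v_hat + k1 * math.sqrt(abs(e)) * sgn(e))
        w = k2 * sgn(e)          # discontinuous injection term
        v_hat += dt * w
        inj.append(w)
    d_hat = sum(inj[-1000:]) / 1000.0   # averaged injection approximates d
    return v, v_hat, d_hat

v, v_hat, d_hat = run()
```

In an FDD setting, `d_hat` (the averaged injection) is the reconstructed fault signal; the higher-order form keeps the velocity estimate continuous despite the discontinuous injection.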

  15. Bearing Fault Diagnosis by a Robust Higher-Order Super-Twisting Sliding Mode Observer.

    PubMed

    Piltan, Farzin; Kim, Jong-Myon

    2018-04-07

    An effective bearing fault detection and diagnosis (FDD) model is important for ensuring the normal and safe operation of machines. This paper presents a reliable model-reference observer technique for FDD based on modeling of a bearing's vibration data by analyzing the dynamic properties of the bearing and a higher-order super-twisting sliding mode observation (HOSTSMO) technique for making diagnostic decisions using these data models. The HOSTSMO technique can adaptively improve the performance of estimating nonlinear failures in rolling element bearings (REBs) over a linear approach by modeling 5 degrees of freedom under normal and faulty conditions. The effectiveness of the proposed technique is evaluated using a vibration dataset provided by Case Western Reserve University, which consists of vibration acceleration signals recorded for REBs with inner, outer, ball, and no faults, i.e., normal. Experimental results indicate that the proposed technique outperforms the ARX-Laguerre proportional integral observation (ALPIO) technique, yielding 18.82%, 16.825%, and 17.44% performance improvements for three levels of crack severity of 0.007, 0.014, and 0.021 inches, respectively.

  16. Indirect adaptive fuzzy fault-tolerant tracking control for MIMO nonlinear systems with actuator and sensor failures.

    PubMed

    Bounemeur, Abdelhamid; Chemachema, Mohamed; Essounbouli, Najib

    2018-05-10

    In this paper, an active fuzzy fault-tolerant tracking control (AFFTTC) scheme is developed for a class of multi-input multi-output (MIMO) unknown nonlinear systems in the presence of unknown actuator faults, sensor failures and external disturbance. The developed control scheme deals with four kinds of faults for both sensors and actuators. The bias, drift, and loss-of-accuracy additive faults are considered along with the loss-of-effectiveness multiplicative fault. A fuzzy adaptive controller based on backstepping design is developed to deal with actuator failures and unknown system dynamics. In addition, a robust control term is added to deal with sensor faults, approximation errors, and external disturbances. Lyapunov theory is used to prove the stability of the closed-loop system. Numerical simulations on a quadrotor are presented to show the effectiveness of the proposed approach. Copyright © 2018 ISA. Published by Elsevier Ltd. All rights reserved.

  17. Simultaneous fault detection and control design for switched systems with two quantized signals.

    PubMed

    Li, Jian; Park, Ju H; Ye, Dan

    2017-01-01

    The problem of simultaneous fault detection and control design for switched systems with two quantized signals is considered in this paper. Dynamic quantizers are employed before the output is passed to the fault detector and before the control input is transmitted to the switched system, respectively. Taking the quantization errors into account, the robust performance for this kind of system is characterized. Furthermore, sufficient conditions for the existence of the fault detector/controller are presented in the framework of linear matrix inequalities, and the fault detector/controller gains and the supremum of the quantizer range are derived by a convex optimization method. Finally, two illustrative examples demonstrate the effectiveness of the proposed method. Copyright © 2016 ISA. Published by Elsevier Ltd. All rights reserved.

  18. Unsupervised Fault Diagnosis of a Gear Transmission Chain Using a Deep Belief Network

    PubMed Central

    He, Jun; Yang, Shixi; Gan, Chunbiao

    2017-01-01

    Artificial intelligence (AI) techniques, which can effectively analyze massive amounts of fault data and automatically provide accurate diagnosis results, have been widely applied to fault diagnosis of rotating machinery. Conventional AI methods are applied using features selected by a human operator, which are manually extracted based on diagnostic techniques and field expertise. However, developing robust features for each diagnostic purpose is often labour-intensive and time-consuming, and the features extracted for one specific task may be unsuitable for others. In this paper, a novel AI method based on a deep belief network (DBN) is proposed for the unsupervised fault diagnosis of a gear transmission chain, and the genetic algorithm is used to optimize the structural parameters of the network. Compared to conventional AI methods, the proposed method can adaptively exploit robust features related to the faults by unsupervised feature learning, and thus requires less prior knowledge about signal processing techniques and diagnostic expertise. Moreover, it is more powerful at modelling complex structured data. The effectiveness of the proposed method is validated using datasets from rolling bearings and a gearbox. To show the superiority of the proposed method, its performance is compared with two well-known classifiers, i.e., back propagation neural network (BPNN) and support vector machine (SVM). The fault classification accuracies are 99.26% for the rolling bearings and 100% for the gearbox when using the proposed method, which are much higher than those of the other two methods. PMID:28677638

  19. Unsupervised Fault Diagnosis of a Gear Transmission Chain Using a Deep Belief Network.

    PubMed

    He, Jun; Yang, Shixi; Gan, Chunbiao

    2017-07-04

    Artificial intelligence (AI) techniques, which can effectively analyze massive amounts of fault data and automatically provide accurate diagnosis results, have been widely applied to fault diagnosis of rotating machinery. Conventional AI methods are applied using features selected by a human operator, which are manually extracted based on diagnostic techniques and field expertise. However, developing robust features for each diagnostic purpose is often labour-intensive and time-consuming, and the features extracted for one specific task may be unsuitable for others. In this paper, a novel AI method based on a deep belief network (DBN) is proposed for the unsupervised fault diagnosis of a gear transmission chain, and the genetic algorithm is used to optimize the structural parameters of the network. Compared to conventional AI methods, the proposed method can adaptively exploit robust features related to the faults by unsupervised feature learning, and thus requires less prior knowledge about signal processing techniques and diagnostic expertise. Moreover, it is more powerful at modelling complex structured data. The effectiveness of the proposed method is validated using datasets from rolling bearings and a gearbox. To show the superiority of the proposed method, its performance is compared with two well-known classifiers, i.e., back propagation neural network (BPNN) and support vector machine (SVM). The fault classification accuracies are 99.26% for the rolling bearings and 100% for the gearbox when using the proposed method, which are much higher than those of the other two methods.

  20. Adaptive sensor-fault tolerant control for a class of multivariable uncertain nonlinear systems.

    PubMed

    Khebbache, Hicham; Tadjine, Mohamed; Labiod, Salim; Boulkroune, Abdesselem

    2015-03-01

    This paper deals with the active fault tolerant control (AFTC) problem for a class of multiple-input multiple-output (MIMO) uncertain nonlinear systems subject to sensor faults and external disturbances. The proposed AFTC method can tolerate three additive (bias, drift and loss of accuracy) and one multiplicative (loss of effectiveness) sensor faults. By employing the backstepping technique, a novel adaptive backstepping-based AFTC scheme is developed, using the fact that sensor faults and system uncertainties (including external disturbances and unexpected nonlinear functions caused by sensor faults) can be estimated and compensated online via robust adaptive schemes. The stability of the closed-loop system is rigorously proven using a Lyapunov approach. The effectiveness of the proposed controller is illustrated by two simulation examples. Copyright © 2014 ISA. Published by Elsevier Ltd. All rights reserved.

  1. Integrated Fault Diagnosis Algorithm for Motor Sensors of In-Wheel Independent Drive Electric Vehicles.

    PubMed

    Jeon, Namju; Lee, Hyeongcheol

    2016-12-12

    An integrated fault-diagnosis algorithm for a motor sensor of in-wheel independent drive electric vehicles is presented. This paper proposes a method that integrates high- and low-level fault diagnosis to improve the robustness and performance of the system. For the high-level fault diagnosis of vehicle dynamics, a planar two-track non-linear model is first selected, and the longitudinal and lateral forces are calculated. To ensure redundancy of the system, the correlation between the sensors and the residuals in the vehicle dynamics is analyzed to detect and isolate a fault of the drive motor system of each wheel. To diagnose the motor system for low-level faults, the state equation of an interior permanent magnet synchronous motor is developed, and a parity equation is used to diagnose faults of the electric current and position sensors. The validity of the high-level fault-diagnosis algorithm is verified using CarSim and MATLAB/Simulink co-simulation. The low-level fault diagnosis is verified through MATLAB/Simulink simulation and experiments. Finally, according to the residuals of the high- and low-level fault diagnoses, fault-detection flags are defined. On the basis of this information, an integrated fault-diagnosis strategy is proposed.
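The parity-equation idea used for the low-level sensor diagnosis can be sketched generically. The fragment below uses hypothetical redundant sensors y1 = x and y2 = 2x with small noise (not the paper's motor equations): the residual cancels the unknown state, so it stays near zero unless a sensor is faulty:

```python
import math, random

def parity_flags(y1_seq, y2_seq, thresh=0.3):
    # r = 2*y1 - y2 cancels the true state x, leaving only sensor
    # noise/faults: the essence of a parity (analytic-redundancy) check
    return [abs(2.0 * y1 - y2) > thresh for y1, y2 in zip(y1_seq, y2_seq)]

random.seed(0)
xs = [math.sin(0.1 * k) for k in range(100)]
y1 = [x + random.gauss(0, 0.02) for x in xs]
y2 = [2 * x + random.gauss(0, 0.02) for x in xs]
y2[60:] = [v + 1.0 for v in y2[60:]]   # bias fault on sensor 2 from k = 60
flags = parity_flags(y1, y2)
```

The threshold trades false alarms against sensitivity, exactly the role the fault-detection flags play when the high- and low-level residuals are combined.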

  2. A comparative study of sensor fault diagnosis methods based on observer for ECAS system

    NASA Astrophysics Data System (ADS)

    Xu, Xing; Wang, Wei; Zou, Nannan; Chen, Long; Cui, Xiaoli

    2017-03-01

    The performance and practicality of an electronically controlled air suspension (ECAS) system are highly dependent on the state information supplied by various sensors, but sensor faults occur frequently. Based on a non-linearized 3-DOF 1/4 vehicle model, different methods of fault detection and isolation (FDI) are used to diagnose sensor faults in the ECAS system. The considered approaches include the extended Kalman filter (EKF), with its concise algorithm, the strong tracking filter (STF), with its robust tracking ability, and the cubature Kalman filter (CKF), with its numerical precision. We use the EKF, STF, and CKF to design state observers for the ECAS system under typical sensor faults and noise. Results show that all three approaches can successfully detect and isolate faults despite the presence of environmental noise, although the FDI time delay and fault sensitivity differ among the algorithms; compared with the EKF and STF, the CKF method gives the best FDI performance for sensor faults in the ECAS system.
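As an abstract illustration of observer-based sensor FDI (a scalar Kalman filter on a random-walk model with illustrative noise levels, far simpler than the paper's 3-DOF suspension observers), the innovation sequence can be gated to flag a faulty sensor:

```python
import random

def kf_innovation_flags(meas, q=1e-4, r=0.01, gate=4.0):
    # scalar Kalman filter on a random-walk state; flag samples whose
    # innovation exceeds gate * (innovation std) as suspected faults
    x, p = 0.0, 1.0
    flags = []
    for y in meas:
        p += q                       # predict
        s = p + r                    # innovation variance
        nu = y - x                   # innovation (residual)
        flags.append(abs(nu) > gate * s ** 0.5)
        k = p / s                    # update
        x += k * nu
        p *= 1.0 - k
    return flags

random.seed(4)
healthy = [random.gauss(0.5, 0.1) for _ in range(100)]
faulty = healthy[:70] + [y + 2.0 for y in healthy[70:]]
flags_h = kf_innovation_flags(healthy)
flags_f = kf_innovation_flags(faulty)
```

The EKF, STF and CKF variants compared in the paper differ in how they propagate `x` and `p` through the nonlinear suspension model, but the residual gating logic is of this same form.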

  3. Recent deformation on the San Diego Trough and San Pedro Basin fault systems, offshore Southern California: Assessing evidence for fault system connectivity.

    NASA Astrophysics Data System (ADS)

    Bormann, J. M.; Kent, G. M.; Driscoll, N. W.; Harding, A. J.

    2016-12-01

    The seismic hazard posed by offshore faults for coastal communities in Southern California is poorly understood and may be considerable, especially when these communities are located near long faults that have the ability to produce large earthquakes. The San Diego Trough fault (SDTF) and San Pedro Basin fault (SPBF) systems are active northwest striking, right-lateral faults in the Inner California Borderland that extend offshore between San Diego and Los Angeles. Recent work shows that the SDTF slip rate accounts for 25% of the 6-8 mm/yr of deformation accommodated by the offshore fault network, and seismic reflection data suggest that these two fault zones may be one continuous structure. Here, we use recently acquired CHIRP, high-resolution multichannel seismic (MCS) reflection, and multibeam bathymetric data in combination with USGS and industry MCS profiles to characterize recent deformation on the SDTF and SPBF zones and to evaluate the potential for an end-to-end rupture that spans both fault systems. The SDTF offsets young sediments at the seafloor for 130 km between the US/Mexico border and Avalon Knoll. The northern SPBF has robust geomorphic expression and offsets the seafloor in the Santa Monica Basin. The southern SPBF lies within a 25-km gap between high-resolution MCS surveys. Although there does appear to be a through-going fault at depth in industry MCS profiles, the low vertical resolution of these data inhibits our ability to confirm recent slip on the southern SPBF. Empirical scaling relationships indicate that a 200-km-long rupture of the SDTF and its southern extension, the Bahia Soledad fault, could produce a M7.7 earthquake. If the SDTF and the SPBF are linked, the length of the combined fault increases to >270 km. This may allow ruptures initiating on the SDTF to propagate within 25 km of the Los Angeles Basin. At present, the paleoseismic histories of the faults are unknown. 
We present new observations from CHIRP and coring surveys at three locations where thin lenses of sediment mantle the SDTF, providing the ideal sedimentary record to constrain the timing of the most recent event. Characterizing the paleoseismic histories is a key step toward defining the extent and variability of past ruptures, which in turn, will improve maximum magnitude estimates for the SDTF and SPBF systems.

  4. Detecting of transient vibration signatures using an improved fast spatial-spectral ensemble kurtosis kurtogram and its applications to mechanical signature analysis of short duration data from rotating machinery

    NASA Astrophysics Data System (ADS)

    Chen, BinQiang; Zhang, ZhouSuo; Zi, YanYang; He, ZhengJia; Sun, Chuang

    2013-10-01

    Detecting transient vibration signatures is of vital importance for vibration-based condition monitoring and fault detection of rotating machinery. However, raw mechanical signals collected by vibration sensors are generally mixtures of the physical vibrations of the multiple mechanical components installed in the examined machinery, and fault-generated incipient vibration signatures masked by interfering contents are difficult to identify. The fast kurtogram (FK) is a concise and smart gadget for characterizing these vibration features, but the multi-rate filter-bank (MRFB) and the spectral kurtosis (SK) indicator of the FK are less powerful when strong interfering vibration contents exist, especially when the FK is applied to vibration signals of short duration. In practice, impulsive interfering contents not authentically induced by mechanical faults complicate the analysis and lead to incorrect selection of the optimal analysis subband, so the original FK may leave out the essential fault signatures. To enhance the analyzing performance of the FK for industrial applications, an improved version of the fast kurtogram, named the "fast spatial-spectral ensemble kurtosis kurtogram", is presented. In the proposed technique, discrete quasi-analytic wavelet tight frame (QAWTF) expansion methods are incorporated as the detection filters. The QAWTF, constructed based on the dual-tree complex wavelet transform, possesses better vibration transient signature extracting ability and enhanced time-frequency localizability compared with conventional wavelet packet transforms (WPTs). Moreover, in the constructed QAWTF, a non-dyadic ensemble wavelet subband generating strategy is put forward to produce extra wavelet subbands that are capable of identifying fault features located in the transition bands of the WPT.
On the other hand, an enhanced signal impulsiveness evaluating indicator, named "spatial-spectral ensemble kurtosis" (SSEK), is put forward and utilized as the quantitative measure to select optimal analyzing parameters. The SSEK indicator is more robust in evaluating the impulsiveness intensity of vibration signals due to its better ability to suppress Gaussian noise, harmonics and sporadic impulsive shocks. Numerical validations, an experimental test and two engineering applications were used to verify the effectiveness of the proposed technique. The analysis results demonstrate that the proposed technique possesses more robust transient vibration content detection performance in comparison with the original FK and the WPT-based FK method, especially when applied to vibration signals of relatively limited duration.
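The kurtosis-as-impulsiveness idea underlying both SK and SSEK is easy to demonstrate (with synthetic signals of our own choosing, not the paper's data): Gaussian noise has kurtosis near 3, while a repetitive impulse train, the signature of a local bearing or gear defect, scores far higher:

```python
import random

def kurtosis(x):
    # normalized fourth moment: ~3 for Gaussian data, large for
    # impulsive (fault-like) signals; the quantity a kurtogram maximizes
    n = len(x)
    m = sum(x) / n
    var = sum((v - m) ** 2 for v in x) / n
    return sum((v - m) ** 4 for v in x) / (n * var ** 2)

random.seed(1)
gauss = [random.gauss(0.0, 1.0) for _ in range(4096)]
impulsive = [random.gauss(0.0, 0.1) + (5.0 if k % 128 == 0 else 0.0)
             for k in range(4096)]
k_g, k_i = kurtosis(gauss), kurtosis(impulsive)
```

A kurtogram evaluates this statistic on many band-filtered versions of the signal and picks the subband where it peaks; SSEK, as described above, replaces the plain statistic with an ensemble measure that is less easily fooled by sporadic non-fault shocks.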

  5. Application of composite dictionary multi-atom matching in gear fault diagnosis.

    PubMed

    Cui, Lingli; Kang, Chenhui; Wang, Huaqing; Chen, Peng

    2011-01-01

    The sparse decomposition based on matching pursuit is an adaptive sparse representation method for signals. This paper proposes a composite-dictionary multi-atom matching decomposition and reconstruction algorithm, and introduces threshold de-noising into the reconstruction algorithm. Based on the structural characteristics of gear fault signals, a composite dictionary combining the impulse time-frequency dictionary and the Fourier dictionary was constructed, and a genetic algorithm was applied to search for the best matching atom. The analysis results for simulated gear fault signals indicated the effectiveness of the hard threshold, and the impulse or harmonic characteristic components could be extracted separately. Meanwhile, the robustness of the composite-dictionary multi-atom matching algorithm at different noise levels was investigated. To address the effect of data length on the calculation efficiency of the algorithm, an improved segmented decomposition and reconstruction algorithm was proposed, which significantly enhanced the calculation efficiency of the decomposition algorithm. In addition, the multi-atom matching algorithm is shown to be superior to the single-atom matching algorithm in both calculation efficiency and robustness. Finally, the above algorithm was applied to gear fault engineering signals, achieving good results.
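The core matching-pursuit loop is compact enough to sketch. The toy below uses a three-atom harmonic dictionary with exhaustive atom search standing in for the paper's impulse/Fourier composite dictionary and genetic-algorithm search (all numbers are illustrative): it greedily picks the atom best correlated with the residual and subtracts its contribution:

```python
import math

def matching_pursuit(signal, atoms, n_iter=2):
    residual = list(signal)
    picks = []
    for _ in range(n_iter):
        # greedy step: the atom with the largest inner product wins
        coeffs = [sum(r * a for r, a in zip(residual, atom)) for atom in atoms]
        best = max(range(len(atoms)), key=lambda j: abs(coeffs[j]))
        residual = [r - coeffs[best] * a for r, a in zip(residual, atoms[best])]
        picks.append((best, coeffs[best]))
    return picks, residual

def unit(v):
    s = math.sqrt(sum(x * x for x in v))
    return [x / s for x in v]

n = 64
atoms = [unit([math.cos(2 * math.pi * f * i / n) for i in range(n)])
         for f in (2, 5, 9)]                   # unit-norm harmonic atoms
signal = [3 * a + b for a, b in zip(atoms[1], atoms[2])]
picks, res = matching_pursuit(signal, atoms)
```

In the paper's setting the dictionary mixes impulse and Fourier atoms, so the picked atoms separate the impulsive and harmonic components of a gear fault signal; a genetic search replaces the exhaustive `max` when the dictionary is too large to scan.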

  6. CNN universal machine as classification platform: an ART-like clustering algorithm.

    PubMed

    Bálya, David

    2003-12-01

    Fast and robust classification of feature vectors is a crucial task in a number of real-time systems. A cellular neural/nonlinear network universal machine (CNN-UM) can be very efficient as a feature detector. The next step is to post-process the results for object recognition. This paper shows how a robust classification scheme based on adaptive resonance theory (ART) can be mapped to the CNN-UM. Moreover, this mapping is general enough to include different types of feed-forward neural networks. The designed analogic CNN algorithm is capable of classifying the extracted feature vectors while keeping the advantages of ART networks, such as robust, plastic and fault-tolerant behavior. An analogic algorithm is presented for unsupervised classification with tunable sensitivity and automatic new-class creation, and it is extended to supervised classification. The presented binary feature vector classification is implemented on existing standard CNN-UM chips for fast classification. The experimental evaluation shows promising performance, with 100% accuracy on the training set.

  7. Fault-tolerant Control of a Cyber-physical System

    NASA Astrophysics Data System (ADS)

    Roxana, Rusu-Both; Eva-Henrietta, Dulf

    2017-10-01

    Cyber-physical systems represent a new and emerging field in automatic control. Fault-tolerant control is a key component, because modern, large-scale processes must meet high standards of performance, reliability and safety. Fault propagation in large-scale chemical processes can lead to losses of production, energy and raw materials, and even to environmental hazards. The present paper develops a multi-agent fault-tolerant control architecture using robust fractional-order controllers for a (13C) cryogenic separation column cascade. The JADE (Java Agent DEvelopment Framework) platform was used to implement the multi-agent fault-tolerant control system, while the operational model of the process was implemented in the Matlab/Simulink environment. The MACSimJX (Multiagent Control Using Simulink with Jade Extension) toolbox was used to link the control system and the process model. In order to verify the performance and prove the feasibility of the proposed control architecture, several fault simulation scenarios were performed.

  8. Fault detection in mechanical systems with friction phenomena: an online neural approximation approach.

    PubMed

    Papadimitropoulos, Adam; Rovithakis, George A; Parisini, Thomas

    2007-07-01

    In this paper, the problem of fault detection in mechanical systems performing linear motion under the action of friction phenomena is addressed. The friction effects are modeled through the dynamic LuGre model. The proposed architecture is built upon an online neural network (NN) approximator, which requires only the system's position and velocity. The friction internal state is not assumed to be available for measurement. The neural fault detection methodology is analyzed with respect to its robustness and sensitivity properties. Rigorous fault detectability conditions and upper bounds for the detection time are also derived. Extensive simulation results showing the effectiveness of the proposed methodology are provided, including a real case study on an industrial actuator.

  9. A real-time, practical sensor fault-tolerant module for robust EMG pattern recognition.

    PubMed

    Zhang, Xiaorong; Huang, He

    2015-02-19

    Unreliability of surface EMG recordings over time is a challenge for applying the EMG pattern recognition (PR)-controlled prostheses in clinical practice. Our previous study proposed a sensor fault-tolerant module (SFTM) by utilizing redundant information in multiple EMG signals. The SFTM consists of multiple sensor fault detectors and a self-recovery mechanism that can identify anomaly in EMG signals and remove the recordings of the disturbed signals from the input of the pattern classifier to recover the PR performance. While the proposed SFTM has shown great promise, the previous design is impractical. A practical SFTM has to be fast enough, lightweight, automatic, and robust under different conditions with or without disturbances. This paper presented a real-time, practical SFTM towards robust EMG PR. A novel fast LDA retraining algorithm and a fully automatic sensor fault detector based on outlier detection were developed, which allowed the SFTM to promptly detect disturbances and recover the PR performance immediately. These components of SFTM were then integrated with the EMG PR module and tested on five able-bodied subjects and a transradial amputee in real-time for classifying multiple hand and wrist motions under different conditions with different disturbance types and levels. The proposed fast LDA retraining algorithm significantly shortened the retraining time from nearly 1 s to less than 4 ms when tested on the embedded system prototype, which demonstrated the feasibility of a nearly "zero-delay" SFTM that is imperceptible to the users. The results of the real-time tests suggested that the SFTM was able to handle different types of disturbances investigated in this study and significantly improve the classification performance when one or multiple EMG signals were disturbed. In addition, the SFTM could also maintain the system's classification performance when there was no disturbance. 
This paper presented a real-time, lightweight, and automatic SFTM, which paved the way for reliable and robust EMG PR for prosthesis control.
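
    The fast-retraining idea can be illustrated with a toy sketch: a minimal LDA classifier that, when a channel is flagged as faulty, drops that channel's feature columns and refits on the remaining features. This is a hypothetical illustration under assumed feature layouts, not the authors' embedded implementation.

```python
import numpy as np

def lda_fit(X, y):
    """Fit a simple LDA classifier: class means plus a pooled covariance."""
    classes = np.unique(y)
    means = np.array([X[y == c].mean(axis=0) for c in classes])
    # Pooled within-class covariance, lightly regularized for stability.
    cov = sum(np.cov(X[y == c], rowvar=False) * (np.sum(y == c) - 1)
              for c in classes) / (len(X) - len(classes))
    cov += 1e-6 * np.eye(X.shape[1])
    return classes, means, np.linalg.inv(cov)

def lda_predict(model, X):
    classes, means, icov = model
    # Linear discriminant score for each class (equal priors assumed).
    scores = X @ icov @ means.T - 0.5 * np.sum(means @ icov * means, axis=1)
    return classes[np.argmax(scores, axis=1)]

def retrain_without_channel(X, y, channel, feats_per_channel):
    """'Recovery' step: drop the faulty channel's columns and refit."""
    keep = np.ones(X.shape[1], bool)
    start = channel * feats_per_channel
    keep[start:start + feats_per_channel] = False
    return lda_fit(X[:, keep], y), keep
```

    Because refitting LDA is just a few matrix operations on the reduced feature set, retraining after a detected fault can be made very fast, consistent with the millisecond-scale retraining reported above.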

  10. Potential fault region detection in TFDS images based on convolutional neural network

    NASA Astrophysics Data System (ADS)

    Sun, Junhua; Xiao, Zhongwen

    2016-10-01

    In recent years, more than 300 sets of the Trouble of Running Freight Train Detection System (TFDS) have been installed on railways to monitor the safety of running freight trains in China. However, TFDS is only responsible for capturing, transmitting, and storing images, and fails to recognize faults automatically due to difficulties such as the diversity and complexity of faults and some low-quality images. To improve the performance of automatic fault recognition, it is of great importance to locate the potential fault areas. In this paper, we first introduce a convolutional neural network (CNN) model to TFDS and propose a potential fault region detection system (PFRDS) for simultaneously detecting four typical types of potential fault regions (PFRs). The experimental results show that this system achieves high detection performance for PFRs in TFDS. An average detection recall of 98.95% and precision of 100% are obtained, demonstrating high detection ability and robustness against various poor imaging situations.

  11. Rolling Bearing Fault Diagnosis Based on an Improved HTT Transform

    PubMed Central

    Tang, Guiji; Tian, Tian; Zhou, Chong

    2018-01-01

    When rolling bearing failure occurs, vibration signals generally contain different signal components, such as impulsive fault feature signals, background noise and harmonic interference signals. One of the most challenging aspects of rolling bearing fault diagnosis is how to inhibit noise and harmonic interference signals, while enhancing impulsive fault feature signals. This paper presents a novel bearing fault diagnosis method, namely an improved Hilbert time–time (IHTT) transform, by combining a Hilbert time–time (HTT) transform with principal component analysis (PCA). Firstly, the HTT transform was performed on vibration signals to derive a HTT transform matrix. Then, PCA was employed to de-noise the HTT transform matrix in order to improve the robustness of the HTT transform. Finally, the diagonal time series of the de-noised HTT transform matrix was extracted as the enhanced impulsive fault feature signal, and the contained fault characteristic information was identified through further analyses of the amplitude and envelope spectra. Both simulated and experimental analyses validated the superiority of the presented method for detecting bearing failures. PMID:29662013
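
    The PCA de-noising step can be sketched on a generic (time x time) matrix: keep only the leading principal components via a truncated SVD, then read off the diagonal as the enhanced signal. This is a minimal illustration of the idea; the matrix construction and component count are assumptions, not the paper's HTT implementation.

```python
import numpy as np

def pca_denoise(M, n_components):
    """Keep only the leading principal components of a (time x time) matrix."""
    mean = M.mean(axis=0)
    U, s, Vt = np.linalg.svd(M - mean, full_matrices=False)
    s = s.copy()
    s[n_components:] = 0.0          # discard low-variance (noise) directions
    return U @ np.diag(s) @ Vt + mean

def enhanced_signal(M, n_components=2):
    """Diagonal of the de-noised matrix ~ enhanced impulsive feature signal."""
    return np.diag(pca_denoise(M, n_components))
```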

  12. Fault Mitigation Schemes for Future Spaceflight Multicore Processors

    NASA Technical Reports Server (NTRS)

    Alexander, James W.; Clement, Bradley J.; Gostelow, Kim P.; Lai, John Y.

    2012-01-01

    Future planetary exploration missions demand significant advances in on-board computing capabilities over current avionics architectures based on a single-core processing element. State-of-the-art multicore processors hold much promise for meeting such challenges, while introducing new fault-tolerance problems when applied to space missions. This paper presents software-based schemes that can achieve system-level fault mitigation beyond that provided by radiation-hard-by-design (RHBD). For mission- and time-critical applications such as Terrain Relative Navigation (TRN) for planetary or small-body navigation and landing, a range of fault tolerance methods can be adapted by the application. The software methods being investigated include Error Correction Code (ECC) for data packet routing between cores, virtual network routing, Triple Modular Redundancy (TMR), and Algorithm-Based Fault Tolerance (ABFT). A robust fault tolerance framework that provides fail-operational behavior under hard real-time constraints and graceful degradation will be demonstrated using TRN executing on a commercial Tilera(R) processor with simulated fault injections.
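
    As a generic illustration of one of the named schemes, TMR reduces to a bitwise majority vote over three redundant copies of a word, so a single corrupted copy is always outvoted. This sketch shows the principle only, not the paper's flight implementation.

```python
def tmr_vote(a: int, b: int, c: int) -> int:
    """Bitwise majority vote: each bit takes the value held by >= 2 copies."""
    return (a & b) | (a & c) | (b & c)
```

    For example, if one copy of a word suffers a radiation-induced bit flip, the two intact copies outvote it and the original value is recovered.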

  13. Stress Field Variation after the 2001 Skyros Earthquake, Greece, Derived from Seismicity Rate Changes

    NASA Astrophysics Data System (ADS)

    Leptokaropoulos, K.; Papadimitriou, E.; Orlecka-Sikora, B.; Karakostas, V.

    2012-04-01

    The spatial variation of the stress field (ΔCFF) after the 2001 strong (Mw=6.4) Skyros earthquake in the North Aegean Sea, Greece, is investigated in association with the changes of earthquake production rates. A detailed slip model is considered in which the causative fault consists of several sub-faults, each with a different coseismic slip. First, the spatial distribution of aftershock productivity is compared with the static stress changes due to the coseismic slip. Calculations of ΔCFF are performed at different depths inside the seismogenic layer, defined from the vertical distribution of the aftershocks. Seismicity rates of the smaller magnitude events with M≥Mc for different time increments before and after the main shock are then derived from the application of a Probability Density Function (PDF). These rates are computed by spatially smoothing the seismicity; for this purpose a normal grid of rectangular cells is superimposed onto the area and the PDF determines seismicity rate values at the center of each cell. The differences between the earthquake occurrence rates before and after the main shock are compared and used as input data in a stress inversion algorithm based upon the Rate/State dependent friction concept in order to provide an independent estimation of stress changes. This model incorporates the physical properties of the fault zones (characteristic relaxation time, fault constitutive parameters, effective friction coefficient) with a probabilistic estimation of the spatial distribution of seismicity rates, derived from the application of the PDF. The stress patterns derived from the previously mentioned approaches are compared, and the quantitative correlation between the respective results is accomplished by evaluating the Pearson linear correlation coefficient and its confidence intervals to quantify their significance. 
    Different assumptions and combinations of the physical and statistical parameters are tested to evaluate the model's performance and robustness. The simulations provide a measure of how robust the use of seismicity rate changes is as a stress meter for both positive and negative stress steps. This work was partially prepared within the framework of the research projects No. N N307234937 and 3935/B/T02/2010/39 financed by the Ministry of Education and Science of Poland during the period 2009 to 2011 and 2010 to 2012, respectively.
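
    The final comparison step, a Pearson coefficient with confidence intervals, can be sketched as follows; the Fisher z-transform interval is the standard construction, and the flattened map inputs are hypothetical.

```python
import math

import numpy as np

def pearson_with_ci(map_a, map_b):
    """Pearson r between two (flattened) stress-change maps plus an
    approximate 95% confidence interval via the Fisher z-transform."""
    a = np.asarray(map_a, float).ravel()
    b = np.asarray(map_b, float).ravel()
    r = np.corrcoef(a, b)[0, 1]
    z = math.atanh(r)                  # Fisher transform stabilizes variance
    se = 1.0 / math.sqrt(a.size - 3)
    z95 = 1.96                         # ~97.5th percentile of the standard normal
    return r, math.tanh(z - z95 * se), math.tanh(z + z95 * se)
```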

  14. Inferring Fault Frictional and Reservoir Hydraulic Properties From Injection-Induced Seismicity

    NASA Astrophysics Data System (ADS)

    Jagalur-Mohan, Jayanth; Jha, Birendra; Wang, Zheng; Juanes, Ruben; Marzouk, Youssef

    2018-02-01

    Characterizing the rheological properties of faults and the evolution of fault friction during seismic slip are fundamental problems in geology and seismology. Recent increases in the frequency of induced earthquakes have intensified the need for robust methods to estimate fault properties. Here we present a novel approach for estimation of aquifer and fault properties, which combines coupled multiphysics simulation of injection-induced seismicity with adaptive surrogate-based Bayesian inversion. In a synthetic 2-D model, we use aquifer pressure, ground displacements, and fault slip measurements during fluid injection to estimate the dynamic fault friction, the critical slip distance, and the aquifer permeability. Our forward model allows us to observe nonmonotonic evolutions of shear traction and slip on the fault resulting from the interplay of several physical mechanisms, including injection-induced aquifer expansion, stress transfer along the fault, and slip-induced stress relaxation. This interplay provides the basis for a successful joint inversion of induced seismicity, yielding well-informed Bayesian posterior distributions of dynamic friction and critical slip. We uncover an inverse relationship between dynamic friction and critical slip distance, which is in agreement with the small dynamic friction and large critical slip reported during seismicity on mature faults.

  15. Integrated Fault Diagnosis Algorithm for Motor Sensors of In-Wheel Independent Drive Electric Vehicles

    PubMed Central

    Jeon, Namju; Lee, Hyeongcheol

    2016-01-01

    An integrated fault-diagnosis algorithm for a motor sensor of in-wheel independent drive electric vehicles is presented. This paper proposes a method that integrates the high- and low-level fault diagnoses to improve the robustness and performance of the system. For the high-level fault diagnosis of vehicle dynamics, a planar two-track non-linear model is first selected, and the longitudinal and lateral forces are calculated. To ensure redundancy of the system, correlation between the sensor and residual in the vehicle dynamics is analyzed to detect and separate the fault of the drive motor system of each wheel. To diagnose the motor system for low-level faults, the state equation of an interior permanent magnet synchronous motor is developed, and a parity equation is used to diagnose the fault of the electric current and position sensors. The validity of the high-level fault-diagnosis algorithm is verified using Carsim and Matlab/Simulink co-simulation. The low-level fault diagnosis is verified through Matlab/Simulink simulation and experiments. Finally, according to the residuals of the high- and low-level fault diagnoses, fault-detection flags are defined. On the basis of this information, an integrated fault-diagnosis strategy is proposed. PMID:27973431

  16. Quantitative fault tolerant control design for a hydraulic actuator with a leaking piston seal

    NASA Astrophysics Data System (ADS)

    Karpenko, Mark

    Hydraulic actuators are complex fluid power devices whose performance can be degraded in the presence of system faults. In this thesis a linear, fixed-gain, fault tolerant controller is designed that can maintain the positioning performance of an electrohydraulic actuator operating under load with a leaking piston seal and in the presence of parametric uncertainties. Developing a control system tolerant to this class of internal leakage fault is important since a leaking piston seal can be difficult to detect, unless the actuator is disassembled. The designed fault tolerant control law is of low-order, uses only the actuator position as feedback, and can: (i) accommodate nonlinearities in the hydraulic functions, (ii) maintain robustness against typical uncertainties in the hydraulic system parameters, and (iii) keep the positioning performance of the actuator within prescribed tolerances despite an internal leakage fault that can bypass up to 40% of the rated servovalve flow across the actuator piston. Experimental tests verify the functionality of the fault tolerant control under normal and faulty operating conditions. The fault tolerant controller is synthesized based on linear time-invariant equivalent (LTIE) models of the hydraulic actuator using the quantitative feedback theory (QFT) design technique. A numerical approach for identifying LTIE frequency response functions of hydraulic actuators from acceptable input-output responses is developed so that linearizing the hydraulic functions can be avoided. The proposed approach can properly identify the features of the hydraulic actuator frequency response that are important for control system design and requires no prior knowledge about the asymptotic behavior or structure of the LTIE transfer functions. 
A distributed hardware-in-the-loop (HIL) simulation architecture is constructed that enables the performance of the proposed fault tolerant control law to be further substantiated, under realistic operating conditions. Using the HIL framework, the fault tolerant hydraulic actuator is operated as a flight control actuator against the real-time numerical simulation of a high-performance jet aircraft. A robust electrohydraulic loading system is also designed using QFT so that the in-flight aerodynamic load can be experimentally replicated. The results of the HIL experiments show that using the fault tolerant controller to compensate the internal leakage fault at the actuator level can benefit the flight performance of the airplane.

  17. Bearing damage assessment using Jensen-Rényi Divergence based on EEMD

    NASA Astrophysics Data System (ADS)

    Singh, Jaskaran; Darpe, A. K.; Singh, S. P.

    2017-03-01

    An Ensemble Empirical Mode Decomposition (EEMD) and Jensen-Rényi divergence (JRD) based methodology is proposed for the degradation assessment of rolling element bearings using vibration data. The EEMD decomposes vibration signals into a set of intrinsic mode functions (IMFs). A systematic methodology to select IMFs that are sensitive and closely related to the fault is proposed in the paper. The change in the probability distribution of the energies of the sensitive IMFs is measured through the JRD, which acts as a damage identification parameter. Evaluating the JRD on sensitive IMFs makes it largely unaffected by changes/fluctuations in operating conditions. Further, an algorithm based on Chebyshev's inequality is applied to the JRD to identify exact points of change in bearing health and remove outliers. The identified change points are investigated for fault classification as possible locations where specific defect initiation could have taken place. For fault classification, two new parameters are proposed: 'α value' and Probable Fault Index, which together classify the fault. To standardize the degradation process, a Confidence Value parameter is proposed to quantify the bearing degradation value in a range of zero to unity. A simulation study is first carried out to demonstrate the robustness of the proposed JRD parameter under variable operating conditions of load and speed. The proposed methodology is then validated on experimental data (seeded defect data and accelerated bearing life test data). The first validation, on two different vibration datasets (inner/outer) obtained from seeded defect experiments, demonstrates the effectiveness of the JRD parameter in detecting a change in health state as the severity of the fault changes. The second validation is on two accelerated life tests. The results demonstrate the proposed approach as a potential tool for bearing performance degradation assessment.
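
    The JRD has a compact definition: the Rényi entropy of the weighted mixture of distributions minus the weighted mean of the member entropies. A minimal sketch follows; the IMF-energy inputs, weights, and α value are placeholders, not the paper's settings.

```python
import numpy as np

def renyi_entropy(p, alpha):
    """Renyi entropy of a discrete distribution (alpha != 1)."""
    p = np.asarray(p, float)
    p = p / p.sum()
    return np.log(np.sum(p ** alpha)) / (1.0 - alpha)

def jensen_renyi(dists, alpha=2.0, weights=None):
    """JRD: Renyi entropy of the mixture minus mean member entropy."""
    dists = [np.asarray(d, float) / np.sum(d) for d in dists]
    if weights is None:
        weights = np.full(len(dists), 1.0 / len(dists))
    mix = sum(w * d for w, d in zip(weights, dists))
    return renyi_entropy(mix, alpha) - sum(
        w * renyi_entropy(d, alpha) for w, d in zip(weights, dists))
```

    Identical energy distributions give a divergence of zero, while a drift in the IMF energy distribution, e.g. as a defect grows, drives the JRD upward.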

  18. Estimates of Cutoff Depths of Seismogenic Layer in Kanto Region from the High-Resolution Relocated Earthquake Catalog

    NASA Astrophysics Data System (ADS)

    Takeda, T.; Yano, T. E.; Shiomi, K.

    2013-12-01

    Highly developed active-fault evaluation is particularly necessary in the Kanto metropolitan area, where multiple major active fault zones exist. The cutoff depth of active faults is one of the important parameters since it is a good indicator for defining fault dimensions and hence the maximum expected magnitude. The depth is normally estimated from microseismicity, thermal structure, and the depths of the Curie point and Conrad discontinuity. For instance, Omuralieva et al. (2012) estimated the cutoff depths for the whole of Japan by creating a 3-D relocated hypocenter catalog. However, its spatial resolution could be insufficient for robust active-fault evaluation, since a precision within 15 km, comparable to the minimum evaluated fault size, is preferred. Therefore, the spatial resolution of the earthquake catalog used to estimate the cutoff depth is required to be finer than 15 km. This year we launched the Japan Unified hIgh-resolution relocated Catalog for Earthquakes (JUICE) Project (Yano et al., this fall meeting), whose objective is to create a precise and reliable earthquake catalog for all of Japan, using waveform cross-correlation data and the Double-Difference relocation method (Waldhauser and Ellsworth, 2000). This catalog has higher precision of hypocenter determination than the routine one. In this study, we estimate high-resolution cutoff depths of the seismogenic layer using this catalog for the Kanto region, where preliminary JUICE analysis has already been done. D90, the cutoff depth above which 90% of earthquakes occur, is often used as a reference to characterize the seismogenic layer. The choice of 90% reflects the uncertainties arising from the depth errors of hypocenters. In this study we estimate D95 because a more precise and reliable catalog is now available from the JUICE project. First, we generate a 10 km equally spaced grid over our study area. 
    Second, we pick hypocenters within a radius of 10 km of each grid point and arrange them into hypocenter groups. Finally, we estimate D95 from the hypocenter group at each grid point. During the analysis we apply three conditions: (1) the depths of the hypocenters used are less than 25 km; (2) the minimum number of hypocenters in a group is 25; and (3) low-frequency earthquakes are excluded. Our estimate of D95 shows undulating and fine features, such as a different profile along the same fault. This can be seen at two major fault zones: (1) the Tachikawa fault zone, and (2) the northwest marginal fault zone of the Kanto basin. D95 gets deeper from northwest to southwest along these fault zones, suggesting that a constant cutoff depth cannot be used even along the same fault zone. One pattern in our D95 estimates is a deepening in the south Kanto region. The reason for this pattern could be that the hypocenters used in this study are contaminated by seismicity near the plate boundary between the Philippine Sea plate and the Eurasian plate. Therefore, D95 in the south Kanto region should be interpreted carefully.
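
    The per-grid-point D95 computation reduces to a percentile of the grouped hypocenter depths under the three conditions above; this sketch assumes depths in km and the thresholds quoted in the abstract.

```python
import numpy as np

def cutoff_depth(depths, fraction=0.95, max_depth=25.0, min_events=25):
    """D95: the depth above which `fraction` of crustal events occur."""
    d = np.asarray(depths, float)
    d = d[d < max_depth]            # condition (1): discard deeper events
    if d.size < min_events:         # condition (2): too few events -> no estimate
        return None
    return float(np.percentile(d, 100.0 * fraction))
```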

  19. Sensor fault diagnosis of singular delayed LPV systems with inexact parameters: an uncertain system approach

    NASA Astrophysics Data System (ADS)

    Hassanabadi, Amir Hossein; Shafiee, Masoud; Puig, Vicenc

    2018-01-01

    In this paper, sensor fault diagnosis of a singular delayed linear parameter varying (LPV) system is considered. In the considered system, the model matrices depend on some parameters which are measurable in real time. The case of inexact parameter measurements, which is close to real situations, is considered. Fault diagnosis in this system is achieved via fault estimation. For this purpose, an augmented system is created by including sensor faults as additional system states. Then, an unknown input observer (UIO) is designed which estimates both the system states and the faults in the presence of measurement noise, disturbances and uncertainty induced by the inexactly measured parameters. The error dynamics and the original system constitute an uncertain system due to inconsistencies between the real and measured values of the parameters. Then, the robust estimation of the system states and the faults is achieved with H∞ performance and formulated as a set of linear matrix inequalities (LMIs). The designed UIO is also applicable to fault diagnosis of singular delayed LPV systems with unmeasurable scheduling variables. The efficiency of the proposed approach is illustrated with an example.

  20. Fault diagnosis and fault-tolerant finite control set-model predictive control of a multiphase voltage-source inverter supplying BLDC motor.

    PubMed

    Salehifar, Mehdi; Moreno-Equilaz, Manuel

    2016-01-01

    Due to its fault tolerance, a multiphase brushless direct current (BLDC) motor can meet the high reliability demands of electric vehicle applications. The voltage-source inverter (VSI) supplying the motor is subject to open-circuit faults. Therefore, it is necessary to design a fault-tolerant (FT) control algorithm with an embedded fault diagnosis (FD) block. In this paper, finite control set-model predictive control (FCS-MPC) is developed to implement the fault-tolerant control algorithm of a five-phase BLDC motor. The developed control method is fast, simple, and flexible. An FD method based on information already available from the control block is proposed; this method is simple, robust to common transients in the motor, and able to localize multiple open-circuit faults. The proposed FD and FT control algorithms are embedded in a five-phase BLDC motor drive. In order to validate the theory presented, simulations and experiments are conducted on a five-phase two-level VSI supplying a five-phase BLDC motor. Copyright © 2015 ISA. Published by Elsevier Ltd. All rights reserved.

  1. Software reliability studies

    NASA Technical Reports Server (NTRS)

    Hoppa, Mary Ann; Wilson, Larry W.

    1994-01-01

    There are many software reliability models which try to predict future performance of software based on data generated by the debugging process. Our research has shown that by improving the quality of the data one can greatly improve the predictions. We are working on methodologies which control some of the randomness inherent in the standard data generation processes in order to improve the accuracy of predictions. Our contribution is twofold in that we describe an experimental methodology using a data structure called the debugging graph and apply this methodology to assess the robustness of existing models. The debugging graph is used to analyze the effects of various fault recovery orders on the predictive accuracy of several well-known software reliability algorithms. We found that, along a particular debugging path in the graph, the predictive performance of different models can vary greatly. Similarly, just because a model 'fits' a given path's data well does not guarantee that the model would perform well on a different path. Further, we observed bug interactions and noted their potential effects on the predictive process. We saw that not only do different faults fail at different rates, but that those rates can be affected by the particular debugging stage at which the rates are evaluated. Based on our experiment, we conjecture that the accuracy of a reliability prediction is affected by the fault recovery order as well as by fault interaction.

  2. Fault-tolerant symmetrically-private information retrieval

    NASA Astrophysics Data System (ADS)

    Wang, Tian-Yin; Cai, Xiao-Qiu; Zhang, Rui-Ling

    2016-08-01

    We propose two symmetrically-private information retrieval protocols based on quantum key distribution, which provide a good degree of database and user privacy while being flexible, loss-resistant and easily generalized to a large database similar to the precedent works. Furthermore, one protocol is robust to a collective-dephasing noise, and the other is robust to a collective-rotation noise.

  3. Evaluating Seasonal Deformation in the Vicinity of Active Fault Structures in Central California Using GPS Data

    NASA Astrophysics Data System (ADS)

    Kraner, Meredith L.

    Central California is a tectonically active region in the Western United States, which encompasses segments of both the San Andreas and Calaveras Faults and centers around the town of Parkfield, California. Recently, statistical studies of microseismicity suggest that earthquake rates in this region can vary seasonally. Also, studies using data from modern GPS networks have revealed that crustal deformation can be influenced by seasonal and nontectonic factors, such as hydrological, temperature, and atmospheric loads. Here we analyze eight years (2008-2016) of GPS data and build on this idea by developing a robust seasonal model of dilatational and shear strain in Central California. Using an inversion, we model each GPS time series in our study region to derive seasonal horizontal displacements for each month of the year. These positions are detrended using robust MIDAS velocities, destepped using a Heaviside function, and demeaned to center the time series around zero. The stations we use are carefully chosen using a selection method which allows us to exclude stations located on unstable, heavily subsiding ground and include stations on sturdy bedrock. In building our seasonal strain model, we first filter these monthly seasonal horizontal displacements using a median-spatial-filter technique called GPS Imaging to remove outliers and enhance the signal common to multiple stations. We then grid these seasonal horizontal filtered displacements and use them to model our dilatational and shear strain field for each month of the year. We set up our model such that a large portion of the strain in the region is accommodated on or near the San Andreas and Calaveras Faults. We test this setup using two sets of synthetic data and explore how varying the a priori faulting constraints of the on- and off-fault standard deviations in the strain tensor affects the output of the model. 
We additionally extract strain time series for key regions along/near the San Andreas and Calaveras Faults. We find that the most prevalent seasonal strain signal exists in the main creeping section along the San Andreas Fault in Central California. This region, which runs from Parkfield to Bitterwater Valley, shows peaks in contraction (negative dilatation) during the wet period (February/March) and peaks in extension (positive dilatation) during the dry period (August/September). The north transitional creeping section along the San Andreas Fault and the Calaveras Fault displays general similarities with the main creeping section trend. In sharp contrast, seasonality is virtually undetected in the locked section of the San Andreas Fault south of the town of Cholame. Additionally, the southern transitional creeping section shows two distinct patterns. For the most part this region, between Parkfield and Cholame, shows peaks in contraction during the wet period (February/March) and peaks in extension during the dry period (August/September), similar to the main creeping section. However, the segment of the southern transitional creeping section surrounding the town of Cholame opposes this trend with peaks in extension during the wet period and peaks in contraction during the dry period. We postulate several causes for this seasonal signal, which we plan to explore further in future work.
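
    The detrend/destep/demean preprocessing described above can be sketched for a single position component; the MIDAS rate, step epoch, and step size are assumed to be known inputs here.

```python
import numpy as np

def preprocess(t, pos, rate, step_time=None, step_size=0.0):
    """Detrend with a given (e.g. MIDAS) rate, remove a Heaviside step
    offset, and demean a GPS position time series."""
    t = np.asarray(t, float)
    x = np.asarray(pos, float) - rate * t          # detrend
    if step_time is not None:
        x = x - step_size * (t >= step_time)       # Heaviside destep
    return x - x.mean()                            # demean
```

    What remains after these three steps is the seasonal (plus noise) signal centered on zero, ready for the monthly displacement inversion.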

  4. Hybrid routing technique for a fault-tolerant, integrated information network

    NASA Technical Reports Server (NTRS)

    Meredith, B. D.

    1986-01-01

    The evolutionary growth of the space station and the diverse activities onboard are expected to require a hierarchy of integrated, local area networks capable of supporting data, voice, and video communications. In addition, fault-tolerant network operation is necessary to protect communications between critical systems attached to the net and to relieve the valuable human resources onboard the space station of time-critical data system repair tasks. A key issue for the design of the fault-tolerant, integrated network is the development of a robust routing algorithm which dynamically selects the optimum communication paths through the net. A routing technique is described that adapts to topological changes in the network to support fault-tolerant operation and system evolvability.

  5. An Integrated Framework for Model-Based Distributed Diagnosis and Prognosis

    NASA Technical Reports Server (NTRS)

    Bregon, Anibal; Daigle, Matthew J.; Roychoudhury, Indranil

    2012-01-01

    Diagnosis and prognosis are necessary tasks for system reconfiguration and fault-adaptive control in complex systems. Diagnosis consists of detection, isolation and identification of faults, while prognosis consists of prediction of the remaining useful life of systems. This paper presents a novel integrated framework for model-based distributed diagnosis and prognosis, where system decomposition is used to enable the diagnosis and prognosis tasks to be performed in a distributed way. We show how different submodels can be automatically constructed to solve the local diagnosis and prognosis problems. We illustrate our approach using a simulated four-wheeled rover for different fault scenarios. Our experiments show that our approach correctly performs distributed fault diagnosis and prognosis in an efficient and robust manner.

  6. Feature Detection in SAR Interferograms With Missing Data Displays Fault Slip Near El Mayor-Cucapah and South Napa Earthquakes

    NASA Astrophysics Data System (ADS)

    Parker, J. W.; Donnellan, A.; Glasscoe, M. T.; Stough, T.

    2015-12-01

    Edge detection identifies seismic or aseismic fault motion, as demonstrated in repeat-pass interferograms obtained by the Uninhabited Aerial Vehicle Synthetic Aperture Radar (UAVSAR) program. But this identification, demonstrated in 2010, was not robust: for best results, it requires a flattened background image, interpolation into missing data (holes) and outliers, and background noise that is either sufficiently small or roughly white Gaussian. Proper treatment of missing data, bursting noise patches, and tiny noise differences at short distances apart from bursts is essential to creating an acceptably reliable method sensitive to small near-surface fractures. Clearly a robust method is needed for machine scanning of the thousands of UAVSAR repeat-pass interferograms for evidence of fault slip, landslides, and other local features: hand-crafted intervention will not do. Effective methods of identifying, removing and filling in bad pixels reveal significant features of surface fractures. A rich network of edges (probably fractures and subsidence) in difference images spanning the South Napa earthquake gives way to a simple set of postseismically slipping faults. Coseismic El Mayor-Cucapah interferograms compared to post-seismic difference images show nearly disjoint patterns of surface fractures in California's Sonoran Desert; the combined pattern reveals a network of near-perpendicular, probably conjugate faults not mapped before the earthquake. The current algorithms for UAVSAR interferogram edge detection are shown to be effective in difficult environments, including agricultural (Napa, Imperial Valley) and difficult urban areas (Orange County).
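
    The two preprocessing requirements named above, filling holes before computing gradients, can be illustrated with a bare-bones sketch: a single-pass 3x3 NaN fill followed by a gradient-magnitude edge map. The real UAVSAR pipeline is considerably more elaborate; this is only the shape of the idea.

```python
import numpy as np

def fill_holes(img):
    """Replace NaN pixels by the mean of their valid 3x3 neighbors (one pass)."""
    out = img.copy()
    padded = np.pad(out, 1, constant_values=np.nan)
    for i, j in zip(*np.where(np.isnan(out))):
        patch = padded[i:i + 3, j:j + 3]
        vals = patch[~np.isnan(patch)]
        out[i, j] = vals.mean() if vals.size else 0.0
    return out

def edge_map(img, thresh):
    """Gradient-magnitude edge detector on the hole-filled interferogram."""
    gy, gx = np.gradient(fill_holes(img))
    return np.hypot(gx, gy) > thresh
```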

  7. Deterministic and robust generation of single photons from a single quantum dot with 99.5% indistinguishability using adiabatic rapid passage.

    PubMed

    Wei, Yu-Jia; He, Yu-Ming; Chen, Ming-Cheng; Hu, Yi-Nan; He, Yu; Wu, Dian; Schneider, Christian; Kamp, Martin; Höfling, Sven; Lu, Chao-Yang; Pan, Jian-Wei

    2014-11-12

    Single photons are attractive candidates for quantum bits (qubits) in quantum computation and are the best messengers in quantum networks. Future scalable, fault-tolerant photonic quantum technologies demand both stringently high levels of photon indistinguishability and generation efficiency. Here, we demonstrate deterministic and robust generation of pulsed resonance fluorescence single photons from a single semiconductor quantum dot using adiabatic rapid passage, a method robust against fluctuation of driving pulse area and dipole moments of solid-state emitters. The emitted photons are background-free, have a vanishing two-photon emission probability of 0.3% and a raw (corrected) two-photon Hong-Ou-Mandel interference visibility of 97.9% (99.5%), reaching a precision that places single photons at the threshold for fault-tolerant surface-code quantum computing. This single-photon source can be readily scaled up to multiphoton entanglement and used for quantum metrology, boson sampling, and linear optical quantum computing.

  8. Spatial Evaluation and Verification of Earthquake Simulators

    NASA Astrophysics Data System (ADS)

    Wilson, John Max; Yoder, Mark R.; Rundle, John B.; Turcotte, Donald L.; Schultz, Kasey W.

    2017-06-01

    In this paper, we address the problem of verifying earthquake simulators with observed data. Earthquake simulators are a class of computational simulations which attempt to mirror the topological complexity of fault systems on which earthquakes occur. In addition, the physics of friction and elastic interactions between fault elements are included in these simulations. Simulation parameters are adjusted so that natural earthquake sequences are matched in their scaling properties. Physically based earthquake simulators can generate many thousands of years of simulated seismicity, allowing for a robust capture of the statistical properties of large, damaging earthquakes that have long recurrence time scales. Verification of simulations against currently observed earthquake seismicity is necessary, and following past simulator and forecast-model verification methods, we address the challenges of applying spatial forecast verification to simulators; namely, that simulator outputs are confined to the modeled faults, while observed earthquake epicenters often occur off of known faults. We present two methods for addressing this discrepancy: a simplistic approach whereby observed earthquakes are shifted to the nearest fault element, and a smoothing method based on the power laws of the epidemic-type aftershock sequence (ETAS) model, which distributes the seismicity of each simulated earthquake over the entire test region at a rate that decays with epicentral distance. To test these methods, a receiver operating characteristic plot was produced by comparing the rate maps to observed m>6.0 earthquakes in California since 1980. We found that the nearest-neighbor mapping produced poor forecasts, while the ETAS power-law method produced rate maps that agreed reasonably well with observations.
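    The power-law smoothing and ROC scoring described above can be sketched as follows; the kernel form `(d0 + r)**-q`, its parameters, and the toy grid are illustrative assumptions, not the paper's calibrated values:

```python
import numpy as np

def powerlaw_rate_map(grid_x, grid_y, events, d0=1.0, q=1.5):
    """Spread each simulated epicenter over the whole grid with a rate
    decaying as (d0 + r)**-q in epicentral distance r (ETAS-style kernel;
    d0 and q are placeholder parameters)."""
    gx, gy = np.meshgrid(grid_x, grid_y)
    rate = np.zeros(gx.shape, float)
    for ex, ey in events:
        r = np.hypot(gx - ex, gy - ey)
        rate += (d0 + r) ** -q
    return rate / rate.sum()

def roc_curve(rate, hits_mask):
    """Hit rate vs false-alarm rate as grid cells are alarmed in order
    of decreasing forecast rate."""
    order = np.argsort(rate.ravel())[::-1]
    hits = hits_mask.ravel()[order]
    tpr = np.cumsum(hits) / max(hits.sum(), 1)
    fpr = np.cumsum(~hits) / max((~hits).sum(), 1)
    return fpr, tpr

x = y = np.arange(0.0, 10.0, 1.0)
sim_events = [(3.0, 3.0), (7.0, 6.0)]          # simulated epicenters
rate = powerlaw_rate_map(x, y, sim_events)
observed = np.zeros((10, 10), bool)
observed[3, 3] = True                           # observed event near a simulated one
fpr, tpr = roc_curve(rate, observed)
```

    Here the observed event sits in one of the highest-rate cells, so the ROC curve rises immediately, which is the behavior a well-smoothed forecast should exhibit.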

  9. Vibration Sensor-Based Bearing Fault Diagnosis Using Ellipsoid-ARTMAP and Differential Evolution Algorithms

    PubMed Central

    Liu, Chang; Wang, Guofeng; Xie, Qinglu; Zhang, Yanchao

    2014-01-01

    Effective fault classification of rolling element bearings provides an important basis for ensuring the safe operation of rotating machinery. In this paper, a novel vibration sensor-based fault diagnosis method using an Ellipsoid-ARTMAP network (EAM) and a differential evolution (DE) algorithm is proposed. The original features are first extracted from vibration signals based on wavelet packet decomposition. Then, a minimum-redundancy maximum-relevancy algorithm is introduced to select the most prominent features so as to decrease feature dimensions. Finally, a DE-based EAM (DE-EAM) classifier is constructed to realize the fault diagnosis. The major characteristic of EAM is that the sample distribution of each category is represented by a hyper-ellipsoid node and a smoothing operation algorithm. Therefore, it can depict the decision boundary of dispersed samples accurately and effectively avoid over-fitting. To optimize the EAM network parameters, the DE algorithm is presented and two objectives, classification accuracy and node count, are simultaneously introduced as the fitness functions. Meanwhile, an exponential criterion is proposed to realize final selection of the optimal parameters. To prove the effectiveness of the proposed method, the vibration signals of four types of rolling element bearings under different loads were collected. Moreover, to improve the robustness of the classifier evaluation, a two-fold cross-validation scheme is adopted and the order of feature samples is randomly arranged ten times within each fold. The results show that the DE-EAM classifier can recognize the fault categories of the rolling element bearings reliably and accurately. PMID:24936949
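    A minimal sketch of the DE optimization step, with a toy quadratic surrogate standing in for the EAM classifier (the real fitness would train the network, measure accuracy, and count its hyper-ellipsoid nodes; all parameter names and penalty weights here are assumptions):

```python
import numpy as np

rng = np.random.default_rng(0)

def fitness(params):
    """Toy surrogate combining the two objectives: reward accuracy,
    penalize node count. Accuracy peaks at vigilance=0.5, smoothing=0.2,
    and node count grows with vigilance (all illustrative)."""
    vigilance, smoothing = params
    accuracy = 1.0 - (vigilance - 0.5) ** 2 - (smoothing - 0.2) ** 2
    nodes = 10.0 * vigilance
    return -(accuracy - 0.01 * nodes)          # DE minimizes

def differential_evolution(f, bounds, pop=20, gens=100, F=0.8, CR=0.9):
    """Plain DE/rand/1/bin: mutate, binomial crossover, greedy selection."""
    lo, hi = np.array(bounds).T
    X = rng.uniform(lo, hi, (pop, len(bounds)))
    fit = np.apply_along_axis(f, 1, X)
    for _ in range(gens):
        for i in range(pop):
            a, b, c = X[rng.choice(pop, 3, replace=False)]
            mutant = np.clip(a + F * (b - c), lo, hi)
            cross = rng.random(len(bounds)) < CR
            trial = np.where(cross, mutant, X[i])
            ft = f(trial)
            if ft < fit[i]:
                X[i], fit[i] = trial, ft
    best = np.argmin(fit)
    return X[best], fit[best]

best, val = differential_evolution(fitness, [(0.0, 1.0), (0.0, 1.0)])
```

    With this surrogate the penalized optimum shifts from vigilance 0.5 to 0.45, showing how the node-count term trades a little accuracy for a smaller network.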

  10. Development of a Standardized Methodology for the Use of COSI-Corr Sub-Pixel Image Correlation to Determine Surface Deformation Patterns in Large Magnitude Earthquakes.

    NASA Astrophysics Data System (ADS)

    Milliner, C. W. D.; Dolan, J. F.; Hollingsworth, J.; Leprince, S.; Ayoub, F.

    2014-12-01

    Coseismic surface deformation is typically measured in the field by geologists and with a range of geophysical methods such as InSAR, LiDAR and GPS. Current methods, however, either fail to capture the near-field coseismic surface deformation pattern where vital information is needed, or lack pre-event data. We develop a standardized and reproducible methodology to fully constrain the near-field coseismic surface deformation pattern in high resolution using aerial photography. We apply our methodology using the program COSI-Corr to successfully cross-correlate pairs of aerial, optical imagery before and after the 1992 Mw 7.3 Landers and 1999 Mw 7.1 Hector Mine earthquakes. This technique allows measurement of the coseismic slip distribution and of the magnitude and width of off-fault deformation with sub-pixel precision. This technique can be applied in a cost-effective manner for recent and historic earthquakes using archive aerial imagery. We also use synthetic tests to constrain and correct for the bias imposed on the result by the use of a sliding window during correlation. Correcting for artificial smearing of the tectonic signal allows us to robustly measure the fault zone width along a surface rupture. Furthermore, the synthetic tests have constrained for the first time the measurement precision and accuracy of estimated fault displacements and fault-zone width. Our methodology provides the unique ability to robustly understand the kinematics of surface faulting while at the same time accounting for both off-fault deformation and the measurement biases that typically complicate such data. For both earthquakes we find that our displacement measurements derived from cross-correlation are systematically larger than the field displacement measurements, indicating the presence of off-fault deformation.
We show that the Landers and Hector Mine earthquakes accommodated 46% and 38% of their displacement away from the primary rupture as off-fault deformation, over mean deformation widths of 183 m and 133 m, respectively. We envisage that correlation results derived from our methodology will provide vital data on near-field deformation patterns and will be of significant use in constraining inversion solutions for fault slip at depth.
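    The off-fault percentage quoted above follows from comparing the total (correlation-derived) displacement with the on-fault (field) displacement. A minimal numeric illustration, with displacement values assumed purely so the fraction reproduces the 46% Landers figure:

```python
# Off-fault deformation fraction: (total - on_fault) / total.
# The two displacements below are illustrative assumptions, not the
# paper's per-site measurements.
total_slip_m = 3.0      # correlation-derived total displacement (assumed)
field_slip_m = 1.62     # field-measured on-fault displacement (assumed)
off_fault_fraction = (total_slip_m - field_slip_m) / total_slip_m
print(round(off_fault_fraction, 2))   # → 0.46
```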

  11. Integrated Approach To Design And Analysis Of Systems

    NASA Technical Reports Server (NTRS)

    Patterson-Hine, F. A.; Iverson, David L.

    1993-01-01

    Object-oriented fault-tree representation unifies evaluation of reliability and diagnosis of faults. Programming/fault tree described more fully in "Object-Oriented Algorithm For Evaluation Of Fault Trees" (ARC-12731). Augmented fault tree object contains more information than fault tree object used in quantitative analysis of reliability. Additional information needed to diagnose faults in system represented by fault tree.

  12. Real-time closed-loop simulation and upset evaluation of control systems in harsh electromagnetic environments

    NASA Technical Reports Server (NTRS)

    Belcastro, Celeste M.

    1989-01-01

    Digital control systems for applications such as aircraft avionics and multibody systems must maintain adequate control integrity in adverse as well as nominal operating conditions. For example, control systems for advanced aircraft, and especially those with relaxed static stability, will be critical to flight and will, therefore, have very high reliability specifications which must be met regardless of operating conditions. In addition, multibody systems such as robotic manipulators performing critical functions must have control systems capable of robust performance in any operating environment in order to complete the assigned task reliably. Severe operating conditions for electronic control systems can result from electromagnetic disturbances caused by lightning, high energy radio frequency (HERF) transmitters, and nuclear electromagnetic pulses (NEMP). For this reason, techniques must be developed to evaluate the integrity of the control system in adverse operating environments. The most difficult and elusive perturbations to computer-based control systems that can be caused by an electromagnetic environment (EME) are functional error modes that involve no component damage. These error modes are collectively known as upset, can occur simultaneously in all of the channels of a redundant control system, and are software dependent. Upset studies performed to date have not addressed the assessment of fault-tolerant systems and do not involve the evaluation of a control system operating in a closed loop with the plant. A methodology is presented for performing a real-time simulation of the closed-loop dynamics of a fault-tolerant control system with a simulated plant operating in an electromagnetically harsh environment. In particular, considerations for performing upset tests on the controller are discussed. 
Some of these considerations are the generation and coupling of analog signals representative of electromagnetic disturbances to a control system under test, analog data acquisition, and digital data acquisition from fault tolerant systems. In addition, a case study of an upset test methodology for a fault tolerant electromagnetic aircraft engine control system is presented.

  13. Functional Fault Modeling Conventions and Practices for Real-Time Fault Isolation

    NASA Technical Reports Server (NTRS)

    Ferrell, Bob; Lewis, Mark; Perotti, Jose; Oostdyk, Rebecca; Brown, Barbara

    2010-01-01

    The purpose of this paper is to present the conventions, best practices, and processes that were established based on the prototype development of a Functional Fault Model (FFM) for a Cryogenic System that would be used for real-time Fault Isolation in a Fault Detection, Isolation, and Recovery (FDIR) system. The FDIR system is envisioned to perform health management functions for both a launch vehicle and the ground systems that support the vehicle during checkout and launch countdown by using a suite of complementary software tools that alert operators to anomalies and failures in real time. The FFMs were created offline but would eventually be used by a real-time reasoner to isolate faults in a Cryogenic System. Through their development and review, a set of modeling conventions and best practices was established. The prototype FFM development also provided a pathfinder for future FFM development processes. This paper documents the rationale and considerations for building robust FFMs that can easily be transitioned to a real-time operating environment.

  14. Integral Sliding Mode Fault-Tolerant Control for Uncertain Linear Systems Over Networks With Signals Quantization.

    PubMed

    Hao, Li-Ying; Park, Ju H; Ye, Dan

    2017-09-01

    In this paper, a new robust fault-tolerant compensation control method for uncertain linear systems over networks is proposed, where only quantized signals are assumed to be available. This approach is based on the integral sliding mode (ISM) method, where two kinds of integral sliding surfaces are constructed. One is the continuous-state-dependent surface, used for sliding mode stability analysis; the other is the quantization-state-dependent surface, used for ISM controller design. A scheme that combines the adaptive ISM controller with a quantization parameter adjustment strategy is then proposed. By utilizing H∞ control analysis techniques, it is shown that once the system is in the sliding mode, disturbance attenuation and fault tolerance are achieved from the initial time without requiring any fault information. Finally, the effectiveness of the proposed ISM fault-tolerant control schemes against quantization errors is demonstrated in simulation.

  15. Fault detection for singular switched linear systems with multiple time-varying delay in finite frequency domain

    NASA Astrophysics Data System (ADS)

    Zhai, Ding; Lu, Anyang; Li, Jinghao; Zhang, Qingling

    2016-10-01

    This paper deals with the problem of fault detection (FD) for continuous-time singular switched linear systems with multiple time-varying delays. In this paper, the actuator fault is considered. Besides, the system faults and unknown disturbances are assumed to lie in known frequency domains. Finite frequency performance indices are first introduced to design the switched FD filters, which ensure that the filtering augmented systems under switching signals with average dwell time are exponentially admissible and guarantee fault input sensitivity and disturbance robustness. By developing the generalised Kalman-Yakubovich-Popov lemma and using Parseval's theorem and the Fourier transform, finite frequency delay-dependent sufficient conditions for the existence of such a filter, which can guarantee the finite-frequency H- and H∞ performance, are derived and formulated in terms of linear matrix inequalities. Four examples are provided to illustrate the effectiveness of the proposed finite frequency method.

  16. Robust Diagnosis Method Based on Parameter Estimation for an Interturn Short-Circuit Fault in Multipole PMSM under High-Speed Operation.

    PubMed

    Lee, Jewon; Moon, Seokbae; Jeong, Hyeyun; Kim, Sang Woo

    2015-11-20

    This paper proposes a diagnosis method for a multipole permanent magnet synchronous motor (PMSM) under an interturn short-circuit fault. Previous works in this area have suffered from uncertainties in the PMSM parameters, which can lead to misdiagnosis. The proposed method estimates the q-axis inductance (Lq) of the faulty PMSM to solve this problem. The proposed method also estimates the faulty phase and the value of G, which serves as an index of the severity of the fault. The q-axis current is used to estimate the faulty phase and the values of G and Lq. To this end, two open-loop observers and a particle-swarm-based optimization method are implemented. The q-axis current of a healthy PMSM is estimated by the open-loop observer with the parameters of a healthy PMSM. The Lq estimation significantly compensates for the estimation errors in high-speed operation. The experimental results demonstrate that the proposed method can estimate the faulty phase, G, and Lq while exhibiting robustness against parameter uncertainties.
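    A minimal sketch of the particle-swarm parameter estimation idea: recover (Lq, G) by minimizing the residual between a "measured" q-axis current and a model prediction. The current model, parameter values, and PSO settings below are all illustrative assumptions, not the paper's motor equations:

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy setup: the "measured" q-axis current is generated from a model with
# the true (Lq, G) below; the optimizer must recover them.
TRUE_LQ, TRUE_G = 0.0021, 0.15
t = np.linspace(0.0, 0.01, 200)

def iq_model(Lq, G):
    """Illustrative q-axis current model: an Lq-scaled fundamental plus a
    fault-severity (G) scaled harmonic."""
    return np.sin(2 * np.pi * 400 * t) / Lq * 1e-3 + G * np.cos(2 * np.pi * 800 * t)

measured = iq_model(TRUE_LQ, TRUE_G)

def cost(p):
    return np.sum((iq_model(*p) - measured) ** 2)

def pso(f, lo, hi, n=30, iters=200, w=0.7, c1=1.5, c2=1.5):
    """Plain global-best particle swarm minimization."""
    lo, hi = np.array(lo), np.array(hi)
    X = rng.uniform(lo, hi, (n, lo.size))
    V = np.zeros(X.shape)
    P, pf = X.copy(), np.apply_along_axis(f, 1, X)
    g = P[np.argmin(pf)].copy()
    for _ in range(iters):
        V = w * V + c1 * rng.random(X.shape) * (P - X) + c2 * rng.random(X.shape) * (g - X)
        X = np.clip(X + V, lo, hi)
        fx = np.apply_along_axis(f, 1, X)
        better = fx < pf
        P[better], pf[better] = X[better], fx[better]
        g = P[np.argmin(pf)].copy()
    return g

est_Lq, est_G = pso(cost, [1e-3, 0.0], [5e-3, 1.0])
```

    Because the residual has a single well-defined minimum at the true parameters, the swarm recovers both Lq and G to good accuracy on this toy problem.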

  17. Fault-tolerant control of large space structures using the stable factorization approach

    NASA Technical Reports Server (NTRS)

    Razavi, H. C.; Mehra, R. K.; Vidyasagar, M.

    1986-01-01

    Large space structures are characterized by the following features: they are in general infinite-dimensional systems, and have large numbers of undamped or lightly damped poles. Any attempt to apply linear control theory to large space structures must therefore take into account these features. Phase I consisted of an attempt to apply the recently developed Stable Factorization (SF) design philosophy to problems of large space structures, with particular attention to the aspects of robustness and fault tolerance. The final report on the Phase I effort consists of four sections, each devoted to one task. The first three sections report theoretical results, while the last consists of a design example. Significant results were obtained in all four tasks of the project. More specifically, an innovative approach to order reduction was obtained, stabilizing controller structures for plants with an infinite number of unstable poles were determined under some conditions, conditions for simultaneous stabilizability of an infinite number of plants were explored, and a fault tolerance controller design that stabilizes a flexible structure model was obtained which is robust against one failure condition.

  18. Robust Routing Protocol For Digital Messages

    NASA Technical Reports Server (NTRS)

    Marvit, Maclen

    1994-01-01

    Refinement of digital-message-routing protocol increases fault tolerance of polled networks. AbNET-3 is latest of generic AbNET protocols for transmission of messages among computing nodes. AbNET concept described in "Multiple-Ring Digital Communication Network" (NPO-18133). Specifically aimed at increasing fault tolerance of network in broadcast mode, in which one node broadcasts message to and receives responses from all other nodes. Communication in network of computers maintained even when links fail.

  19. UWE-3, in-orbit performance and lessons learned of a modular and flexible satellite bus for future pico-satellite formations

    NASA Astrophysics Data System (ADS)

    Busch, S.; Bangert, P.; Dombrovski, S.; Schilling, K.

    2015-12-01

    Formations of small satellites offer promising perspectives due to improved temporal and spatial coverage and resolution at reasonable costs. The UWE program addresses in-orbit demonstrations of key technologies to enable formations of cooperating distributed spacecraft at the pico-satellite level. In this context, the CubeSat UWE-3 addresses experiments for the evaluation of real-time attitude determination and control. UWE-3 also introduces a modular and flexible pico-satellite bus as a robust and extensible base for future missions. The technical objective was very low power consumption of the COTS-based system while nevertheless providing robust performance of this miniature satellite through advanced microprocessor redundancy and fault detection, identification and recovery software. This contribution addresses the UWE-3 design and mission results with emphasis on the operational experiences of the attitude determination and control system.

  20. Traffic protection in MPLS networks using an off-line flow optimization model

    NASA Astrophysics Data System (ADS)

    Krzesinski, Anthony E.; Muller, Karen E.

    2002-07-01

    MPLS-based recovery is intended to effect rapid and complete restoration of traffic affected by a fault in an MPLS network. Two MPLS-based recovery models have been proposed: IP re-routing which establishes recovery paths on demand, and protection switching which works with pre-established recovery paths. IP re-routing is robust and frugal since no resources are pre-committed but is inherently slower than protection switching which is intended to offer high reliability to premium services where fault recovery takes place at the 100 ms time scale. We present a model of protection switching in MPLS networks. A variant of the flow deviation method is used to find and capacitate a set of optimal label switched paths. The traffic is routed over a set of working LSPs. Global repair is implemented by reserving a set of pre-established recovery LSPs. An analytic model is used to evaluate the MPLS-based recovery mechanisms in response to bi-directional link failures. A simulation model is used to evaluate the MPLS recovery cycle in terms of the time needed to restore the traffic after a uni-directional link failure. The models are applied to evaluate the effectiveness of protection switching in networks consisting of between 20 and 100 nodes.
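    The contrast between pre-established recovery paths and on-demand re-routing can be illustrated with a generic shortest-path sketch: compute a working LSP, then pre-compute a link-disjoint recovery LSP to switch to on failure. This is a plain Dijkstra illustration on a toy topology, not the paper's flow-deviation optimization:

```python
import heapq

def dijkstra(graph, src, dst, banned=frozenset()):
    """Cheapest path from src to dst, skipping any link in `banned`."""
    pq, seen = [(0, src, [src])], set()
    while pq:
        cost, node, path = heapq.heappop(pq)
        if node == dst:
            return path
        if node in seen:
            continue
        seen.add(node)
        for nxt, w in graph.get(node, []):
            if frozenset((node, nxt)) in banned or nxt in seen:
                continue
            heapq.heappush(pq, (cost + w, nxt, path + [nxt]))
    return None

# Toy 5-node network (bidirectional links with costs).
links = {("A", "B"): 1, ("B", "C"): 1, ("A", "D"): 2,
         ("D", "E"): 2, ("E", "C"): 2, ("B", "E"): 2}
graph = {}
for (u, v), w in links.items():
    graph.setdefault(u, []).append((v, w))
    graph.setdefault(v, []).append((u, w))

working = dijkstra(graph, "A", "C")                 # working LSP
used = {frozenset(p) for p in zip(working, working[1:])}
recovery = dijkstra(graph, "A", "C", banned=used)   # pre-established backup LSP
```

    Because the recovery LSP shares no link with the working LSP, traffic can be switched to it immediately when any working link fails, which is what makes protection switching faster than on-demand re-routing.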

  1. Study on the evaluation method for fault displacement based on characterized source model

    NASA Astrophysics Data System (ADS)

    Tonagi, M.; Takahama, T.; Matsumoto, Y.; Inoue, N.; Irikura, K.; Dalguer, L. A.

    2016-12-01

    IAEA Specific Safety Guide (SSG) 9 describes that probabilistic methods for evaluating fault displacement should be used if no sufficient basis is provided to decide conclusively, using the deterministic methodology, that the fault is not capable. In addition, the International Seismic Safety Centre has compiled, as an ANNEX to SSG-9, guidance on seismic hazard for nuclear facilities, and shows the utility of the deterministic and probabilistic evaluation methods for fault displacement. In Japan, it is required that important nuclear facilities be established on ground where fault displacement will not arise when earthquakes occur in the future. Given these circumstances and requirements, we need to develop evaluation methods for fault displacement to enhance safety in nuclear facilities. We are studying deterministic and probabilistic methods with tentative analyses using observed records, such as surface fault displacements and near-fault strong ground motions, of inland crustal earthquakes in which fault displacements arose. In this study, we introduce the concept of evaluation methods for fault displacement. We then show parts of the tentative analysis results for the deterministic method as follows: (1) For the 1999 Chi-Chi earthquake, referring to the slip distribution estimated by waveform inversion, we construct a characterized source model (Miyake et al., 2003, BSSA) which can explain the observed near-fault broadband strong ground motions. (2) Referring to the characterized source model constructed in (1), we study an evaluation method for surface fault displacement using a hybrid method, which combines the particle method and the distinct element method. Finally, we suggest a deterministic method to evaluate fault displacement based on a characterized source model. This research was part of the 2015 research project `Development of evaluating method for fault displacement` by the Secretariat of Nuclear Regulation Authority (S/NRA), Japan.

  2. Experimental Demonstration of Fault-Tolerant State Preparation with Superconducting Qubits.

    PubMed

    Takita, Maika; Cross, Andrew W; Córcoles, A D; Chow, Jerry M; Gambetta, Jay M

    2017-11-03

    Robust quantum computation requires encoding delicate quantum information into degrees of freedom that are hard for the environment to change. Quantum encodings have been demonstrated in many physical systems by observing and correcting storage errors, but applications require not just storing information; we must accurately compute even with faulty operations. The theory of fault-tolerant quantum computing illuminates a way forward by providing a foundation and collection of techniques for limiting the spread of errors. Here we implement one of the smallest quantum codes in a five-qubit superconducting transmon device and demonstrate fault-tolerant state preparation. We characterize the resulting code words through quantum process tomography and study the free evolution of the logical observables. Our results are consistent with fault-tolerant state preparation in a protected qubit subspace.

  3. Neural-Network-Based Adaptive Decentralized Fault-Tolerant Control for a Class of Interconnected Nonlinear Systems.

    PubMed

    Li, Xiao-Jian; Yang, Guang-Hong

    2018-01-01

    This paper is concerned with the adaptive decentralized fault-tolerant tracking control problem for a class of uncertain interconnected nonlinear systems with unknown strong interconnections. An algebraic graph theory result is introduced to address the considered interconnections. In addition, to achieve the desirable tracking performance, a neural-network-based robust adaptive decentralized fault-tolerant control (FTC) scheme is given to compensate the actuator faults and system uncertainties. Furthermore, via the Lyapunov analysis method, it is proven that all the signals of the resulting closed-loop system are semiglobally bounded, and the tracking errors of each subsystem exponentially converge to a compact set, whose radius is adjustable by choosing different controller design parameters. Finally, the effectiveness and advantages of the proposed FTC approach are illustrated with two simulated examples.

  4. ROBUST ONLINE MONITORING FOR CALIBRATION ASSESSMENT OF TRANSMITTERS AND INSTRUMENTATION

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ramuhalli, Pradeep; Tipireddy, Ramakrishna; Lerchen, Megan E.

    Robust online monitoring (OLM) technologies are expected to enable the extension or elimination of periodic sensor calibration intervals in operating and new reactors. Specifically, the next generation of OLM technology is expected to include newly developed advanced algorithms that improve monitoring of sensor/system performance and enable the use of plant data to derive information that currently cannot be measured. These advances in OLM technologies will improve the safety and reliability of current and planned nuclear power systems through improved accuracy and increased reliability of sensors used to monitor key parameters. In this paper, we discuss an overview of research being performed within the Nuclear Energy Enabling Technologies (NEET)/Advanced Sensors and Instrumentation (ASI) program for the development of OLM algorithms that use sensor outputs and, in combination with other available information, 1) determine whether one or more sensors are out of calibration or failing and 2) replace a failing sensor with reliable, accurate sensor outputs. Algorithm development is focused on the following OLM functions:
    • Signal validation – fault detection and selection of acceptance criteria
    • Virtual sensing – signal value prediction and acceptance criteria
    • Response-time assessment – fault detection and acceptance criteria selection
    A GP-based uncertainty quantification (UQ) method previously developed for UQ in OLM was adapted for use in sensor-fault detection and virtual sensing. For signal validation, the various components of the OLM residual (which is computed using an AAKR model) were explicitly defined and modeled using a GP. Evaluation was conducted using flow loop data from multiple sources. Results using experimental data from laboratory-scale flow loops indicate that the approach, while capable of detecting sensor drift, may be incapable of discriminating between sensor drift and model inadequacy. This may be due to a simplification applied in the initial modeling, where the sensor degradation is assumed to be stationary. In the case of virtual sensors, the GP model was used in a predictive mode to estimate the correct sensor reading for sensors that may have failed. Results have indicated the viability of using this approach for virtual sensing. However, the GP model has proven to be computationally expensive, and so alternative algorithms for virtual sensing are being evaluated. Finally, automated approaches to performing noise analysis for extracting sensor response time were developed. Evaluation of this technique using laboratory-scale data indicates that it compares well with manual techniques previously used for noise analysis. Moreover, the automated and manual approaches for noise analysis also compare well with the current “gold standard”, hydraulic ramp testing, for response time monitoring. Ongoing research in this project is focused on further evaluation of the algorithms, optimization for accuracy and computational efficiency, and integration into a suite of tools for robust OLM that are applicable to monitoring sensor calibration state in nuclear power plants.

  5. Technical Reference Suite Addressing Challenges of Providing Assurance for Fault Management Architectural Design

    NASA Technical Reports Server (NTRS)

    Fitz, Rhonda; Whitman, Gerek

    2016-01-01

    Research into complexities of software systems Fault Management (FM) and how architectural design decisions affect safety, preservation of assets, and maintenance of desired system functionality has coalesced into a technical reference (TR) suite that advances the provision of safety and mission assurance. The NASA Independent Verification and Validation (IVV) Program, with Software Assurance Research Program support, extracted FM architectures across the IVV portfolio to evaluate robustness, assess visibility for validation and test, and define software assurance methods applied to the architectures and designs. This investigation spanned IVV projects with seven different primary developers, a wide range of sizes and complexities, and encompassed Deep Space Robotic, Human Spaceflight, and Earth Orbiter mission FM architectures. The initiative continues with an expansion of the TR suite to include Launch Vehicles, adding the benefit of investigating differences intrinsic to model-based FM architectures and insight into complexities of FM within an Agile software development environment, in order to improve awareness of how nontraditional processes affect FM architectural design and system health management.

  6. Fault zone reverberations from cross-correlations of earthquake waveforms and seismic noise

    NASA Astrophysics Data System (ADS)

    Hillers, Gregor; Campillo, Michel

    2016-03-01

    Seismic wavefields interact with low-velocity fault damage zones. Waveforms of ballistic fault zone head waves, trapped waves, reflected waves and signatures of trapped noise can provide important information on structural and mechanical fault zone properties. Here we extend the class of observable fault zone waves and reconstruct in-fault reverberations, or multiples, in a strike-slip faulting environment. Manifestations of the reverberations are significant, consistent wave fronts in the coda of cross-correlation functions that are obtained from scattered earthquake waveforms and seismic noise recorded by a linear fault zone array. The physical reconstruction of Green's functions is evident from the high similarity between the signals obtained from the two different scattered wavefields. Modal partitioning of the reverberation wavefield can be tuned using different data normalization techniques. The results imply that fault zones create their own ambiance, and that the reverberations reconstructed here are a key seismic signature of wear zones. Using synthetic waveform modelling we show that reverberations can be used for the imaging of structural units by estimating the location, extent and magnitude of lateral velocity contrasts. The robust reconstruction of the reverberations from noise records suggests the possibility of resolving the response of the damage zone material to various external and internal loading mechanisms.
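    The noise cross-correlation principle underlying the reconstruction can be sketched in a few lines: correlations of diffuse noise recorded at two stations, stacked over many windows, peak at the inter-station travel-time lag. The toy delay, window counts, and noise levels below are assumptions chosen only to make the effect visible:

```python
import numpy as np

rng = np.random.default_rng(2)

# The same random wavefield arrives at station B 15 samples after
# station A; stacking cross-correlations over many noise windows
# recovers a coherent peak at that lag.
delay = 15
nwin, nsamp = 200, 512
stack = np.zeros(2 * nsamp - 1)
for _ in range(nwin):
    src = rng.standard_normal(nsamp + delay)
    a = src[delay:] + 0.5 * rng.standard_normal(nsamp)  # station A + local noise
    b = src[:nsamp] + 0.5 * rng.standard_normal(nsamp)  # station B (delayed) + noise
    stack += np.correlate(a, b, mode="full")

lags = np.arange(-(nsamp - 1), nsamp)
peak_lag = lags[np.argmax(np.abs(stack))]
```

    The stacked correlation peaks at a lag whose magnitude equals the propagation delay; incoherent local noise averages away with the number of stacked windows, which is why the reconstruction from noise records is robust.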

  7. Active faulting in low- to moderate-seismicity regions: the SAFE project

    NASA Astrophysics Data System (ADS)

    Sebrier, M.; Safe Consortium

    2003-04-01

    SAFE (Slow Active Faults in Europe) is an EC-FP5 funded multidisciplinary effort which proposes an integrated European approach in identifying and characterizing active faults as input for evaluating seismic hazard in low- to moderate-seismicity regions. Seismically active western European regions are generally characterized by low hazard but high risk, due to the concentration of human and material properties with high vulnerability. Detecting, and then analysing, tectonic deformations that may lead to destructive earthquakes in such areas has to take into account three major limitations: - the typical climate of western Europe (heavy vegetation cover and/or erosion); - the subdued geomorphic signature of slowly deforming faults; - the heavy modification of landscape by human activity. The main objective of SAFE, i.e., improving the assessment of seismic hazard through understanding of the mechanics and recurrence of active faults in slowly deforming regions, is achieved through four major steps: (1) extending geologic and geomorphic investigations of fault activity beyond the Holocene to take into account various time-windows; (2) developing an expert system that combines diverse lines of geologic, seismologic, geomorphic, and geophysical evidence to diagnose the existence and seismogenic potential of slow active faults; (3) delineating and characterising high seismic risk areas of western Europe, either from historical or geological/geomorphic evidence; (4) demonstrating and discussing the impact of the project results on risk assessment through a seismic scenario in the Basel-Mulhouse pilot area. To properly take into account known differences in source behavior, these goals are pursued both in extensional (Lower and Upper Rhine Graben, Catalan Coast) and compressional tectonic settings (southern Upper Rhine Graben, Po Plain, and Provence). 
Two arid compressional regions (SE Spain and Moroccan High Atlas) have also been selected to address the limitations imposed by vegetation and human modified landscapes. The first results demonstrate that the strong added value provided by SAFE consists in its integrated multidisciplinary and multiscalar approach that allows robust diagnostic conclusions on fault activity and on the associated earthquake potential. This approach will be illustrated through selected methodological results.

  8. Robust model reference adaptive output feedback tracking for uncertain linear systems with actuator fault based on reinforced dead-zone modification.

    PubMed

    Bagherpoor, H M; Salmasi, Farzad R

    2015-07-01

    In this paper, robust model reference adaptive tracking controllers are considered for Single-Input Single-Output (SISO) and Multi-Input Multi-Output (MIMO) linear systems containing modeling uncertainties, unknown additive disturbances and actuator faults. Two new lemmas are proposed, for the SISO and MIMO cases respectively, under which the dead-zone modification rule is improved such that the tracking error for any reference signal tends to zero in such systems. In the conventional approach, adaptation of the controller parameters ceases inside the dead-zone region, which preserves system stability but leaves a residual tracking error. In the proposed scheme, the control signal is reinforced with an additive term based on the tracking error inside the dead zone, which results in full reference tracking. In addition, no Fault Detection and Diagnosis (FDD) unit is needed in the proposed approach. Closed-loop system stability and zero tracking error are proved by considering a suitable Lyapunov function candidate. It is shown that the proposed control approach ensures that all the signals of the closed-loop system remain bounded under faulty conditions. Finally, the validity and performance of the new schemes are illustrated through numerical simulations of SISO and MIMO systems in the presence of actuator faults, modeling uncertainty and output disturbance. Copyright © 2015 ISA. Published by Elsevier Ltd. All rights reserved.
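The reinforced dead-zone idea lends itself to a compact sketch. The scalar model-reference loop below is purely illustrative (plant parameters, gains, the fault profile and the reinforcement term are invented assumptions, not the paper's lemmas): adaptation is frozen inside the dead zone, but the control signal gains an extra error-proportional term there, so the error keeps shrinking instead of stalling at the dead-zone boundary.

```python
import numpy as np

# Illustrative scalar MRAC loop with a dead-zone adaptive law and an extra
# "reinforcing" control term inside the dead zone. All numbers are assumptions.

def simulate(T=20.0, dt=1e-3, deadzone=0.05, gamma=5.0, k_reinforce=2.0):
    a, b = 1.0, 1.0     # true plant dx/dt = a*x + b*(u + f); unknown to controller
    am = 2.0            # reference model dxm/dt = -am*xm + am*r
    x = xm = 0.0
    th1 = th2 = 0.0     # adaptive feedback / feedforward gains
    for k in range(int(T / dt)):
        t = k * dt
        r = 1.0                                  # step reference
        f = 0.3 if t > 10.0 else 0.0             # additive actuator fault at t = 10 s
        e = x - xm                               # tracking error
        u = th1 * x + th2 * r
        if abs(e) > deadzone:
            # conventional rule: adapt only outside the dead zone
            th1 += dt * (-gamma * e * x)
            th2 += dt * (-gamma * e * r)
        else:
            # reinforcement: extra error-proportional control inside the
            # dead zone, so tracking keeps improving while adaptation rests
            u += -k_reinforce * e
        x += dt * (a * x + b * (u + f))
        xm += dt * (-am * xm + am * r)
    return abs(x - xm)

print(simulate())   # final tracking error stays small despite the fault
```

Note that the constant fault acts like a disturbance the feedforward gain can absorb, which is why no separate FDD unit appears in the loop.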

  9. Evaluation of the crustal deformations in the northern region of Lake Nasser (Egypt) derived from 8 years of GPS campaign observations

    NASA Astrophysics Data System (ADS)

    Rayan, A.; Fernandes, R. M. S.; Khalil, H. A.; Mahmoud, S.; Miranda, J. M.; Tealab, A.

    2010-04-01

    The proper evaluation of crustal deformations in the Aswan (Egypt) region is crucial due to the existence of one major artificial structure: the Aswan High Dam. This construction induced the creation of one of the major artificial lakes: Lake Nasser, which has a surface area of about 5200 km² and a maximum capacity of 165 km³. The lake is nearly 550 km long (more than 350 km within Egypt and the remainder in Sudan) and 35 km across at its widest point. Great attention has focused on this area since the November 14, 1981 earthquake (ML = 5.7), with its epicenter southwest of the High Dam. In order to evaluate the present-day kinematics of the region, its relationship with increasing seismicity, and the possible influence of the Aswan High Dam operation, a network of 11 GPS sites was deployed in the area. This network has been reobserved every year since 2000 in campaign style. We present here the results of the analysis of the GPS campaign time-series. These time-series are already long enough to derive robust solutions for the motions of these stations. The computed trends are analyzed within the framework of the geophysical and geological settings of this region. We show that the observed displacements are significant, pointing to a coherent intraplate extensional deformation pattern, where some of the major faults (e.g., the dextral strike-slip Kalabsha fault and the normal Dabud fault) correspond to gradients of the surface deformation field. We also discuss the possible influence of the water load on the long-term deformation pattern.

  10. Decoupling control of a five-phase fault-tolerant permanent magnet motor by radial basis function neural network inverse

    NASA Astrophysics Data System (ADS)

    Chen, Qian; Liu, Guohai; Xu, Dezhi; Xu, Liang; Xu, Gaohong; Aamir, Nazir

    2018-05-01

    This paper proposes a new decoupled control for a five-phase in-wheel fault-tolerant permanent magnet (IW-FTPM) motor drive, in which radial basis function neural network inverse (RBF-NNI) and internal model control (IMC) are combined. The RBF-NNI system is introduced into the original system to construct a pseudo-linear system, and IMC is used as a robust controller. Hence, the newly proposed control system incorporates the merits of both the IMC and RBF-NNI methods. To verify the proposed strategy, an IW-FTPM motor drive is designed based on a dSPACE real-time control platform. The experimental results verify that the d-axis current and the rotor speed are successfully decoupled. Moreover, the proposed motor drive exhibits strong robustness even under load torque disturbance.

  11. An evaluation of costs and benefits of a vehicle periodic inspection scheme with six-monthly inspections compared to annual inspections.

    PubMed

    Keall, Michael D; Newstead, Stuart

    2013-09-01

    Although previous research suggests that safety benefits accrue from periodic vehicle inspection programmes, little consideration has been given to whether the benefits are sufficient to justify the often considerable costs of such schemes. Methodological barriers impede many attempts to evaluate the overall safety benefits of periodic vehicle inspection schemes, including this study, which did not attempt to evaluate the New Zealand warrant of fitness scheme as a whole. Instead, this study evaluated one aspect of the scheme: the effects of doubling the inspection frequency, from annual to six-monthly, when the vehicle reaches six years of age. In particular, reductions in safety-related vehicle faults were estimated together with the value of the safety benefits compared to the costs. When merged crash data, licensing data and roadworthiness inspection data were analysed, the estimated improvements in injury crash involvement rates and in the prevalence of safety-related faults were 8% (95% CI 0.4-15%) and 13.5% (95% CI 12.8-14.2%) respectively, associated with the increase from annual to six-monthly inspections. The wide confidence interval for the drop in crash rate shows considerable statistical uncertainty about the precise size of the drop. Even assuming that this proportion of vehicle faults prevented by doubling the inspection frequency could be maintained over the vehicle age range 7-20 years, the safety benefits are very unlikely to exceed the additional costs of the six-monthly inspections to motorists, valued at NZ$500 million annually, excluding the overall costs of administering the scheme. The New Zealand warrant of fitness scheme as a whole cannot be robustly evaluated using the analysis approach used here, but the safety benefits would need to be substantial, yielding an unlikely 12% reduction in injury crashes, for benefits to equal costs. Copyright © 2013 Elsevier Ltd. All rights reserved.

  12. Intelligent Fault Diagnosis of HVCB with Feature Space Optimization-Based Random Forest

    PubMed Central

    Ma, Suliang; Wu, Jianwen; Wang, Yuhao; Jia, Bowen; Jiang, Yuan

    2018-01-01

    Mechanical faults of high-voltage circuit breakers (HVCBs) inevitably occur over long-term operation, so extracting fault features and identifying the fault type have become key issues for ensuring the security and reliability of the power supply. Based on wavelet packet decomposition and the random forest algorithm, an effective identification system was developed in this paper. First, because Shannon entropy gives an incomplete description, the wavelet packet time-frequency energy rate (WTFER) was adopted as the input vector for the classifier model in the feature selection procedure. Then, a random forest classifier was used to diagnose the HVCB fault, assess the importance of the feature variables and optimize the feature space. Finally, the approach was verified on actual HVCB vibration signals covering six typical fault classes. The comparative experimental results show that the classification accuracy of the proposed method reached 93.33% with the original feature space and up to 95.56% with the optimized input feature vector. This indicates that the feature optimization procedure is successful and that the proposed diagnosis algorithm has higher efficiency and robustness than traditional methods. PMID:29659548
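The feature-extraction half of such a pipeline can be illustrated compactly. The sketch below builds a full wavelet-packet tree with a Haar filter (a stand-in for whatever wavelet the authors used) and computes each terminal node's energy rate; the resulting vector is the kind of input a random forest classifier (e.g., scikit-learn's RandomForestClassifier) would consume and rank by feature importance.

```python
import numpy as np

# Sketch of a wavelet packet time-frequency energy-rate feature vector,
# using a Haar filter as an assumed stand-in for the paper's wavelet.

def haar_split(x):
    """One analysis step: orthonormal Haar filter pair + downsample by 2."""
    x = x[: len(x) - len(x) % 2]
    approx = (x[0::2] + x[1::2]) / np.sqrt(2.0)
    detail = (x[0::2] - x[1::2]) / np.sqrt(2.0)
    return approx, detail

def wtfer(x, levels=3):
    """Full wavelet-packet tree; return each terminal node's share of the
    total energy (rates sum to 1 because the transform is orthonormal)."""
    nodes = [np.asarray(x, dtype=float)]
    for _ in range(levels):
        nodes = [half for n in nodes for half in haar_split(n)]
    energies = np.array([np.sum(n ** 2) for n in nodes])
    return energies / energies.sum()

# A 40 Hz tone in mild noise (1024 samples at an assumed 1024 Hz rate):
rng = np.random.default_rng(0)
t = np.arange(1024) / 1024.0
sig = np.sin(2 * np.pi * 40 * t) + 0.1 * rng.standard_normal(1024)
feat = wtfer(sig)
print(feat.round(3))   # 8 energy rates; the lowest-frequency node dominates
```

A narrowband tone concentrates its energy in a few packet nodes, while broadband vibration from a faulty mechanism spreads it, which is what makes the energy-rate vector discriminative.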

  13. Three-dimensional curved grid finite-difference modelling for non-planar rupture dynamics

    NASA Astrophysics Data System (ADS)

    Zhang, Zhenguo; Zhang, Wei; Chen, Xiaofei

    2014-11-01

    In this study, we present a new method for simulating the 3-D dynamic rupture process occurring on a non-planar fault. The method is based on the curved-grid finite-difference method (CG-FDM) proposed by Zhang & Chen and Zhang et al. to simulate the propagation of seismic waves in media with arbitrarily irregular surface topography. While keeping the advantages of conventional FDM, namely computational efficiency and easy implementation, the CG-FDM is also flexible in modelling complex fault geometry by using general curvilinear grids, and thus is able to model the rupture dynamics of a fault with complex geometry, such as an obliquely dipping fault, a non-planar fault, a fault with step-overs, or a branching fault, even in the presence of irregular topography. The accuracy and robustness of this new method have been validated by comparison with the previous results of Day et al. and with benchmarks for rupture dynamics simulations. Finally, two simulations of rupture dynamics with complex fault geometry, namely a non-planar fault and a fault rupturing a free surface with topography, are presented. An interesting phenomenon was observed: topography can weaken the tendency for a supershear transition to occur when the rupture breaks out at a free surface. Undoubtedly, this new method provides an effective alternative tool for simulating the rupture dynamics of complex non-planar faults, and can be applied to model the rupture dynamics of a real earthquake with complex fault geometry.

  14. Adaptive extended-state observer-based fault tolerant attitude control for spacecraft with reaction wheels

    NASA Astrophysics Data System (ADS)

    Ran, Dechao; Chen, Xiaoqian; de Ruiter, Anton; Xiao, Bing

    2018-04-01

    This study presents an adaptive second-order sliding mode control scheme to solve the attitude fault tolerant control problem of spacecraft subject to system uncertainties, external disturbances and reaction wheel faults. A novel fast terminal sliding mode is first designed to guarantee that finite-time convergence of the attitude errors can be achieved globally. Based on this sliding mode, an adaptive second-order observer is then designed to reconstruct the system uncertainties and the actuator faults. One feature of the proposed observer is that its design does not require any a priori information on the upper bounds of the system uncertainties and the actuator faults. Using the reconstructed information supplied by the designed observer, a second-order sliding mode controller is developed to accomplish attitude maneuvers with great robustness and precise tracking accuracy. Theoretical stability analysis proves that the designed fault tolerant control scheme achieves finite-time stability of the closed-loop system, even in the presence of reaction wheel faults and system uncertainties. Numerical simulations are also presented to demonstrate the effectiveness and superiority of the proposed control scheme over existing methodologies.

  15. FDI and Accommodation Using NN Based Techniques

    NASA Astrophysics Data System (ADS)

    Garcia, Ramon Ferreiro; de Miguel Catoira, Alberto; Sanz, Beatriz Ferreiro

    Dynamic backpropagation neural networks are applied extensively to closed-loop control FDI (fault detection and isolation) tasks. The process dynamics are mapped by a trained backpropagation NN, which is then applied to residual generation. Process supervision is then applied to discriminate faults in process sensors and process plant parameters. A rule-based expert system implements the decision-making task and the corresponding solution in terms of fault accommodation and/or reconfiguration. Results show an efficient and robust FDI system which could be used as the core of a SCADA system, or alternatively as a complementary supervision tool operating in parallel with the SCADA when applied to a heat exchanger.

  16. Evaluation of prediction capability, robustness, and sensitivity in non-linear landslide susceptibility models, Guantánamo, Cuba

    NASA Astrophysics Data System (ADS)

    Melchiorre, C.; Castellanos Abella, E. A.; van Westen, C. J.; Matteucci, M.

    2011-04-01

    This paper describes a procedure for landslide susceptibility assessment based on artificial neural networks, and focuses on the estimation of the prediction capability, robustness, and sensitivity of susceptibility models. The study is carried out in the Guantanamo Province of Cuba, where 186 landslides were mapped using photo-interpretation. Twelve conditioning factors were mapped, including geomorphology, geology, soils, land use, slope angle, slope direction, internal relief, drainage density, distance from roads and faults, rainfall intensity, and ground peak acceleration. The methodology subdivided the database into three subsets: a training set was used for updating the weights; a validation set was used to stop the training procedure when the network started losing generalization capability; and a test set was used to calculate the performance of the network. A 10-fold cross-validation was performed to show that the results are repeatable. The prediction capability, the robustness analysis, and the sensitivity analysis were tested on 10 mutually exclusive datasets. The results show that by means of artificial neural networks it is possible to obtain models with high prediction capability and high robustness, and that an exploration of the effect of the individual variables is possible, even though the networks are often considered black-box models.
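The three-subset training protocol generalizes beyond this study and is straightforward to sketch. The snippet below uses synthetic data and a simple logistic model in place of the paper's neural network (the data, model, and hyperparameters are all invented for illustration): the training set drives weight updates, the validation set stops training once generalization stops improving, and each of 10 cross-validation folds serves in turn as the test set.

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic stand-in for the 12 conditioning factors and landslide labels.
X = rng.standard_normal((300, 12))
w_true = rng.standard_normal(12)
y = (X @ w_true + 0.5 * rng.standard_normal(300) > 0).astype(float)

def fit_early_stop(Xtr, ytr, Xva, yva, lr=0.1, max_epochs=500, patience=20):
    """Logistic model trained by gradient descent; training stops when the
    validation loss no longer improves (loss of generalization)."""
    w = np.zeros(Xtr.shape[1])
    best_w, best_loss, wait = w.copy(), np.inf, 0
    for _ in range(max_epochs):
        p = 1.0 / (1.0 + np.exp(-Xtr @ w))
        w -= lr * Xtr.T @ (p - ytr) / len(ytr)        # training set: weight updates
        pv = 1.0 / (1.0 + np.exp(-Xva @ w))
        va_loss = -np.mean(yva * np.log(pv + 1e-12)
                           + (1 - yva) * np.log(1 - pv + 1e-12))
        if va_loss < best_loss:
            best_w, best_loss, wait = w.copy(), va_loss, 0
        else:
            wait += 1
            if wait >= patience:                      # validation set: early stop
                break
    return best_w

# 10-fold cross-validation: each fold in turn is the held-out test set.
idx = rng.permutation(len(X))
folds = np.array_split(idx, 10)
accs = []
for k in range(10):
    test = folds[k]
    val = folds[(k + 1) % 10]                         # next fold as validation set
    train = np.hstack([folds[j] for j in range(10) if j not in (k, (k + 1) % 10)])
    w = fit_early_stop(X[train], y[train], X[val], y[val])
    accs.append(np.mean((X[test] @ w > 0) == (y[test] > 0.5)))
print(round(float(np.mean(accs)), 3))                 # mean test accuracy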

  17. Detection of CMOS bridging faults using minimal stuck-at fault test sets

    NASA Technical Reports Server (NTRS)

    Ijaz, Nabeel; Frenzel, James F.

    1993-01-01

    The performance of minimal stuck-at fault test sets at detecting bridging faults is evaluated. New functional models of circuit primitives are presented which allow accurate representation of bridging faults under switch-level simulation. The effectiveness of the patterns is evaluated using both voltage and current testing.

  18. Model-based development of a fault signature matrix to improve solid oxide fuel cell systems on-site diagnosis

    NASA Astrophysics Data System (ADS)

    Polverino, Pierpaolo; Pianese, Cesare; Sorrentino, Marco; Marra, Dario

    2015-04-01

    The paper focuses on the design of a procedure for the development of an on-field diagnostic algorithm for solid oxide fuel cell (SOFC) systems. The diagnosis design phase relies on an in-depth analysis of the mutual interactions among all system components, exploiting physical knowledge of the SOFC system as a whole. This phase consists of a Fault Tree Analysis (FTA), which identifies the correlations among possible faults and their corresponding symptoms at the system component level. The main outcome of the FTA is an inferential isolation tool (the Fault Signature Matrix - FSM), which univocally links the faults to the symptoms detected during system monitoring. In this work the FTA is taken as the starting point for developing an improved FSM. Making use of a model-based investigation, a fault-to-symptoms dependency study is performed. To this purpose a dynamic model, previously developed by the authors, is exploited to simulate the system under faulty conditions. Five faults are simulated: one in the stack and four occurring at the balance-of-plant (BOP) level. Moreover, the robustness of the FSM design is increased by exploiting symptom thresholds defined for investigating the quantitative effects of the simulated faults on the affected variables.
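A fault signature matrix is essentially a boolean lookup from observed symptoms to fault hypotheses. The faults, symptoms, and signatures below are invented for illustration (they are not the paper's FSM); the sketch shows how a symptom vector detected during monitoring is mapped back to the fault that explains it.

```python
# Hypothetical FSM for an SOFC system: rows are faults, columns are monitored
# symptoms (1 = symptom expected when that fault is present). All names and
# signatures are illustrative assumptions.

FAULTS = ["stack_degradation", "air_blower_fault", "fuel_leak",
          "reformer_fault", "heat_exchanger_fouling"]
SYMPTOMS = ["low_stack_voltage", "high_stack_temp", "low_air_flow",
            "low_fuel_pressure", "low_outlet_temp"]

FSM = [
    [1, 1, 0, 0, 0],   # stack degradation
    [0, 1, 1, 0, 0],   # air blower fault
    [1, 0, 0, 1, 0],   # fuel leak
    [1, 0, 0, 1, 1],   # reformer fault  (shares two symptoms with fuel leak)
    [0, 1, 0, 0, 1],   # heat exchanger fouling
]

def isolate(observed):
    """Return faults whose signature matches the observed symptom vector.
    A unique match isolates the fault univocally; multiple matches mean
    more symptoms (or sharper thresholds) are needed."""
    return [FAULTS[i] for i, row in enumerate(FSM) if row == observed]

# Symptoms detected during monitoring (thresholds already applied):
print(isolate([0, 1, 1, 0, 0]))   # -> ['air_blower_fault']
```

Faults whose signature rows coincide cannot be separated by this lookup alone, which is why the paper sharpens the FSM with model-based symptom thresholds.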

  19. GPS Measurements of Crustal Deformation in Lebanon: Implication for Current Kinematics of the Sinaï Plate.

    NASA Astrophysics Data System (ADS)

    Vergnolle, M.; Jomaa, R.; Brax, M.; Menut, J. L.; Sursock, A.; Elias, A. R.; Mariscal, A.; Vidal, M.; Cotte, N.

    2016-12-01

    The Levant fault is a major strike-slip fault bounding the Arabia and Sinaï plates. Its kinematics, although understood in its main characteristics, remains partly unresolved in its quantification, especially in the Lebanese restraining bend. We present a GPS velocity field based on survey GPS data acquired in Lebanon (1999, 2002, 2010) and on continuous GPS data publicly available in the Levant area. To complete the measurements along the Levant fault, we combine our velocity field with previously published velocity fields. First, from our velocity field, we derive two velocity profiles across the Lebanese fault system, which we analyze in terms of elastic strain accumulation. Despite the uncertainty on the locking depth of the main strand of the Levant fault, small lateral fault slip rates (2-4 mm/yr) are detected on each profile, with a slight slip-rate decrease (<1 mm/yr) from south to north. The latter is consistent with published results south and north of Lebanon. Small compression (<0.5 mm/yr), most of it located across Mount Lebanon, is also suggested. Second, we analyze the combined GPS velocity field in the Sinaï tectonic framework. We evaluate how well the Sinaï plate motion is described by an Euler pole. Despite heterogeneous velocity errors (five times smaller for continuous GPS (cGPS) velocities than for survey GPS (sGPS) velocities), a single pole estimated using all the data provides good statistical results. However, the residuals show systematic deviations at central and northern sGPS stations. Using only the velocities at these stations, the estimated pole is significantly different from the single pole at the 95% confidence level. This analysis highlights the difficulty of robustly resolving the rigid Sinaï plate motion when the uncertainties on the velocities are heterogeneous. New sGPS measurements at existing sites should improve the solution and help draw firm conclusions.

  20. Active fault tolerant control based on interval type-2 fuzzy sliding mode controller and non linear adaptive observer for 3-DOF laboratory helicopter.

    PubMed

    Zeghlache, Samir; Benslimane, Tarak; Bouguerra, Abderrahmen

    2017-11-01

    In this paper, a robust controller for three degree of freedom (3-DOF) helicopter control is proposed in the presence of actuator and sensor faults. For this purpose, the interval type-2 fuzzy logic control approach (IT2FLC) and the sliding mode control (SMC) technique are used to design a controller, named the active fault tolerant interval type-2 fuzzy sliding mode controller (AFTIT2FSMC), based on a non-linear adaptive observer that estimates and detects the system faults for each subsystem of the 3-DOF helicopter. The proposed control scheme avoids difficult modeling, attenuates the chattering effect of the SMC, and reduces the number of rules of the fuzzy controller. Exponential stability of the closed loop is guaranteed by using the Lyapunov method. The simulation results show that the AFTIT2FSMC can greatly alleviate the chattering effect and provides good tracking performance, even in the presence of actuator and sensor faults. Copyright © 2017 ISA. Published by Elsevier Ltd. All rights reserved.

  1. Failure Diagnosis for the Holdup Tank System via ISFA

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Li, Huijuan; Bragg-Sitton, Shannon; Smidts, Carol

    This paper discusses the use of the integrated system failure analysis (ISFA) technique for fault diagnosis for the holdup tank system. ISFA is a simulation-based, qualitative and integrated approach used to study fault propagation in systems containing both hardware and software subsystems. The holdup tank system consists of a tank containing a fluid whose level is controlled by an inlet valve and an outlet valve. We introduce the component and functional models of the system, quantify the main parameters and simulate possible failure-propagation paths based on the fault propagation approach, ISFA. The results show that most component failures in the holdup tank system can be identified clearly and that ISFA is viable as a technique for fault diagnosis. Since ISFA is a qualitative technique that can be used in the very early stages of system design, this case study provides indications that it can be used early to study design aspects that relate to robustness and fault tolerance.

  2. Real-Time Diagnosis of Faults Using a Bank of Kalman Filters

    NASA Technical Reports Server (NTRS)

    Kobayashi, Takahisa; Simon, Donald L.

    2006-01-01

    A new robust method of automated real-time diagnosis of faults in an aircraft engine or a similar complex system involves the use of a bank of Kalman filters. In order to be highly reliable, a diagnostic system must be designed to account for the numerous failure conditions that an aircraft engine may encounter in operation. The method achieves this objective through the utilization of multiple Kalman filters, each of which is uniquely designed based on a specific failure hypothesis. A fault-detection-and-isolation (FDI) system, developed based on this method, is able to isolate faults in sensors and actuators while detecting component faults (abrupt degradation in engine component performance). By affording a capability for real-time identification of minor faults before they grow into major ones, the method promises to enhance safety and reduce operating costs. The robustness of this method is further enhanced by incorporating information regarding the aging condition of an engine. In general, real-time fault diagnostic methods use the nominal performance of a "healthy" new engine as a reference condition in the diagnostic process. Such an approach does not account for gradual changes in performance associated with aging of an otherwise healthy engine. By incorporating information on gradual, aging-related changes, the new method makes it possible to retain at least some of the sensitivity and accuracy needed to detect incipient faults while preventing false alarms that could result from erroneous interpretation of symptoms of aging as symptoms of failures. The figure schematically depicts an FDI system according to the new method. The FDI system is integrated with an engine, from which it accepts two sets of input signals: sensor readings and actuator commands. Two main parts of the FDI system are a bank of Kalman filters and a subsystem that implements FDI decision rules. Each Kalman filter is designed to detect a specific sensor or actuator fault. 
When a sensor or actuator fault occurs, large estimation errors are generated by all filters except the one using the correct hypothesis. By monitoring the residual output of each filter, the specific fault that has occurred can be detected and isolated on the basis of the decision rules. A set of parameters that indicate the performance of the engine components is estimated by the "correct" Kalman filter for use in detecting component faults. To reduce the loss of diagnostic accuracy and sensitivity in the face of aging, the FDI system accepts information from a steady-state-condition-monitoring system. This information is used to update the Kalman filters and a data bank of trim values representative of the current aging condition.
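The bank-of-filters idea reduces to a few lines in the scalar case. The example below is a toy stand-in (the plant, noise levels, and bias hypotheses are invented assumptions, not the NASA engine model): each Kalman filter assumes a different sensor-bias hypothesis, and the decision rule picks the hypothesis whose normalized residuals stay smallest.

```python
import numpy as np

rng = np.random.default_rng(2)

# Toy bank of Kalman filters for sensor-fault isolation. Scalar plant
# x_{k+1} = A x_k + u_k + w_k, measurement z_k = x_k + bias + v_k.
A, Q, R = 0.95, 0.001, 0.04
HYPOTHESES = {"no_fault": 0.0, "sensor_bias_+1": 1.0, "sensor_bias_-1": -1.0}

def run_bank(true_bias, steps=200):
    x = 0.0
    est = {h: (0.0, 1.0) for h in HYPOTHESES}    # per-filter (estimate, covariance)
    score = {h: 0.0 for h in HYPOTHESES}         # accumulated normalized residuals
    for k in range(steps):
        u = np.sin(0.1 * k)                      # known actuator command
        x = A * x + u + np.sqrt(Q) * rng.standard_normal()
        z = x + true_bias + np.sqrt(R) * rng.standard_normal()
        for h, bias in HYPOTHESES.items():
            xh, P = est[h]
            xh, P = A * xh + u, A * A * P + Q     # predict
            r = (z - bias) - xh                   # residual under hypothesis h
            S = P + R                             # residual variance
            K = P / S
            est[h] = (xh + K * r, (1.0 - K) * P)  # update
            score[h] += r * r / S                 # large for wrong hypotheses
    # decision rule: the filter built on the correct hypothesis keeps the
    # smallest residual energy
    return min(score, key=score.get)

print(run_bank(true_bias=1.0))
```

The engine-component-fault and aging-trim machinery described above sit on top of exactly this residual-monitoring loop.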

  3. Investigating Crustal Scale Fault Systems Controlling Volcanic and Hydrothermal Fluid Processes in the South-Central Andes, First Results from a Magnetotelluric Survey

    NASA Astrophysics Data System (ADS)

    Pearce, R.; Mitchell, T. M.; Moorkamp, M.; Araya, J.; Cembrano, J. M.; Yanez, G. A.; Hammond, J. O. S.

    2017-12-01

    At convergent plate boundaries, volcanic orogeny is largely controlled by major thrust fault systems that act as magmatic and hydrothermal fluid conduits through the crust. In the south-central Andes, the volcanically and seismically active Tinguiririca and Planchon-Peteroa volcanoes are considered to be tectonically related to the major El Fierro thrust fault system. These large-scale reverse faults are characterized by 500-1000 m wide hydrothermally altered fault cores, which possess a distinct conductive signature relative to the surrounding lithology. In order to establish the subsurface architecture of these fault systems, such conductivity contrasts can be detected using the magnetotelluric method. In this study, LEMI fluxgate-magnetometer long-period and Metronix broadband MT data were collected at 21 sites in a 40 km² survey grid that surrounds this fault system and the associated volcanic complexes. Multi-remote-referencing techniques are used together with robust processing to obtain reliable impedance estimates between 100 Hz and 1000 s. Our preliminary inversion results provide evidence of structures within the 10-20 km depth range that are attributed to this fault system. Further inversions will be conducted to determine the approximate depth extent of these features, and ultimately to provide constraints for future geophysical studies aimed at deducing the role of these faults in volcanic orogeny and hydrothermal fluid migration in this region of the Andes.

  4. Onboard Nonlinear Engine Sensor and Component Fault Diagnosis and Isolation Scheme

    NASA Technical Reports Server (NTRS)

    Tang, Liang; DeCastro, Jonathan A.; Zhang, Xiaodong

    2011-01-01

    A method detects and isolates in-flight sensor, actuator, and component faults for advanced propulsion systems. In sharp contrast to many conventional methods, which deal with either sensor faults or component faults, but not both, this method considers sensor, actuator, and component faults within one systematic and unified framework. The proposed solution consists of two main components: a bank of real-time, nonlinear adaptive fault diagnostic estimators for residual generation, and a residual evaluation module that includes adaptive thresholds and a Transferable Belief Model (TBM)-based residual evaluation scheme. By employing a nonlinear adaptive learning architecture, the developed approach is capable of directly dealing with nonlinear engine models and nonlinear faults without the need for linearization. Software modules have been developed and evaluated with the NASA C-MAPSS engine model. Several typical engine-fault modes, including a subset of sensor/actuator/component faults, were tested in a mild transient operation scenario. The simulation results demonstrated that the algorithm was able to successfully detect and isolate all simulated faults as long as the fault magnitudes were larger than the minimum detectable/isolable sizes, and no misdiagnosis occurred.

  5. Comparative analysis of techniques for evaluating the effectiveness of aircraft computing systems

    NASA Technical Reports Server (NTRS)

    Hitt, E. F.; Bridgman, M. S.; Robinson, A. C.

    1981-01-01

    Performability analysis is a technique developed for evaluating the effectiveness of fault-tolerant computing systems in multiphase missions. Performability was evaluated for its accuracy, practical usefulness, and relative cost. The evaluation was performed by applying performability and the fault tree method to a set of sample problems ranging from simple to moderately complex. The problems involved as many as five outcomes, two to five mission phases, permanent faults, and some functional dependencies. Transient faults and software errors were not considered. A different analyst was responsible for each technique. Significantly more time and effort were required to learn performability analysis than the fault tree method. Performability is inherently as accurate as fault tree analysis. For the sample problems, fault trees were more practical and less time consuming to apply, while performability required less ingenuity and was more checkable. Performability offers some advantages for evaluating very complex problems.

  6. Assuring SS7 dependability: A robustness characterization of signaling network elements

    NASA Astrophysics Data System (ADS)

    Karmarkar, Vikram V.

    1994-04-01

    Current and evolving telecommunication services will rely on signaling network performance and reliability properties to build competitive call and connection control mechanisms under increasing demands on flexibility without compromising on quality. The dimensions of signaling dependability most often evaluated are the Rate of Call Loss and End-to-End Route Unavailability. A third dimension of dependability that captures the concern about large or catastrophic failures can be termed Network Robustness. This paper is concerned with the dependability aspects of the evolving Signaling System No. 7 (SS7) networks and attempts to strike a balance between the probabilistic and deterministic measures that must be evaluated to accomplish a risk-trend assessment to drive architecture decisions. Starting with high-level network dependability objectives and field experience with SS7 in the U.S., potential areas of growing stringency in network element (NE) dependability are identified to improve against current measures of SS7 network quality, as per-call signaling interactions increase. A sensitivity analysis is presented to highlight the impact due to imperfect coverage of duplex network component or element failures (i.e., correlated failures), to assist in the setting of requirements on NE robustness. A benefit analysis, covering several dimensions of dependability, is used to generate the domain of solutions available to the network architect in terms of network and network element fault tolerance that may be specified to meet the desired signaling quality goals.

  7. Real time health monitoring and control system methodology for flexible space structures

    NASA Astrophysics Data System (ADS)

    Jayaram, Sanjay

    This dissertation is concerned with the near real-time autonomous health monitoring of flexible space structures. The dynamics of multi-body flexible systems are uncertain due to factors such as high non-linearity, the consideration of higher modal frequencies, high dimensionality, multiple inputs and outputs, operational constraints, and unexpected failures of sensors and/or actuators. Hence a systematic framework for developing a high-fidelity dynamic model of a flexible structural system needs to be understood. The fault detection mechanism, an integral part of an autonomous health monitoring system, comprises detecting abnormalities in the sensors and/or actuators and correcting the detected faults (if possible). Actuator faults are rectified by applying a robust control law together with robust measures capable of detecting and recovering or replacing the failed actuators. Fault tolerance for the sensors takes the form of an Extended Kalman Filter (EKF). The EKF weighs the information coming from multiple sensors (redundant sensors used to measure the same information), automatically identifies the faulty sensors, and weighs the best estimate from the remaining sensors. The mechanization comprises instrumenting flexible deployable panels (solar arrays) with multiple angular position and rate sensors connected to a data acquisition system. The sensors give position and rate information for the solar panel in all three axes (i.e., roll, pitch and yaw). The position data correspond to the steady-state response, and the rate data give better insight into the transient response of the system. This is a critical factor for real-time autonomous health monitoring. MATLAB (and/or C++) software is used for high-fidelity modeling and the fault-tolerant mechanism.
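The redundant-sensor part of such a scheme can be sketched with a linear measurement model (the dissertation uses an EKF on flexible-panel dynamics; the static scalar state, noise levels, and 3-sigma gate below are illustrative assumptions): each redundant sensor's innovation is chi-square gated, readings that fail the gate are excluded as faulty, and the remaining ones are fused sequentially.

```python
import numpy as np

rng = np.random.default_rng(3)

# Sketch of fault-tolerant fusion of redundant rate sensors: a scalar
# Kalman update that gates each sensor's innovation, excludes readings
# that fail the gate, and fuses the rest. All numbers are assumptions.

def fuse_step(x, P, readings, R=0.01, gate=9.0):
    # innovation test (~3 sigma): keep only sensors that look healthy
    healthy = [z for z in readings if (z - x) ** 2 / (P + R) < gate]
    for z in healthy:                    # sequential measurement updates
        K = P / (P + R)
        x, P = x + K * (z - x), (1.0 - K) * P
    return x, P, len(healthy)

true_rate = 0.5                          # constant panel rate being measured
x, P = 0.0, 1.0
for k in range(50):
    readings = true_rate + 0.1 * rng.standard_normal(3)
    if k >= 25:
        readings[2] += 5.0               # sensor 3 develops a large bias
    x, P, n_used = fuse_step(x, P, readings)
print(round(x, 3), n_used)               # estimate near 0.5, fused from 2 sensors
```

After the bias appears, the gate rejects the faulty sensor automatically, and the estimate continues to be formed from the two healthy ones.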

  8. Multiple Leader Candidate and Competitive Position Allocation for Robust Formation against Member Robot Faults

    PubMed Central

    Kwon, Ji-Wook; Kim, Jin Hyo; Seo, Jiwon

    2015-01-01

    This paper proposes a Multiple Leader Candidate (MLC) structure and a Competitive Position Allocation (CPA) algorithm that are applicable to various applications, including environmental sensing. Unlike previous formation structures, such as virtual-leader and actual-leader structures with position allocation schemes ranging from rigid allocation to optimization-based allocation, a formation employing the proposed MLC structure and CPA algorithm is robust against faults (or the disappearance) of member robots and reduces the overall cost. In the MLC structure, the leader of the entire system is chosen from among the leader candidate robots. The CPA algorithm is a decentralized position allocation algorithm that assigns robots to the vertices of the formation through competition among adjacent robots. Numerical simulations and experimental results are included to show the feasibility and performance of a multiple robot system employing the proposed MLC structure and CPA algorithm. PMID:25954956

  9. Robust fault diagnosis of physical systems in operation. Ph.D. Thesis - Rutgers - The State Univ.

    NASA Technical Reports Server (NTRS)

    Abbott, Kathy Hamilton

    1991-01-01

    Ideas are presented and demonstrated for improved robustness in diagnostic problem solving of complex physical systems in operation, or operative diagnosis. The first idea is that graceful degradation can be viewed as reasoning at higher levels of abstraction whenever the more detailed levels prove to be incomplete or inadequate. A form of abstraction, named status abstraction, is defined that applies this view to the problem of diagnosis. Two levels are defined: the lower level of abstraction corresponds to the level of detail at which most current knowledge-based diagnosis systems reason, while at the higher level a graph representation describes the real-world physical system. An incremental, constructive approach to manipulating this graph representation is demonstrated that supports certain characteristics of operative diagnosis. The suitability of this constructive approach is shown for diagnosing fault propagation behavior over time, and sometimes for diagnosing systems with feedback. A way is shown to represent different semantics in the same type of graph representation to characterize different types of fault propagation behavior. An approach is demonstrated that treats these different behaviors as different fault classes and moves to other classes when previous classes fail to generate suitable hypotheses. These ideas are implemented in a computer program named Draphys (Diagnostic Reasoning About Physical Systems) and demonstrated for the domain of in-flight aircraft subsystems, specifically a propulsion system (containing two turbofan systems and a fuel system) and a hydraulic subsystem.

  10. Hybrid information privacy system: integration of chaotic neural network and RSA coding

    NASA Astrophysics Data System (ADS)

    Hsu, Ming-Kai; Willey, Jeff; Lee, Ting N.; Szu, Harold H.

    2005-03-01

    Electronic mail is used worldwide, and much of it is easily compromised by attackers. In this paper, we propose a free, fast, and convenient hybrid privacy system to protect email communication. The privacy system is implemented by combining the RSA public-key algorithm with a chaos neural network encryption process. The receiver can decrypt a received email as long as it can reproduce the specified chaos neural network series, the so-called spatial-temporal keys. The chaotic typing and initial seed value of the chaos neural network series, encrypted by the RSA algorithm, reproduce the spatial-temporal keys. The encrypted chaotic typing and initial seed value are hidden in a watermark mixed nonlinearly with the message media and wrapped with convolutional error correction codes for third-generation (3G) wireless cellular phones. The message media can be an arbitrary image. Pattern noise during transmission must be considered because it could alter the spatial-temporal keys. Since any change to the chaotic typing or initial seed value of the chaos neural network series is unacceptable, the RSA codec system must be robust and fault-tolerant over the wireless channel. The robust and fault-tolerant properties of chaos neural networks (CNN) were proved by a field theory of associative memory by Szu in 1997. The 1-D chaos-generating nodes from the logistic map having arbitrarily negative slope a = p/q, generating the N-shaped sigmoid, were first given by Szu in 1992. In this paper, we simulate the robust and fault-tolerance properties of CNN under additive noise and pattern noise. We also implement a private version of RSA coding and the chaos encryption process on messages.
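
    The chaotic-key idea can be sketched, in a deliberately simplified form, as a logistic-map keystream XORed with the message. This is not the paper's chaos-neural-network cipher (which uses Szu's CNN series with RSA-protected key exchange); it only illustrates how a shared seed value deterministically reproduces a spatial-temporal key sequence. The function names and parameter choices below are assumptions.

```python
def logistic_keystream(seed, r, n):
    """Generate n key bytes from the logistic map x -> r*x*(1-x).

    The map is iterated from `seed` (standing in for the shared
    "initial seed value"); each state is quantized to one byte.
    """
    x, out = seed, []
    for _ in range(n):
        x = r * x * (1.0 - x)
        out.append(int(x * 256) % 256)   # quantize state to a byte
    return bytes(out)

def xor_crypt(data, seed, r=3.99):
    """Encrypt or decrypt by XOR with the chaotic keystream (symmetric)."""
    ks = logistic_keystream(seed, r, len(data))
    return bytes(b ^ k for b, k in zip(data, ks))
```

    Because XOR is its own inverse, applying `xor_crypt` twice with the same seed recovers the plaintext, while any perturbation of the seed yields an unrelated keystream, which is why the seed itself must travel under RSA protection.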

  11. Intelligent fault diagnosis of rolling bearings using an improved deep recurrent neural network

    NASA Astrophysics Data System (ADS)

    Jiang, Hongkai; Li, Xingqiu; Shao, Haidong; Zhao, Ke

    2018-06-01

    Traditional intelligent fault diagnosis methods for rolling bearings depend heavily on manual feature extraction and feature selection. To address this, an intelligent deep learning method, named the improved deep recurrent neural network (DRNN), is proposed in this paper. First, frequency spectrum sequences are used as inputs to reduce the input size and ensure good robustness. Second, the DRNN is constructed by stacking recurrent hidden layers to automatically extract features from the input spectrum sequences. Third, an adaptive learning rate is adopted to improve the training performance of the constructed DRNN. The proposed method is verified with experimental rolling bearing data, and the results confirm that it is more effective than traditional intelligent fault diagnosis methods.
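
    The first step, forming frequency spectrum sequences from the raw vibration signal, might be sketched as follows; the frame length, hop size, and Hanning window are assumptions, since the abstract does not fix them.

```python
import numpy as np

def spectrum_sequences(signal, frame_len=256, hop=128):
    """Turn a raw vibration signal into a sequence of frequency spectra.

    Each frame is windowed and reduced to its one-sided FFT magnitude,
    roughly halving the input size relative to the raw samples and
    giving the recurrent network a shift-robust representation.
    """
    window = np.hanning(frame_len)
    frames = []
    for start in range(0, len(signal) - frame_len + 1, hop):
        frame = signal[start:start + frame_len] * window
        frames.append(np.abs(np.fft.rfft(frame)))   # one-sided magnitude
    return np.stack(frames)        # shape: (num_frames, frame_len//2 + 1)
```

    The resulting (time step, spectrum) matrix is exactly the kind of sequence a stacked recurrent network consumes one row at a time.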

  12. Automated forward mechanical modeling of wrinkle ridges on Mars

    NASA Astrophysics Data System (ADS)

    Nahm, Amanda; Peterson, Samuel

    2016-04-01

    One of the main goals of the InSight mission to Mars is to understand the internal structure of Mars [1], in part through passive seismology. Understanding the shallow surface structure of the landing site is critical to the robust interpretation of recorded seismic signals. Faults, such as the wrinkle ridges abundant in the proposed landing site in Elysium Planitia, can be used to determine the subsurface structure of the regions they deform. Here, we test a new automated method for modeling the topography of a wrinkle ridge (WR) in Elysium Planitia, allowing faster and more robust determination of subsurface fault geometry for interpretation of the local subsurface structure. We perform forward mechanical modeling of fault-related topography [e.g., 2, 3], utilizing the modeling program Coulomb [4, 5] to model surface displacements induced by blind thrust faulting. Fault lengths are difficult to determine for WR; we initially assume a fault length of 30 km, but also test the effects of different fault lengths on model results. At present, we model the wrinkle ridge as a single blind thrust fault with a constant fault dip, though WR are likely to have more complicated fault geometries [e.g., 6-8]. Typically, the modeling is performed using the Coulomb GUI. This approach can be time-consuming, requiring user input to change model parameters and to calculate the associated displacements for each model, which limits the number of models and the parameter space that can be tested. To reduce active user computation time, we have developed a method in which the Coulomb GUI is bypassed. The general modeling procedure remains unchanged, and a set of input files with ranges of pre-defined parameter values is generated before modeling. The displacement calculations are divided into two suites.
For Suite 1, a total of 3770 input files were generated in which the fault displacement (D), dip angle (δ), depth to upper fault tip (t), and depth to lower fault tip (B) were varied. A second set of input files (Suite 2) was created after the best-fit model from Suite 1 was determined, in which the fault parameters were varied over a smaller range with finer increments, resulting in a total of 28,080 input files. RMS values were calculated for each Coulomb model. RMS values for Suite 1 models were calculated over the entire profile and for a restricted x range; the latter reduces the RMS misfit by 1.2 m. The minimum RMS value for Suite 2 models decreases by a further 0.2 m, resulting in an overall reduction of the RMS value of ~1.4 m (18%). Models with different fault lengths (15, 30, and 60 km) are visually indistinguishable. Values for δ, t, B, and RMS misfit are either the same or very similar for each best-fit model. These results indicate that the subsurface structure can be reliably determined from forward mechanical modeling even with uncertainty in fault length. Future work will test this method with more realistic WR fault geometry. References: [1] Banerdt et al. (2013), 44th LPSC, #1915. [2] Cohen (1999), Adv. Geophys., 41, 133-231. [3] Schultz and Lin (2001), JGR, 106, 16549-16566. [4] Lin and Stein (2004), JGR, 109, B02303, doi:10.1029/2003JB002607. [5] Toda et al. (2005), JGR, 103, 24543-24565. [6] Okubo and Schultz (2004), GSAB, 116, 597-605. [7] Watters (2004), Icarus, 171, 284-294. [8] Schultz (2000), JGR, 105, 12035-12052.
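
    The batch parameter search described above can be sketched generically. Here `forward_model` is a stand-in for a Coulomb dislocation run (the real workflow reads pre-generated input files), and the parameter grids are supplied by the caller; everything named below is illustrative, not the authors' code.

```python
import itertools
import numpy as np

def best_fit_fault(observed_x, observed_z, forward_model, grids):
    """Brute-force the fault-parameter grid, ranking models by RMS misfit.

    `forward_model(x, D, dip, t, B)` must return predicted surface
    elevation at positions `x`; `grids` maps each parameter name
    (D, dip, t, B) to its list of trial values.
    """
    best = (np.inf, None)
    for D, dip, t, B in itertools.product(
            grids["D"], grids["dip"], grids["t"], grids["B"]):
        predicted = forward_model(observed_x, D, dip, t, B)
        rms = np.sqrt(np.mean((predicted - observed_z) ** 2))
        if rms < best[0]:
            best = (rms, (D, dip, t, B))
    return best   # (minimum RMS misfit, best-fit parameters)
```

    With tens of thousands of input files the loop is embarrassingly parallel, which is the payoff of bypassing the GUI.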

  13. Evaluating the performance of a fault detection and diagnostic system for vapor compression equipment

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Breuker, M.S.; Braun, J.E.

    This paper presents a detailed evaluation of the performance of a statistical, rule-based fault detection and diagnostic (FDD) technique presented by Rossi and Braun (1997). Steady-state and transient tests were performed on a simple rooftop air conditioner over a range of conditions and fault levels. The steady-state data without faults were used to train models that predict outputs for normal operation. The transient data with faults were used to evaluate FDD performance. The effect of a number of design variables on FDD sensitivity for different faults was evaluated, and two prototype systems were specified for more complete evaluation. Good performance was achieved in detecting and diagnosing five faults using only six temperatures (two input and four output) and linear models. The performance improved by about a factor of two when ten measurements (three input and seven output) and higher-order models were used. This approach for evaluating and optimizing the performance of the statistical, rule-based FDD technique could be used as a design and evaluation tool when applying this FDD method to other packaged air-conditioning systems. Furthermore, the approach could be modified to evaluate the performance of other FDD methods.
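
    The statistical, rule-based residual logic can be sketched as follows. The fault classes, their sign patterns, and the 2-sigma threshold are hypothetical placeholders, not Rossi and Braun's trained rule base; the sketch only shows the detect-then-match structure of such a technique.

```python
import numpy as np

FAULT_SIGNATURES = {
    # Hypothetical sign patterns of four output-temperature residuals
    # for each fault class; a real rule base is derived from training.
    "refrigerant_leak":   (-1, +1, +1, -1),
    "condenser_fouling":  ( 0,  0, +1, +1),
    "evaporator_fouling": (-1, -1,  0,  0),
}

def diagnose(measured, predicted, sigma, z=2.0):
    """Statistical rule-based FDD: compare output residuals to thresholds.

    A fault is detected when any residual exceeds z standard deviations
    of the normal-operation model error; diagnosis picks the fault class
    whose sign pattern matches the thresholded residuals.
    """
    residual = np.asarray(measured) - np.asarray(predicted)
    signs = np.where(residual > z * sigma, 1,
             np.where(residual < -z * sigma, -1, 0))
    if not signs.any():
        return "normal"
    for fault, pattern in FAULT_SIGNATURES.items():
        if tuple(signs) == pattern:
            return fault
    return "unknown fault"
```

    Tightening `z` raises sensitivity to small faults at the cost of more false alarms, which is exactly the design trade-off the paper's sensitivity evaluation explores.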

  14. Double-dictionary matching pursuit for fault extent evaluation of rolling bearing based on the Lempel-Ziv complexity

    NASA Astrophysics Data System (ADS)

    Cui, Lingli; Gong, Xiangyang; Zhang, Jianyu; Wang, Huaqing

    2016-12-01

    The quantitative diagnosis of rolling bearing fault severity is crucial to making proper maintenance decisions. Targeting the fault features of rolling bearings, a novel double-dictionary matching pursuit (DDMP) method for fault extent evaluation based on the Lempel-Ziv complexity (LZC) index is proposed in this paper. To match the features of rolling bearing faults, an impulse time-frequency dictionary and a modulation dictionary are constructed to form the double-dictionary using a parameterized function model. A novel matching pursuit method is then proposed based on this double-dictionary. Rolling bearing vibration signals with different fault sizes are decomposed and reconstructed by the DDMP. After noise reduction and signal reconstruction, the LZC index is introduced to evaluate the fault extent. Application of this method to experimental fault signals from bearing outer races and inner races with different degrees of damage shows that the proposed method can effectively evaluate the fault extent.
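
    The LZC index used for fault-extent evaluation can be computed with the standard Kaspar-Schuster phrase-counting algorithm on a median-binarized signal; the normalization by n/log2(n) is one common convention and is an assumption here, as is the binarization rule.

```python
import numpy as np

def lz_complexity(sequence):
    """Kaspar-Schuster count of distinct phrases in a symbol sequence."""
    n = len(sequence)
    i, k, l = 0, 1, 1
    c, k_max = 1, 1
    while True:
        if sequence[i + k - 1] == sequence[l + k - 1]:
            k += 1
            if l + k > n:
                c += 1
                break
        else:
            k_max = max(k, k_max)
            i += 1
            if i == l:          # a new phrase has been found
                c += 1
                l += k_max
                if l + 1 > n:
                    break
                i, k, k_max = 0, 1, 1
            else:
                k = 1
    return c

def lzc_index(signal):
    """LZC index: binarize about the median, count phrases, and
    normalize by the asymptotic value n / log2(n)."""
    signal = np.asarray(signal)
    binary = "".join("1" if v else "0" for v in signal > np.median(signal))
    n = len(binary)
    return lz_complexity(binary) * np.log2(n) / n
```

    A heavily damaged bearing produces a more irregular (higher-complexity) reconstructed signal, so the index grows with fault extent; a clean periodic signal scores near zero.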

  15. Reliability Assessment for Low-cost Unmanned Aerial Vehicles

    NASA Astrophysics Data System (ADS)

    Freeman, Paul Michael

    Existing low-cost unmanned aerospace systems are unreliable, and engineers must blend reliability analysis with fault-tolerant control in novel ways. This dissertation introduces the University of Minnesota unmanned aerial vehicle flight research platform, a comprehensive simulation and flight test facility for reliability and fault-tolerance research. An industry-standard reliability assessment technique, the failure modes and effects analysis, is performed for an unmanned aircraft. Particular attention is afforded to the control surface and servo-actuation subsystem. Maintaining effector health is essential for safe flight; failures may lead to loss of control incidents. Failure likelihood, severity, and risk are qualitatively assessed for several effector failure modes. Design changes are recommended to improve aircraft reliability based on this analysis. Most notably, the control surfaces are split, providing independent actuation and dual-redundancy. The simulation models for control surface aerodynamic effects are updated to reflect the split surfaces using a first-principles geometric analysis. The failure modes and effects analysis is extended by using a high-fidelity nonlinear aircraft simulation. A trim state discovery is performed to identify the achievable steady, wings-level flight envelope of the healthy and damaged vehicle. Tolerance of elevator actuator failures is studied using familiar tools from linear systems analysis. This analysis reveals significant inherent performance limitations for candidate adaptive/reconfigurable control algorithms used for the vehicle. Moreover, it demonstrates how these tools can be applied in a design feedback loop to make safety-critical unmanned systems more reliable. Control surface impairments that do occur must be quickly and accurately detected. 
This dissertation also considers fault detection and identification for an unmanned aerial vehicle using model-based and model-free approaches and applies those algorithms to experimental faulted and unfaulted flight test data. Flight tests are conducted with actuator faults that affect the plant input and sensor faults that affect the vehicle state measurements. A model-based detection strategy is designed and uses robust linear filtering methods to reject exogenous disturbances, e.g. wind, while providing robustness to model variation. A data-driven algorithm is developed to operate exclusively on raw flight test data without physical model knowledge. The fault detection and identification performance of these complementary but different methods is compared. Together, enhanced reliability assessment and multi-pronged fault detection and identification techniques can help to bring about the next generation of reliable low-cost unmanned aircraft.

  16. Remote Agent Experiment

    NASA Technical Reports Server (NTRS)

    Benard, Doug; Dorais, Gregory A.; Gamble, Ed; Kanefsky, Bob; Kurien, James; Millar, William; Muscettola, Nicola; Nayak, Pandu; Rouquette, Nicolas; Rajan, Kanna; hide

    2000-01-01

    Remote Agent (RA) is a model-based, reusable artificial intelligence (AI) software system that enables goal-based spacecraft commanding and robust fault recovery. RA was flight-validated during an experiment on board Deep Space 1 (DS1) between May 17 and May 21, 1999.

  17. Fault Diagnosis for Rolling Bearings under Variable Conditions Based on Visual Cognition

    PubMed Central

    Cheng, Yujie; Zhou, Bo; Lu, Chen; Yang, Chao

    2017-01-01

    Fault diagnosis for rolling bearings has attracted increasing attention in recent years. However, few studies have focused on fault diagnosis for rolling bearings under variable conditions. This paper introduces a fault diagnosis method for rolling bearings under variable conditions based on visual cognition. The proposed method includes the following steps. First, the vibration signal data are transformed into a recurrence plot (RP), which is a two-dimensional image. Second, inspired by the visual invariance characteristic of the human visual system (HVS), we utilize speeded-up robust features (SURF) to extract fault features from the two-dimensional RP and generate a 64-dimensional feature vector, which is invariant to image translation, rotation, scaling variation, etc. Third, based on the manifold perception characteristic of the HVS, isometric mapping, a manifold learning method that can reflect the intrinsic manifold embedded in the high-dimensional space, is employed to obtain a low-dimensional feature vector. Finally, a classical classification method, the support vector machine, is utilized to realize fault diagnosis. Verification data were collected from the Case Western Reserve University Bearing Data Center, and the experimental results indicate that the proposed fault diagnosis method based on visual cognition is highly effective for rolling bearings under variable conditions, thus providing a promising approach from the cognitive computing field. PMID:28772943
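
    The recurrence plot construction in the first step can be sketched as below; the embedding dimension, delay, and 20% distance threshold are illustrative assumptions, as the abstract does not specify them.

```python
import numpy as np

def recurrence_plot(signal, dim=3, delay=2, eps=None):
    """Build a binary recurrence plot from a 1-D vibration signal.

    The signal is time-delay embedded into `dim`-dimensional vectors;
    RP[i, j] = 1 where trajectory points i and j lie within `eps` of
    each other (defaulting to 20% of the maximum pairwise distance).
    """
    n = len(signal) - (dim - 1) * delay
    # time-delay embedding: each row is one point of the trajectory
    emb = np.stack([signal[i:i + n] for i in range(0, dim * delay, delay)],
                   axis=1)
    dists = np.linalg.norm(emb[:, None, :] - emb[None, :, :], axis=-1)
    if eps is None:
        eps = 0.2 * dists.max()
    return (dists <= eps).astype(np.uint8)
```

    The resulting binary image is symmetric with an all-ones main diagonal, and its texture is what the SURF stage treats as a visual pattern.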

  18. The Design of Fault Tolerant Quantum Dot Cellular Automata Based Logic

    NASA Technical Reports Server (NTRS)

    Armstrong, C. Duane; Humphreys, William M.; Fijany, Amir

    2002-01-01

    As transistor geometries are reduced, quantum effects begin to dominate device performance. At some point, transistors cease to have the properties that make them useful computational components, and new computing elements must be developed in order to keep pace with Moore's Law. Quantum dot cellular automata (QCA) represent an alternative paradigm to transistor-based logic. QCA architectures that are robust to manufacturing tolerances and defects must be developed. We are developing software that allows the exploration of fault-tolerant QCA gate architectures by automating the specification, simulation, analysis, and documentation processes.

  19. Preliminary report on geophysical data in Yavapai County, Arizona

    USGS Publications Warehouse

    Langenheim, V.E.; Hoffmann, J.P.; Blasch, K.W.; DeWitt, Ed; Wirt, Laurie

    2002-01-01

    Recently acquired geophysical data provide information on the geologic framework and its effect on groundwater flow and stream/aquifer interaction in Yavapai County, Arizona. High-resolution aeromagnetic data reflect diverse rock types at and below the topographic surface and have permitted a preliminary interpretation of faults and underlying rock types (in particular, volcanic rocks) that will provide new insights into the geologic framework, a critical input to future hydrologic investigations. Aeromagnetic data map the western end of the Bear Wallow Canyon fault into the sedimentary fill of Verde Valley. Regional gravity data indicate potentially significant accumulations of low-density basin fill in Big Chino, Verde, and Williamson Valleys. Electrical and seismic data were also collected and help evaluate the approximate depth and extent of recent alluvium overlying Tertiary and Paleozoic sediments. These data will be used to ascertain the potential contribution of shallow groundwater subflow that cannot be measured by gages or flow meters, and whether stream flow in losing reaches is moving as subflow or is being lost to the subsurface. The geophysical data will help produce a more robust groundwater flow model of the region.

  20. Study on the Evaluation Method for Fault Displacement: Probabilistic Approach Based on Japanese Earthquake Rupture Data - Distributed fault displacements -

    NASA Astrophysics Data System (ADS)

    Inoue, N.; Kitada, N.; Tonagi, M.

    2016-12-01

    Distributed fault displacements in Probabilistic Fault Displacement Hazard Analysis (PFDHA) play an important role in the evaluation of important facilities such as nuclear installations. In Japan, nuclear installations must be constructed where there is no possibility of displacement occurring on active faults during an earthquake. Youngs et al. (2003) defined distributed faulting as displacement on other faults, shears, or fractures in the vicinity of the principal rupture, occurring in response to the principal faulting. Other researchers have treated data on distributed faulting around principal faults and modeled it according to their own definitions (e.g., Petersen et al., 2011; Takao et al., 2013). We organized Japanese fault displacement data and constructed slip-distance relationships depending on fault type. For reverse faults, the slip-distance relationship on the footwall shows a different trend from that on the hanging wall. The process zone or damage zone has been studied as a weak structure around principal faults; its density or number decreases rapidly away from the principal fault. We contrasted the trend of these zones with that of the distributed slip-distance distributions. Subsurface FEM simulations were carried out to investigate the distribution of stress around principal faults. The results indicated a trend similar to the distribution of field observations. This research was part of the 2014-2015 research project 'Development of evaluating method for fault displacement' by the Secretariat of the Nuclear Regulation Authority (S/NRA), Japan.

  1. Strike-slip faulting in the Inner California Borderlands, offshore Southern California.

    NASA Astrophysics Data System (ADS)

    Bormann, J. M.; Kent, G. M.; Driscoll, N. W.; Harding, A. J.; Sahakian, V. J.; Holmes, J. J.; Klotsko, S.; Kell, A. M.; Wesnousky, S. G.

    2015-12-01

    In the Inner California Borderlands (ICB), offshore of Southern California, modern dextral strike-slip faulting overprints a prominent system of basins and ridges formed during plate boundary reorganization 30-15 Ma. Geodetic data indicate faults in the ICB accommodate 6-8 mm/yr of Pacific-North American plate boundary deformation; however, the hazard posed by the ICB faults is poorly understood due to unknown fault geometry and loosely constrained slip rates. We present observations from high-resolution and reprocessed legacy 2D multichannel seismic (MCS) reflection datasets and multibeam bathymetry to constrain the modern fault architecture and tectonic evolution of the ICB. We use a sequence stratigraphy approach to identify discrete episodes of deformation in the MCS data and present the results of our mapping in a regional fault model that distinguishes active faults from relict structures. Significant differences exist between our model of modern ICB deformation and existing models. From east to west, the major active faults are the Newport-Inglewood/Rose Canyon, Palos Verdes, San Diego Trough, and San Clemente fault zones. Localized deformation on the continental slope along the San Mateo, San Onofre, and Carlsbad trends results from geometrical complexities in the dextral fault system. Undeformed early to mid-Pleistocene age sediments onlap and overlie deformation associated with the northern Coronado Bank fault (CBF) and the breakaway zone of the purported Oceanside Blind Thrust. Therefore, we interpret the northern CBF to be inactive, and slip rate estimates based on linkage with the Holocene active Palos Verdes fault are unwarranted. In the western ICB, the San Diego Trough fault (SDTF) and San Clemente fault have robust linear geomorphic expression, which suggests that these faults may accommodate a significant portion of modern ICB slip in a westward temporal migration of slip. 
The SDTF offsets young sediments between the US/Mexico border and the eastern margin of Avalon Knoll, where the fault is spatially coincident and potentially linked with the San Pedro Basin fault (SPBF). Kinematic linkage between the SDTF and the SPBF increases the potential rupture length for earthquakes on either fault and may allow events nucleating on the SDTF to propagate much closer to the LA Basin.

  2. Probabilistic evaluation of on-line checks in fault-tolerant multiprocessor systems

    NASA Technical Reports Server (NTRS)

    Nair, V. S. S.; Hoskote, Yatin V.; Abraham, Jacob A.

    1992-01-01

    The analysis of fault-tolerant multiprocessor systems that use concurrent error detection (CED) schemes is much more difficult than the analysis of conventional fault-tolerant architectures. Various analytical techniques have been proposed to evaluate CED schemes deterministically. However, these approaches are based on worst-case assumptions about the failure of system components, and the evaluation results often do not reflect the actual fault tolerance capabilities of the system. A probabilistic approach is developed to evaluate the fault detection and location capabilities of on-line checks in a system. The various probabilities associated with the checking schemes are identified and used in the framework of a matrix-based model. Based on these probabilistic matrices, estimates of the fault tolerance capabilities of various systems are derived analytically.

  3. Investigation of growth fault bend folding using discrete element modeling: Implications for signatures of active folding above blind thrust faults

    NASA Astrophysics Data System (ADS)

    Benesh, N. P.; Plesch, A.; Shaw, J. H.; Frost, E. K.

    2007-03-01

    Using the discrete element modeling method, we examine the two-dimensional nature of fold development above an anticlinal bend in a blind thrust fault. Our models were composed of numerical disks bonded together to form pregrowth strata overlying a fixed fault surface. This pregrowth package was then driven along the fault surface at a fixed velocity using a vertical backstop. Additionally, new particles were generated and deposited onto the pregrowth strata at a fixed rate to produce sequential growth layers. Models with and without mechanical layering were used, and the process of folding was analyzed in comparison with fold geometries predicted by kinematic fault bend folding as well as those observed in natural settings. Our results show that parallel fault bend folding behavior holds to first order in these models; however, a significant decrease in limb dip is noted for younger growth layers in all models. On the basis of comparisons to natural examples, we believe this deviation from kinematic fault bend folding to be a realistic feature of fold development resulting from an axial zone of finite width produced by materials with inherent mechanical strength. These results have important implications for how growth fold structures are used to constrain slip and paleoearthquake ages above blind thrust faults. Most notably, deformation localized about axial surfaces and structural relief across the fold limb seem to be the most robust observations that can readily constrain fault activity and slip. In contrast, fold limb width and shallow growth layer dips appear more variable and dependent on mechanical properties of the strata.

  4. Simulation-driven machine learning: Bearing fault classification

    NASA Astrophysics Data System (ADS)

    Sobie, Cameron; Freitas, Carina; Nicolai, Mike

    2018-01-01

    Increasing the accuracy of mechanical fault detection has the potential to improve system safety and economic performance by minimizing scheduled maintenance and the probability of unexpected system failure. Advances in computational performance have enabled the application of machine learning algorithms across numerous applications, including condition monitoring and failure detection. Past applications of machine learning to physical failure have relied explicitly on historical data, which limits the feasibility of this approach to in-service components with extended service histories. Furthermore, recorded failure data are often only valid for the specific circumstances and components for which they were collected. This work directly addresses these challenges for roller bearings with race faults by generating training data from high-resolution simulations of roller bearing dynamics, which are used to train machine learning algorithms that are then validated against four experimental datasets. Several machine learning methodologies are compared, ranging from well-established statistical feature-based methods to convolutional neural networks, and a novel application of dynamic time warping (DTW) to bearing fault classification is proposed as a robust, parameter-free method for race fault detection.
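
    A minimal DTW distance, the core of the proposed parameter-free classifier, can be sketched with the classic dynamic program; pairing it with a nearest-template rule is an assumption about how the classification step would use it, not the paper's exact pipeline.

```python
import numpy as np

def dtw_distance(a, b):
    """Classic dynamic-programming DTW distance between two 1-D series.

    D[i, j] = |a[i-1] - b[j-1]| + min over (match, insertion, deletion);
    the warping makes the distance tolerant of local time shifts, which
    is why it needs no tuned parameters for fault-signature matching.
    """
    n, m = len(a), len(b)
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = abs(a[i - 1] - b[j - 1])
            D[i, j] = cost + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
    return D[n, m]
```

    A signal could then be labeled with the fault class of its nearest simulated template under this distance, so simulated and measured signatures need not be time-aligned.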

  5. An imbalance fault detection method based on data normalization and EMD for marine current turbines.

    PubMed

    Zhang, Milu; Wang, Tianzhen; Tang, Tianhao; Benbouzid, Mohamed; Diallo, Demba

    2017-05-01

    This paper proposes an imbalance fault detection method based on data normalization and Empirical Mode Decomposition (EMD) for variable-speed direct-drive Marine Current Turbine (MCT) systems. The method is based on the MCT stator current under conditions of waves and turbulence. Its goal is to extract the blade imbalance fault feature, which is concealed by the supply frequency and environmental noise. First, a Generalized Likelihood Ratio Test (GLRT) detector is developed, and the monitoring variable is selected by analyzing the relationships between the variables. Then, the selected monitoring variable is converted into a time series through data normalization, which makes the imbalance fault characteristic frequency constant. Finally, the monitoring variable is filtered by the EMD method to eliminate the effect of turbulence. The experiments show that the proposed method is robust against turbulence, as demonstrated by comparing different fault severities and turbulence intensities. Compared with other methods, the experimental results indicate the feasibility and efficacy of the proposed method.
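
    A GLRT detector of the kind mentioned can be sketched by specializing (as an assumption) to a sinusoidal fault component of known normalized frequency in white Gaussian noise, which is the situation data normalization aims to create; the function name and threshold are illustrative.

```python
import numpy as np

def glrt_sinusoid(x, freq_bin, threshold):
    """GLRT-style test for a sinusoid of known frequency in white noise.

    Under H1 the signal contains a component at `freq_bin`; the test
    statistic compares the periodogram energy in that bin with the
    noise level estimated from the remaining bins, and is monotone
    in the generalized likelihood ratio for this model.
    """
    X = np.fft.rfft(x - np.mean(x))
    p = np.abs(X) ** 2
    noise = (np.sum(p) - p[freq_bin]) / (len(p) - 1)
    stat = p[freq_bin] / noise
    return stat > threshold, stat
```

    Because normalization pins the imbalance characteristic frequency to a fixed bin, a single such test per record suffices regardless of turbine speed.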

  6. Power maximization of variable-speed variable-pitch wind turbines using passive adaptive neural fault tolerant control

    NASA Astrophysics Data System (ADS)

    Habibi, Hamed; Rahimi Nohooji, Hamed; Howard, Ian

    2017-09-01

    Power maximization has always been a practical consideration in wind turbines. The question of how to achieve optimal power capture, especially when the system dynamics are nonlinear and the actuators are subject to unknown faults, is significant. This paper studies a control methodology for variable-speed variable-pitch wind turbines that accounts for uncertain nonlinear dynamics, system fault uncertainties, and unknown external disturbances. The nonlinear model of the wind turbine is presented, and the problem of maximizing extracted energy is formulated by designing the optimal desired states. With the system known, a model-based nonlinear controller is designed; then, to handle uncertainties, the unknown nonlinearities of the wind turbine are estimated using radial basis function neural networks. The adaptive neural fault-tolerant control is designed passively to be robust to model uncertainties, disturbances including wind speed variation and model noise, and completely unknown actuator faults in the generator torque and pitch actuators. The Lyapunov direct method is employed to prove that the closed-loop system is uniformly bounded. Simulation studies are performed to verify the effectiveness of the proposed method.

  7. Interim reliability evaluation program, Browns Ferry fault trees

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Stewart, M.E.

    1981-01-01

    An abbreviated fault tree method is used to evaluate and model Browns Ferry systems in the Interim Reliability Evaluation Program, simplifying the recording and display of events while retaining the scheme for identifying faults. The level of investigation is not changed, and the analytical thought process inherent in the conventional method is not compromised, but the abbreviated method takes less time and makes the fault modes much more visible.
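The quantitative core of any fault tree evaluation, combining basic-event probabilities through AND/OR gates up to the top event, can be sketched as follows (illustrative event probabilities, not Browns Ferry data):

```python
from functools import reduce

def gate_and(probs):
    """AND gate with independent basic events: all inputs must fail."""
    return reduce(lambda acc, p: acc * p, probs, 1.0)

def gate_or(probs):
    """OR gate with independent basic events: 1 - prod(1 - p_i)."""
    return 1.0 - reduce(lambda acc, p: acc * (1.0 - p), probs, 1.0)

# Hypothetical tree: top event occurs if the pump fails, or if both the
# primary and backup valves fail.
p_pump, p_valve, p_backup = 1e-3, 1e-2, 5e-2
p_top = gate_or([p_pump, gate_and([p_valve, p_backup])])
```

Nesting these two gate functions reproduces any coherent fault tree; an abbreviated method changes how the tree is recorded and displayed, not this underlying arithmetic.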

  8. Study on the Evaluation Method for Fault Displacement: Probabilistic Approach Based on Japanese Earthquake Rupture Data - Principal fault displacements -

    NASA Astrophysics Data System (ADS)

    Kitada, N.; Inoue, N.; Tonagi, M.

    2016-12-01

    The purpose of Probabilistic Fault Displacement Hazard Analysis (PFDHA) is to estimate fault displacement values and the extent of their impact. Two types of fault displacement are associated with an earthquake fault: principal fault displacement and distributed fault displacement. Distributed fault displacement should be evaluated for important facilities, such as nuclear installations. PFDHA estimates both principal and distributed fault displacement using distance-displacement functions constructed from field measurement data. We constructed slip-distance relations for principal fault displacement based on Japanese strike-slip and reverse-slip earthquakes, in order to apply them to Japan, which lies in a subduction setting. However, the observed displacement data are sparse, especially for reverse faults. Takao et al. (2013) estimated the relation using all fault types (reverse and strike-slip) together. Since then, several inland earthquakes have occurred in Japan, so here we estimate separate distance-displacement functions for the strike-slip and reverse fault types, incorporating the new fault displacement data. Several criteria for normalizing slip-distance data have been proposed; we normalized the principal fault displacement data using several of these methods and compared the resulting slip-distance functions. Data normalized by the total length of Japanese reverse faults did not show a particular trend in the slip-distance relation; for segmented data, the slip-distance relationship indicated a trend similar to that of strike-slip faults. We also discuss the relation between principal fault displacement distributions and source fault character. According to the slip distribution function of Petersen et al. (2011), for strike-slip faults the normalized displacement decreases toward the edge of the fault.
However, the Japanese strike-slip fault data do not show such a decrease near the end of the fault, indicating that fault displacement in Japan behaves differently near the fault edge than that model predicts. This research was part of the 2014-2015 research project `Development of evaluating method for fault displacement` by the Secretariat of the Nuclear Regulation Authority (NRA), Japan.
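The normalization step described above, dividing along-strike position by rupture length and slip by its maximum before averaging into a shape function, can be sketched as follows (synthetic elliptical slip data; the bin count and fault length are arbitrary):

```python
import numpy as np

def normalized_slip_profile(positions_km, slips_m, fault_length_km, n_bins=10):
    """Normalize along-strike position by rupture length and slip by its
    maximum, then bin-average to obtain a slip-distance shape function."""
    x = np.asarray(positions_km) / fault_length_km      # x/L in [0, 1]
    d = np.asarray(slips_m) / np.max(slips_m)           # D/Dmax in [0, 1]
    edges = np.linspace(0.0, 1.0, n_bins + 1)
    idx = np.clip(np.digitize(x, edges) - 1, 0, n_bins - 1)
    return np.array([d[idx == i].mean() if np.any(idx == i) else np.nan
                     for i in range(n_bins)])

# Synthetic elliptical slip distribution on a 30 km rupture (illustrative only)
x = np.linspace(0.0, 30.0, 31)
slip = np.sqrt(np.clip(1.0 - (2.0 * x / 30.0 - 1.0) ** 2, 0.0, None)) + 0.01
profile = normalized_slip_profile(x, slip, 30.0)
```

With real measurements the same profile, stacked over many earthquakes, is what reveals whether displacement tapers toward the fault ends or not.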

  9. Adaptive backstepping fault-tolerant control for flexible spacecraft with unknown bounded disturbances and actuator failures.

    PubMed

    Jiang, Ye; Hu, Qinglei; Ma, Guangfu

    2010-01-01

    In this paper, a robust adaptive fault-tolerant control approach to attitude tracking of flexible spacecraft is proposed for use in situations with reaction wheel/actuator failures, persistent bounded disturbances and unknown inertia parameter uncertainties. The controller is designed based on an adaptive backstepping sliding mode control scheme, and a sufficient condition is provided under which this control law renders the system semi-globally input-to-state stable, so that the closed-loop system is robust with respect to any disturbance within a quantifiable restriction on the amplitude, as well as the set of initial conditions, if the control gains are designed appropriately. Moreover, the control law does not require a fault detection and isolation mechanism even when the failure time instants, patterns and values of the actuator failures are unknown to the designers, as motivated by practical spacecraft control applications. In addition to detailed derivations of the new controller design and a rigorous sketch of the associated stability and attitude error convergence proofs, illustrative simulation results of an application to flexible spacecraft show that high-precision attitude control and vibration suppression are successfully achieved under various actuator failure scenarios. Copyright © 2009. Published by Elsevier Ltd.

  10. Integrating laboratory creep compaction data with numerical fault models: A Bayesian framework

    USGS Publications Warehouse

    Fitzenz, D.D.; Jalobeanu, A.; Hickman, S.H.

    2007-01-01

    We developed a robust Bayesian inversion scheme to plan and analyze laboratory creep compaction experiments. We chose a simple creep law that features the main parameters of interest when trying to identify rate-controlling mechanisms from experimental data. By integrating the chosen creep law or an approximation thereof, one can use all the data, either simultaneously or in overlapping subsets, thus making more complete use of the experiment data and propagating statistical variations in the data through to the final rate constants. Despite the nonlinearity of the problem, with this technique one can retrieve accurate estimates of both the stress exponent and the activation energy, even when the porosity time series data are noisy. Whereas adding observation points and/or experiments reduces the uncertainty on all parameters, enlarging the range of temperature or effective stress significantly reduces the covariance between stress exponent and activation energy. We apply this methodology to hydrothermal creep compaction data on quartz to obtain a quantitative, semiempirical law for fault zone compaction in the interseismic period. Incorporating this law into a simple direct rupture model, we find marginal distributions of the time to failure that are robust with respect to errors in the initial fault zone porosity. Copyright 2007 by the American Geophysical Union.
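The essence of the inversion, fitting the stress exponent and activation energy jointly across experiments that span stress and temperature, can be sketched with a grid-based fit on synthetic log-rate data. This is a simplification of the full Bayesian scheme; the creep-law form shown is standard, but the parameter values and noise level are assumptions:

```python
import numpy as np

R = 8.314  # gas constant, J/(mol K)

def log_creep_rate(stress_mpa, temp_k, log_a, n, q):
    """log of rate = A * sigma^n * exp(-Q / (R T)), a common semiempirical creep law."""
    return log_a + n * np.log(stress_mpa) - q / (R * temp_k)

# Synthetic "experiments" spanning stress and temperature (illustrative values)
stress = np.array([10.0, 20.0, 40.0, 10.0, 20.0, 40.0])
temp = np.array([400.0, 400.0, 400.0, 500.0, 500.0, 500.0])
rng = np.random.default_rng(2)
obs = log_creep_rate(stress, temp, -2.0, 2.0, 5.0e4) + rng.normal(0.0, 0.05, 6)

# Grid search over (n, Q); the pre-exponential log A is profiled out
# analytically because it shifts every log-rate by the same amount.
best = None
for n_try in np.linspace(1.0, 3.0, 41):
    for q_try in np.linspace(3.0e4, 7.0e4, 41):
        pred = n_try * np.log(stress) - q_try / (R * temp)
        log_a_hat = np.mean(obs - pred)
        rss = np.sum((obs - pred - log_a_hat) ** 2)
        if best is None or rss < best[0]:
            best = (rss, n_try, q_try)

_, n_hat, q_hat = best
```

As the abstract notes, widening the stress or temperature range is what decorrelates the stress exponent from the activation energy; with a single temperature the Q column of this fit would be indistinguishable from the intercept.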

  11. Active fault characterization throughout the Caribbean and Central America for seismic hazard modeling

    NASA Astrophysics Data System (ADS)

    Styron, Richard; Pagani, Marco; Garcia, Julio

    2017-04-01

    The region encompassing Central America and the Caribbean is tectonically complex, defined by the Caribbean plate's interactions with the North American, South American and Cocos plates. Though active deformation over much of the region has received at least cursory investigation over the past 50 years, the area is chronically understudied and lacks a modern, synoptic characterization. Regardless, the level of risk in the region - as dramatically demonstrated by the 2010 Haiti earthquake - remains high because of high-vulnerability buildings and dense urban areas that are home to over 100 million people concentrated near plate boundaries and other major structures. As part of a broader program to study seismic hazard worldwide, the Global Earthquake Model Foundation is currently working to quantify seismic hazard in the region. To this end, we are compiling a database of active faults throughout the region that will be integrated into hazard models similar to those recently developed for South America. Our initial compilation hosts about 180 fault traces in the region. The faults show a wide range of characteristics, reflecting the diverse styles of plate-boundary and plate-margin deformation observed. Regional deformation ranges from highly localized faulting along well-defined strike-slip faults to broad zones of distributed normal or thrust faulting, and from readily observable yet slowly slipping structures to inferred faults with geodetically measured slip rates >10 mm/yr but essentially no geomorphic expression. Furthermore, primary structures such as the Motagua-Polochic Fault Zone (the strike-slip plate boundary between the North American and Caribbean plates in Guatemala) display strong along-strike slip-rate gradients, and many other structures are undersea for most or all of their length.
A thorough assessment of seismic hazard in the region will require the integration of a range of datasets and techniques and a comprehensive characterization of the epistemic uncertainties driving the overall variability of hazard and risk results. For this reason, and in order to leverage the knowledge available in the region, the datasets and the hazard model will be developed in close collaboration with local experts, consistent with GEM's principles of transparency and collaboration. Regarding active faults in the shallow crust, we are currently working on assigning slip rates to structures based on geologic and geodetic strain rates, though this will be challenging in areas with sparse constraints. An additional area of ongoing work is the delineation of 3D seismic sources from disjoint fault traces; we are currently evaluating methods for this. Though work in the region is challenging, we anticipate that our results will not only lead to more robust seismic hazard and risk estimates for the region, but may also serve as a template for workflows in other zones with poor or inhomogeneous data.

  12. Development of a variable structure-based fault detection and diagnosis strategy applied to an electromechanical system

    NASA Astrophysics Data System (ADS)

    Gadsden, S. Andrew; Kirubarajan, T.

    2017-05-01

    Signal processing techniques are prevalent in a wide range of fields: control, target tracking, telecommunications, robotics, fault detection and diagnosis, and even stock market analysis, to name a few. Although it was first introduced in the 1950s, the Kalman filter (KF) remains the most popular method used for signal processing and state estimation; it offers an optimal solution to the estimation problem under strict assumptions. Since then, a number of other estimation strategies and filters have been introduced to overcome robustness issues, such as the smooth variable structure filter (SVSF). In this paper, properties of the SVSF are explored in an effort to detect and diagnose faults in an electromechanical system. The results are compared with the KF method, and future work is discussed.
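A minimal version of the filter-based fault detection being compared, thresholding the normalized innovations of a Kalman filter, can be sketched as follows. This is a scalar random-walk KF with an injected sensor bias; the tuning values and the bias magnitude are assumptions, not values from the paper:

```python
import numpy as np

def kf_innovations(z, q=1e-4, r=1e-2):
    """Scalar random-walk Kalman filter; returns the normalized innovation
    (residual) sequence used for threshold-based fault detection."""
    x, p = 0.0, 1.0
    out = []
    for zk in z:
        p += q                        # predict (random-walk process noise)
        s = p + r                     # innovation covariance
        nu = zk - x                   # innovation
        out.append(nu / np.sqrt(s))   # ~ N(0, 1) while the model holds
        k = p / s
        x += k * nu                   # update
        p *= (1.0 - k)
    return np.array(out)

rng = np.random.default_rng(3)
z = rng.normal(0.0, 0.1, 400)
z[200:] += 1.0                        # injected sensor bias fault at sample 200
res = kf_innovations(z)
```

The normalized residual spikes at the fault onset and then decays as the filter absorbs the bias, which is exactly the window a residual-evaluation stage must catch; robustness comparisons between the KF and SVSF revolve around how this residual behaves under model uncertainty.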

  13. Design of robust reliable control for T-S fuzzy Markovian jumping delayed neutral type neural networks with probabilistic actuator faults and leakage delays: An event-triggered communication scheme.

    PubMed

    Syed Ali, M; Vadivel, R; Saravanakumar, R

    2018-06-01

    This study examines the problem of robust reliable control for Takagi-Sugeno (T-S) fuzzy Markovian jumping delayed neural networks with probabilistic actuator faults and leakage terms under an event-triggered communication scheme. First, the randomly occurring actuator faults and their failure rates are governed by two sets of unrelated random variables satisfying certain probabilistic distributions for each actuator, and a new type of distribution-based event-triggered fault model is proposed that accounts for the effect of transmission delay. Second, a T-S fuzzy model is adopted for the neural networks, and the randomness of actuator failures is modeled in a Markov jump framework. Third, to guarantee that the closed-loop system is exponentially mean-square stable with a prescribed reliable control performance, a Markov jump event-triggered scheme is designed, which is the main purpose of this study. Fourth, by constructing an appropriate Lyapunov-Krasovskii functional and employing the Newton-Leibniz formulation and integral inequalities, several delay-dependent criteria for the solvability of the addressed problem are derived. The obtained stability criteria are stated in terms of linear matrix inequalities (LMIs), which can be checked numerically using the LMI toolbox in MATLAB. Finally, numerical examples, including a real-life benchmark application, are given to illustrate the effectiveness and reduced conservatism of the proposed results over existing ones. Copyright © 2018 ISA. Published by Elsevier Ltd. All rights reserved.
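The final feasibility check mentioned above can be illustrated on a toy discrete-time system: positive definiteness of a Lyapunov matrix P together with negative definiteness of AᵀPA - P certifies stability. This sketch avoids an LMI solver by constructing P from the Lyapunov series; the matrices are illustrative and unrelated to the paper's system:

```python
import numpy as np

def lyapunov_solution(a, q, terms=500):
    """P = sum_k (A^T)^k Q A^k solves the discrete Lyapunov equation
    A^T P A - P = -Q when A is Schur stable (all |eigenvalues| < 1)."""
    p = np.zeros_like(q)
    m = np.eye(a.shape[0])
    for _ in range(terms):
        p = p + m.T @ q @ m
        m = a @ m                     # advance to A^(k+1)
    return p

a = np.array([[0.9, 0.1],
              [0.0, 0.8]])            # hypothetical stable closed-loop matrix
q = np.eye(2)
p = lyapunov_solution(a, q)

lmi = a.T @ p @ a - p                 # should equal -Q, hence negative definite
p_pos = bool(np.all(np.linalg.eigvalsh(p) > 0))
lmi_neg = bool(np.all(np.linalg.eigvalsh((lmi + lmi.T) / 2) < 0))
```

Real LMI problems of the kind derived in the paper have the matrix variables unknown and are handed to a semidefinite-programming solver; the eigenvalue test here only verifies a candidate solution.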

  14. Sensor Data Quality and Angular Rate Down-Selection Algorithms on SLS EM-1

    NASA Technical Reports Server (NTRS)

    Park, Thomas; Smith, Austin; Oliver, T. Emerson

    2018-01-01

    The NASA Space Launch System Block 1 launch vehicle is equipped with an Inertial Navigation System (INS) and multiple Rate Gyro Assemblies (RGA) that are used in the Guidance, Navigation, and Control (GN&C) algorithms. The INS provides the inertial position, velocity, and attitude of the vehicle along with both angular rate and specific force measurements. Additionally, multiple sets of co-located rate gyros supply angular rate data. The collection of angular rate data, taken along the launch vehicle, is used to separate out vehicle motion from flexible body dynamics. Since the system architecture uses redundant sensors, the capability was developed to evaluate the health (or validity) of the independent measurements. A suite of Sensor Data Quality (SDQ) algorithms is responsible for assessing the angular rate data from the redundant sensors. When failures are detected, SDQ takes the appropriate action and disqualifies or removes faulted sensors from forward processing. Additionally, the SDQ algorithms contain logic for down-selecting the angular rate data used by the GN&C software from the set of healthy measurements. This paper explores the trades and analyses that were performed in selecting a set of robust fault-detection algorithms included in the GN&C flight software. These trades included both an assessment of hardware-provided health and status data and an evaluation of different algorithms based on time to detection, the types of failures detected, and the probability of false positives. We then provide an overview of the algorithms used for both fault detection and measurement down-selection. We next discuss the role of trajectory design, flexible-body models, and vehicle response to off-nominal conditions in setting the detection thresholds. Lastly, we present lessons learned from software integration and hardware-in-the-loop testing.
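The disqualify-then-down-select logic described can be sketched as a simple median-consensus scheme. This is a simplification with hypothetical channel names; the flight algorithms also use persistence counters and hardware status words not modeled here:

```python
def sdq_down_select(measurements, healthy, tol):
    """Disqualify channels deviating from the median of the currently healthy
    channels by more than tol, then down-select the median of the survivors.

    measurements: {channel: angular rate}; healthy: {channel: bool}, persistent
    across calls so a disqualified channel stays out of forward processing.
    """
    vals = sorted(m for ch, m in measurements.items() if healthy[ch])
    med = vals[len(vals) // 2]
    for ch, m in measurements.items():
        if healthy[ch] and abs(m - med) > tol:
            healthy[ch] = False               # faulted channel removed for good
    good = sorted(m for ch, m in measurements.items() if healthy[ch])
    return good[len(good) // 2], healthy

rates = {"rga1": 1.00, "rga2": 1.02, "rga3": 5.00}   # hypothetical; rga3 faulted
status = {"rga1": True, "rga2": True, "rga3": True}
selected, status = sdq_down_select(rates, status, tol=0.5)
```

Median-based selection is attractive for redundant sensors because a single hard-over failure cannot drag the selected value, which is one reason time-to-detection and false-positive trades can be made independently of the down-selection rule.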

  15. Engine rotor health monitoring: an experimental approach to fault detection and durability assessment

    NASA Astrophysics Data System (ADS)

    Abdul-Aziz, Ali; Woike, Mark R.; Clem, Michelle; Baaklini, George

    2015-03-01

    Efforts to update and improve turbine engine components to meet flight safety and durability requirements are commitments that engine manufacturers continuously strive to fulfill. Most of their concerns and development energies focus on rotating components such as rotor disks. These components typically undergo rigorous operating conditions, and the high centrifugal loadings they carry expose them to various failure mechanisms. Thus, developing highly advanced health monitoring technology to screen their efficacy and performance is essential to prolonged service life and operational success. Nondestructive evaluation techniques are among the many screening methods presently used to detect hidden flaws and minute cracks before any catastrophic event occurs. Most of these methods or procedures are confined to evaluating material discontinuities and other defects that have matured to a point where failure is imminent. Hence, development of more robust techniques to predict faults in these components before any catastrophic event is highly vital. This paper presents the ongoing research efforts at the NASA Glenn Research Center (GRC) rotor dynamics laboratory in support of developing a fault detection system for key critical turbine engine components. Data obtained from spin test experiments of a rotor disk, covering the behavior of blade tip clearance, tip timing, and shaft displacement as measured by eddy current, capacitive, and microwave sensors, are presented. Additional results linking test data with finite element modeling to characterize the structural durability of a cracked rotor, as it relates to the experimental tests and findings, are also presented. An obvious difference in the vibration response is shown between the notched and the baseline (no-notch) rotor disks, indicating the presence of some type of irregularity.

  16. Etude et simulation du protocole TTEthernet sur un sous-systeme de gestion de vols et adaptation de la planification des tâches a des fins de simulation

    NASA Astrophysics Data System (ADS)

    Abidi, Dhafer

    TTEthernet is a deterministic network technology that enhances Layer 2 Quality-of-Service (QoS) for Ethernet. The components that implement its services enrich the Ethernet functionality with distributed fault-tolerant synchronization, robust temporal partitioning of bandwidth, and synchronous communication with fixed latency and low jitter. TTEthernet services can facilitate the design of scalable, robust, less complex, fault-tolerant distributed systems and architectures. Simulation is nowadays an essential step in the critical-systems design process and represents valuable support for validation and performance evaluation. CoRE4INET is a project bringing together all TTEthernet simulation models currently available; it is based on extending the models of the OMNeT++ INET framework. Our objective is to study and simulate the TTEthernet protocol on a flight management subsystem (FMS). The idea is to use CoRE4INET to design the simulation model of the target system. The problem is that CoRE4INET does not offer a task scheduling tool for a TTEthernet network. To overcome this problem, we propose an adaptation, for simulation purposes, of a task scheduling approach based on a formal specification of the network constraints. The use of the Yices solver allowed the translation of the formal specification into an executable program to generate the desired transmission plan. A case study finally allowed us to assess the impact of the arrangement of time-triggered frame offsets on the performance of each type of traffic in the system.
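The scheduling problem handed to the solver can be illustrated with a brute-force search for non-overlapping periodic frame offsets. This is a toy stand-in for the Yices-based approach; the periods, frame duration, and slot count are arbitrary:

```python
from math import gcd
from itertools import product

def frames_collide(o1, p1, o2, p2, dur):
    """Two periodic frames collide if any of their transmission windows
    overlap within the hyperperiod of the two periods."""
    hyper = p1 * p2 // gcd(p1, p2)
    for s1 in range(o1, hyper, p1):
        for s2 in range(o2, hyper, p2):
            if s1 < s2 + dur and s2 < s1 + dur:
                return True
    return False

def schedule_offsets(periods, dur, slots):
    """Exhaustive offset search; an SMT solver such as Yices explores the same
    constraint system far more efficiently on realistic problem sizes."""
    n = len(periods)
    for offs in product(range(slots), repeat=n):
        if all(not frames_collide(offs[i], periods[i], offs[j], periods[j], dur)
               for i in range(n) for j in range(i + 1, n)):
            return list(offs)
    return None

plan = schedule_offsets([4, 4, 8], dur=1, slots=4)  # three time-triggered frames
```

The pairwise non-overlap condition is exactly the kind of constraint that gets written as a formal specification and discharged to the solver; the returned offsets form the transmission plan.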

  17. 14 CFR Special Federal Aviation... - Fuel Tank System Fault Tolerance Evaluation Requirements

    Code of Federal Regulations, 2014 CFR

    2014-01-01

    14 CFR, Aeronautics and Space, SFAR No. 88 (2014 edition): Special Federal Aviation Regulation No. 88, Fuel Tank System Fault Tolerance Evaluation Requirements.

  18. 14 CFR Special Federal Aviation... - Fuel Tank System Fault Tolerance Evaluation Requirements

    Code of Federal Regulations, 2011 CFR

    2011-01-01

    14 CFR, Aeronautics and Space, SFAR No. 88 (2011 edition): Special Federal Aviation Regulation No. 88, Fuel Tank System Fault Tolerance Evaluation Requirements.

  19. 14 CFR Special Federal Aviation... - Fuel Tank System Fault Tolerance Evaluation Requirements

    Code of Federal Regulations, 2012 CFR

    2012-01-01

    14 CFR, Aeronautics and Space, SFAR No. 88 (2012 edition): Special Federal Aviation Regulation No. 88, Fuel Tank System Fault Tolerance Evaluation Requirements.

  20. 14 CFR Special Federal Aviation... - Fuel Tank System Fault Tolerance Evaluation Requirements

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    14 CFR, Aeronautics and Space, SFAR No. 88 (2010 edition): Special Federal Aviation Regulation No. 88, Fuel Tank System Fault Tolerance Evaluation Requirements.

  1. 14 CFR Special Federal Aviation... - Fuel Tank System Fault Tolerance Evaluation Requirements

    Code of Federal Regulations, 2013 CFR

    2013-01-01

    14 CFR, Aeronautics and Space, SFAR No. 88 (2013 edition): Special Federal Aviation Regulation No. 88, Fuel Tank System Fault Tolerance Evaluation Requirements.

  2. Structural Health and Prognostics Management for Offshore Wind Turbines: Sensitivity Analysis of Rotor Fault and Blade Damage with O&M Cost Modeling

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Myrent, Noah J.; Barrett, Natalie C.; Adams, Douglas E.

    2014-07-01

    Operations and maintenance costs for offshore wind plants are significantly higher than the current costs for land-based (onshore) wind plants. One way to reduce these costs would be to implement a structural health and prognostic management (SHPM) system as part of a condition-based maintenance paradigm with smart load management, and to utilize a state-based cost model to assess the economics associated with use of the SHPM system. To facilitate the development of such a system, a multi-scale modeling and simulation approach developed in prior work is used to identify how the underlying physics of the system are affected by the presence of damage and faults, and how these changes manifest themselves in the operational response of a full turbine. This methodology was used to investigate two case studies on a 5-MW offshore wind turbine: (1) the effects of rotor imbalance due to pitch error (aerodynamic imbalance) and mass imbalance, and (2) disbond of the shear web. Sensitivity analyses were carried out for the detection strategies of rotor imbalance and shear web disbond developed in prior work by evaluating the robustness of key measurement parameters in the presence of varying wind speeds, horizontal shear, and turbulence. Detection strategies were refined for these fault mechanisms and probabilities of detection were calculated. For all three fault mechanisms, the probability of detection was 96% or higher for the optimized wind speed ranges of the laminar, 30% horizontal shear, and 60% horizontal shear wind profiles. The revised cost model provided insight into the estimated savings in operations and maintenance costs as they relate to the characteristics of the SHPM system.
The integration of the health monitoring information and the O&M cost versus damage/fault severity information provides the initial steps toward identifying processes that reduce operations and maintenance costs for an offshore wind farm while increasing turbine availability, revenue, and overall profit.

  3. Reexamination of the subsurface fault structure in the vicinity of the 1989 moment-magnitude-6.9 Loma Prieta earthquake, central California, using steep-reflection, earthquake, and magnetic data

    USGS Publications Warehouse

    Zhang, Edward; Fuis, Gary S.; Catchings, Rufus D.; Scheirer, Daniel S.; Goldman, Mark; Bauer, Klaus

    2018-06-13

    We reexamine the geometry of the causative fault structure of the 1989 moment-magnitude-6.9 Loma Prieta earthquake in central California, using seismic-reflection, earthquake-hypocenter, and magnetic data. Our study is prompted by recent interpretations of a two-part dip of the San Andreas Fault (SAF) accompanied by a flower-like structure in the Coachella Valley, in southern California. Initially, the prevailing interpretation of fault geometry in the vicinity of the Loma Prieta earthquake was that the mainshock did not rupture the SAF, but rather a secondary fault within the SAF system, because network locations of aftershocks defined neither a vertical plane nor a fault plane that projected to the surface trace of the SAF. Subsequent waveform cross-correlation and double-difference relocations of Loma Prieta aftershocks appear to have clarified the fault geometry somewhat, with steeply dipping faults in the upper crust possibly connecting to the more moderately southwest-dipping mainshock rupture in the middle crust. Examination of steep-reflection data, extracted from a 1991 seismic-refraction profile through the Loma Prieta area, reveals three robust fault-like features that agree approximately in geometry with the clusters of upper-crustal relocated aftershocks. The subsurface geometry of the San Andreas, Sargent, and Berrocal Faults can be mapped using these features and the aftershock clusters. The San Andreas and Sargent Faults appear to dip northeastward in the uppermost crust and change dip continuously toward the southwest with depth. Previous models of gravity and magnetic data on profiles through the aftershock region also define a steeply dipping SAF, with an initial northeastward dip in the uppermost crust that changes with depth. At a depth of 6 to 9 km, upper-crustal faults appear to project into the moderately southwest-dipping, planar mainshock rupture.
The change to a planar dipping rupture at 6–9 km is similar to fault geometry seen in the Coachella Valley.

  4. Linking Incoming Plate Faulting and Intermediate Depth Seismicity

    NASA Astrophysics Data System (ADS)

    Kwong, K. B.; van Zelst, I.; Tong, X.; Eimer, M. O.; Naif, S.; Hu, Y.; Zhan, Z.; Boneh, Y.; Schottenfels, E.; Miller, M. S.; Moresi, L. N.; Warren, J. M.; Wiens, D. A.

    2017-12-01

    Intermediate depth earthquakes, occurring at depths of 70-350 km, are often attributed to dehydration reactions within the subducting plate. It is proposed that incoming plate normal faulting associated with plate bending at the trench may control the amount of hydration in the plate by producing large damage zones that create pathways for the infiltration of seawater deep into the subducting mantle. However, a relationship between incoming plate seismicity, faulting, and intermediate depth seismicity has not been established. We compiled a global dataset consisting of incoming plate earthquake moment tensor (CMT) solutions, focal depths, and bend fault spacing and offset measurements, along with plate age and convergence rates. In addition, a global intermediate depth seismicity dataset was compiled with parameters such as the maximum seismic moment and seismicity rate, as well as thicknesses of double seismic zones. The maximum fault offset in the bending region has a strong correlation with the intermediate depth seismicity rate, but a more modest correlation with other parameters such as convergence velocity and plate age. We estimated the expected rate of seismic moment release for the incoming plate faults using mapped fault scarps from bathymetry. We compare this with the cumulative moment from normal faulting earthquakes in the incoming plate from the global CMT catalog to determine whether outer rise fault movement has an aseismic component. Preliminary results from Tonga and the Middle America Trench suggest there may be an aseismic component to incoming plate bending faulting. The cumulative seismic moment calculated for the outer rise faults will also be compared to the cumulative moment from intermediate depth earthquakes to assess whether these parameters are related.
To support the observational part of this study, we developed a geodynamic numerical modeling study to systematically explore the influence of parameters such as plate age and convergence rate on the offset, depth, and spacing of outer rise faults. We then compare these robust constraints on outer rise faulting to the observed widths of intermediate depth earthquakes globally.

  5. Reliability of Fault Tolerant Control Systems. Part 2

    NASA Technical Reports Server (NTRS)

    Wu, N. Eva

    2000-01-01

    This paper reports Part II of a two-part effort intended to delineate the relationship between reliability and fault-tolerant control in a quantitative manner. Reliability properties peculiar to fault-tolerant control systems are emphasized, such as the presence of analytic redundancy in high proportion, the dependence of failures on control performance, and the high risks associated with decisions in redundancy management due to multiple sources of uncertainty and sometimes large processing requirements. As a consequence, coverage of failures through redundancy management can be severely limited. The paper proposes to formulate the fault-tolerant control problem as an optimization problem that maximizes coverage of failures through redundancy management. Coverage modeling is attempted in a way that captures its dependence on the control performance and on the diagnostic resolution. Under the proposed redundancy management policy, it is shown that enhanced overall system reliability can be achieved with a control law of superior robustness, with an estimator of higher resolution, and with a control performance requirement of lesser stringency.
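The central trade-off here, coverage of failures limiting the benefit of redundancy, shows up even in the simplest duplex reliability model. The formula below is a textbook illustration, not the paper's model:

```python
def duplex_reliability(r, c):
    """Mission reliability of a duplex system with unit reliability r and
    failure coverage c: the system survives if both units work, or if one
    fails, the failure is covered, and the other unit works."""
    return r * r + 2.0 * r * (1.0 - r) * c

perfect = duplex_reliability(0.9, 1.0)   # full redundancy benefit
partial = duplex_reliability(0.9, 0.5)   # imperfect coverage erodes the benefit
simplex = 0.9                            # single unit, for comparison
```

With r = 0.9, perfect coverage gives 0.99, while coverage of 0.5 drops the duplex to 0.90, no better than a single unit; this is why maximizing coverage through redundancy management can matter more than adding hardware.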

  6. Fault-tolerant optimised tracking control for unknown discrete-time linear systems using a combined reinforcement learning and residual compensation methodology

    NASA Astrophysics Data System (ADS)

    Han, Ke-Zhen; Feng, Jian; Cui, Xiaohong

    2017-10-01

    This paper considers the fault-tolerant optimised tracking control (FTOTC) problem for unknown discrete-time linear systems. A research scheme is proposed on the basis of data-based parity space identification, reinforcement learning, and residual compensation techniques. The main characteristic of this scheme lies in parity-space-identification-based simultaneous tracking control and residual compensation. The technical approach consists of four main components: a subspace-aided method is applied to design an observer-based residual generator; a reinforcement Q-learning approach is used to solve for the optimised tracking control policy; robust H∞ theory is relied upon to achieve noise attenuation; and fault estimation triggered by the residual generator is adopted to perform fault compensation. To clarify the design and implementation procedures, an integrated algorithm is further constructed to link these four functional units. Detailed analysis and proofs are subsequently given to establish the guaranteed FTOTC performance of the proposed conclusions. Finally, a case simulation is provided to verify its effectiveness.

  7. Monitoring tooth profile faults in epicyclic gearboxes using synchronously averaged motor currents: Mathematical modeling and experimental validation

    NASA Astrophysics Data System (ADS)

    Ottewill, J. R.; Ruszczyk, A.; Broda, D.

    2017-02-01

    Time-varying transmission paths and inaccessibility can increase the difficulty of both acquiring and processing vibration signals for the purpose of monitoring epicyclic gearboxes. Recent work has shown that the synchronous signal averaging approach may be applied to measured motor currents in order to diagnose tooth faults in parallel-shaft gearboxes. In this paper we further develop the approach so that it may also be applied to monitor tooth faults in epicyclic gearboxes. A low-degree-of-freedom model of an epicyclic gearbox is introduced which incorporates the possibility of simulating tooth faults, as well as any subsequent loss of tooth contact due to these faults. By combining this model with a simple space-phasor model of an induction motor, it is possible to show that, in theory, tooth faults in epicyclic gearboxes may be identified from motor currents. Applying the synchronous averaging approach to experimentally recorded motor currents and angular displacements from a shaft-mounted encoder validates this finding. Comparison between experiments and theory highlights the influence of operating conditions, backlash, and shaft couplings on the transient response excited in the currents by the tooth fault. The results obtained suggest that the method may be a viable alternative or complement to more traditional methods for monitoring gearboxes. However, general observations also indicate that further investigation into the sensitivity and robustness of the method would be beneficial.
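The time-synchronous averaging at the heart of the method can be sketched as follows, on synthetic data; the samples-per-revolution value, the per-revolution component, and the noise level are assumptions:

```python
import numpy as np

def tsa(signal, samples_per_rev):
    """Time-synchronous average: cut the record into whole shaft revolutions
    and average them, attenuating content not synchronous with that shaft."""
    n_rev = len(signal) // samples_per_rev
    return signal[: n_rev * samples_per_rev].reshape(n_rev, samples_per_rev).mean(axis=0)

rng = np.random.default_rng(4)
spr, n_rev = 200, 50
idx = np.arange(spr * n_rev)
gear_component = np.sin(2.0 * np.pi * 3.0 * idx / spr)   # 3 events per revolution
raw = gear_component + rng.normal(0.0, 1.0, idx.size)    # buried in noise
averaged = tsa(raw, spr)
template = np.sin(2.0 * np.pi * 3.0 * np.arange(spr) / spr)
residual_noise = float(np.std(averaged - template))
```

Asynchronous noise is attenuated by roughly the square root of the number of revolutions averaged (here about a factor of 7), which is why a localized tooth fault that repeats once per revolution survives the average while broadband noise does not. In practice the encoder signal is what defines the revolution boundaries when shaft speed varies.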

  8. Fault geometry inversion and slip distribution of the 2010 Mw 7.2 El Mayor-Cucapah earthquake from geodetic data

    NASA Astrophysics Data System (ADS)

    Huang, Mong-Han; Fielding, Eric J.; Dickinson, Haylee; Sun, Jianbao; Gonzalez-Ortega, J. Alejandro; Freed, Andrew M.; Bürgmann, Roland

    2017-01-01

    The 4 April 2010 Mw 7.2 El Mayor-Cucapah (EMC) earthquake in Baja California and Sonora, Mexico, had primarily right-lateral strike-slip motion and a minor normal-slip component. The surface rupture extended about 120 km in a NW-SE direction, west of the Cerro Prieto fault. Here we use geodetic measurements including near- to far-field GPS, interferometric synthetic aperture radar (InSAR), and subpixel offset measurements of radar and optical images to characterize the fault slip during the EMC event. We use dislocation inversion methods to determine an optimal nine-segment fault geometry, as well as a subfault slip distribution, from the geodetic measurements. By systematically perturbing the fault dip angles, randomly removing one geodetic data constraint at a time, or using different data combinations, we explore the robustness of the inferred slip distribution along fault strike and depth. The model-fitting residuals imply contributions of early postseismic deformation to the InSAR measurements, as well as lateral heterogeneity in the crustal elastic structure between the Peninsular Ranges and the Salton Trough. We also find that incorporating near-field geodetic data and a finer fault patch size, together with the attendant reduction in smoothing, diminishes the inferred shallow slip deficit for the EMC event. These results show that the outcomes of coseismic inversions can vary greatly depending on model parameterization and methodology.
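    The slip-distribution estimation in such studies is, at its core, a regularized linear inversion: slip on discretized fault patches is related to surface data through elastic Green's functions, and smoothing controls the trade-off between data fit and model roughness. The numpy sketch below is schematic only: it replaces the elastic dislocation Green's functions and nine-segment geometry with an invented smooth 1-D kernel, but it shows why the smoothing level matters so much for the recovered slip.

```python
import numpy as np

rng = np.random.default_rng(3)
n = 50  # fault patches (a stand-in for the real 2-D patch grid)

# Invented smooth "Green's function" kernel: each surface datum is a blurred
# view of slip, which makes the inverse problem badly ill-conditioned.
idx = np.arange(n)
G = np.exp(-((idx[:, None] - idx[None, :]) / 5.0) ** 2)

m_true = np.exp(-((idx - 25) / 8.0) ** 2)       # smooth slip distribution
d = G @ m_true + 1e-3 * rng.standard_normal(n)  # data with small noise

# Naive least squares: the tiny noise is amplified enormously.
m_plain = np.linalg.lstsq(G, d, rcond=None)[0]

# Tikhonov inversion with a second-difference smoothing operator:
# minimize ||G m - d||^2 + lam^2 ||L m||^2
L = np.diff(np.eye(n), 2, axis=0)
lam = 0.1
m_reg = np.linalg.solve(G.T @ G + lam**2 * L.T @ L, G.T @ d)

err_plain = np.linalg.norm(m_plain - m_true)
err_reg = np.linalg.norm(m_reg - m_true)
```

    The smoothed inversion recovers the slip pattern, while the unregularized solution is dominated by amplified noise; choosing the smoothing weight is exactly the parameterization choice the abstract flags as decisive.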

  9. Qualitative Fault Isolation of Hybrid Systems: A Structural Model Decomposition-Based Approach

    NASA Technical Reports Server (NTRS)

    Bregon, Anibal; Daigle, Matthew; Roychoudhury, Indranil

    2016-01-01

    Quick and robust fault diagnosis is critical to ensuring the safe operation of complex engineering systems. A large number of techniques are available to provide fault diagnosis in systems with continuous dynamics. However, many systems in aerospace and industrial environments are best represented as hybrid systems that consist of discrete behavioral modes, each with its own continuous dynamics. These hybrid dynamics make the on-line fault diagnosis task computationally more complex due to the large number of possible system modes and the existence of autonomous mode transitions. This paper presents a qualitative fault isolation framework for hybrid systems based on structural model decomposition. The fault isolation is performed by analyzing the qualitative information of the residual deviations. However, in hybrid systems this process becomes complex due to the possible existence of observation delays, which can cause observed deviations to be inconsistent with the expected deviations for the current mode of the system. The great advantage of structural model decomposition is that (i) it allows the design of residuals that respond to only a subset of the faults, and (ii) every time a mode change occurs, only a subset of the residuals needs to be reconfigured, thus reducing the complexity of the reasoning process for isolation purposes. To demonstrate and test the validity of our approach, we use an electric circuit simulation as the case study.

  10. An Application of Hydraulic Tomography to a Large-Scale Fractured Granite Site, Mizunami, Japan.

    PubMed

    Zha, Yuanyuan; Yeh, Tian-Chyi J; Illman, Walter A; Tanaka, Tatsuya; Bruines, Patrick; Onoe, Hironori; Saegusa, Hiromitsu; Mao, Deqiang; Takeuchi, Shinji; Wen, Jet-Chau

    2016-11-01

    While hydraulic tomography (HT) is a mature aquifer characterization technology, its applications to characterizing the hydrogeology of kilometer-scale fault and fracture zones are rare. This paper sequentially analyzes datasets from two new pumping tests as well as those from two previous pumping tests analyzed by Illman et al. (2009) at a fractured granite site in Mizunami, Japan. Results of this analysis show that the datasets from the two previous pumping tests on one side of a fault zone, as used in the previous study, led to inaccurate mapping of fracture and fault zones. Inclusion of the datasets from the two new pumping tests (one of which was conducted on the other side of the fault) yields locations of the fault zone consistent with those based on geological mapping. The new datasets also produce a detailed image of the irregular fault zone, which is not available from geological investigation alone or from the previous study. As a result, we conclude that if prior knowledge about geological structures at a field site is considered during the design of HT surveys, valuable non-redundant datasets about the fracture and fault zones can be collected. Only with these non-redundant datasets can HT be a viable and robust tool for delineating fracture and fault distributions over kilometer scales, even when only a limited number of boreholes are available. In essence, this paper shows that HT is a new tool for geologists, geophysicists, and engineers for mapping large-scale fracture and fault zone distributions. © 2016, National Ground Water Association.

  11. The mechanics of fault-bend folding and tear-fault systems in the Niger Delta

    NASA Astrophysics Data System (ADS)

    Benesh, Nathan Philip

    This dissertation investigates the mechanics of fault-bend folding using the discrete element method (DEM) and explores the nature of tear-fault systems in the deep-water Niger Delta fold-and-thrust belt. In Chapter 1, we employ the DEM to investigate the development of growth structures in anticlinal fault-bend folds. This work was inspired by observations that growth strata in active folds show a pronounced upward decrease in bed dip, in contrast to traditional kinematic fault-bend fold models. Our analysis shows that the modeled folds grow largely by parallel folding as specified by the kinematic theory; however, the process of folding over a broad axial surface zone yields a component of fold growth by limb rotation that is consistent with the patterns observed in natural folds. This result has important implications for how growth structures can be used to constrain slip and paleo-earthquake ages on active blind-thrust faults. In Chapter 2, we expand our DEM study to investigate the development of a wider range of fault-bend folds. We examine the influence of mechanical stratigraphy and quantitatively compare our models with the relationships between fold and fault shape prescribed by the kinematic theory. While the synclinal fault-bend models closely match the kinematic theory, the modeled anticlinal fault-bend folds show robust behavior that is distinct from the kinematic theory. Specifically, we observe that modeled structures maintain a linear relationship between fold shape (gamma) and fault-horizon cutoff angle (theta), rather than expressing the non-linear relationship with two distinct modes of anticlinal folding that is prescribed by the kinematic theory. These observations lead to a revised quantitative relationship for fault-bend folds that can serve as a useful interpretation tool. Finally, in Chapter 3, we examine the 3D relationships of tear- and thrust-fault systems in the western, deep-water Niger Delta. 
Using 3D seismic reflection data and new map-based structural restoration techniques, we find that the tear faults have distinct displacement patterns that distinguish them from conventional strike-slip faults and reflect their roles in accommodating displacement gradients within the fold-and-thrust belt.

  12. Shocking Path of Least Resistance Shines Light on Subsurface by Revealing the Paths of Water and the Presence of Faults: Stacked EM Case Studies over Barite Hills Superfund Site in South Carolina

    NASA Astrophysics Data System (ADS)

    Haggar, K. S.; Nelson, H. R., Jr.; Berent, L. J.

    2017-12-01

    The Barite Hills/Nevada Gold Fields mines are in Late Proterozoic and early Paleozoic rocks of the gold- and iron-sulfide-rich Carolina slate belt. The mines were active from 1989 to 1995. EPA and USGS site investigations in 2003 resulted in the declaration of the waste pit areas as a superfund site. The USGS and private consulting firms have evaluated subsurface water flow paths, faults & other groundwater-related features at this superfund site utilizing 2-D conductivity & 3-D electromagnetic (EM) surveys. The USGS employed conductivity to generate instantaneous 2-D profiles to evaluate shallow groundwater patterns. Porous regolith sediments, contaminated water & mine debris have high conductivity, whereas bedrock is identified by its characteristic low conductivity readings. Consulting contractors integrated EM technology, magnetic & shallow well data to generate 3-D images of groundwater flow paths at given depths across the superfund site. In so doing, several previously undetected faults were identified. Lightning strike data were integrated with the previously evaluated electrical and EM data to determine whether this form of naturally sourced EM data could complement and supplement the more traditional geophysical data described above. Several lightning attributes derived from 3-D lightning volumes were found to correlate with various features identified in the previous geophysical studies. Specifically, the attributes Apparent Resistivity, Apparent Permittivity, Peak Current & Tidal Gravity provided the deepest structural geological framework & provided insights into rock properties & earth tides. Most significantly, Peak Current showed remarkable coincidence with the preferred groundwater flow map identified by one of the contractors utilizing EM technology. 
This study demonstrates the utility of robust integrated EM technology applications for projects focused on hydrology, geohazards to dams, levees, and structures, as well as mineral and hydrocarbon exploration.

  13. Fault Diagnosis for Rotating Machinery: A Method based on Image Processing

    PubMed Central

    Lu, Chen; Wang, Yang; Ragulskis, Minvydas; Cheng, Yujie

    2016-01-01

    Rotating machinery is one of the most common types of mechanical equipment and plays a significant role in industrial applications. Condition monitoring and fault diagnosis of rotating machinery have gained wide attention for their significance in preventing catastrophic accidents and guaranteeing adequate maintenance. With the development of science and technology, fault diagnosis methods drawing on multiple disciplines are becoming the focus in the field of fault diagnosis of rotating machinery. This paper presents a multi-disciplinary method based on image processing for fault diagnosis of rotating machinery. Unlike traditional analysis methods that operate in a one-dimensional space, this study employs computational methods from the field of image processing to realize automatic feature extraction and fault diagnosis in a two-dimensional space. The proposed method mainly includes the following steps. First, the vibration signal is transformed into a bi-spectrum contour map utilizing bi-spectrum technology, which provides a basis for the subsequent image-based feature extraction. Then, an emerging image-processing approach for feature extraction, speeded-up robust features (SURF), is employed to automatically extract fault features from the transformed bi-spectrum contour map and form a high-dimensional feature vector. To reduce the dimensionality of the feature vector, thus highlighting the main fault features and reducing the subsequent computational load, t-distributed stochastic neighbor embedding (t-SNE) is adopted. Finally, a probabilistic neural network is introduced for fault identification. Two typical types of rotating machinery, an axial piston hydraulic pump and a self-priming centrifugal pump, are selected to demonstrate the effectiveness of the proposed method. Results show that the proposed image-processing-based method achieves high accuracy, thus providing a highly effective means of fault diagnosis for rotating machinery. 
PMID:27711246
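    The bi-spectrum stage of this pipeline lends itself to a compact illustration. The numpy sketch below estimates one bin of the bispectrum, B(f1, f2) = E[X(f1) X(f2) X*(f1+f2)], over an ensemble of segments; quadratic phase coupling (of the kind nonlinear fault mechanisms can produce) yields a strong bispectral peak, while uncoupled components average out. The signal model and bin choices are illustrative assumptions; the SURF, t-SNE and PNN stages of the paper are not reproduced here.

```python
import numpy as np

rng = np.random.default_rng(0)
N, M = 128, 200  # segment length, number of segments
k1, k2 = 8, 12   # frequency bins to test; the coupled bin is k1 + k2

def bispectrum_bin(segments, k1, k2):
    # B(k1, k2) = E[ X(k1) X(k2) conj(X(k1 + k2)) ] over the segments
    X = np.fft.fft(segments, axis=1)
    return np.mean(X[:, k1] * X[:, k2] * np.conj(X[:, k1 + k2]))

def make_segments(coupled):
    t = np.arange(N)
    out = []
    for _ in range(M):
        p1, p2 = rng.uniform(0.0, 2.0 * np.pi, size=2)
        # Quadratic phase coupling: the phase at bin k1 + k2 is p1 + p2
        p3 = p1 + p2 if coupled else rng.uniform(0.0, 2.0 * np.pi)
        out.append(np.cos(2 * np.pi * k1 * t / N + p1)
                   + np.cos(2 * np.pi * k2 * t / N + p2)
                   + np.cos(2 * np.pi * (k1 + k2) * t / N + p3))
    return np.array(out)

b_coupled = abs(bispectrum_bin(make_segments(True), k1, k2))
b_uncoupled = abs(bispectrum_bin(make_segments(False), k1, k2))
# b_coupled is large; b_uncoupled averages toward zero
```

    Evaluating this quantity over a grid of (k1, k2) pairs is what produces the bi-spectrum contour map that the paper then treats as an image.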

  14. Fault Diagnosis for Rotating Machinery: A Method based on Image Processing.

    PubMed

    Lu, Chen; Wang, Yang; Ragulskis, Minvydas; Cheng, Yujie

    2016-01-01

    Rotating machinery is one of the most common types of mechanical equipment and plays a significant role in industrial applications. Condition monitoring and fault diagnosis of rotating machinery have gained wide attention for their significance in preventing catastrophic accidents and guaranteeing adequate maintenance. With the development of science and technology, fault diagnosis methods drawing on multiple disciplines are becoming the focus in the field of fault diagnosis of rotating machinery. This paper presents a multi-disciplinary method based on image processing for fault diagnosis of rotating machinery. Unlike traditional analysis methods that operate in a one-dimensional space, this study employs computational methods from the field of image processing to realize automatic feature extraction and fault diagnosis in a two-dimensional space. The proposed method mainly includes the following steps. First, the vibration signal is transformed into a bi-spectrum contour map utilizing bi-spectrum technology, which provides a basis for the subsequent image-based feature extraction. Then, an emerging image-processing approach for feature extraction, speeded-up robust features (SURF), is employed to automatically extract fault features from the transformed bi-spectrum contour map and form a high-dimensional feature vector. To reduce the dimensionality of the feature vector, thus highlighting the main fault features and reducing the subsequent computational load, t-distributed stochastic neighbor embedding (t-SNE) is adopted. Finally, a probabilistic neural network is introduced for fault identification. Two typical types of rotating machinery, an axial piston hydraulic pump and a self-priming centrifugal pump, are selected to demonstrate the effectiveness of the proposed method. Results show that the proposed image-processing-based method achieves high accuracy, thus providing a highly effective means of fault diagnosis for rotating machinery.

  15. Can compliant fault zones be used to measure absolute stresses in the upper crust?

    NASA Astrophysics Data System (ADS)

    Hearn, E. H.; Fialko, Y.

    2009-04-01

    Geodetic and seismic observations reveal long-lived zones with reduced elastic moduli along active crustal faults. These fault zones localize strain from nearby earthquakes, consistent with the response of a compliant, elastic layer. Fault zone trapped wave studies documented a small reduction in P and S wave velocities along the Johnson Valley Fault caused by the 1999 Hector Mine earthquake. This reduction presumably perturbed a permanent compliant structure associated with the fault. The inferred changes in the fault zone compliance may produce a measurable deformation in response to background (tectonic) stresses. This deformation should have the same sense as the background stress, rather than the coseismic stress change. Here we investigate how the observed deformation of compliant zones in the Mojave Desert can be used to constrain the fault zone structure and stresses in the upper crust. We find that gravitational contraction of the coseismically softened zones should cause centimeters of coseismic subsidence of both the compliant zones and the surrounding region, unless the compliant fault zones are shallow and narrow, or essentially incompressible. We prefer the latter interpretation because profiles of line of sight displacements across compliant zones cannot be fit by a narrow, shallow compliant zone. Strain of the Camp Rock and Pinto Mountain fault zones during the Hector Mine and Landers earthquakes suggests that background deviatoric stresses are broadly consistent with Mohr-Coulomb theory in the Mojave upper crust (with μ ≥ 0.7). Large uncertainties in Mojave compliant zone properties and geometry preclude more precise estimates of crustal stresses in this region. With improved imaging of the geometry and elastic properties of compliant zones, and with precise measurements of their strain in response to future earthquakes, the modeling approach we describe here may eventually provide robust estimates of absolute crustal stress.

  16. A recurrent neural-network-based sensor and actuator fault detection and isolation for nonlinear systems with application to the satellite's attitude control subsystem.

    PubMed

    Talebi, H A; Khorasani, K; Tafazoli, S

    2009-01-01

    This paper presents a robust fault detection and isolation (FDI) scheme for a general class of nonlinear systems using a neural-network-based observer strategy. Both actuator and sensor faults are considered. The nonlinear system considered is subject to both state and sensor uncertainties and disturbances. Two recurrent neural networks are employed to identify general unknown actuator and sensor faults, respectively. The neural network weights are updated according to a modified backpropagation scheme. Unlike many previous methods developed in the literature, our proposed FDI scheme does not rely on the availability of full state measurements. The stability of the overall FDI scheme in the presence of unknown sensor and actuator faults, as well as plant and sensor noise and uncertainties, is shown by using Lyapunov's direct method. The stability analysis developed requires no restrictive assumptions on the system and/or the FDI algorithm. Magnetorquer-type actuators and magnetometer-type sensors that are commonly employed in the attitude control subsystem (ACS) of low-Earth orbit (LEO) satellites for attitude determination and control are considered in our case studies. The effectiveness and capabilities of our proposed fault diagnosis strategy are demonstrated and validated through extensive simulation studies.
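    The observer-based residual logic of such FDI schemes can be sketched with a plain linear Luenberger observer standing in for the paper's recurrent-neural-network observers (which exist precisely to handle unknown nonlinearities). All numbers below, the plant, gain, noise level, threshold and fault size, are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(7)
A = np.array([[0.95, 0.05], [0.0, 0.9]])
B = np.array([0.0, 1.0])
C = np.array([1.0, 0.0])
L = np.array([0.5, 0.2])  # observer gain; A - L C is stable

def run(N=120, k_fault=60, bias=0.5):
    x = np.zeros(2)   # true state
    xh = np.zeros(2)  # observer state
    res = []
    for k in range(N):
        u = np.sin(0.1 * k)
        y = C @ x + 0.005 * rng.standard_normal()  # measurement + noise
        if k >= k_fault:
            y += bias                              # additive sensor fault
        r = y - C @ xh                             # residual
        res.append(abs(r))
        x = A @ x + B * u
        xh = A @ xh + B * u + L * r                # observer update
    return np.array(res)

res = run()
threshold = 0.05
# Before the fault the residual sits at the noise floor; at fault onset it
# jumps by roughly the sensor bias and crosses the threshold.
```

    The paper's contribution is essentially replacing the fixed linear observer above with trained recurrent networks so that the residual stays small despite unknown nonlinear dynamics, not just Gaussian noise.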

  17. From coseismic offsets to fault-block mountains

    USGS Publications Warehouse

    Thompson, George A.; Parsons, Thomas E.

    2017-01-01

    In the Basin and Range extensional province of the western United States, coseismic offsets, under the influence of gravity, display predominantly subsidence of the basin side (fault hanging wall), with comparatively little or no uplift of the mountainside (fault footwall). A few decades later, geodetic measurements [GPS and interferometric synthetic aperture radar (InSAR)] show broad (∼100 km) aseismic uplift symmetrically spanning the fault zone. Finally, after millions of years and hundreds of fault offsets, the mountain blocks display large uplift and tilting over a breadth of only about 10 km. These sparse but robust observations pose a problem in that the coseismic uplifts of the footwall are small and inadequate to raise the mountain blocks. To address this paradox we develop finite-element models subjected to extensional and gravitational forces to study time-varying deformation associated with normal faulting. Stretching the model under gravity demonstrates that asymmetric slip via collapse of the hanging wall is a natural consequence of coseismic deformation. Focused flow in the upper mantle imposed by deformation of the lower crust localizes uplift, which is predicted to take place within one to two decades after each large earthquake. Thus, the best-preserved topographic signature of earthquakes is expected to occur early in the postseismic period.

  18. From coseismic offsets to fault-block mountains

    PubMed Central

    Thompson, George A.

    2017-01-01

    In the Basin and Range extensional province of the western United States, coseismic offsets, under the influence of gravity, display predominantly subsidence of the basin side (fault hanging wall), with comparatively little or no uplift of the mountainside (fault footwall). A few decades later, geodetic measurements [GPS and interferometric synthetic aperture radar (InSAR)] show broad (∼100 km) aseismic uplift symmetrically spanning the fault zone. Finally, after millions of years and hundreds of fault offsets, the mountain blocks display large uplift and tilting over a breadth of only about 10 km. These sparse but robust observations pose a problem in that the coseismic uplifts of the footwall are small and inadequate to raise the mountain blocks. To address this paradox we develop finite-element models subjected to extensional and gravitational forces to study time-varying deformation associated with normal faulting. Stretching the model under gravity demonstrates that asymmetric slip via collapse of the hanging wall is a natural consequence of coseismic deformation. Focused flow in the upper mantle imposed by deformation of the lower crust localizes uplift, which is predicted to take place within one to two decades after each large earthquake. Thus, the best-preserved topographic signature of earthquakes is expected to occur early in the postseismic period. PMID:28847962

  19. From coseismic offsets to fault-block mountains.

    PubMed

    Thompson, George A; Parsons, Tom

    2017-09-12

    In the Basin and Range extensional province of the western United States, coseismic offsets, under the influence of gravity, display predominantly subsidence of the basin side (fault hanging wall), with comparatively little or no uplift of the mountainside (fault footwall). A few decades later, geodetic measurements [GPS and interferometric synthetic aperture radar (InSAR)] show broad (∼100 km) aseismic uplift symmetrically spanning the fault zone. Finally, after millions of years and hundreds of fault offsets, the mountain blocks display large uplift and tilting over a breadth of only about 10 km. These sparse but robust observations pose a problem in that the coseismic uplifts of the footwall are small and inadequate to raise the mountain blocks. To address this paradox we develop finite-element models subjected to extensional and gravitational forces to study time-varying deformation associated with normal faulting. Stretching the model under gravity demonstrates that asymmetric slip via collapse of the hanging wall is a natural consequence of coseismic deformation. Focused flow in the upper mantle imposed by deformation of the lower crust localizes uplift, which is predicted to take place within one to two decades after each large earthquake. Thus, the best-preserved topographic signature of earthquakes is expected to occur early in the postseismic period.

  20. From coseismic offsets to fault-block mountains

    NASA Astrophysics Data System (ADS)

    Thompson, George A.; Parsons, Tom

    2017-09-01

    In the Basin and Range extensional province of the western United States, coseismic offsets, under the influence of gravity, display predominantly subsidence of the basin side (fault hanging wall), with comparatively little or no uplift of the mountainside (fault footwall). A few decades later, geodetic measurements [GPS and interferometric synthetic aperture radar (InSAR)] show broad (˜100 km) aseismic uplift symmetrically spanning the fault zone. Finally, after millions of years and hundreds of fault offsets, the mountain blocks display large uplift and tilting over a breadth of only about 10 km. These sparse but robust observations pose a problem in that the coseismic uplifts of the footwall are small and inadequate to raise the mountain blocks. To address this paradox we develop finite-element models subjected to extensional and gravitational forces to study time-varying deformation associated with normal faulting. Stretching the model under gravity demonstrates that asymmetric slip via collapse of the hanging wall is a natural consequence of coseismic deformation. Focused flow in the upper mantle imposed by deformation of the lower crust localizes uplift, which is predicted to take place within one to two decades after each large earthquake. Thus, the best-preserved topographic signature of earthquakes is expected to occur early in the postseismic period.

  1. Plan for the Characterization of HIRF Effects on a Fault-Tolerant Computer Communication System

    NASA Technical Reports Server (NTRS)

    Torres-Pomales, Wilfredo; Malekpour, Mahyar R.; Miner, Paul S.; Koppen, Sandra V.

    2008-01-01

    This report presents the plan for the characterization of the effects of high intensity radiated fields on a prototype implementation of a fault-tolerant data communication system. Various configurations of the communication system will be tested. The prototype system is implemented using off-the-shelf devices. The system will be tested in a closed-loop configuration with extensive real-time monitoring. This test is intended to generate data suitable for the design of avionics health management systems, as well as redundancy management mechanisms and policies for robust distributed processing architectures.

  2. Geophysical study of the East Pacific Rise 15°N-17°N: An unusually robust segment

    NASA Astrophysics Data System (ADS)

    Weiland, Charles M.; MacDonald, Ken C.

    1996-09-01

    Bathymetric, side-scan sonar, magnetic and gravity data from the East Pacific Rise (EPR) between 15° and 17°N are used to establish the spreading history and examine melt delivery to an unusually robust spreading segment. The axial ridge between the Orozco transform fault (15°30'N) and the 16°20'N overlapping spreading center (OSC) has an average elevation of 2300 m which is 300 m shallower than typical EPR depths, and its cross-sectional area is double the average value for the northern EPR. The total opening rate is 86 km/Myr, but the inflated segment appears to have spread faster to the east by more than 20% since 0.78 Ma. The orientation of magnetic isochrons and lineaments in the side-scan sonar indicates a ˜3° counterclockwise rotation of the spreading direction since 1.8 Ma (C2) and reflects a change in the Pacific-Cocos plate motion. The side-scan lineaments also show that the percentage of inward facing faults (83%) and the spacing between faults (1.5 km) are consistent with the spreading rate dependence shown by Carbotte and Macdonald [1994]. However, the mean fault length (4.8 km) is 1.5 km shorter than expected for the spreading rate and suggests that extensive off-axis volcanism has draped the faults. Gravity analysis shows that the inflated segment has a ˜12-mGal bull's eye shaped low in residual mantle Bouguer anomaly. We offer several possible end-member models for the anomaly, including a prism of 10% partial melt in the mantle and lower crust or a crustal thickness anomaly of 2.25 km. Kinematic modeling that is based on structure and magnetic data suggests that two large magmatic pulses occurred at approximately 0.8 Ma and 0.3 Ma and have reshaped the plate boundary geometry and inflated the segment.

  3. Evaluation of reliability modeling tools for advanced fault tolerant systems

    NASA Technical Reports Server (NTRS)

    Baker, Robert; Scheper, Charlotte

    1986-01-01

    The Computer Aided Reliability Estimation (CARE III) and Automated Reliability Interactive Estimation System (ARIES 82) reliability tools were evaluated for application to advanced fault-tolerant aerospace systems. To determine reliability modeling requirements, the evaluation focused on the Draper Laboratories' Advanced Information Processing System (AIPS) architecture as an example architecture for fault-tolerant aerospace systems. Advantages and limitations were identified for each reliability evaluation tool. The CARE III program was designed primarily for analyzing ultrareliable flight control systems. The ARIES 82 program's primary use was to support university research and teaching. Neither CARE III nor ARIES 82 was suited for determining the reliability of the complex nodal networks of the type used to interconnect processing sites in the AIPS architecture. It was concluded that ARIES was not suitable for modeling advanced fault-tolerant systems. It was further concluded that, subject to some limitations (difficulty in modeling systems with unpowered spare modules, systems where equipment maintenance must be considered, systems where failure depends on the sequence in which faults occurred, and systems where multiple faults beyond double near-coincident faults must be considered), CARE III is best suited for evaluating the reliability of advanced fault-tolerant systems for air transport.

  4. Markov chain algorithms: a template for building future robust low-power systems

    PubMed Central

    Deka, Biplab; Birklykke, Alex A.; Duwe, Henry; Mansinghka, Vikash K.; Kumar, Rakesh

    2014-01-01

    Although computational systems are looking towards post-CMOS devices in the pursuit of lower power, the expected inherent unreliability of such devices makes it difficult to design robust systems without additional power overheads for guaranteeing robustness. As such, algorithmic structures with an inherent ability to tolerate computational errors are of significant interest. We propose to cast applications as stochastic algorithms based on Markov chains (MCs), as such algorithms are both sufficiently general and tolerant to transition errors. We show with four example applications—Boolean satisfiability, sorting, low-density parity-check decoding and clustering—how applications can be cast as MC algorithms. Using algorithmic fault injection techniques, we demonstrate the robustness of these implementations to transition errors with high error rates. Based on these results, we make a case for using MCs as an algorithmic template for future robust low-power systems. PMID:24842030
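    As a concrete instance of the paper's argument, the sketch below casts Boolean satisfiability as a Markov-chain random walk (a simplified WalkSAT without the greedy break-count heuristic) and injects random erroneous bit flips into the state transitions. Because every state retains a path to a satisfying assignment, the injected errors only perturb the walk rather than break the algorithm. The instance, fault rate and flip budget are illustrative.

```python
import random

def walksat(clauses, n_vars, p_fault=0.05, max_flips=10000, seed=1):
    """Random-walk SAT solver with injected transition errors."""
    rnd = random.Random(seed)
    a = [rnd.choice([False, True]) for _ in range(n_vars)]
    sat = lambda cl: any(a[abs(l) - 1] == (l > 0) for l in cl)
    for _ in range(max_flips):
        unsat = [cl for cl in clauses if not sat(cl)]
        if not unsat:
            return a                        # verified satisfying assignment
        cl = rnd.choice(unsat)
        a[abs(rnd.choice(cl)) - 1] ^= True  # MC transition: flip a variable
        if rnd.random() < p_fault:          # injected transition error
            a[rnd.randrange(n_vars)] ^= True
    return None

# (x1 v x2) & (!x1 v x3) & (!x2 v !x3) & (x2 v x3) -- satisfiable
clauses = [[1, 2], [-1, 3], [-2, -3], [2, 3]]
assignment = walksat(clauses, 3)
```

    Raising p_fault increases expected solving time but does not change what the algorithm converges to, which is the robustness property the paper exploits.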

  5. Protection of Renewable-dominated Microgrids: Challenges and Potential Solutions.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Elkhatib, Mohamed; Ellis, Abraham; Milan Biswal

    Keywords: Microgrid Protection, Impedance Relay, Signal Processing-based Fault Detection, Networked Microgrids, Communication-Assisted Protection. In this report we address the challenge of designing an efficient protection system for inverter-dominated microgrids. These microgrids are characterised by limited fault current capacity as a result of the current-limiting protection functions of inverters. Typically, inverters limit their fault contribution within a sub-cycle time frame to as low as 1.1 per unit. As a result, overcurrent protection can fail completely to detect faults in inverter-dominated microgrids. As part of this project, a detailed literature survey of existing and proposed microgrid protection schemes was conducted. The survey concluded that there is a gap in the available microgrid protection methods: the only credible protection solution available in the literature for low-fault inverter-dominated microgrids is the differential protection scheme, which represents a robust transmission-grade protection solution but at a very high cost. Two non-overcurrent protection schemes were investigated as part of this project: impedance-based protection and transient-based protection. Impedance-based protection depends on monitoring impedance trajectories at feeder relays to detect faults. Two communication-based impedance-based protection schemes were developed. The first scheme utilizes directional elements and pilot signals to locate the fault. The second scheme depends on a central protection unit that communicates with all feeder relays to locate the fault based on directional flags received from the feeder relays. The latter approach could potentially be adapted to protect networked microgrids and dynamic-topology microgrids. Transient-based protection relies on analyzing high-frequency transients to detect and locate faults. This approach is very promising, but its implementation in the field faces several challenges. 
For example, high frequency transients due to faults can be confused with transients due to other events such as capacitor switching. Additionally, while detecting faults by analyzing transients could be doable, locating faults based on analyzing transients is still an open question.« less
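    The Central Protection Unit logic described above can be sketched as follows. The single-bus topology, relay names, and flag encoding are illustrative assumptions, not the report's implementation:

    ```python
    # Hypothetical sketch of the Central Protection Unit (CPU) scheme: each
    # feeder relay sends a directional flag, and the CPU trips the one feeder
    # whose relay sees the fault in the forward (away-from-bus) direction.

    def locate_faulted_feeder(flags):
        """flags: dict relay_id -> 'forward' or 'reverse'."""
        forward = [rid for rid, d in flags.items() if d == "forward"]
        if len(forward) == 1:
            return forward[0]   # unambiguous: trip this feeder
        return None             # bus fault or inconsistent flags -> fall back

    print(locate_faulted_feeder({"F1": "reverse", "F2": "forward", "F3": "reverse"}))  # F2
    ```

    On a real system the fallback branch would hand over to a backup scheme rather than simply returning nothing.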

  6. Award ER25750: Coordinated Infrastructure for Fault Tolerance Systems Indiana University Final Report

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lumsdaine, Andrew

    2013-03-08

    The main purpose of the Coordinated Infrastructure for Fault Tolerance in Systems initiative has been to conduct research with a goal of providing end-to-end fault tolerance on a systemwide basis for applications and other system software. While fault tolerance has been an integral part of most high-performance computing (HPC) system software developed over the past decade, it has been treated mostly as a collection of isolated stovepipes. Visibility and response to faults have typically been limited to the particular hardware and software subsystems in which they are initially observed. Little fault information is shared across subsystems, allowing little flexibility or control on a system-wide basis and making it practically impossible to provide cohesive end-to-end fault tolerance in support of scientific applications. As an example, consider faults such as communication link failures that can be seen by a network library but are not directly visible to the job scheduler, or consider faults related to node failures that can be detected by system monitoring software but are not inherently visible to the resource manager. If information about such faults could be shared by the network libraries or monitoring software, then other system software, such as a resource manager or job scheduler, could ensure that failed nodes or failed network links were excluded from further job allocations and that further diagnosis could be performed. As a founding member and one of the lead developers of the Open MPI project, our efforts over the course of this project have been focused on making Open MPI more robust to failures by supporting various fault tolerance techniques, and using fault information exchange and coordination between MPI and the HPC system software stack from the application, numeric libraries, and programming language runtime to other common system components such as job schedulers, resource managers, and monitoring tools.
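    The fault-information-sharing idea can be illustrated with a minimal sketch. The registry and scheduler classes below are hypothetical, not Open MPI's or any actual HPC stack's API:

    ```python
    # Illustrative sketch: subsystems publish fault events to a shared
    # registry, and the scheduler consults it before allocating nodes.

    class FaultRegistry:
        def __init__(self):
            self.failed_nodes = set()
            self.failed_links = set()

        def report_node_failure(self, node):   # e.g. from monitoring software
            self.failed_nodes.add(node)

        def report_link_failure(self, link):   # e.g. from the network library
            self.failed_links.add(link)

    class Scheduler:
        def __init__(self, registry, nodes):
            self.registry, self.nodes = registry, nodes

        def allocate(self, n):
            """Allocate n healthy nodes, or None if too few remain."""
            healthy = [x for x in self.nodes if x not in self.registry.failed_nodes]
            return healthy[:n] if len(healthy) >= n else None

    reg = FaultRegistry()
    reg.report_node_failure("node3")
    print(Scheduler(reg, ["node1", "node2", "node3"]).allocate(2))  # ['node1', 'node2']
    ```

    The point of the sketch is the shared registry: without it, the monitoring layer's knowledge of `node3` never reaches the scheduler.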

  7. Robust Online Monitoring for Calibration Assessment of Transmitters and Instrumentation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ramuhalli, Pradeep; Coble, Jamie B.; Shumaker, Brent

    Robust online monitoring (OLM) technologies are expected to enable the extension or elimination of periodic sensor calibration intervals in operating and new reactors. These advances in OLM technologies will improve the safety and reliability of current and planned nuclear power systems through improved accuracy and increased reliability of sensors used to monitor key parameters. In this article, we present an overview of research being performed within the Nuclear Energy Enabling Technologies (NEET)/Advanced Sensors and Instrumentation (ASI) program for the development of OLM algorithms that use sensor outputs and, in combination with other available information, 1) determine whether one or more sensors are out of calibration or failing and 2) replace a failing sensor with reliable, accurate sensor outputs. Algorithm development is focused on the following OLM functions: signal validation, virtual sensing, and sensor response-time assessment. These algorithms incorporate, at their base, a Gaussian Process-based uncertainty quantification (UQ) method. Various plant models (using kernel regression, GP, or hierarchical models) may be used to predict sensor responses under various plant conditions. These predicted responses can then be applied in fault detection (sensor output and response time) and in computing the correct value (virtual sensing) of a failing physical sensor. The methods being evaluated in this work can compute confidence levels along with the predicted sensor responses, and as a result, may have the potential for compensating for sensor drift in real time (online recalibration). Evaluation was conducted using data from multiple sources (laboratory flow loops and plant data). Ongoing research in this project is focused on further evaluation of the algorithms, optimization for accuracy and computational efficiency, and integration into a suite of tools for robust OLM that are applicable to monitoring sensor calibration state in nuclear power plants.
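    The signal-validation step can be sketched with one of the model types the abstract names, kernel regression. The bandwidth, drift band, and training pairs below are illustrative assumptions, not values from the program:

    ```python
    import math

    # Minimal sketch of signal validation via kernel regression: predict a
    # sensor reading from historical (input, output) pairs and flag
    # calibration drift when the residual exceeds a band.

    def predict(x, train, h=1.0):
        """Nadaraya-Watson estimate of the sensor reading at operating point x."""
        w = [math.exp(-((x - xi) ** 2) / (2 * h * h)) for xi, _ in train]
        return sum(wi * yi for wi, (_, yi) in zip(w, train)) / sum(w)

    def drift_flag(x, measured, train, band=0.5):
        return abs(measured - predict(x, train)) > band

    train = [(0.0, 0.0), (1.0, 1.0), (2.0, 2.0), (3.0, 3.0)]  # linear plant history
    print(drift_flag(1.5, 1.5, train))  # small residual -> False
    print(drift_flag(1.5, 3.0, train))  # large residual -> True (possible drift)
    ```

    A virtual sensor is the same machinery used the other way around: when the flag trips, `predict(x, train)` stands in for the failing physical reading.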

  8. Guidance, Navigation, and Control System Design in a Mass Reduction Exercise

    NASA Technical Reports Server (NTRS)

    Crain, Timothy; Begly, Michael; Jackson, Mark; Broome, Joel

    2008-01-01

    Early Orion GN&C system designs were optimized for robustness, simplicity, and utilization of commercially available components. During the System Definition Review (SDR), all subsystems on Orion were asked to re-optimize with component mass and steady-state power as primary design metrics. The objective was to create a mass reserve in the Orion point-of-departure vehicle design prior to beginning the PDR analysis cycle. The Orion GN&C subsystem team transitioned from a philosophy of absolute two-fault tolerance for crew safety and one-fault tolerance for mission success to an approach of one-fault tolerance for crew safety and risk-based redundancy to meet probability allocations of loss of mission and loss of crew. This paper will discuss the analyses, rationale, and end results of this activity regarding Orion navigation sensor hardware, control effectors, and trajectory design.

  9. Hidden Markov models for fault detection in dynamic systems

    NASA Technical Reports Server (NTRS)

    Smyth, Padhraic J. (Inventor)

    1995-01-01

    The invention is a system failure monitoring method and apparatus which learns the symptom-fault mapping directly from training data. The invention first estimates the state of the system at discrete intervals in time. A feature vector x of dimension k is estimated from sets of successive windows of sensor data. A pattern recognition component then models the instantaneous estimate of the posterior class probability given the features, p(w(sub i) | x), 1 ≤ i ≤ m. Finally, a hidden Markov model is used to take advantage of temporal context and estimate class probabilities conditioned on recent past history. In this hierarchical pattern of information flow, the time series data is transformed and mapped into a categorical representation (the fault classes) and integrated over time to enable robust decision-making.
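    The temporal-integration step can be sketched as a discrete HMM forward recursion over fault classes, conditioning the instantaneous class probabilities p(w(sub i) | x) on recent history. The transition matrix and likelihood values are illustrative assumptions:

    ```python
    # One forward step of a discrete HMM over fault classes: predict the
    # class distribution with the transition matrix, then update it with
    # the instantaneous class likelihoods and renormalize.

    def hmm_filter(prior, transition, likelihoods):
        m = len(prior)
        predicted = [sum(prior[j] * transition[j][i] for j in range(m))
                     for i in range(m)]
        post = [predicted[i] * likelihoods[i] for i in range(m)]
        z = sum(post)
        return [p / z for p in post]

    # Two classes: w0 = normal, w1 = fault; faults persist, so self-transitions dominate.
    T = [[0.99, 0.01],
         [0.05, 0.95]]
    belief = [0.999, 0.001]
    for lk in ([0.5, 0.5], [0.2, 0.8], [0.2, 0.8]):   # noisy, then fault-like evidence
        belief = hmm_filter(belief, T, lk)
    print(belief[1] > 0.1)   # fault belief grows only under sustained evidence
    ```

    This is the point of the hierarchy: a single noisy frame barely moves the belief, while repeated fault-like evidence does.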

  10. Hidden Markov models for fault detection in dynamic systems

    NASA Technical Reports Server (NTRS)

    Smyth, Padhraic J. (Inventor)

    1993-01-01

    The invention is a system failure monitoring method and apparatus which learns the symptom-fault mapping directly from training data. The invention first estimates the state of the system at discrete intervals in time. A feature vector x of dimension k is estimated from sets of successive windows of sensor data. A pattern recognition component then models the instantaneous estimate of the posterior class probability given the features, p(w(sub i) | x), 1 ≤ i ≤ m. Finally, a hidden Markov model is used to take advantage of temporal context and estimate class probabilities conditioned on recent past history. In this hierarchical pattern of information flow, the time series data is transformed and mapped into a categorical representation (the fault classes) and integrated over time to enable robust decision-making.

  11. FTAPE: A fault injection tool to measure fault tolerance

    NASA Technical Reports Server (NTRS)

    Tsai, Timothy K.; Iyer, Ravishankar K.

    1995-01-01

    The paper introduces FTAPE (Fault Tolerance And Performance Evaluator), a tool that can be used to compare fault-tolerant computers. The tool combines system-wide fault injection with a controllable workload. A workload generator is used to create high stress conditions for the machine. Faults are injected based on this workload activity in order to ensure a high level of fault propagation. The errors/fault ratio and performance degradation are presented as measures of fault tolerance.

  12. A Robust Vehicle Localization Approach Based on GNSS/IMU/DMI/LiDAR Sensor Fusion for Autonomous Vehicles

    PubMed Central

    Meng, Xiaoli

    2017-01-01

    Precise and robust localization in a large-scale outdoor environment is essential for an autonomous vehicle. In order to improve the performance of the fusion of GNSS (Global Navigation Satellite System)/IMU (Inertial Measurement Unit)/DMI (Distance-Measuring Instruments), a multi-constraint fault detection approach is proposed to smooth the vehicle locations in spite of GNSS jumps. Furthermore, the lateral localization error is compensated by the point cloud-based lateral localization method proposed in this paper. Experimental results verify the proposed algorithms, showing that they are capable of providing precise and robust vehicle localization. PMID:28926996
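    One plausible form of such a constraint is an innovation gate between the GNSS fix and the dead-reckoned IMU/DMI position. This is an illustrative sketch of the idea, not the paper's exact algorithm; the threshold is an assumption:

    ```python
    # Gate a GNSS fix against the dead-reckoned (DR) position and reject it
    # as a jump when the innovation exceeds a plausibility bound (metres).

    def gate_gnss(predicted, gnss, max_innovation=3.0):
        dx, dy = gnss[0] - predicted[0], gnss[1] - predicted[1]
        innovation = (dx * dx + dy * dy) ** 0.5
        return gnss if innovation <= max_innovation else predicted  # hold DR position

    print(gate_gnss((10.0, 5.0), (10.5, 5.2)))   # consistent fix accepted
    print(gate_gnss((10.0, 5.0), (42.0, 9.0)))   # jump rejected -> (10.0, 5.0)
    ```

    In a full fusion filter the rejected fix would also be excluded from the measurement update rather than merely replaced.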

  13. A Robust Vehicle Localization Approach Based on GNSS/IMU/DMI/LiDAR Sensor Fusion for Autonomous Vehicles.

    PubMed

    Meng, Xiaoli; Wang, Heng; Liu, Bingbing

    2017-09-18

    Precise and robust localization in a large-scale outdoor environment is essential for an autonomous vehicle. In order to improve the performance of the fusion of GNSS (Global Navigation Satellite System)/IMU (Inertial Measurement Unit)/DMI (Distance-Measuring Instruments), a multi-constraint fault detection approach is proposed to smooth the vehicle locations in spite of GNSS jumps. Furthermore, the lateral localization error is compensated by the point cloud-based lateral localization method proposed in this paper. Experimental results verify the proposed algorithms, showing that they are capable of providing precise and robust vehicle localization.

  14. Quasi-periodic recurrence of large earthquakes on the southern San Andreas fault

    USGS Publications Warehouse

    Scharer, Katherine M.; Biasi, Glenn P.; Weldon, Ray J.; Fumal, Tom E.

    2010-01-01

    It has been 153 yr since the last large earthquake on the southern San Andreas fault (California, United States), but the average interseismic interval is only ~100 yr. If the recurrence of large earthquakes is periodic, rather than random or clustered, the length of this period is notable and would generally increase the risk estimated in probabilistic seismic hazard analyses. Unfortunately, robust characterization of a distribution describing earthquake recurrence on a single fault is limited by the brevity of most earthquake records. Here we use statistical tests on a 3000 yr combined record of 29 ground-rupturing earthquakes from Wrightwood, California. We show that earthquake recurrence there is more regular than expected from a Poisson distribution and is not clustered, leading us to conclude that recurrence is quasi-periodic. The observation of unimodal time dependence is persistent across an observationally based sensitivity analysis that critically examines alternative interpretations of the geologic record. The results support formal forecast efforts that use renewal models to estimate probabilities of future earthquakes on the southern San Andreas fault. Only four intervals (15%) from the record are longer than the present open interval, highlighting the current hazard posed by this fault.
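    The statistical idea can be sketched with the coefficient of variation (CV) of recurrence intervals: for a Poisson process the intervals are exponential with CV = 1, while a CV well below 1 indicates quasi-periodic behaviour and a CV above 1 indicates clustering. The interval values below are illustrative, not the Wrightwood record:

    ```python
    import statistics

    # CV of inter-event times as a simple periodicity diagnostic.

    def interval_cv(event_years):
        intervals = [b - a for a, b in zip(event_years, event_years[1:])]
        return statistics.stdev(intervals) / statistics.mean(intervals)

    quasi_periodic = [0, 95, 205, 300, 410, 505]   # ~100 yr spacing, modest scatter
    print(round(interval_cv(quasi_periodic), 2))   # well below 1
    ```

    The paper's tests are more careful than this single statistic, but the CV captures why "more regular than Poisson" and "not clustered" together imply quasi-periodicity.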

  15. Virtual Platform for SEE Robustness Verification of Bootloader Embedded Software on Board Solar Orbiter's Energetic Particle Detector

    NASA Astrophysics Data System (ADS)

    Da Silva, A.; Sánchez Prieto, S.; Polo, O.; Parra Espada, P.

    2013-05-01

    Because of the tough robustness requirements in space software development, it is imperative to carry out verification tasks at a very early development stage to ensure that the implemented exception mechanisms work properly. All this should be done long before the real hardware is available. Even when real hardware is available, the verification of software fault tolerance mechanisms can be difficult, since real faulty situations must be systematically and artificially brought about, which can be impossible on real hardware. To solve this problem the Alcala Space Research Group (SRG) has developed a LEON2 virtual platform (Leon2ViP) with fault injection capabilities. This way it is possible to run the exact same target binary software as runs on the physical system in a more controlled and deterministic environment, allowing stricter requirements verification. Leon2ViP enables unattended and tightly focused fault injection campaigns, not possible otherwise, in order to expose and diagnose flaws in the software implementation early. Furthermore, the use of a virtual hardware-in-the-loop approach makes it possible to carry out preliminary integration tests with the spacecraft emulator or the sensors. The use of Leon2ViP has meant a significant improvement, in both time and cost, in the development and verification processes of the Instrument Control Unit boot software on board Solar Orbiter's Energetic Particle Detector.

  16. Fault Tree Analysis.

    PubMed

    McElroy, Lisa M; Khorzad, Rebeca; Rowe, Theresa A; Abecassis, Zachary A; Apley, Daniel W; Barnard, Cynthia; Holl, Jane L

    The purpose of this study was to use fault tree analysis to evaluate the adequacy of quality reporting programs in identifying root causes of postoperative bloodstream infection (BSI). A systematic review of the literature was used to construct a fault tree to evaluate 3 postoperative BSI reporting programs: National Surgical Quality Improvement Program (NSQIP), Centers for Medicare and Medicaid Services (CMS), and The Joint Commission (JC). The literature review revealed 699 eligible publications, 90 of which were used to create the fault tree containing 105 faults. A total of 14 identified faults are currently mandated for reporting to NSQIP, 5 to CMS, and 3 to JC; 2 or more programs require 4 identified faults. The fault tree identifies numerous contributing faults to postoperative BSI and reveals substantial variation in the requirements and ability of national quality data reporting programs to capture these potential faults. Efforts to prevent postoperative BSI require more comprehensive data collection to identify the root causes and develop high-reliability improvement strategies.
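    The comparison in the abstract amounts to a set computation over the fault tree: which faults each reporting program captures, and which faults two or more programs require. The fault identifiers below are hypothetical placeholders; the abstract reports 105 faults in total, of which NSQIP mandates 14, CMS 5, and JC 3:

    ```python
    # Count program overlap over a (toy) set of fault-tree leaf faults.

    programs = {
        "NSQIP": {"F01", "F02", "F03", "F04"},
        "CMS":   {"F02", "F05"},
        "JC":    {"F02", "F03"},
    }
    all_captured = set().union(*programs.values())
    shared = {f for f in all_captured
              if sum(f in s for s in programs.values()) >= 2}
    print(sorted(shared))   # ['F02', 'F03'] -- faults required by two or more programs
    ```

    Run against the real 105-fault tree, the small size of `shared` is exactly the variation in capture the study highlights.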

  17. Development and evaluation of a Fault-Tolerant Multiprocessor (FTMP) computer. Volume 3: FTMP test and evaluation

    NASA Technical Reports Server (NTRS)

    Lala, J. H.; Smith, T. B., III

    1983-01-01

    The experimental test and evaluation of the Fault-Tolerant Multiprocessor (FTMP) is described. Major objectives of this exercise include expanding the validation envelope, building confidence in the system, revealing any weaknesses in the architectural concepts and in their execution in hardware and software, and, in general, stressing the hardware and software. To this end, pin-level faults were injected into one LRU of the FTMP and the FTMP response was measured in terms of fault detection, isolation, and recovery times. A total of 21,055 stuck-at-0, stuck-at-1 and invert-signal faults were injected in the CPU, memory, bus interface circuits, Bus Guardian Units, and voters and error latches. Of these, 17,418 were detected. At least 80 percent of undetected faults are estimated to be on unused pins. The multiprocessor identified all detected faults correctly and recovered successfully in each case. Total recovery time for all faults averaged a little over one second. This can be reduced to half a second by including appropriate self-tests.
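    The coverage figures in the abstract can be recomputed directly. The "effective coverage" adjustment below is a straightforward reading of the abstract's estimate that at least 80 percent of undetected faults sit on unused pins, not a number reported by the report itself:

    ```python
    # Detection coverage from the FTMP fault-injection campaign.

    injected, detected = 21055, 17418
    undetected = injected - detected                 # 3637
    coverage = detected / injected
    print(f"raw detection coverage: {coverage:.1%}")       # ~82.7%

    # If >= 80% of undetected faults lie on unused (non-functional) pins,
    # coverage over functionally relevant faults is correspondingly higher:
    effective = detected / (injected - 0.80 * undetected)
    print(f"effective coverage: {effective:.1%}")          # ~96%
    ```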

  18. Why the 2002 Denali fault rupture propagated onto the Totschunda fault: implications for fault branching and seismic hazards

    USGS Publications Warehouse

    Schwartz, David P.; Haeussler, Peter J.; Seitz, Gordon G.; Dawson, Timothy E.

    2012-01-01

    The propagation of the rupture of the Mw7.9 Denali fault earthquake from the central Denali fault onto the Totschunda fault has provided a basis for dynamic models of fault branching in which the angle of the regional or local prestress relative to the orientation of the main fault and branch plays a principal role in determining which fault branch is taken. GeoEarthScope LiDAR and paleoseismic data allow us to map the structure of the Denali-Totschunda fault intersection and evaluate controls of fault branching from a geological perspective. LiDAR data reveal the Denali-Totschunda fault intersection is structurally simple with the two faults directly connected. At the branch point, 227.2 km east of the 2002 epicenter, the 2002 rupture diverges southeast to become the Totschunda fault. We use paleoseismic data to propose that differences in the accumulated strain on each fault segment, which express differences in the elapsed time since the most recent event, was one important control of the branching direction. We suggest that data on event history, slip rate, paleo offsets, fault geometry and structure, and connectivity, especially on high slip rate-short recurrence interval faults, can be used to assess the likelihood of branching and its direction. Analysis of the Denali-Totschunda fault intersection has implications for evaluating the potential for a rupture to propagate across other types of fault intersections and for characterizing sources of future large earthquakes.

  19. Adiabatic gate teleportation.

    PubMed

    Bacon, Dave; Flammia, Steven T

    2009-09-18

    The difficulty in producing precisely timed and controlled quantum gates is a significant source of error in many physical implementations of quantum computers. Here we introduce a simple universal primitive, adiabatic gate teleportation, which is robust to timing errors and many control errors and maintains a constant energy gap throughout the computation above a degenerate ground state space. This construction allows for geometric robustness based upon the control of two independent qubit interactions. Further, our piecewise adiabatic evolution easily relates to the quantum circuit model, enabling the use of standard methods from fault-tolerance theory for establishing thresholds.

  20. An L-band interferometric synthetic aperture radar study on the Ganos section of the north Anatolian fault zone between 2007 and 2011: Evidence for along strike segmentation and creep in a shallow fault patch.

    PubMed

    de Michele, Marcello; Ergintav, Semih; Aochi, Hideo; Raucoules, Daniel

    2017-01-01

    We utilize L-band interferometric synthetic aperture radar (InSAR) data in this study to retrieve a ground velocity map for the near field of the Ganos section of the north Anatolian fault (NAF) zone. The segmentation and creep distribution of this section, which last ruptured in 1912 to generate a moment magnitude (Mw) 7.3 earthquake, remains incompletely understood. Because InSAR processing removes the mean orbital plane, we do not investigate large-scale displacements due to regional tectonics in this study, as these can be determined using global positioning system (GPS) data, instead concentrating on the close-to-the-fault displacement field. Our aim is to determine whether or not it is possible to retrieve robust near-field velocity maps from stacking L-band interferograms, combining both single and dual polarization SAR data. In addition, we discuss whether a crustal velocity map can be used to complement GPS observations in an attempt to discriminate the present-day surface displacement of the Ganos fault (GF) across multiple segments. Finally, we characterize the spatial distribution of creep on shallow patches along multiple along-strike segments at shallow depths. Our results suggest the presence of fault segmentation along strike as well as creep on the shallow part of the fault (i.e. the existence of a shallow creeping patch) or the presence of a smoother section on the fault plane. Data imply a heterogeneous fault plane with more complex mechanics than previously thought. Because this study improves our knowledge of the mechanisms underlying the GF, our results have implications for local seismic hazard assessment.

  1. An L-band interferometric synthetic aperture radar study on the Ganos section of the north Anatolian fault zone between 2007 and 2011: Evidence for along strike segmentation and creep in a shallow fault patch

    PubMed Central

    Ergintav, Semih; Aochi, Hideo; Raucoules, Daniel

    2017-01-01

    We utilize L-band interferometric synthetic aperture radar (InSAR) data in this study to retrieve a ground velocity map for the near field of the Ganos section of the north Anatolian fault (NAF) zone. The segmentation and creep distribution of this section, which last ruptured in 1912 to generate a moment magnitude (Mw) 7.3 earthquake, remains incompletely understood. Because InSAR processing removes the mean orbital plane, we do not investigate large-scale displacements due to regional tectonics in this study, as these can be determined using global positioning system (GPS) data, instead concentrating on the close-to-the-fault displacement field. Our aim is to determine whether or not it is possible to retrieve robust near-field velocity maps from stacking L-band interferograms, combining both single and dual polarization SAR data. In addition, we discuss whether a crustal velocity map can be used to complement GPS observations in an attempt to discriminate the present-day surface displacement of the Ganos fault (GF) across multiple segments. Finally, we characterize the spatial distribution of creep on shallow patches along multiple along-strike segments at shallow depths. Our results suggest the presence of fault segmentation along strike as well as creep on the shallow part of the fault (i.e. the existence of a shallow creeping patch) or the presence of a smoother section on the fault plane. Data imply a heterogeneous fault plane with more complex mechanics than previously thought. Because this study improves our knowledge of the mechanisms underlying the GF, our results have implications for local seismic hazard assessment. PMID:28961264

  2. Airborne LiDAR analysis and geochronology of faulted glacial moraines in the Tahoe-Sierra frontal fault zone reveal substantial seismic hazards in the Lake Tahoe region, California-Nevada USA

    USGS Publications Warehouse

    Howle, James F.; Bawden, Gerald W.; Schweickert, Richard A.; Finkel, Robert C.; Hunter, Lewis E.; Rose, Ronn S.; von Twistern, Brent

    2012-01-01

    We integrated high-resolution bare-earth airborne light detection and ranging (LiDAR) imagery with field observations and modern geochronology to characterize the Tahoe-Sierra frontal fault zone, which forms the neotectonic boundary between the Sierra Nevada and the Basin and Range Province west of Lake Tahoe. The LiDAR imagery clearly delineates active normal faults that have displaced late Pleistocene glacial moraines and Holocene alluvium along 30 km of linear, right-stepping range front of the Tahoe-Sierra frontal fault zone. Herein, we illustrate and describe the tectonic geomorphology of faulted lateral moraines. We have developed new, three-dimensional modeling techniques that utilize the high-resolution LiDAR data to determine tectonic displacements of moraine crests and alluvium. The statistically robust displacement models combined with new ages of the displaced Tioga (20.8 ± 1.4 ka) and Tahoe (69.2 ± 4.8 ka; 73.2 ± 8.7 ka) moraines are used to estimate the minimum vertical separation rate at 17 sites along the Tahoe-Sierra frontal fault zone. Near the northern end of the study area, the minimum vertical separation rate is 1.5 ± 0.4 mm/yr, which represents a two- to threefold increase in estimates of seismic moment for the Lake Tahoe basin. From this study, we conclude that potential earthquake moment magnitudes (Mw) range from 6.3 ± 0.25 to 6.9 ± 0.25. A close spatial association of landslides and active faults suggests that landslides have been seismically triggered. Our study underscores that the Tahoe-Sierra frontal fault zone poses substantial seismic and landslide hazards.

  3. Testing For EM Upsets In Aircraft Control Computers

    NASA Technical Reports Server (NTRS)

    Belcastro, Celeste M.

    1994-01-01

    Effects of transient electrical signals evaluated in laboratory tests. Method of evaluating nominally fault-tolerant, aircraft-type digital-computer-based control system devised. Provides for evaluation of susceptibility of system to upset and evaluation of integrity of control when system subjected to transient electrical signals like those induced by electromagnetic (EM) source, in this case lightning. Beyond aerospace applications, fault-tolerant control systems are becoming more widespread in industry, such as in automobiles. Method supports practical, systematic tests for evaluation of designs of fault-tolerant control systems.

  4. Intermittent/transient fault phenomena in digital systems

    NASA Technical Reports Server (NTRS)

    Masson, G. M.

    1977-01-01

    An overview of the intermittent/transient (IT) fault study is presented. An interval survivability evaluation of digital systems for IT faults is discussed along with a method for detecting and diagnosing IT faults in digital systems.

  5. Fault-tolerant nonlinear adaptive flight control using sliding mode online learning.

    PubMed

    Krüger, Thomas; Schnetter, Philipp; Placzek, Robin; Vörsmann, Peter

    2012-08-01

    An expanded nonlinear model inversion flight control strategy using sliding mode online learning for neural networks is presented. The proposed control strategy is implemented for a small unmanned aircraft system (UAS). This class of aircraft is very susceptible to atmospheric turbulence, model uncertainties and, of course, system failures. Therefore, these systems make a sensible testbed for evaluating fault-tolerant, adaptive flight control strategies. Within this work the concept of feedback linearization is combined with feedforward neural networks to compensate for inversion errors and other nonlinear effects. Backpropagation-based adaptation laws of the network weights are used for online training. Within these adaptation laws the standard gradient descent backpropagation algorithm is augmented with the concept of sliding mode control (SMC). Implemented as a learning algorithm, this nonlinear control strategy treats the neural network as a controlled system and allows a stable, dynamic calculation of the learning rates. While considering the system's stability, this robust online learning method therefore offers a higher speed of convergence, especially in the presence of external disturbances. The SMC-based flight controller is tested and compared with the standard gradient descent backpropagation algorithm in the presence of system failures.
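    The core idea of treating the learning rule as a sliding mode controller can be shown in a much-simplified sketch: drive the output error e along a sliding surface s = de/dt + lam·e, with the weight update sized by a boundary-layer (saturated) reaching law rather than a fixed gradient step. The one-weight "network", gains, and surface below are illustrative assumptions, not the paper's controller:

    ```python
    # Boundary-layer sliding mode adaptation of a single weight so that
    # y = w*x tracks a constant target: the surface s = de/dt + lam*e is
    # driven to zero, after which e decays along the surface.

    def sat(z):
        return max(-1.0, min(1.0, z))

    def smc_train(target, w=0.0, x=1.0, lam=2.0, rate=2.0, dt=0.05, phi=4.0, steps=300):
        e_prev = target - w * x
        for _ in range(steps):
            e = target - w * x
            s = (e - e_prev) / dt + lam * e        # sliding surface
            w += rate * dt * sat(s / phi)          # saturated SMC-style update
            e_prev = e
        return w

    print(abs(3.0 - smc_train(target=3.0)) < 0.05)   # True: weight converges
    ```

    The saturation (boundary layer) replaces the pure sign function to avoid chattering, which is a standard SMC design choice; the adaptive step size it produces is the "dynamic learning rate" flavour of the abstract.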

  6. Power conditioning using dynamic voltage restorers under different voltage sag types.

    PubMed

    Saeed, Ahmed M; Abdel Aleem, Shady H E; Ibrahim, Ahmed M; Balci, Murat E; El-Zahab, Essam E A

    2016-01-01

    Voltage sags can be symmetrical or unsymmetrical depending on the causes of the sag. At the present time, one of the most common procedures for mitigating voltage sags is the use of dynamic voltage restorers (DVRs). By definition, a DVR is a controlled voltage source inserted between the network and a sensitive load through a booster transformer, injecting voltage into the network in order to correct any disturbance affecting a sensitive load voltage. In this paper, modelling of a DVR for voltage correction using MATLAB software is presented. The performance of the device under different voltage sag types is described, where the voltage sag types are introduced using the different types of short-circuit faults included in the environment of the MATLAB/Simulink package. The robustness of the proposed device is evaluated using the common voltage sag indices, while taking into account voltage and current unbalance percentages, where maintaining the total harmonic distortion percentage of the load voltage within a specified range is desired. Finally, several simulation results are shown in order to highlight that the DVR is capable of effective correction of the voltage sag while minimizing the grid voltage unbalance and distortion, regardless of the fault type.
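    Two of the evaluation quantities mentioned can be sketched directly: sag depth relative to nominal voltage, and total harmonic distortion (THD) from harmonic RMS magnitudes. The voltage values below are illustrative, not the paper's simulation results:

    ```python
    import math

    # Sag depth: fraction of nominal RMS voltage lost during the sag.
    def sag_depth(v_sag_rms, v_nominal_rms=230.0):
        return 1.0 - v_sag_rms / v_nominal_rms

    # THD: RMS of the harmonic content relative to the fundamental.
    def thd(fundamental_rms, harmonic_rms):
        return math.sqrt(sum(h * h for h in harmonic_rms)) / fundamental_rms

    print(round(sag_depth(161.0), 2))               # 0.3 -> a 30% sag
    print(round(thd(230.0, [11.5, 6.9]) * 100, 1))  # 5.8 -> THD in percent
    ```

    In a DVR study these indices are computed on the load-side voltage before and after compensation, which is how "effective correction within a specified THD range" is judged.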

  7. Power conditioning using dynamic voltage restorers under different voltage sag types

    PubMed Central

    Saeed, Ahmed M.; Abdel Aleem, Shady H.E.; Ibrahim, Ahmed M.; Balci, Murat E.; El-Zahab, Essam E.A.

    2015-01-01

    Voltage sags can be symmetrical or unsymmetrical depending on the causes of the sag. At the present time, one of the most common procedures for mitigating voltage sags is the use of dynamic voltage restorers (DVRs). By definition, a DVR is a controlled voltage source inserted between the network and a sensitive load through a booster transformer, injecting voltage into the network in order to correct any disturbance affecting a sensitive load voltage. In this paper, modelling of a DVR for voltage correction using MATLAB software is presented. The performance of the device under different voltage sag types is described, where the voltage sag types are introduced using the different types of short-circuit faults included in the environment of the MATLAB/Simulink package. The robustness of the proposed device is evaluated using the common voltage sag indices, while taking into account voltage and current unbalance percentages, where maintaining the total harmonic distortion percentage of the load voltage within a specified range is desired. Finally, several simulation results are shown in order to highlight that the DVR is capable of effective correction of the voltage sag while minimizing the grid voltage unbalance and distortion, regardless of the fault type. PMID:26843975

  8. "HOT Faults", Fault Organization, and the Occurrence of the Largest Earthquakes

    NASA Astrophysics Data System (ADS)

    Carlson, J. M.; Hillers, G.; Archuleta, R. J.

    2006-12-01

    We apply the concept of "Highly Optimized Tolerance" (HOT) for the investigation of spatio-temporal seismicity evolution, in particular mechanisms associated with largest earthquakes. HOT provides a framework for investigating both qualitative and quantitative features of complex feedback systems that are far from equilibrium and punctuated by rare, catastrophic events. In HOT, robustness trade-offs lead to complexity and power laws in systems that are coupled to evolving environments. HOT was originally inspired by biology and engineering, where systems are internally very highly structured, through biological evolution or deliberate design, and perform in an optimum manner despite fluctuations in their surroundings. Though faults and fault systems are not designed in ways comparable to biological and engineered structures, feedback processes are responsible in a conceptually comparable way for the development, evolution and maintenance of younger fault structures and primary slip surfaces of mature faults, respectively. Hence, in geophysical applications the "optimization" approach is perhaps more aptly replaced by "organization", reflecting the distinction between HOT and random, disorganized configurations, and highlighting the importance of structured interdependencies that evolve via feedback among and between different spatial and temporal scales. Expressed in the terminology of the HOT concept, mature faults represent a configuration optimally organized for the release of strain energy; whereas immature, more heterogeneous fault networks represent intermittent, suboptimal systems that are regularized towards structural simplicity and the ability to generate large earthquakes more easily. We discuss fault structure and associated seismic response pattern within the HOT concept, and outline fundamental differences between this novel interpretation to more orthodox viewpoints like the criticality concept. 
The discussion is flanked by numerical simulations of a 2D fault model, where we investigate different feedback mechanisms and their effect on seismicity evolution. We introduce an approach to estimate the state of a fault and thus its capability of generating a large (system-wide) event assuming likely heterogeneous distributions of hypocenters and stresses, respectively.

  9. Protection of Renewable-dominated Microgrids: Challenges and Potential Solutions

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Elkhatib, Mohamed; Ellis, Abraham; Biswal, Milan

    In this report we address the challenge of designing an efficient protection system for inverter-dominated microgrids. These microgrids are characterised by limited fault current capacity as a result of the current-limiting protection functions of inverters. Typically, inverters limit their fault contribution within a sub-cycle time frame to as low as 1.1 per unit. As a result, overcurrent protection could fail completely to detect faults in inverter-dominated microgrids. As part of this project a detailed literature survey of existing and proposed microgrid protection schemes was conducted. The survey concluded that there is a gap in the available microgrid protection methods. The only credible protection solution available in the literature for low-fault inverter-dominated microgrids is the differential protection scheme, which represents a robust transmission-grade protection solution but at a very high cost. Two non-overcurrent protection schemes were investigated as part of this project: impedance-based protection and transient-based protection. Impedance-based protection depends on monitoring impedance trajectories at feeder relays to detect faults. Two communication-based impedance-based protection schemes were developed. The first scheme utilizes directional elements and pilot signals to locate the fault. The second scheme depends on a Central Protection Unit that communicates with all feeder relays to locate the fault based on directional flags received from feeder relays. The latter approach could potentially be adapted to protect networked microgrids and dynamic-topology microgrids. Transient-based protection relies on analyzing high-frequency transients to detect and locate faults. This approach is very promising, but its implementation in the field faces several challenges. For example, high-frequency transients due to faults can be confused with transients due to other events such as capacitor switching.
Additionally, while detecting faults by analyzing transients is feasible, locating faults by analyzing transients is still an open question.
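
    The impedance-based idea described above can be sketched in a few lines: a relay computes the apparent impedance from its local voltage and current phasors and trips when the magnitude collapses below a reach setting, which still works when the inverter caps its fault current near 1.1 per unit. This is an illustrative sketch, not code from the report; the reach setting and phasor values are hypothetical.

```python
import cmath

def underimpedance_trip(v_phasor, i_phasor, z_reach=8.0):
    """Trip when the apparent impedance magnitude falls inside the
    relay's reach. In a low-fault-current microgrid the fault current
    barely exceeds load current, but the impedance seen by the relay
    still collapses toward the fault-loop impedance, so impedance is a
    better discriminant than current magnitude. Values are illustrative."""
    z = v_phasor / i_phasor          # apparent impedance (ohms)
    return abs(z) < z_reach

# Normal load: 400 V, 10 A slightly lagging -> |Z| = 40 ohm, no trip
normal = underimpedance_trip(400 + 0j, cmath.rect(10.0, -0.3))
# Fault: voltage collapses to 60 V while the inverter limits its
# contribution to ~1.1 pu (11 A) -> |Z| ~ 5.5 ohm, trip
fault = underimpedance_trip(60 + 0j, cmath.rect(11.0, -1.2))
print(normal, fault)  # False True
```

    A directional element and pilot signalling, as in the report's first scheme, would be layered on top of such a relay to locate the faulted section.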

  10. Tools for Evaluating Fault Detection and Diagnostic Methods for HVAC Secondary Systems

    NASA Astrophysics Data System (ADS)

    Pourarian, Shokouh

    Although modern buildings are using increasingly sophisticated energy management and control systems that have tremendous control and monitoring capabilities, building systems routinely fail to perform as designed. More advanced building control, operation, and automated fault detection and diagnosis (AFDD) technologies are needed to achieve the goal of net-zero energy commercial buildings. Much effort has been devoted to developing such technologies for primary heating, ventilating, and air conditioning (HVAC) systems and some secondary systems. However, secondary systems such as fan coil units and dual duct systems, although widely used in commercial, industrial, and multifamily residential buildings, have received very little attention. This research study aims at developing tools that provide simulation capabilities to develop and evaluate advanced control, operation, and AFDD technologies for these less studied secondary systems. In this study, HVACSIM+ is selected as the simulation environment. Besides developing dynamic models for the above-mentioned secondary systems, two other issues related to the HVACSIM+ environment are also investigated. One issue is the nonlinear equation solver used in HVACSIM+ (Powell's Hybrid method in subroutine SNSQ). Several previous research projects (ASHRAE RP 825 and 1312) found that SNSQ is especially unstable at the beginning of a simulation and sometimes unable to converge to a solution. The other issue is the zone model in the HVACSIM+ library of components. Dynamic simulation of secondary HVAC systems unavoidably requires an interactive zone model that interacts systematically and dynamically with its surroundings. Therefore, the accuracy and reliability of the building zone model affect the operational data the developed tool generates to predict how HVAC secondary systems perform.
    The available model does not simulate the impact of direct solar radiation entering a zone through glazing, and the zone model study is conducted in this direction to modify the existing zone model. In this research project, the following tasks are completed and summarized in this report: 1. Develop dynamic simulation models in the HVACSIM+ environment for common fan coil unit and dual duct system configurations. The developed simulation models are able to produce both fault-free and faulty operational data under a wide variety of faults and severity levels for advanced control, operation, and AFDD technology development and evaluation purposes; 2. Develop a model structure, including the grouping of blocks and superblocks, treatment of state variables, initial and boundary conditions, and selection of equation solver, that can simulate a dual duct system efficiently with satisfactory stability; 3. Design and conduct a comprehensive and systematic validation procedure using collected experimental data to validate the developed simulation models under both fault-free and faulty operational conditions; 4. Conduct a numerical study comparing two solution techniques, Powell's Hybrid (PH) and Levenberg-Marquardt (LM), in terms of robustness and accuracy; 5. Modify the thermal state of the existing building zone model in the HVACSIM+ component library, revising it to treat heat transmitted through glazing as a heat source for transient building zone load prediction. In this report, the literature, including existing HVAC dynamic modeling environments and models, HVAC model validation methodologies, and fault modeling and validation methodologies, is reviewed. The overall methodologies used for fault-free and fault model development and validation are introduced. Detailed model development and validation results for the two secondary systems, i.e., the fan coil unit and the dual duct system, are summarized.
    Experimental data, mostly from the Iowa Energy Center Energy Resource Station, are used to validate the models developed in this project. Satisfactory model performance in both fault-free and fault simulation studies is observed for all studied systems.
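
    SNSQ implements Powell's Hybrid method from MINPACK, and SciPy wraps the same MINPACK routines, so the PH-versus-LM comparison of task 4 can be reproduced in miniature. The two-equation system below is made up for illustration, standing in for a component network's residual equations.

```python
import numpy as np
from scipy.optimize import root

# Toy nonlinear system standing in for an HVAC component network's
# residual equations (invented for illustration).
def residuals(x):
    return [x[0] ** 2 + x[1] ** 2 - 4.0,
            np.exp(x[0]) + x[1] - 1.0]

x0 = [1.0, 1.0]
# 'hybr' wraps MINPACK's Powell hybrid code, the same algorithm as SNSQ
sol_ph = root(residuals, x0, method='hybr')
# 'lm' wraps MINPACK's Levenberg-Marquardt code
sol_lm = root(residuals, x0, method='lm')

for name, sol in (('PH', sol_ph), ('LM', sol_lm)):
    print(name, sol.success, float(np.max(np.abs(residuals(sol.x)))))
```

    On well-behaved systems both converge; the robustness differences the study quantifies show up on stiff start-up transients where PH's trust region can stall.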

  11. A benchmark for fault tolerant flight control evaluation

    NASA Astrophysics Data System (ADS)

    Smaili, H.; Breeman, J.; Lombaerts, T.; Stroosma, O.

    2013-12-01

    A large transport aircraft simulation benchmark (REconfigurable COntrol for Vehicle Emergency Return - RECOVER) has been developed within the GARTEUR (Group for Aeronautical Research and Technology in Europe) Flight Mechanics Action Group 16 (FM-AG(16)) on Fault Tolerant Control (2004-2008) for the integrated evaluation of fault detection and identification (FDI) and reconfigurable flight control strategies. The benchmark includes a suitable set of assessment criteria and failure cases, based on reconstructed accident scenarios, to assess the potential of new adaptive control strategies to improve aircraft survivability. The application of reconstruction and modeling techniques, based on accident flight data, has resulted in high-fidelity nonlinear aircraft and fault models to evaluate new Fault Tolerant Flight Control (FTFC) concepts and their real-time performance to accommodate in-flight failures.

  12. FDI based on Artificial Neural Network for Low-Voltage-Ride-Through in DFIG-based Wind Turbine.

    PubMed

    Adouni, Amel; Chariag, Dhia; Diallo, Demba; Ben Hamed, Mouna; Sbita, Lassaâd

    2016-09-01

    As per modern electrical grid rules, a Wind Turbine needs to operate continuously even in the presence of severe grid faults, under Low Voltage Ride Through (LVRT) requirements. Hence, a new LVRT Fault Detection and Identification (FDI) procedure has been developed to take the appropriate decision and develop the corresponding control strategy. To obtain a more reliable decision and enhanced FDI during grid faults, the proposed procedure is based on analysis of voltage indicators using a new Artificial Neural Network (ANN) architecture. Two features are extracted: the amplitude and the phase angle. The procedure is divided into two steps: fault indicator generation, and indicator analysis for fault diagnosis. The first step is composed of six ANNs that describe the three phases of the grid (three amplitudes and three phase angles). The second step is composed of a single ANN that analyzes the indicators and generates a decision signal describing the operating mode (healthy or faulty). The decision signal also identifies the fault type, allowing the four fault types to be distinguished. The diagnosis procedure is tested in simulation and on an experimental prototype. The obtained results confirm its efficiency, rapidity, robustness, and immunity to noise and unknown inputs. Copyright © 2016 ISA. Published by Elsevier Ltd. All rights reserved.
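
    The two per-phase features the abstract describes (amplitude and phase angle) can be illustrated without any network at all: a single-bin DFT at the fundamental recovers both, and a trivial stand-in for the decision stage counts sagged phases. All names, thresholds, and the sag taxonomy below are illustrative, not from the paper.

```python
import numpy as np

def phase_features(v, fs, f0):
    """Amplitude and phase angle of the fundamental of one phase: a
    single-bin DFT stands in for one amplitude ANN plus one angle ANN."""
    n = len(v)
    t = np.arange(n) / fs
    phasor = 2.0 / n * np.sum(v * np.exp(-2j * np.pi * f0 * t))
    return abs(phasor), np.angle(phasor)

def classify_sag(amplitudes, v_nom=1.0, sag_level=0.9):
    """Stand-in for the decision ANN: count phases below the sag level."""
    sagged = sum(a < sag_level * v_nom for a in amplitudes)
    return {0: 'healthy', 1: 'single-phase sag',
            2: 'two-phase sag', 3: 'three-phase sag'}[sagged]

fs, f0 = 10_000.0, 50.0
t = np.arange(0, 0.1, 1 / fs)              # exactly 5 cycles of 50 Hz
phases = [1.0 * np.sin(2 * np.pi * f0 * t),
          0.4 * np.sin(2 * np.pi * f0 * t - 2 * np.pi / 3),  # faulted phase
          1.0 * np.sin(2 * np.pi * f0 * t + 2 * np.pi / 3)]
amps = [phase_features(v, fs, f0)[0] for v in phases]
print(classify_sag(amps))  # single-phase sag
```

    The appeal of the ANN version is robustness to noise and distortion, which this clean DFT stand-in does not attempt to demonstrate.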

  13. Seismic swarms and diffuse fracturing within Triassic evaporites fed by deep degassing along the low-angle Alto Tiberina normal fault (central Apennines, Italy)

    NASA Astrophysics Data System (ADS)

    Piana Agostinetti, Nicola; Giacomuzzi, Genny; Chiarabba, Claudio

    2017-01-01

    We present high-resolution elastic models and relocated seismicity of a very active segment of the Apennines normal faulting system, computed via transdimensional local earthquake tomography (trans-D LET). Trans-D LET, a fully nonlinear approach to seismic tomography, robustly constrains high-velocity anomalies and inversions of P wave velocity (i.e., decreases of VP with depth) without introducing bias due to, e.g., a starting model, and makes it possible to investigate the relation between fault structure, seismicity, and fluids. Changes in seismicity rate and recurring seismic swarms are frequent in the Apennines extensional belt. Deep fluids, upwelling from the delaminating continental lithosphere, are thought to be responsible for seismicity clustering in the upper crust and lubrication of normal faults during swarms and large earthquakes. We focus on the tectonic role played by the Alto Tiberina low-angle normal fault (ATF), finding displacements across the fault consistent with long-term accommodation of deformation. Our results show that recent seismic swarms affecting the area occur within a 3 km thick, high VP/VS, densely cracked, and overpressurized evaporitic layer composed of dolostones and anhydrites. A persistent low VP, low VP/VS volume, present on top of and along the ATF low-angle detachment, traces the location of mantle-derived CO2, the upward flux of which contributes to cracking within the evaporitic layer.

  14. A Convex Approach to Fault Tolerant Control

    NASA Technical Reports Server (NTRS)

    Maghami, Peiman G.; Cox, David E.; Bauer, Frank (Technical Monitor)

    2002-01-01

    The design of control laws for dynamic systems with the potential for actuator failures is considered in this work. The use of Linear Matrix Inequalities allows more freedom in controller design criteria than is typically available with robust control. This work proposes an extension of fault-scheduled control design techniques that can find a fixed controller with provable performance over a set of plants. Through convexity of the objective function, performance bounds on this set of plants imply performance bounds on a range of systems defined by a convex hull. This is used to incorporate performance bounds for a variety of soft and hard failures into the control design problem.

  15. Using cluster analysis to organize and explore regional GPS velocities

    USGS Publications Warehouse

    Simpson, Robert W.; Thatcher, Wayne; Savage, James C.

    2012-01-01

    Cluster analysis offers a simple visual exploratory tool for the initial investigation of regional Global Positioning System (GPS) velocity observations, which are providing increasingly precise mappings of actively deforming continental lithosphere. The deformation fields from dense regional GPS networks can often be concisely described in terms of relatively coherent blocks bounded by active faults, although the choice of blocks, their number and size, can be subjective and is often guided by the distribution of known faults. To illustrate our method, we apply cluster analysis to GPS velocities from the San Francisco Bay Region, California, to search for spatially coherent patterns of deformation, including evidence of block-like behavior. The clustering process identifies four robust groupings of velocities that we identify with four crustal blocks. Although the analysis uses no prior geologic information other than the GPS velocities, the cluster/block boundaries track three major faults, both locked and creeping.
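
    A minimal version of the clustering step can be sketched with plain k-means on synthetic (east, north) velocities from two coherent "blocks"; as in the paper, the algorithm is given no fault information. The block velocities, scatter, and seeding are illustrative.

```python
import numpy as np

def kmeans(vel, centers, iters=100):
    """Plain k-means on (east, north) GPS velocity vectors. Stations
    whose velocities cluster together are candidates for a common
    rigid block; cluster boundaries can then be compared with faults."""
    centers = np.asarray(centers, dtype=float).copy()
    labels = np.zeros(len(vel), dtype=int)
    for _ in range(iters):
        # assign each station to the nearest cluster centre
        labels = np.argmin(np.linalg.norm(vel[:, None] - centers, axis=2), axis=1)
        new = np.array([vel[labels == j].mean(axis=0) for j in range(len(centers))])
        if np.allclose(new, centers):
            break
        centers = new
    return labels, centers

rng = np.random.default_rng(1)
# Two synthetic crustal "blocks" moving coherently at different rates (mm/yr)
block_a = rng.normal([20.0, 5.0], 0.5, size=(30, 2))
block_b = rng.normal([35.0, 12.0], 0.5, size=(30, 2))
vel = np.vstack([block_a, block_b])
# seed the two centres with one station from each end of the array
labels, centers = kmeans(vel, vel[[0, -1]])
print(len(set(labels[:30].tolist())), len(set(labels[30:].tolist())))  # 1 1
```

    Real networks need the number of clusters chosen and velocities corrected for interseismic strain near locked faults, which this sketch ignores.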

  16. Induction motor broken rotor bar fault location detection through envelope analysis of start-up current using Hilbert transform

    NASA Astrophysics Data System (ADS)

    Abd-el-Malek, Mina; Abdelsalam, Ahmed K.; Hassan, Ola E.

    2017-09-01

    Robustness, low running cost, and reduced maintenance have led Induction Motors (IMs) to dominate industrial drive systems. Broken rotor bars (BRBs) are an important fault that needs to be assessed early to minimize maintenance cost and labor time. The majority of recent BRB fault diagnostic techniques focus on differentiating between a healthy and a faulty rotor cage. In this paper, a new technique is proposed for detecting the location of the broken bar in the rotor. The proposed technique relies on monitoring certain statistical parameters estimated from the analysis of the start-up stator current envelope. The envelope of the signal is obtained using the Hilbert Transform (HT). The proposed technique offers a non-invasive, computationally fast, and accurate location diagnosis. Various simulation scenarios are presented that validate the effectiveness of the proposed technique.
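
    The envelope-extraction step can be sketched with an FFT-based analytic signal, the standard Hilbert-transform construction. The modulated test current below is synthetic, standing in for a start-up current with a broken-bar sideband; the frequencies are illustrative.

```python
import numpy as np

def analytic_signal(x):
    """Analytic signal via the FFT (the standard Hilbert-transform
    construction: zero the negative frequencies, double the positive)."""
    n = len(x)
    spec = np.fft.fft(x)
    h = np.zeros(n)
    h[0] = 1.0
    h[1:(n + 1) // 2] = 2.0
    if n % 2 == 0:
        h[n // 2] = 1.0
    return np.fft.ifft(spec * h)

fs = 1000.0
t = np.arange(0, 2.0, 1 / fs)
# Synthetic "start-up" stator current: 50 Hz carrier whose amplitude is
# modulated at 2 Hz, the way a broken-bar sideband modulates it.
envelope_true = 1.0 + 0.2 * np.sin(2 * np.pi * 2.0 * t)
current = envelope_true * np.sin(2 * np.pi * 50.0 * t)

envelope = np.abs(analytic_signal(current))
err = float(np.max(np.abs(envelope - envelope_true)))
print(err)  # essentially zero: every tone here is periodic in the window
```

    The paper's statistical location features would then be computed from this recovered envelope rather than from the raw current.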

  17. A new iterative approach for multi-objective fault detection observer design and its application to a hypersonic vehicle

    NASA Astrophysics Data System (ADS)

    Huang, Di; Duan, Zhisheng

    2018-03-01

    This paper addresses the multi-objective fault detection observer design problem for a hypersonic vehicle. Owing to the fact that parameter variations, modelling errors and disturbances are inevitable in practical situations, system uncertainty is considered in this study. By fully utilising the orthogonal space information of the output matrix, some new understandings are proposed for the construction of the Lyapunov matrix. Sufficient conditions for the existence of observers that guarantee fault sensitivity and disturbance robustness in the infinite frequency domain are presented. In order to further relax the conservativeness, slack matrices are introduced to fully decouple the observer gain from the Lyapunov matrices in the finite frequency range. Iterative linear matrix inequality algorithms are proposed to obtain the solutions. The simulation examples, which include a Monte Carlo campaign, illustrate that the new methods can effectively reduce the design conservativeness compared with existing methods.

  18. Fault tolerant control laws

    NASA Technical Reports Server (NTRS)

    Ly, U. L.; Ho, J. K.

    1986-01-01

    A systematic procedure for the synthesis of control laws tolerant of actuator failures has been presented. Two design methods were used to synthesize fault tolerant controllers: the conventional LQ design method and a direct feedback controller design method, SANDY. The latter method is used primarily to streamline the full-state feedback design into a practical, implementable output feedback controller structure. To achieve robustness to control actuator failure, the redundant surfaces are properly balanced according to their control effectiveness. A simple gain schedule based on the landing gear up/down logic, involving only three gains, was developed to handle three design flight conditions: Mach 0.25 and Mach 0.60 at 5,000 ft and Mach 0.90 at 20,000 ft. The fault tolerant control law developed in this study provides good stability augmentation and performance for the relaxed static stability aircraft. The augmented aircraft responses are found to be invariant to the presence of a failure. Furthermore, single-loop stability margins of +6 dB in gain and +30 deg in phase were achieved, along with a -40 dB/decade rolloff at high frequency.
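
    The landing-gear-based schedule described above amounts to a three-branch lookup over the design flight conditions. A toy version might look like the following; the function name, gain names, and numeric values are hypothetical placeholders, not values from the study.

```python
def scheduled_gains(gear_down, mach):
    """Three-condition gain schedule keyed off the landing-gear up/down
    logic, as described in the study; the gains themselves (kq, ka, kt)
    are invented placeholders for illustration."""
    if gear_down:                    # approach: Mach 0.25 at 5,000 ft
        return {'kq': 1.8, 'ka': 0.9, 'kt': 0.4}
    if mach < 0.75:                  # clean: Mach 0.60 at 5,000 ft
        return {'kq': 1.2, 'ka': 0.6, 'kt': 0.3}
    return {'kq': 0.8, 'ka': 0.4, 'kt': 0.2}   # clean: Mach 0.90 at 20,000 ft

print(scheduled_gains(True, 0.25))
```

    The appeal of such a schedule is that it needs only a discrete signal (gear position) already available on the aircraft, rather than continuous air-data scheduling.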

  19. Integrated multiple-model adaptive fault identification and reconfigurable fault-tolerant control for Lead-Wing close formation systems

    NASA Astrophysics Data System (ADS)

    Liu, Chun; Jiang, Bin; Zhang, Ke

    2018-03-01

    This paper investigates the attitude and position tracking control problem for Lead-Wing close formation systems in the presence of loss-of-effectiveness and lock-in-place or hardover failures. In close formation flight, Wing unmanned aerial vehicle movements are influenced by the vortex effects of the neighbouring Lead unmanned aerial vehicle. This allows modelling of the aerodynamic coupling vortex effects and linearisation based on the optimal close formation geometry. The linearised Lead-Wing close formation model is transformed into nominal robust H-infinity models with respect to the Mach hold, Heading hold, and Altitude hold autopilots; a static feedback H-infinity controller is designed to guarantee effective tracking of attitude and position while manoeuvring the Lead unmanned aerial vehicle. Based on the H-infinity control design, an integrated multiple-model adaptive fault identification and reconfigurable fault-tolerant control scheme is developed to guarantee asymptotic stability of the closed-loop system, error signal boundedness, and attitude and position tracking properties. Simulation results for Lead-Wing close formation systems validate the efficiency of the proposed integrated multiple-model adaptive control algorithm.

  20. A critical evaluation of crustal dehydration as the cause of an overpressured and weak San Andreas Fault

    USGS Publications Warehouse

    Fulton, P.M.; Saffer, D.M.; Bekins, B.A.

    2009-01-01

    Many plate boundary faults, including the San Andreas Fault, appear to slip at unexpectedly low shear stress. One long-standing explanation for a "weak" San Andreas Fault is that fluid release by dehydration reactions during regional metamorphism generates elevated fluid pressures that are localized within the fault, reducing the effective normal stress. We evaluate this hypothesis by calculating realistic fluid production rates for the San Andreas Fault system, and incorporating them into 2-D fluid flow models. Our results show that for a wide range of permeability distributions, fluid sources from crustal dehydration are too small and short-lived to generate, sustain, or localize fluid pressures in the fault sufficient to explain its apparent mechanical weakness. This suggests that alternative mechanisms, possibly acting locally within the fault zone, such as shear compaction or thermal pressurization, may be necessary to explain a weak San Andreas Fault. More generally, our results demonstrate the difficulty of localizing large fluid pressures generated by regional processes within near-vertical fault zones. © 2009 Elsevier B.V.

  1. Publications - PIR 2011-1 | Alaska Division of Geological & Geophysical

    Science.gov Websites

    DGGS PIR 2011-1 Publication Details. Title: Reconnaissance evaluation of the Lake Clark fault. Koehler, R.D., and Reger, R.D., 2011, Reconnaissance evaluation of the Lake Clark fault, Tyonek area M) Keywords: Cook Inlet; Glacial Stratigraphy; Lake Clark Fault; Neotectonics; STATEMAP Project

  2. Software-implemented fault insertion: An FTMP example

    NASA Technical Reports Server (NTRS)

    Czeck, Edward W.; Siewiorek, Daniel P.; Segall, Zary Z.

    1987-01-01

    This report presents a model for fault insertion through software; describes its implementation on a fault-tolerant computer, FTMP; presents a summary of fault detection, identification, and reconfiguration data collected with software-implemented fault insertion; and compares the results to hardware fault insertion data. Experimental results show detection time to be a function of the time of insertion and the system workload. For fault detection time, there is no correlation between software-inserted faults and hardware-inserted faults; this is because hardware-inserted faults must manifest as errors before detection, whereas software-inserted faults immediately exercise the error detection mechanisms. In summary, software-implemented fault insertion can be used as an evaluation technique for the fault-handling capabilities of a system in fault detection, identification, and recovery. Although software-inserted faults do not map directly to hardware-inserted faults, experiments show software-implemented fault insertion is capable of emulating hardware fault insertion, with greater ease and automation.
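
    The report's central distinction (software-inserted faults exercise the detectors immediately, while hardware faults must first manifest as errors) can be illustrated with a toy parity-protected word. The class and function names are invented for illustration and have nothing to do with FTMP's actual mechanisms.

```python
class ParityWord:
    """A memory word protected by even parity: a toy stand-in for a
    hardware error-detection mechanism."""
    def __init__(self, value):
        self.value = value
        self.parity = bin(value).count('1') % 2  # stored check bit

    def read(self):
        """Return (value, check_ok)."""
        ok = bin(self.value).count('1') % 2 == self.parity
        return self.value, ok

def insert_fault(word, bit):
    """Software fault insertion: corrupt stored state directly. The
    corruption is already an 'error', so the detector sees it on the
    very next access, whereas a hardware fault (e.g. a stuck gate)
    must first propagate into stored state before it is detectable."""
    word.value ^= 1 << bit

word = ParityWord(0b1011_0010)
_, ok_before = word.read()
insert_fault(word, 3)        # flip one bit of the stored value
_, ok_after = word.read()
print(ok_before, ok_after)   # True False
```

    This immediacy is exactly why the report finds no correlation in detection times between the two insertion methods.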

  3. Common faults and their impacts for rooftop air conditioners

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Breuker, M.S.; Braun, J.E.

    This paper identifies important faults and their performance impacts for rooftop air conditioners. The frequencies of occurrence and the relative costs of service for different faults were estimated through analysis of service records. Several of the important and difficult-to-diagnose refrigeration cycle faults were simulated in the laboratory, and the impacts on several performance indices were quantified through transient testing for a range of conditions and fault levels. The transient test results indicated that fault detection and diagnostics could be performed using methods that incorporate steady-state assumptions and models. Furthermore, the fault testing led to a set of generic rules for the impacts of faults on measurements that could be used for fault diagnoses. The average impacts of the faults on cooling capacity and coefficient of performance (COP) were also evaluated. Based upon the results, all of the faults are significant at the levels introduced, and should be detected and diagnosed by an FDD system. The data set obtained during this work was very comprehensive, and was used to design and evaluate the performance of an FDD method that will be reported in a future paper.
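
    The "generic rules" idea can be sketched as a signature table mapping each fault to directional deviations of steady-state measurements from their fault-free expectations. The fault names, measurements, and sign patterns below are illustrative stand-ins, not the paper's actual rules.

```python
# Hypothetical signature table in the spirit of the paper's generic
# rules: each fault maps to directional deviations (+1 up, -1 down,
# 0 unchanged) of steady-state measurements from fault-free values.
RULES = {
    'refrigerant leak':   {'superheat': +1, 'subcooling': -1, 'capacity': -1},
    'condenser fouling':  {'superheat':  0, 'subcooling': -1, 'capacity': -1},
    'evaporator fouling': {'superheat': -1, 'subcooling':  0, 'capacity': -1},
}

def diagnose(residuals, threshold=1.0):
    """Quantize residuals (measured minus expected) to signs and return
    every fault whose full signature, zeros included, matches."""
    signs = {k: 0 if abs(v) < threshold else (1 if v > 0 else -1)
             for k, v in residuals.items()}
    return [fault for fault, sig in RULES.items()
            if all(signs.get(m, 0) == s for m, s in sig.items())]

print(diagnose({'superheat': 4.2, 'subcooling': -3.1, 'capacity': -2.5}))
# ['refrigerant leak']
```

    Checking the zero entries as well as the nonzero ones is what keeps overlapping fault signatures (here, leak versus condenser fouling) distinguishable.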

  4. Mission Services Evolution Center Message Bus

    NASA Technical Reports Server (NTRS)

    Mayorga, Arturo; Bristow, John O.; Butschky, Mike

    2011-01-01

    The Goddard Mission Services Evolution Center (GMSEC) Message Bus is a robust, lightweight, fault-tolerant middleware implementation that supports all messaging capabilities of the GMSEC API. This architecture is a distributed software system that routes messages based on message subject names and knowledge of the locations in the network of the interested software components.

  5. Robust Model-Based Fault Diagnosis for DC Zonal Electrical Distribution System

    DTIC Science & Technology

    2007-06-01


  6. Design Considerations for Human Rating of Liquid Rocket Engines

    NASA Technical Reports Server (NTRS)

    Parkinson, Douglas

    2010-01-01

    I. Human-rating is specific to each engine. a) The context of the program/project must be understood. b) The engine cannot be discussed independently of the vehicle and mission.
    II. Utilize a logical combination of design, manufacturing, and test approaches. a) Design: 1) it is crucial to know the potential ways a system can fail, and how a failure can propagate; 2) fault avoidance, fault tolerance, DFMR, and caution and warning all have roles to play. b) Manufacturing and assembly: 1) as-built vs. as-designed; 2) review procedures for assembly and maintenance periodically; 3) keep personnel trained and certified. c) There is no substitute for test: 1) analytical tools are constantly advancing, but test data are still needed to anchor assumptions; 2) demonstrate robustness and explore sensitivities; 3) ideally, flight will be encompassed by ground test experience.
    III. Consistency and repeatability are key in production. a) Maintain robust processes and procedures for inspection and quality control based upon development and qualification experience. b) Establish methods to "spot check" quality and consistency in parts: 1) dedicated ground test engines; 2) random components pulled from the line/lot to go through "enhanced" testing.

  7. A New Local Bipolar Autoassociative Memory Based on External Inputs of Discrete Recurrent Neural Networks With Time Delay.

    PubMed

    Zhou, Caigen; Zeng, Xiaoqin; Luo, Chaomin; Zhang, Huaguang

    In this paper, local bipolar auto-associative memories are presented based on discrete recurrent neural networks with a class of gain type activation function. The weight parameters of neural networks are acquired by a set of inequalities without the learning procedure. The global exponential stability criteria are established to ensure the accuracy of the restored patterns by considering time delays and external inputs. The proposed methodology is capable of effectively overcoming spurious memory patterns and achieving memory capacity. The effectiveness, robustness, and fault-tolerant capability are validated by simulated experiments.

  8. Hydrothermal activity at slow-spreading ridges: variability and importance of magmatic controls

    NASA Astrophysics Data System (ADS)

    Escartin, Javier

    2016-04-01

    Hydrothermal activity along mid-ocean ridge axes is ubiquitous, associated with mass, chemical, and heat exchanges between the deep lithosphere and the overlying envelopes, and sustaining chemosynthetic ecosystems at the seafloor. Compared with hydrothermal fields at fast-spreading ridges, those at slow-spreading ones show a large variability, as their location and nature are controlled or influenced by several inter-related parameters: a) tectonic setting, ranging from 'volcanic' systems (along the rift valley floor, volcanic ridges, seamounts) to 'tectonic' ones (rift-bounding faults, oceanic detachment faults); b) the nature of the host rock, owing to the compositional heterogeneity of slow-spreading lithosphere (basalt, gabbro, peridotite); c) the type of heat source (magmatic bodies at depth, hot lithosphere, serpentinization reactions); and d) the associated temperature of outflow fluids (high- vs. low-temperature venting and their relative proportion). A systematic review of the distribution and characteristics of hydrothermal fields along the slow-spreading Mid-Atlantic Ridge suggests that long-lived hydrothermal activity is concentrated either at oceanic detachment faults or along volcanic segments with evidence of robust magma supply to the axis. A detailed study of the magmatically robust Lucky Strike segment suggests that all present and past hydrothermal activity is found at the center of the segment. The association of these fields with central volcanoes, and the absence of indicators of hydrothermal activity along the remainder of the ridge segment, suggest that long-lived hydrothermal activity in these volcanic systems is maintained by the enhanced melt supply and the associated magma chamber(s) required to build these volcanic edifices. In this setting, hydrothermal outflow zones at the seafloor are systematically controlled by faults, indicating that hydrothermal fluids in the shallow crust exploit permeable fault zones to circulate.
    While less studied, similar hydrothermal systems are found elsewhere associated with other central volcanoes along the ridge axis (e.g., Menez Gwen at the Mid-Atlantic Ridge and Soria Moria or Troll Wall at the Arctic Ridges). Long-lived hydrothermal activity plays an important role in controlling the thermal structure of the lithosphere and its accretion at and near the axis, and in determining the distribution and biogeography of vent communities. Along slow-spreading segments, long-lived hydrothermal activity can be sustained both by volcanic systems (e.g., Lucky Strike) and by tectonic systems (oceanic detachment faults). While magmatic and hydrothermal activity is now relatively well understood in volcanic systems (e.g., Lucky Strike), tectonic systems (oceanic detachment faults) require further integrated studies to constrain the links between the long-lived localization of deformation along oceanic detachment faults, hydrothermal activity, and the origin and nature of the off-axis heat sources driving hydrothermal circulation.

  9. New procedure for gear fault detection and diagnosis using instantaneous angular speed

    NASA Astrophysics Data System (ADS)

    Li, Bing; Zhang, Xining; Wu, Jili

    2017-02-01

    Besides the extreme complexity of gear dynamics, fault diagnosis results based on vibration signals are sometimes misled and even distorted by the interference of the transmission channel or other components such as bearings and bars. Recently, the research field of Instantaneous Angular Speed (IAS) has attracted significant attention due to its advantages over conventional vibration analysis. On the basis of the IAS signal's advantages, this paper presents a new feature extraction method combining Empirical Mode Decomposition (EMD) and Autocorrelation Local Cepstrum (ALC) for fault diagnosis of a sophisticated multistage gearbox. Firstly, as a pre-processing step, signal reconstruction is employed to address the oversampling issue caused by the high resolution of the angular sensor and the test speed. Then the adaptive EMD is used to acquire a number of Intrinsic Mode Functions (IMFs). Nevertheless, not all the IMFs are needed for further analysis, since different IMFs have different sensitivities to the fault. Hence, the cosine similarity metric is introduced to select the most sensitive IMF. Even so, the sensitive IMF alone is insufficient for gear fault diagnosis due to the weakness of the component related to the gear fault. Therefore, as the final step, ALC is used for signal de-noising and feature extraction. The effectiveness and robustness of the new approach have been validated experimentally on two gear test rigs with gears under different working conditions. Diagnosis results show that the new approach is capable of effectively handling gear fault diagnosis, i.e., the highlighted quefrency and its rahmonics corresponding to the rotary period and its multiples are displayed clearly in the cepstrum record of the proposed method.
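
    Two steps of the chain, cosine-similarity component selection and a cepstrum, can be sketched as follows. A full EMD is omitted; the "components" are synthetic stand-ins for IMFs, and the frequencies and template are illustrative, not from the paper.

```python
import numpy as np

def cosine_similarity(a, b):
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def select_sensitive(components, reference):
    """Pick the component most similar to a fault-frequency template
    (a stand-in for the paper's sensitive-IMF selection)."""
    return int(np.argmax([abs(cosine_similarity(c, reference))
                          for c in components]))

def real_cepstrum(x):
    """Real cepstrum: IFFT of the log magnitude spectrum. Periodic
    fault impacts appear as peaks at the quefrency of their period."""
    return np.real(np.fft.ifft(np.log(np.abs(np.fft.fft(x)) + 1e-12)))

fs = 1000.0
t = np.arange(0, 1.0, 1 / fs)
# Synthetic stand-ins for IMFs; only one carries the 30 Hz fault tone.
components = [np.sin(2 * np.pi * 5.0 * t),
              np.sin(2 * np.pi * 30.0 * t) + 0.1 * np.sin(2 * np.pi * 120.0 * t),
              np.cos(2 * np.pi * 250.0 * t)]
idx = select_sensitive(components, np.sin(2 * np.pi * 30.0 * t))
cep = real_cepstrum(components[idx])
print(idx)  # 1
```

    The paper's ALC additionally autocorrelates before the cepstrum to suppress noise; this sketch shows only the selection and cepstrum ingredients.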

  10. Upper Neogene stratigraphy and tectonics of Death Valley - A review

    USGS Publications Warehouse

    Knott, J.R.; Sarna-Wojcicki, A. M.; Machette, M.N.; Klinger, R.E.

    2005-01-01

    New tephrochronologic, soil-stratigraphic and radiometric-dating studies over the last 10 years have generated a robust numerical stratigraphy for Upper Neogene sedimentary deposits throughout Death Valley. Critical to this improved stratigraphy are correlated or radiometrically-dated tephra beds and tuffs that range in age from > 3.58 Ma to < 1.1 ka. These tephra beds and tuffs establish relations among the Upper Pliocene to Middle Pleistocene sedimentary deposits at Furnace Creek basin, Nova basin, Ubehebe-Lake Rogers basin, Copper Canyon, Artists Drive, Kit Fox Hills, and Confidence Hills. New geologic formations have been described in the Confidence Hills and at Mormon Point. This new geochronology also establishes maximum and minimum ages for Quaternary alluvial fans and Lake Manly deposits. Facies associated with the tephra beds show that at ~3.3 Ma the Furnace Creek basin was a northwest-southeast-trending lake flanked by alluvial fans. This paleolake extended from Furnace Creek to Ubehebe. Based on the new stratigraphy, the Death Valley fault system can be divided into four main fault zones: the dextral, Quaternary-age Northern Death Valley fault zone; the dextral, pre-Quaternary Furnace Creek fault zone; the oblique-normal Black Mountains fault zone; and the dextral Southern Death Valley fault zone. Post-3.3 Ma geometric, structural, and kinematic changes in the Black Mountains and Towne Pass fault zones led to the breakup of the Furnace Creek basin and uplift of the Copper Canyon and Nova basins. Internal kinematics of northern Death Valley are interpreted as either rotation of blocks or normal slip along the northeast-southwest-trending Towne Pass and Tin Mountain fault zones within the Eastern California shear zone. © 2005 Elsevier B.V. All rights reserved.

  11. Application of an improved maximum correlated kurtosis deconvolution method for fault diagnosis of rolling element bearings

    NASA Astrophysics Data System (ADS)

    Miao, Yonghao; Zhao, Ming; Lin, Jing; Lei, Yaguo

    2017-08-01

    The extraction of periodic impulses, which are important indicators of rolling bearing faults, from vibration signals is of considerable significance for fault diagnosis. Maximum correlated kurtosis deconvolution (MCKD), developed from minimum entropy deconvolution (MED), has proven an efficient tool for enhancing the periodic impulses in the diagnosis of rolling element bearings and gearboxes. However, challenges remain when MCKD is applied to bearings operating under harsh working conditions, stemming mainly from the rigorous requirements on the multiple input parameters and the complicated resampling process. To overcome these limitations, an improved MCKD (IMCKD) is presented in this paper. The new method estimates the iterative period by computing the autocorrelation of the envelope signal rather than relying on a user-provided prior period. Moreover, the iterative period gradually approaches the true fault period as it is updated after every iterative step. Since IMCKD is unaffected by impulse signals with high kurtosis values, the new method selects the filtered signal with maximum kurtosis as the final choice from all candidates within the assigned iteration count. Compared with MCKD, IMCKD has three advantages. First, because neither a prior period nor the choice of the order of shift needs to be considered, IMCKD is more efficient and more robust. Second, no resampling is necessary for IMCKD, which greatly simplifies subsequent frequency spectrum and envelope spectrum analysis without resetting the sampling rate. Third, IMCKD performs significantly better in diagnosing compound bearing faults, which expands its application range. Finally, the effectiveness and superiority of IMCKD are validated on a number of simulated bearing fault signals and by application to compound-fault and single-fault diagnosis of a locomotive bearing.
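    The key novelty, estimating the iterative period from the autocorrelation of the envelope instead of a user-supplied prior, can be sketched as follows. This is a simplified illustration on synthetic data, not the published IMCKD implementation; the signal parameters (sampling rate, impulse shape, lag bounds) are invented.

```python
import numpy as np
from scipy.signal import hilbert

def estimate_fault_period(x, min_lag, max_lag):
    """Estimate the dominant impulse period (in samples) from the
    autocorrelation of the signal envelope, in place of a prior period."""
    env = np.abs(hilbert(x))               # Hilbert envelope
    env = env - env.mean()                 # remove DC before autocorrelation
    ac = np.correlate(env, env, mode="full")[env.size - 1:]
    return min_lag + int(np.argmax(ac[min_lag:max_lag]))

# Synthetic bearing signal: decaying impulses every 100 samples plus noise.
rng = np.random.default_rng(1)
period, n = 100, 3000
x = 0.2 * rng.standard_normal(n)
for start in range(0, n - 200, period):
    k = np.arange(200)
    x[start:start + 200] += np.exp(-0.05 * k) * np.sin(2 * np.pi * 0.25 * k)
T = estimate_fault_period(x, min_lag=20, max_lag=500)
```

The envelope is (approximately) periodic at the impulse spacing, so its autocorrelation peaks at that lag; searching a plausible lag window avoids the trivial zero-lag maximum.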

  12. Joint High-Order Synchrosqueezing Transform and Multi-Taper Empirical Wavelet Transform for Fault Diagnosis of Wind Turbine Planetary Gearbox under Nonstationary Conditions.

    PubMed

    Hu, Yue; Tu, Xiaotong; Li, Fucai; Meng, Guang

    2018-01-07

    Wind turbines usually operate under nonstationary conditions, such as wide-range speed fluctuation and time-varying load. Their critical component, the planetary gearbox, is prone to malfunction or failure, which leads to downtime and repair costs. Therefore, fault diagnosis and condition monitoring for the planetary gearbox in wind turbines is a vital research topic. Meanwhile, the signals measured by the vibration sensors mounted in the gearbox exhibit time-varying and nonstationary features. In this study, a novel time-frequency method based on the high-order synchrosqueezing transform (SST) and the multi-taper empirical wavelet transform (MTEWT) is proposed for the wind turbine planetary gearbox under nonstationary conditions. The high-order SST uses accurate instantaneous frequency approximations to obtain a sharper time-frequency representation (TFR). Because the acquired signal consists of many components, such as the meshing and rotating components of the gear and bearing, the fault component may be masked by other, unrelated components. The MTEWT is used to separate the fault feature from these masking components. A variety of experimental signals of the wind turbine planetary gearbox under nonstationary conditions have been analyzed to demonstrate the effectiveness and robustness of the proposed method. Results show that the proposed method is effective in diagnosing both gear and bearing faults.
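    The instantaneous frequency approximation at the heart of synchrosqueezing can be illustrated with a minimal first-order estimate, the derivative of the analytic-signal phase; the high-order SST and the MTEWT themselves are substantially more involved and are not reproduced here. The chirp parameters below are invented to mimic speed fluctuation.

```python
import numpy as np
from scipy.signal import hilbert

def instantaneous_frequency(x, fs):
    """First-order IF estimate: derivative of the analytic-signal phase."""
    phase = np.unwrap(np.angle(hilbert(x)))
    return np.diff(phase) * fs / (2 * np.pi)

# Linear chirp sweeping 10 -> 50 Hz over 1 s: f(t) = 10 + 40 t.
fs = 1000
t = np.arange(0, 1, 1 / fs)
x = np.sin(2 * np.pi * (10 * t + 20 * t ** 2))
f_mid = instantaneous_frequency(x, fs)[fs // 2]   # IF near t = 0.5 s, ~30 Hz
```

Synchrosqueezing reassigns time-frequency energy to such IF estimates; the high-order variant replaces this first-order phase derivative with more accurate local approximations.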

  13. Joint High-Order Synchrosqueezing Transform and Multi-Taper Empirical Wavelet Transform for Fault Diagnosis of Wind Turbine Planetary Gearbox under Nonstationary Conditions

    PubMed Central

    Li, Fucai; Meng, Guang

    2018-01-01

    Wind turbines usually operate under nonstationary conditions, such as wide-range speed fluctuation and time-varying load. Their critical component, the planetary gearbox, is prone to malfunction or failure, which leads to downtime and repair costs. Therefore, fault diagnosis and condition monitoring for the planetary gearbox in wind turbines is a vital research topic. Meanwhile, the signals measured by the vibration sensors mounted in the gearbox exhibit time-varying and nonstationary features. In this study, a novel time-frequency method based on the high-order synchrosqueezing transform (SST) and the multi-taper empirical wavelet transform (MTEWT) is proposed for the wind turbine planetary gearbox under nonstationary conditions. The high-order SST uses accurate instantaneous frequency approximations to obtain a sharper time-frequency representation (TFR). Because the acquired signal consists of many components, such as the meshing and rotating components of the gear and bearing, the fault component may be masked by other, unrelated components. The MTEWT is used to separate the fault feature from these masking components. A variety of experimental signals of the wind turbine planetary gearbox under nonstationary conditions have been analyzed to demonstrate the effectiveness and robustness of the proposed method. Results show that the proposed method is effective in diagnosing both gear and bearing faults. PMID:29316668

  14. Real-Time Monitoring and Fault Diagnosis of a Low Power Hub Motor Using Feedforward Neural Network.

    PubMed

    Şimşir, Mehmet; Bayır, Raif; Uyaroğlu, Yılmaz

    2016-01-01

    Low power hub motors are widely used in electromechanical systems such as electrical bicycles and solar vehicles due to their robustness and compact structure. Systems driven by hub motors (in-wheel motors) encounter both previously defined and undefined faults under operation, which may inevitably interrupt the operation of the electromechanical system and, at times, cause economic losses. Therefore, to keep system operation sustainable, the motor should be precisely monitored and its faults diagnosed from various significant motor parameters. In this study, an artificial feedforward backpropagation neural network approach is proposed for real-time monitoring and fault diagnosis of the hub motor based on measurements of seven main system parameters. To construct the necessary model, we trained it in the MATLAB environment on a data set of 4160 samples, each with 7 parameters, until the best model was obtained. The results are encouraging and meaningful for the specific motor, and the developed model may be applicable to other types of hub motors. The final model of the whole system was embedded into an Arduino Due microcontroller card, and a mobile real-time monitoring and fault diagnosis system prototype for the hub motor was designed and manufactured.
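    A minimal numpy sketch of a feedforward backpropagation classifier over 7 input parameters is shown below. The data, labels, and network size are synthetic placeholders, not the paper's 4160-sample motor data set or its MATLAB-trained model.

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.standard_normal((400, 7))               # 7 motor parameters (synthetic)
y = (X.sum(axis=1) > 0).astype(float)           # toy fault / no-fault label

# One hidden layer trained with plain full-batch backpropagation.
W1 = 0.5 * rng.standard_normal((7, 8)); b1 = np.zeros(8)
W2 = 0.5 * rng.standard_normal((8, 1)); b2 = np.zeros(1)
lr = 0.5
for _ in range(2000):
    h = np.tanh(X @ W1 + b1)                     # hidden activations
    p = 1.0 / (1.0 + np.exp(-(h @ W2 + b2)))     # sigmoid output
    grad_out = (p - y[:, None]) / len(X)         # cross-entropy gradient wrt logits
    grad_h = (grad_out @ W2.T) * (1.0 - h ** 2)  # backpropagate through tanh
    W2 -= lr * (h.T @ grad_out); b2 -= lr * grad_out.sum(axis=0)
    W1 -= lr * (X.T @ grad_h);  b1 -= lr * grad_h.sum(axis=0)

h = np.tanh(X @ W1 + b1)
p = 1.0 / (1.0 + np.exp(-(h @ W2 + b2)))
acc = float(((p[:, 0] > 0.5) == (y > 0.5)).mean())
```

On this linearly separable toy problem the network fits easily; a real deployment would add train/test splitting and input normalization.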

  15. Real-Time Monitoring and Fault Diagnosis of a Low Power Hub Motor Using Feedforward Neural Network

    PubMed Central

    Şimşir, Mehmet; Bayır, Raif; Uyaroğlu, Yılmaz

    2016-01-01

    Low power hub motors are widely used in electromechanical systems such as electrical bicycles and solar vehicles due to their robustness and compact structure. Systems driven by hub motors (in-wheel motors) encounter both previously defined and undefined faults under operation, which may inevitably interrupt the operation of the electromechanical system and, at times, cause economic losses. Therefore, to keep system operation sustainable, the motor should be precisely monitored and its faults diagnosed from various significant motor parameters. In this study, an artificial feedforward backpropagation neural network approach is proposed for real-time monitoring and fault diagnosis of the hub motor based on measurements of seven main system parameters. To construct the necessary model, we trained it in the MATLAB environment on a data set of 4160 samples, each with 7 parameters, until the best model was obtained. The results are encouraging and meaningful for the specific motor, and the developed model may be applicable to other types of hub motors. The final model of the whole system was embedded into an Arduino Due microcontroller card, and a mobile real-time monitoring and fault diagnosis system prototype for the hub motor was designed and manufactured. PMID:26819590

  16. Fault-Sensitivity and Wear-Out Analysis of VLSI Systems.

    DTIC Science & Technology

    1995-06-01

    [OCR fragment] Mixed-mode hierarchical fault description; fault simulation of transient and stuck-at faults with specified location and time; automatic fault injection and tracing. Cited: J. Sosnowski, "Evaluation of transient hazards in microprocessor controllers," Digest, FTCS-16, The Sixteenth

  17. Development and evaluation of a fault-tolerant multiprocessor (FTMP) computer. Volume 1: FTMP principles of operation

    NASA Technical Reports Server (NTRS)

    Smith, T. B., Jr.; Lala, J. H.

    1983-01-01

    The basic organization of the fault-tolerant multiprocessor (FTMP) is that of a general purpose homogeneous multiprocessor. Three processors operate on a shared system (memory and I/O) bus. Replication, tight synchronization of all elements, and hardware voting are employed to detect and correct any single fault. Reconfiguration is then employed to repair the fault. Multiple faults may be tolerated as a sequence of single faults with repair between fault occurrences.
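    The hardware-voting idea can be illustrated with a bitwise majority voter: any single faulty replica is outvoted by the two good copies. This is a software sketch of the concept, not FTMP's actual voter hardware, and the bit patterns are arbitrary.

```python
def vote(a, b, c):
    """Bitwise majority of three replicated word results; any single
    faulty replica is outvoted by the two agreeing good ones."""
    return (a & b) | (a & c) | (b & c)

good = 0b1011_0010
assert vote(good, good, good) == good
# A single faulty processor (stuck or flipped bits) is masked:
assert vote(good, good, 0b0000_1111) == good
assert vote(0b1111_1111, good, good) == good
```

Comparing each replica's word against the voted result (e.g., `a ^ vote(a, b, c)`) additionally identifies which unit disagreed, which is what drives reconfiguration.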

  18. Neural networks and fault probability evaluation for diagnosis issues.

    PubMed

    Kourd, Yahia; Lefebvre, Dimitri; Guersi, Noureddine

    2014-01-01

    This paper presents a new FDI technique for fault detection and isolation in unknown nonlinear systems. The objective of the research is to construct and analyze residuals by means of artificial intelligence and probabilistic methods. Artificial neural networks are first used for modeling: neural network models are designed to learn both the fault-free and the faulty behaviors of the considered systems. Once the residuals are generated, an evaluation using probabilistic criteria is applied to determine the most likely fault among a set of candidate faults. The study also includes a comparison between the contributions of these tools and their limitations, particularly through the establishment of quantitative indicators to assess their performance. Through the computation of a confidence factor, the proposed method can also evaluate the reliability of the FDI decision. The approach is applied to detect and isolate 19 fault candidates in the DAMADICS benchmark, and the results obtained with the proposed scheme are compared with those of a conventional thresholding method.
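    The residual-evaluation step, choosing the most likely fault among candidates and attaching a confidence factor, might look like the following Gaussian-likelihood sketch. The fault names, signatures, and noise level are illustrative assumptions, not values from the paper.

```python
import numpy as np

def diagnose(residual, signatures, sigma=0.1):
    """Score each candidate fault by the Gaussian likelihood of the
    observed residual vector given that fault's nominal signature,
    and normalize the scores into confidence factors."""
    names = list(signatures)
    ll = np.array([-np.sum((residual - signatures[n]) ** 2) / (2 * sigma ** 2)
                   for n in names])
    post = np.exp(ll - ll.max())
    post /= post.sum()                      # confidence factors, sum to 1
    return names[int(np.argmax(post))], dict(zip(names, post))

signatures = {                              # hypothetical residual patterns
    "fault-free":   np.array([0.0, 0.0, 0.0]),
    "valve stuck":  np.array([1.0, 0.2, 0.0]),
    "sensor drift": np.array([0.0, 0.5, 0.5]),
}
best, confidence = diagnose(np.array([0.95, 0.25, 0.05]), signatures)
```

The maximum confidence factor doubles as a reliability indicator for the FDI decision: a flat distribution over candidates signals an unreliable isolation.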

  19. Fault Tree in the Trenches, A Success Story

    NASA Technical Reports Server (NTRS)

    Long, R. Allen; Goodson, Amanda (Technical Monitor)

    2000-01-01

    Getting caught up in the explanation of Fault Tree Analysis (FTA) minutiae is easy. In fact, most FTA literature tends to address FTA concepts and methodology, yet there seem to be few articles addressing actual design changes resulting from the successful application of fault tree analysis. This paper demonstrates how fault tree analysis was used to identify and solve a potentially catastrophic mechanical problem at a rocket motor manufacturer. While developing the fault tree given in this example, the analyst was told by several organizations that the piece of equipment in question had already been evaluated by several committees and organizations, and that the analyst was wasting his time. The fault tree/cutset analysis resulted in a joint redesign of the control system by the tool engineering group and the fault tree analyst, as well as bragging rights for the analyst. (That the fault tree found problems where other engineering reviews had failed was not lost on the other engineering groups.) Even more interesting, this was the analyst's first fault tree, which further demonstrates how effective fault tree analysis can be in guiding (i.e., forcing) the analyst to take a methodical approach to evaluating complex systems.

  20. A technique for evaluating the application of the pin-level stuck-at fault model to VLSI circuits

    NASA Technical Reports Server (NTRS)

    Palumbo, Daniel L.; Finelli, George B.

    1987-01-01

    Accurate fault models are required to conduct the experiments defined in validation methodologies for highly reliable fault-tolerant computers (e.g., computers with a probability of failure of 10^-9 for a 10-hour mission). Described is a technique by which a researcher can evaluate the capability of the pin-level stuck-at fault model to simulate true error-behavior symptoms in very large scale integrated (VLSI) digital circuits. The technique is based on a statistical comparison of the error behavior resulting from faults applied at the pins of, and internal to, a VLSI circuit. As an example application of the technique, the error behavior of a microprocessor simulation subjected to internal stuck-at faults is compared with the error behavior resulting from pin-level stuck-at faults. The error behavior is characterized by the time between errors and the duration of errors. Based on this example data, the pin-level stuck-at fault model is found to deliver less than ideal performance. However, with respect to the class of faults which cause a system crash, the pin-level stuck-at fault model is found to provide a good modeling capability.
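    Internal stuck-at fault injection can be illustrated on a toy gate-level model. The sketch below forces an internal net of a one-bit full adder to 0 and exhaustively counts erroneous outputs; the paper's microprocessor-scale simulation and its statistical comparison of error-behavior distributions are, of course, far larger.

```python
from itertools import product

def full_adder(a, b, cin, stuck=None):
    """One-bit full adder; `stuck` optionally forces the internal
    half-sum net p = a XOR b to 0 or 1 (a stuck-at fault)."""
    p = a ^ b
    if stuck is not None:
        p = stuck                      # inject stuck-at-0 / stuck-at-1 on net p
    s = p ^ cin
    cout = (a & b) | (p & cin)
    return s, cout

# Exhaustively compare fault-free vs stuck-at-0 behaviour on net p.
errors = sum(full_adder(a, b, c) != full_adder(a, b, c, stuck=0)
             for a, b, c in product((0, 1), repeat=3))   # 4 of 8 input vectors err
```

Note that the same fault would be invisible to pin-level injection alone, which is exactly the modeling gap the paper's technique quantifies.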

  1. Fault zone hydrogeology

    NASA Astrophysics Data System (ADS)

    Bense, V. F.; Gleeson, T.; Loveless, S. E.; Bour, O.; Scibek, J.

    2013-12-01

    Deformation along faults in the shallow crust (< 1 km) introduces permeability heterogeneity and anisotropy, which has an important impact on processes such as regional groundwater flow, hydrocarbon migration, and hydrothermal fluid circulation. Fault zones have the capacity to act as hydraulic conduits connecting shallow and deep geological environments, but simultaneously the cores of many faults form effective barriers to flow. The direct evaluation of the impact of faults on fluid flow patterns remains a challenge and requires a multidisciplinary research effort by structural geologists and hydrogeologists; however, these disciplines often use different methods with little interaction between them. In this review, we document the current multidisciplinary understanding of fault zone hydrogeology. We discuss surface and subsurface observations from diverse rock types, from unlithified and lithified clastic sediments through to carbonate, crystalline, and volcanic rocks. For each rock type, we evaluate geological deformation mechanisms, hydrogeologic observations, and conceptual models of fault zone hydrogeology. Outcrop observations indicate that fault zones commonly have a permeability structure suggesting they should act as complex conduit-barrier systems in which along-fault flow is encouraged and across-fault flow is impeded. Hydrogeological observations of fault zones reported in the literature show broad qualitative agreement with outcrop-based conceptual models of fault zone hydrogeology. Nevertheless, the specific impact of a particular fault permeability structure on fault zone hydrogeology can only be assessed when the hydrogeological context of the fault zone is considered, not from outcrop observations alone.
To gain a more integrated, comprehensive understanding of fault zone hydrogeology, we foresee numerous synergistic opportunities and challenges for the disciplines of structural geology and hydrogeology to co-evolve and address remaining questions by co-locating study areas, sharing approaches and fusing data, developing conceptual models from hydrogeologic data, numerical modeling, and training interdisciplinary scientists.

  2. Comparison of chiller models for use in model-based fault detection

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sreedharan, Priya; Haves, Philip

    Selecting the model is an important and essential step in model-based fault detection and diagnosis (FDD). Factors considered in evaluating a model include accuracy, training data requirements, calibration effort, generality, and computational requirements. The objective of this study was to evaluate different modeling approaches for their applicability to model-based FDD of vapor compression chillers. Three different models were studied: the Gordon and Ng Universal Chiller model (2nd generation) and a modified version of the ASHRAE Primary Toolkit model, which are both based on first principles, and the DOE-2 chiller model, as implemented in CoolTools™, which is empirical. The models were compared in terms of their ability to reproduce the observed performance of an older centrifugal chiller operating in a commercial office building and a newer centrifugal chiller in a laboratory. All three models displayed similar levels of accuracy. Of the first-principles models, the Gordon-Ng model has the advantage of being linear in the parameters, which allows more robust parameter estimation methods to be used and facilitates estimation of the uncertainty in the parameter values. The ASHRAE Toolkit model may have advantages when refrigerant temperature measurements are also available. The DOE-2 model can be expected to have advantages when very limited data are available to calibrate the model, as long as one of the previously identified models in the CoolTools library matches the performance of the chiller in question.
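    Why "linear in the parameters" matters: such a model can be fitted by ordinary least squares in closed form, with straightforward residual-based uncertainty estimates. The sketch below uses generic regressors and invented coefficients as a stand-in; it is not the actual Gordon-Ng chiller formulation.

```python
import numpy as np

rng = np.random.default_rng(0)
theta_true = np.array([2.0, -0.5, 1.5])     # hypothetical model coefficients

# Synthetic calibration data: y is linear in the unknown parameters even
# if the regressors themselves are nonlinear functions of temperatures/loads.
X = np.column_stack([rng.uniform(0.5, 1.5, 200),
                     rng.uniform(5.0, 15.0, 200),
                     np.ones(200)])
y = X @ theta_true + 0.05 * rng.standard_normal(200)

theta_hat, *_ = np.linalg.lstsq(X, y, rcond=None)   # closed-form estimate
```

With a nonlinear-in-parameters model, the same calibration would require iterative optimization with initial guesses and no guarantee of a global optimum, which is the robustness advantage the abstract alludes to.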

  3. Fault diagnostic instrumentation design for environmental control and life support systems

    NASA Technical Reports Server (NTRS)

    Yang, P. Y.; You, K. C.; Wynveen, R. A.; Powell, J. D., Jr.

    1979-01-01

    As a development phase moves toward flight hardware, system availability becomes an important design aspect, requiring high reliability and maintainability. As part of continuous development efforts, a program to evaluate, design, and demonstrate advanced instrumentation fault diagnostics was successfully completed. Fault-tolerance designs for reliability, along with other instrumentation capabilities to increase maintainability, were evaluated and studied.

  4. Optimal Non-Invasive Fault Classification Model for Packaged Ceramic Tile Quality Monitoring Using MMW Imaging

    NASA Astrophysics Data System (ADS)

    Agarwal, Smriti; Singh, Dharmendra

    2016-04-01

    Millimeter wave (MMW) frequency has emerged as an efficient tool for various stand-off imaging applications. In this paper, we address a novel MMW imaging application: non-invasive quality estimation of packaged goods for industrial quality monitoring. An active MMW imaging radar operating at 60 GHz was designed for concealed fault estimation. Ceramic tiles covered with commonly used packaging cardboard served as concealed targets for undercover fault classification. State-of-the-art computer vision feature extraction techniques, viz., the discrete Fourier transform (DFT), wavelet transform (WT), principal component analysis (PCA), gray-level co-occurrence texture (GLCM), and histogram of oriented gradients (HOG), were compared with respect to their ability to generate efficient and discriminative feature vectors for undercover target fault classification. An extensive number of experiments were performed with different ceramic tile fault configurations, viz., vertical, horizontal, random, and diagonal cracks, along with non-faulty tiles. Further, an independent algorithm validation was performed, demonstrating classification accuracies of 80, 86.67, 73.33, and 93.33 % for the DFT, WT, PCA, GLCM, and HOG feature-based artificial neural network (ANN) classifier models, respectively. The classification results show good capability of the HOG feature extraction technique for non-destructive quality inspection, with appreciably low false alarms compared to the other techniques. A robust and optimal image feature-based neural network classification model is thereby proposed for non-invasive, automatic fault monitoring in a commercially competitive industrial setting.
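    Of the compared features, the gray-level co-occurrence matrix is compact enough to sketch. Below is a minimal single-offset GLCM on a standard toy image; the paper's full pipeline (DFT/WT/PCA/HOG features plus an ANN classifier on radar imagery) is not reproduced.

```python
import numpy as np

def glcm(img, levels, offset=(0, 1)):
    """Gray-level co-occurrence counts for a single non-negative pixel
    offset (default: horizontal right neighbour)."""
    dr, dc = offset
    out = np.zeros((levels, levels), dtype=int)
    rows, cols = img.shape
    for r in range(rows - dr):
        for c in range(cols - dc):
            out[img[r, c], img[r + dr, c + dc]] += 1
    return out

img = np.array([[0, 0, 1, 1],
                [0, 0, 1, 1],
                [0, 2, 2, 2],
                [2, 2, 3, 3]])
P = glcm(img, levels=4)    # e.g. P[2, 2] == 3 co-occurrences of level 2
```

Texture statistics (contrast, homogeneity, energy) computed from the normalized matrix then serve as the classifier's feature vector; in practice several offsets/angles are accumulated.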

  5. Safety Analysis and Protection Measures of the Control System of the Pulsed High Magnetic Field Facility in WHMFC

    NASA Astrophysics Data System (ADS)

    Shi, J. T.; Han, X. T.; Xie, J. F.; Yao, L.; Huang, L. T.; Li, L.

    2013-03-01

    A Pulsed High Magnetic Field Facility (PHMFF) has been established at the Wuhan National High Magnetic Field Center (WHMFC), and various protection measures are applied in its control system. To improve the reliability and robustness of the control system, a safety analysis of the PHMFF was carried out based on the Fault Tree Analysis (FTA) technique. The function and realization of five protection systems are given: the sequence experiment operation system, the safety assistant system, the emergency stop system, the fault detection and processing system, and the accident isolation protection system. Tests and operation indicate that these measures improve the safety of the facility and ensure the safety of personnel.

  6. Cooperative fault-tolerant distributed computing U.S. Department of Energy Grant DE-FG02-02ER25537 Final Report

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sunderam, Vaidy S.

    2007-01-09

    The Harness project has developed novel software frameworks for the execution of high-end simulations in a fault-tolerant manner on distributed resources. The H2O subsystem comprises the kernel of the Harness framework, and controls the key functions of resource management across multiple administrative domains, especially issues of access and allocation. It is based on a “pluggable” architecture that enables the aggregated use of distributed heterogeneous resources for high performance computing. The major contributions of the Harness II project result in significantly enhancing the overall computational productivity of high-end scientific applications by enabling robust, failure-resilient computations on cooperatively pooled resource collections.

  7. Fault-tolerant quantum error detection.

    PubMed

    Linke, Norbert M; Gutierrez, Mauricio; Landsman, Kevin A; Figgatt, Caroline; Debnath, Shantanu; Brown, Kenneth R; Monroe, Christopher

    2017-10-01

    Quantum computers will eventually reach a size at which quantum error correction becomes imperative. Quantum information can be protected from qubit imperfections and flawed control operations by encoding a single logical qubit in multiple physical qubits. This redundancy allows the extraction of error syndromes and the subsequent detection or correction of errors without destroying the logical state itself through direct measurement. We show the encoding and syndrome measurement of a fault-tolerantly prepared logical qubit via an error detection protocol on four physical qubits, represented by trapped atomic ions. This demonstrates the robustness of a logical qubit to imperfections in the very operations used to encode it. The advantage persists in the face of large added error rates and experimental calibration errors.
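    For bit-flip errors, the syndrome idea has a simple classical analogue: a parity check over four bits flags any single flip without identifying or correcting it, which is detection rather than correction. The sketch below is that classical analogue only, not a simulation of the trapped-ion protocol or its stabilizer measurements.

```python
def syndrome(bits):
    """Parity of four bits: 0 for a valid even-parity codeword,
    1 if an odd number of bit flips occurred (error detected)."""
    return bits[0] ^ bits[1] ^ bits[2] ^ bits[3]

codeword = [0, 1, 1, 0]               # even parity: syndrome 0
assert syndrome(codeword) == 0

# Any single bit-flip error is detected (though not located):
for i in range(4):
    corrupted = codeword.copy()
    corrupted[i] ^= 1
    assert syndrome(corrupted) == 1
```

In the quantum setting the analogous parity (a stabilizer such as ZZZZ) must be extracted onto an ancilla without measuring, and thereby destroying, the encoded logical state, and a second stabilizer (XXXX) is needed to catch phase flips as well.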

  8. AEGIS: a robust and scalable real-time public health surveillance system.

    PubMed

    Reis, Ben Y; Kirby, Chaim; Hadden, Lucy E; Olson, Karen; McMurry, Andrew J; Daniel, James B; Mandl, Kenneth D

    2007-01-01

    In this report, we describe the Automated Epidemiological Geotemporal Integrated Surveillance system (AEGIS), developed for real-time population health monitoring in the state of Massachusetts. AEGIS provides public health personnel with automated near-real-time situational awareness of utilization patterns at participating healthcare institutions, supporting surveillance of bioterrorism and naturally occurring outbreaks. As real-time public health surveillance systems become integrated into regional and national surveillance initiatives, the challenges of scalability, robustness, and data security become increasingly prominent. A modular and fault tolerant design helps AEGIS achieve scalability and robustness, while a distributed storage model with local autonomy helps to minimize risk of unauthorized disclosure. The report includes a description of the evolution of the design over time in response to the challenges of a regional and national integration environment.

  9. Distributed bearing fault diagnosis based on vibration analysis

    NASA Astrophysics Data System (ADS)

    Dolenc, Boštjan; Boškoski, Pavle; Juričić, Đani

    2016-01-01

    Distributed bearing faults appear under various circumstances, for example due to electroerosion or the progression of localized faults. Bearings with distributed faults tend to generate more complex vibration patterns than those with localized faults. Despite the frequent occurrence of such faults, their diagnosis has attracted limited attention. This paper examines a method for the diagnosis of distributed bearing faults employing vibration analysis. The generated vibration patterns are modeled by incorporating the geometrical imperfections of the bearing components. Comparing envelope spectra of vibration signals shows that one can distinguish between localized and distributed faults. Furthermore, a diagnostic procedure for the detection of distributed faults is proposed. It is evaluated on several bearings with naturally developed distributed faults, which are compared with fault-free bearings and bearings with localized faults. It is shown experimentally that features extracted from vibrations under fault-free, localized-fault, and distributed-fault conditions form clearly separable clusters, thus enabling diagnosis.
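    The envelope-spectrum comparison used above is a standard computation that can be sketched directly. The amplitude-modulated test signal below is synthetic, with an assumed 30 Hz modulation standing in for a bearing fault frequency.

```python
import numpy as np
from scipy.signal import hilbert

def envelope_spectrum(x, fs):
    """Magnitude spectrum of the signal envelope (Hilbert demodulation)."""
    env = np.abs(hilbert(x))
    env -= env.mean()                        # drop the DC component
    spec = np.abs(np.fft.rfft(env)) / len(env)
    freqs = np.fft.rfftfreq(len(env), 1 / fs)
    return freqs, spec

# Carrier at 200 Hz amplitude-modulated at a 30 Hz "fault" rate.
fs = 2000
t = np.arange(0, 1, 1 / fs)
x = (1 + 0.8 * np.cos(2 * np.pi * 30 * t)) * np.sin(2 * np.pi * 200 * t)
freqs, spec = envelope_spectrum(x, fs)
peak = freqs[np.argmax(spec)]                # dominant envelope line near 30 Hz
```

Localized faults produce sharp envelope-spectrum lines at the characteristic fault frequency; distributed faults smear this energy, which is what makes the spectra distinguishable.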

  10. Transform Faults and Lithospheric Structure: Insights from Numerical Models and Shipboard and Geodetic Observations

    NASA Astrophysics Data System (ADS)

    Takeuchi, Christopher S.

    In this dissertation, I study the influence of transform faults on the structure and deformation of the lithosphere, using shipboard and geodetic observations as well as numerical experiments. I use marine topography, gravity, and magnetics to examine the effects of the large age-offset Andrew Bain transform fault on accretionary processes within two adjacent segments of the Southwest Indian Ridge. I infer from morphology, high gravity, and low magnetization that the extremely cold and thick lithosphere associated with the Andrew Bain strongly suppresses melt production and crustal emplacement to the west of the transform fault. These effects are counteracted by enhanced temperature and melt production near the Marion Hotspot, east of the transform fault. I use numerical models to study the development of lithospheric shear zones underneath continental transform faults (e.g. the San Andreas Fault in California), with a particular focus on thermomechanical coupling and shear heating produced by long-term fault slip. I find that these processes may give rise to long-lived localized shear zones, and that such shear zones may in part control the magnitude of stress in the lithosphere. Localized ductile shear participates in both interseismic loading and postseismic relaxation, and predictions of models including shear zones are within observational constraints provided by geodetic and surface heat flow data. I numerically investigate the effects of shear zones on three-dimensional postseismic deformation. I conclude that the presence of a thermally-activated shear zone minimally impacts postseismic deformation, and that thermomechanical coupling alone is unable to generate sufficient localization for postseismic relaxation within a ductile shear zone to kinematically resemble that by aseismic fault creep (afterslip). 
I find that the current record of geodetic observations of postseismic deformation does not provide robust power to discriminate between candidate linear and power-law rheologies for the sub-Mojave Desert mantle, but longer observations may potentially allow such discrimination.

  11. Accounting for uncertain fault geometry in earthquake source inversions - I: theory and simplified application

    NASA Astrophysics Data System (ADS)

    Ragon, Théa; Sladen, Anthony; Simons, Mark

    2018-05-01

    The ill-posed nature of earthquake source estimation derives from several factors including the quality and quantity of available observations and the fidelity of our forward theory. Observational errors are usually accounted for in the inversion process. Epistemic errors, which stem from our simplified description of the forward problem, are rarely dealt with despite their potential to bias the estimate of a source model. In this study, we explore the impact of uncertainties related to the choice of a fault geometry in source inversion problems. The geometry of a fault structure is generally reduced to a set of parameters, such as position, strike and dip, for one or a few planar fault segments. While some of these parameters can be solved for, more often they are fixed to an uncertain value. We propose a practical framework to address this limitation by following a previously implemented method exploring the impact of uncertainties on the elastic properties of our models. We develop a sensitivity analysis to small perturbations of fault dip and position. The uncertainties in fault geometry are included in the inverse problem under the formulation of the misfit covariance matrix that combines both prediction and observation uncertainties. We validate this approach with the simplified case of a fault that extends infinitely along strike, using both Bayesian and optimization formulations of a static inversion. If epistemic errors are ignored, predictions are overconfident in the data and source parameters are not reliably estimated. In contrast, inclusion of uncertainties in fault geometry allows us to infer a robust posterior source model. Epistemic uncertainties can be many orders of magnitude larger than observational errors for great earthquakes (Mw > 8). Not accounting for uncertainties in fault geometry may partly explain observed shallow slip deficits for continental earthquakes. 
Similarly, ignoring the impact of epistemic errors can also bias estimates of near-surface slip and predictions of tsunamis induced by megathrust earthquakes.
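The misfit-covariance idea described above can be sketched numerically. In this illustrative pure-Python example (all numbers hypothetical, not the study's data), the misfit covariance is the sum of observational and prediction (fault-geometry) variances, and including the epistemic term visibly widens the posterior on a single slip parameter:

```python
# Minimal sketch of folding epistemic "prediction" uncertainty into a
# least-squares slip inversion: the misfit covariance C_chi = C_d + C_p
# widens the posterior and prevents overconfident source estimates.

def posterior_slip(G, d, var_d, var_p):
    """1-parameter weighted least squares: d_i = G_i * m + noise.
    var_d: observational variance, var_p: prediction (epistemic) variance."""
    var_chi = var_d + var_p                  # combined misfit covariance (diagonal)
    num = sum(g * x / var_chi for g, x in zip(G, d))
    den = sum(g * g / var_chi for g in G)
    m_hat = num / den                        # posterior mean slip
    m_var = 1.0 / den                        # posterior variance
    return m_hat, m_var

G = [0.8, 1.0, 1.2]          # Green's functions for an assumed fault geometry
d = [0.79, 1.02, 1.25]       # observed displacements (synthetic)

m_obs_only, v_obs_only = posterior_slip(G, d, var_d=0.01, var_p=0.0)
m_full, v_full = posterior_slip(G, d, var_d=0.01, var_p=0.04)
print(v_full > v_obs_only)   # epistemic term inflates posterior uncertainty
```

With a scalar covariance the posterior mean is unchanged; only the uncertainty grows, which is the qualitative behavior the abstract reports.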

  12. A seismological overview of the induced earthquakes in the Duvernay play near Fox Creek, Alberta

    NASA Astrophysics Data System (ADS)

    Schultz, Ryan; Wang, Ruijia; Gu, Yu Jeffrey; Haug, Kristine; Atkinson, Gail

    2017-01-01

    This paper summarizes the current state of understanding regarding the induced seismicity in connection with hydraulic fracturing operations targeting the Duvernay Formation in central Alberta, near the town of Fox Creek. We demonstrate that earthquakes in this region cluster into distinct sequences in time, space, and focal mechanism using (i) cross-correlation detection methods to delineate transient temporal relationships, (ii) double-difference relocations to confirm spatial clustering, and (iii) moment tensor solutions to assess fault motion consistency. The spatiotemporal clustering of the earthquake sequences is strongly related to the nearby hydraulic fracturing operations. In addition, we identify a preference for strike-slip motions on subvertical faults with an approximate 45° P axis orientation, consistent with expectation from the ambient stress field. The hypocentral geometries for two of the largest-magnitude (M 4) sequences that are robustly constrained by local array data provide compelling evidence for planar features starting at Duvernay Formation depths and extending into the shallow Precambrian basement. We interpret these lineaments as subvertical faults orientated approximately north-south, consistent with the regional moment tensor solutions. Finally, we conclude that the sequences were triggered by pore pressure increases in response to hydraulic fracturing stimulations along previously existing faults.
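The cross-correlation detection step (i) can be illustrated with a minimal normalized template-matching sketch; the template and signal below are toy stand-ins for an event waveform and a continuous record, not the study's data:

```python
import math

def norm_xcorr(template, signal):
    """Normalized cross-correlation of a short template against a longer signal."""
    n = len(template)
    tm = sum(template) / n
    t0 = [x - tm for x in template]
    tnorm = math.sqrt(sum(x * x for x in t0))
    out = []
    for i in range(len(signal) - n + 1):
        w = signal[i:i + n]
        wm = sum(w) / n
        w0 = [x - wm for x in w]
        wnorm = math.sqrt(sum(x * x for x in w0))
        cc = sum(a * b for a, b in zip(t0, w0)) / (tnorm * wnorm) if wnorm else 0.0
        out.append(cc)
    return out

template = [0.0, 1.0, -1.0, 0.5, 0.0]
signal = [0.1, 0.0, 0.02, 0.0, 1.0, -1.0, 0.5, 0.0, 0.05, -0.1]
cc = norm_xcorr(template, signal)
detections = [i for i, c in enumerate(cc) if c > 0.9]
print(detections)  # the template recurs at sample 3
```

In practice such detectors slide many event templates over continuous waveforms; correlations above a threshold flag repeating, closely located events.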

  13. Rolling element bearings diagnostics using the Symbolic Aggregate approXimation

    NASA Astrophysics Data System (ADS)

    Georgoulas, George; Karvelis, Petros; Loutas, Theodoros; Stylios, Chrysostomos D.

    2015-08-01

Rolling element bearings are critical components in many engineering assets, so detecting possible faults, especially at an early stage, is of paramount importance: undetected faults may lead to unexpected interruptions of production or, worse, to severe accidents. This research work introduces a method, novel in the field of bearing fault detection, for extracting diagnostic representations of vibration recordings using the Symbolic Aggregate approXimation (SAX) framework and the related intelligent-icons representation. SAX transforms the original real-valued time series into a discrete one, which is then represented by a simple histogram summarizing the occurrences of the chosen symbols/words. Vibration signals from healthy bearings and from bearings with three different fault locations, three different severity levels, and several loading conditions are analyzed. Treating the diagnostic problem as a classification one, the resulting feature vectors feed simple classifiers that achieve remarkably high classification accuracies. Moreover, a sliding-window scheme combined with a simple majority-voting filter further increases the reliability and robustness of the diagnostic method. The results encourage the potential use of the proposed methodology for the diagnosis of bearing faults.
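The SAX pipeline the abstract describes (z-normalization, piecewise aggregate approximation, symbolization against Gaussian breakpoints, then a symbol histogram) can be sketched as follows; the 4-symbol alphabet and the toy "recording" are illustrative choices, not the paper's settings:

```python
import statistics

# Breakpoints for a 4-symbol alphabet under the standard Gaussian assumption
# (values from the SAX literature; 'a' = lowest band).
BREAKPOINTS = [-0.67, 0.0, 0.67]
ALPHABET = "abcd"

def sax_histogram(series, n_segments):
    """z-normalize -> PAA -> symbolize -> histogram of symbol occurrences."""
    mu = statistics.mean(series)
    sd = statistics.pstdev(series) or 1.0
    z = [(x - mu) / sd for x in series]
    seg = len(z) // n_segments
    paa = [sum(z[i * seg:(i + 1) * seg]) / seg for i in range(n_segments)]
    symbols = []
    for v in paa:
        idx = sum(1 for b in BREAKPOINTS if v > b)   # which Gaussian band
        symbols.append(ALPHABET[idx])
    return {s: symbols.count(s) for s in ALPHABET}

vibration = [0.1, 0.2, 2.0, 2.2, -1.9, -2.0, 0.0, -0.1]  # toy "recording"
hist = sax_histogram(vibration, n_segments=4)
print(hist)
```

The resulting histogram is the fixed-length feature vector that, in the paper's scheme, is fed to simple classifiers.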

  14. Agent Based Fault Tolerance for the Mobile Environment

    NASA Astrophysics Data System (ADS)

    Park, Taesoon

This paper presents a fault-tolerance scheme based on mobile agents for reliable mobile computing systems. The mobility of an agent makes it well suited to tracing mobile hosts, and its intelligence makes it an efficient vehicle for fault-tolerance services. Two approaches to implementing the mobile-agent-based fault-tolerant service are presented, and their performance is evaluated and compared with other fault-tolerant schemes.

  15. Software dependability in the Tandem GUARDIAN system

    NASA Technical Reports Server (NTRS)

    Lee, Inhwan; Iyer, Ravishankar K.

    1995-01-01

Based on extensive field failure data for Tandem's GUARDIAN operating system, this paper discusses the evaluation of the dependability of operational software. The software faults considered are major defects that result in processor failures and invoke backup processes to take over. The paper categorizes the underlying causes of software failures and evaluates the effectiveness of the process-pair technique in tolerating software faults. A model to describe the impact of software faults on the reliability of an overall system is proposed. The model is used to evaluate the significance of key factors that determine software dependability and to identify areas for improvement. An analysis of the data shows that about 77% of processor failures that are initially considered due to software are confirmed as software problems. The analysis shows that the use of process pairs to provide checkpointing and restart (originally intended for tolerating hardware faults) allows the system to tolerate about 75% of reported software faults that result in processor failures. The loose coupling between processors, which results in the backup execution (the processor state and the sequence of events) being different from the original execution, is a major reason for the measured software fault tolerance. Over two-thirds (72%) of measured software failures are recurrences of previously reported faults. Modeling based on the data shows that, in addition to reducing the number of software faults, software dependability can be enhanced by reducing the recurrence rate.
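The process-pair mechanism the paper evaluates can be caricatured functionally: the primary checkpoints its state to a backup, and the backup resumes from the last checkpoint when the primary fails. This is only an assumed, minimal model of the idea, not GUARDIAN's actual implementation:

```python
class ProcessPair:
    """Functional sketch of a process pair: the primary checkpoints state to a
    backup, which takes over after a failure. Because the backup re-executes
    from the last checkpoint in a different environment, it can survive the
    software fault that halted the primary."""

    def __init__(self):
        self.checkpoint = None

    def primary_step(self, state, fail=False):
        self.checkpoint = dict(state)   # checkpoint before risky work
        if fail:
            raise RuntimeError("software fault halts primary processor")
        return state

    def backup_takeover(self):
        return dict(self.checkpoint)    # resume from last checkpointed state

pair = ProcessPair()
try:
    pair.primary_step({"txn": 41}, fail=True)
except RuntimeError:
    recovered = pair.backup_takeover()
print(recovered)  # backup continues from the checkpointed state
```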

  16. The weakest t-norm based intuitionistic fuzzy fault-tree analysis to evaluate system reliability.

    PubMed

    Kumar, Mohit; Yadav, Shiv Prasad

    2012-07-01

In this paper, a new approach to intuitionistic fuzzy fault-tree analysis is proposed to evaluate system reliability and to find the most critical component affecting it. A weakest-t-norm-based intuitionistic fuzzy fault-tree analysis is presented to calculate the fault intervals of system components by integrating experts' knowledge and experience, expressed as possibilities of failure of the bottom events. It applies fault-tree analysis, α-cuts of intuitionistic fuzzy sets, and T(ω) (weakest t-norm) based arithmetic operations on triangular intuitionistic fuzzy sets to obtain the fault interval and reliability interval of the system. This paper also modifies Tanaka et al.'s fuzzy fault-tree definition. In the numerical verification, a malfunction of the "automatic gun" weapon system is presented as an example, and the result of the proposed method is compared with existing reliability analysis approaches. Copyright © 2012 ISA. Published by Elsevier Ltd. All rights reserved.
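The α-cut interval propagation through fault-tree gates can be sketched as below. Note this sketch uses ordinary interval products for the AND/OR gates rather than the paper's T(ω) (weakest t-norm) arithmetic, which further narrows the spreads; all event numbers are hypothetical:

```python
def alpha_cut(tfn, alpha):
    """alpha-cut interval of a triangular fuzzy number (a, b, c)."""
    a, b, c = tfn
    return (a + alpha * (b - a), c - alpha * (c - b))

def and_gate(intervals):
    """Gate output fails only if all inputs fail: product of possibilities."""
    lo = hi = 1.0
    for l, u in intervals:
        lo *= l
        hi *= u
    return (lo, hi)

def or_gate(intervals):
    """Gate output fails if any input fails: 1 - prod(1 - p)."""
    lo = hi = 1.0
    for l, u in intervals:
        lo *= (1 - l)
        hi *= (1 - u)
    return (1 - lo, 1 - hi)

# Hypothetical bottom-event failure possibilities as triangular fuzzy numbers.
events = [(0.01, 0.02, 0.03), (0.02, 0.04, 0.06)]
cuts = [alpha_cut(e, 0.5) for e in events]
top = or_gate(cuts)       # fault interval of the top event at alpha = 0.5
print(top)
```

Sweeping alpha from 0 to 1 reconstructs the full fuzzy fault possibility of the top event from these intervals.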

  17. Experimental investigation into the fault response of superconducting hybrid electric propulsion electrical power system to a DC rail to rail fault

    NASA Astrophysics Data System (ADS)

    Nolan, S.; Jones, C. E.; Munro, R.; Norman, P.; Galloway, S.; Venturumilli, S.; Sheng, J.; Yuan, W.

    2017-12-01

    Hybrid electric propulsion aircraft are proposed to improve overall aircraft efficiency, enabling future rising demands for air travel to be met. The development of appropriate electrical power systems to provide thrust for the aircraft is a significant challenge due to the much higher required power generation capacity levels and complexity of the aero-electrical power systems (AEPS). The efficiency and weight of the AEPS is critical to ensure that the benefits of hybrid propulsion are not mitigated by the electrical power train. Hence it is proposed that for larger aircraft (~200 passengers) superconducting power systems are used to meet target power densities. Central to the design of the hybrid propulsion AEPS is a robust and reliable electrical protection and fault management system. It is known from previous studies that the choice of protection system may have a significant impact on the overall efficiency of the AEPS. Hence an informed design process which considers the key trades between choice of cable and protection requirements is needed. To date the fault response of a voltage source converter interfaced DC link rail to rail fault in a superconducting power system has only been investigated using simulation models validated by theoretical values from the literature. This paper will present the experimentally obtained fault response for a variety of different types of superconducting tape for a rail to rail DC fault. The paper will then use these as a platform to identify key trades between protection requirements and cable design, providing guidelines to enable future informed decisions to optimise hybrid propulsion electrical power system and protection design.

  18. Automatic characteristic frequency association and all-sideband demodulation for the detection of a bearing fault

    NASA Astrophysics Data System (ADS)

    Firla, Marcin; Li, Zhong-Yang; Martin, Nadine; Pachaud, Christian; Barszcz, Tomasz

    2016-12-01

This paper proposes advanced signal-processing techniques to improve condition monitoring of operating machines. The proposed methods use the results of a blind spectrum interpretation that includes harmonic and sideband series detection. The first contribution of this study is an algorithm for automatic association of harmonic and sideband series with characteristic fault frequencies according to a kinematic configuration. The proposed approach has the advantage of taking into account a possible slip of the rolling-element bearings. In the second part, we propose a full-band demodulation process from all sidebands that are relevant according to the spectral estimation. To do so, a multi-rate filtering process in an iterative scheme provides satisfactory precision and stability over the targeted demodulation band, even for unsymmetrical and extremely narrow bands. After synchronous averaging, the filtered signal is demodulated to calculate the amplitude and frequency modulation functions, and from these any features that indicate faults. Finally, the proposed algorithms are validated on vibration signals measured on a test rig that was designed as part of the European Innovation Project 'KAStrion'. This rig simulates a wind turbine drive train at a smaller scale. The data show the robustness of the method for localizing and extracting a fault on the main bearing. The evolution of the proposed features is a good indicator of the fault severity.
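The frequency-association idea, matching detected spectral peaks to harmonics of a characteristic fault frequency while tolerating bearing slip, can be sketched as follows. The peak list, fault frequency, and 2% slip tolerance are illustrative assumptions, not the paper's values:

```python
def associate(peaks_hz, fault_freq_hz, slip=0.02, n_harmonics=3):
    """Associate detected spectral peaks with harmonics of a characteristic
    fault frequency, allowing a relative slip tolerance around each harmonic."""
    matches = []
    for k in range(1, n_harmonics + 1):
        target = k * fault_freq_hz
        tol = slip * target            # slip widens the window at each harmonic
        for p in peaks_hz:
            if abs(p - target) <= tol:
                matches.append((k, p))
                break
    return matches

peaks = [87.1, 174.9, 260.3, 305.0]   # detected harmonic series (hypothetical)
matches = associate(peaks, fault_freq_hz=87.6)  # e.g. an outer-race frequency
print(matches)
```

A harmonic series that matches a kinematically predicted fault frequency within the slip tolerance is then attributed to the corresponding bearing component.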

  19. Efficient design of CMOS TSC checkers

    NASA Technical Reports Server (NTRS)

    Biddappa, Anita; Shamanna, Manjunath K.; Maki, Gary; Whitaker, Sterling

    1990-01-01

    This paper considers the design of an efficient, robustly testable, CMOS Totally Self-Checking (TSC) Checker for k-out-of-2k codes. Most existing implementations use primitive gates and assume the single stuck-at fault model. The self-testing property has been found to fail for CMOS TSC checkers under the stuck-open fault model due to timing skews and arbitrary delays in the circuit. A new four level design using CMOS primitive gates (NAND, NOR, INVERTERS) is presented. This design retains its properties under the stuck-open fault model. Additionally, this method offers an impressive reduction (greater than 70 percent) in gate count, gate inputs, and test set size when compared to the existing method. This implementation is easily realizable and is based on Anderson's technique. A thorough comparative study has been made on the proposed implementation and Kundu's implementation and the results indicate that the proposed one is better than Kundu's in all respects for k-out-of-2k codes.
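A functional (behavioral) model of a k-out-of-2k code and a two-rail checker output can be sketched as below. This is only the input/output behavior such checkers realize, not the gate-level CMOS design the paper presents:

```python
def is_k_out_of_2k(word, k):
    """A valid k-out-of-2k codeword has exactly k ones among 2k bits."""
    return len(word) == 2 * k and sum(word) == k

def checker_output(word, k):
    """Two-rail functional model of a TSC checker: complementary outputs
    for valid codewords; identical outputs signal a detected error."""
    return (0, 1) if is_k_out_of_2k(word, k) else (0, 0)

print(is_k_out_of_2k([1, 0, 1, 0], 2))   # True: 2-out-of-4 codeword
print(checker_output([1, 1, 1, 0], 2))   # (0, 0): non-codeword flagged
```

The self-checking property the paper analyzes concerns faults inside the checker itself (including CMOS stuck-open faults), which this behavioral model deliberately abstracts away.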

  20. Synchronization of multiple 3-DOF helicopters under actuator faults and saturations with prescribed performance.

    PubMed

    Yang, Huiliao; Jiang, Bin; Yang, Hao; Liu, Hugh H T

    2018-04-01

A distributed cooperative control strategy is proposed to make networked nonlinear 3-DOF helicopters achieve attitude synchronization in the presence of actuator faults and saturations. Based on robust adaptive control, the proposed method can both compensate for an uncertain partial loss of control effectiveness and deal with system uncertainties. To address the actuator saturation problem, the control scheme is designed to ensure that the saturation constraint on the actuation will not be violated during operation in spite of the actuator faults. It is shown that with the proposed control strategy, both the tracking errors of the leading helicopter and the attitude synchronization errors of each following helicopter remain bounded in the presence of faulty actuators and actuator saturations. Moreover, the state responses of the entire group do not exceed the predesigned performance functions, which are totally independent of the underlying interaction topology. Simulation results illustrate the effectiveness of the proposed control scheme. Copyright © 2018 ISA. Published by Elsevier Ltd. All rights reserved.

  1. The Active Fault Parameters for Time-Dependent Earthquake Hazard Assessment in Taiwan

    NASA Astrophysics Data System (ADS)

    Lee, Y.; Cheng, C.; Lin, P.; Shao, K.; Wu, Y.; Shih, C.

    2011-12-01

Taiwan is located at the boundary between the Philippine Sea Plate and the Eurasian Plate, which converge at ~80 mm/yr in a ~N118E direction. The plate motion is so active that earthquakes are very frequent, and in the Taiwan area disaster-inducing earthquakes often result from active faults. It is therefore important to understand the activity and hazard of active faults. The active faults in Taiwan are mainly located in the Western Foothills and the eastern Longitudinal Valley. The active fault distribution map published by the Central Geological Survey (CGS) in 2010 shows 31 active faults on the island of Taiwan, some of which are associated with earthquakes. Many researchers have investigated these active faults and continuously contribute new data and results, but few have integrated them for time-dependent earthquake hazard assessment. In this study, we gather previous research and field-work results and integrate them into a table of active fault parameters for time-dependent earthquake hazard assessment. We combine seismic profiles or relocated earthquakes with the fault trace on land to establish a 3D fault geometry model in a GIS system. We collect fault-source scaling studies for Taiwan and estimate the maximum magnitude from fault length or fault area, and we use the characteristic earthquake model to evaluate each active fault's earthquake recurrence interval. For the remaining parameters, we draw on previous studies and historical references to complete our parameter table of active faults in Taiwan. WG08 performed a time-dependent earthquake hazard assessment of active faults in California: they established fault models, deformation models, earthquake rate models, and probability models, and then computed the rupture probabilities of California faults.
Following these steps, we obtain preliminary probability estimates of earthquake-related hazards for selected faults in Taiwan. By completing the active fault parameter table for Taiwan, we can apply it to time-dependent earthquake hazard assessment. The result can also give engineers a reference for design, and it can be applied in seismic hazard maps to mitigate disasters.
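Two of the parameter estimates mentioned above, maximum magnitude from fault area and recurrence interval under the characteristic earthquake model, can be sketched as below. The Wells & Coppersmith-style scaling coefficients and all input values are commonly used but assumed here, not taken from this study:

```python
import math

def max_magnitude_from_area(area_km2):
    """Assumed Wells & Coppersmith (1994)-style scaling, all-slip-type
    coefficients: Mw = 4.07 + 0.98 * log10(A [km^2])."""
    return 4.07 + 0.98 * math.log10(area_km2)

def recurrence_interval(avg_slip_m, slip_rate_mm_yr):
    """Characteristic model: years needed to re-accumulate the
    characteristic slip at the fault's long-term slip rate."""
    return avg_slip_m / (slip_rate_mm_yr * 1e-3)

mw = max_magnitude_from_area(900.0)                 # hypothetical 900 km^2 fault
ti = recurrence_interval(avg_slip_m=2.0, slip_rate_mm_yr=10.0)
print(round(mw, 2), ti)
```

In a time-dependent assessment, the recurrence interval and the elapsed time since the last rupture then feed a renewal probability model.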

  2. Preservation of amorphous ultrafine material: A proposed proxy for slip during recent earthquakes on active faults

    NASA Astrophysics Data System (ADS)

    Hirono, Tetsuro; Asayama, Satoru; Kaneki, Shunya; Ito, Akihiro

    2016-11-01

    The criteria for designating an “Active Fault” not only are important for understanding regional tectonics, but also are a paramount issue for assessing the earthquake risk of faults that are near important structures such as nuclear power plants. Here we propose a proxy, based on the preservation of amorphous ultrafine particles, to assess fault activity within the last millennium. X-ray diffraction data and electron microscope observations of samples from an active fault demonstrated the preservation of large amounts of amorphous ultrafine particles in two slip zones that last ruptured in 1596 and 1999, respectively. A chemical kinetic evaluation of the dissolution process indicated that such particles could survive for centuries, which is consistent with the observations. Thus, preservation of amorphous ultrafine particles in a fault may be valuable for assessing the fault’s latest activity, aiding efforts to evaluate faults that may damage critical facilities in tectonically active zones.

  3. A model-based executive for commanding robot teams

    NASA Technical Reports Server (NTRS)

    Barrett, Anthony

    2005-01-01

    The paper presents a way to robustly command a system of systems as a single entity. Instead of modeling each component system in isolation and then manually crafting interaction protocols, this approach starts with a model of the collective population as a single system. By compiling the model into separate elements for each component system and utilizing a teamwork model for coordination, it circumvents the complexities of manually crafting robust interaction protocols. The resulting systems are both globally responsive by virtue of a team oriented interaction model and locally responsive by virtue of a distributed approach to model-based fault detection, isolation, and recovery.

  4. Pore-pressure sensitivities to dynamic strains: observations in active tectonic regions

    USGS Publications Warehouse

    Barbour, Andrew J.

    2015-01-01

    Triggered seismicity arising from dynamic stresses is often explained by the Mohr-Coulomb failure criterion, where elevated pore pressures reduce the effective strength of faults in fluid-saturated rock. The seismic response of a fluid-rock system naturally depends on its hydro-mechanical properties, but accurately assessing how pore-fluid pressure responds to applied stress over large scales in situ remains a challenging task; hence, spatial variations in response are not well understood, especially around active faults. Here I analyze previously unutilized records of dynamic strain and pore-pressure from regional and teleseismic earthquakes at Plate Boundary Observatory (PBO) stations from 2006 through 2012 to investigate variations in response along the Pacific/North American tectonic plate boundary. I find robust scaling-response coefficients between excess pore pressure and dynamic strain at each station that are spatially correlated: around the San Andreas and San Jacinto fault systems, the response is lowest in regions of the crust undergoing the highest rates of secular shear strain. PBO stations in the Parkfield instrument cluster are at comparable distances to the San Andreas fault (SAF), and spatial variations there follow patterns in dextral creep rates along the fault, with the highest response in the actively creeping section, which is consistent with a narrowing zone of strain accumulation seen in geodetic velocity profiles. At stations in the San Juan Bautista (SJB) and Anza instrument clusters, the response depends non-linearly on the inverse fault-perpendicular distance, with the response decreasing towards the fault; the SJB cluster is at the northern transition from creeping-to-locked behavior along the SAF, where creep rates are at moderate to low levels, and the Anza cluster is around the San Jacinto fault, where to date there have been no statistically significant creep rates observed at the surface. 
These results suggest that the strength of the pore pressure response in fluid-saturated rock near active faults is controlled by shear strain accumulation associated with tectonic loading, which implies a strong feedback between fault strength and permeability: dynamic triggering susceptibilities may vary in space and also in time.
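The station-by-station scaling coefficient between dynamic strain and excess pore pressure described above is, in essence, a regression slope. A minimal sketch with hypothetical station records (units and values illustrative only):

```python
def response_coefficient(strain, pressure):
    """Least-squares scaling coefficient between peak dynamic strain and the
    excess pore pressure it induces (slope of pressure vs. strain)."""
    n = len(strain)
    sx, sy = sum(strain), sum(pressure)
    sxx = sum(x * x for x in strain)
    sxy = sum(x * y for x, y in zip(strain, pressure))
    return (n * sxy - sx * sy) / (n * sxx - sx * sx)

# Hypothetical earthquake records at one station.
strain = [0.1, 0.5, 1.0, 2.0]        # peak dynamic strain (microstrain)
pressure = [0.21, 1.02, 1.98, 4.05]  # peak excess pore pressure (kPa)
coef = response_coefficient(strain, pressure)
print(round(coef, 2))
```

Mapping such coefficients across a network is what reveals the spatial correlation with secular shear strain rates reported in the abstract.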

  5. Computer-Aided Reliability Estimation

    NASA Technical Reports Server (NTRS)

    Bavuso, S. J.; Stiffler, J. J.; Bryant, L. A.; Petersen, P. L.

    1986-01-01

CARE III (Computer-Aided Reliability Estimation, Third Generation) helps estimate the reliability of complex, redundant, fault-tolerant systems. The program is specifically designed for the evaluation of fault-tolerant avionics systems; however, CARE III is general enough for use in the evaluation of other systems as well.

  6. Reliable spacecraft rendezvous without velocity measurement

    NASA Astrophysics Data System (ADS)

    He, Shaoming; Lin, Defu

    2018-03-01

This paper investigates the problem of finite-time velocity-free autonomous rendezvous for spacecraft in the presence of external disturbances during the terminal phase. First, to address the lack of relative velocity measurements, a robust observer is proposed to estimate the unknown relative velocity information in finite time. It is shown that the effect of external disturbances on the estimation precision can be suppressed to a relatively low level. With the reconstructed velocity information, a finite-time output feedback control law is then formulated to stabilize the rendezvous system. Theoretical analysis and rigorous proof show that the relative position and its rate converge to a small compact region in finite time. Numerical simulations are performed to evaluate the performance of the proposed approach in the presence of external disturbances and actuator faults.

  7. Quantitative method of medication system interface evaluation.

    PubMed

    Pingenot, Alleene Anne; Shanteau, James; Pingenot, James D F

    2007-01-01

The objective of this study was to develop a quantitative method of evaluating the user interface for medication system software. A detailed task analysis provided a description of user goals and essential activity. A structural fault analysis was used to develop a detailed description of the system interface. Nurses experienced with use of the system under evaluation provided estimates of failure rates for each point in this simplified fault tree, and the means of the estimated failure rates provided quantitative data for the fault analysis. The authors note that, although failures of steps in the program were frequent, participants reported numerous methods of working around these failures, so that overall system failure was rare. However, frequent process failure can affect the time required for processing medications, making a system inefficient. This method of interface analysis, called the Software Efficiency Evaluation and Fault Identification Method, provides quantitative information with which prototypes can be compared and problems within an interface identified.
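The observation that frequent step failures rarely produce overall system failure can be captured in a small fault-tree calculation. The independence assumptions and all probabilities below are hypothetical, for illustration only:

```python
def step_failure(p_primary, p_workaround):
    """A step truly fails only when both the program path and the users'
    workaround fail (an assumed independence model)."""
    return p_primary * p_workaround

def system_failure(step_probs):
    """OR gate over independent steps: 1 - prod(1 - p_step)."""
    ok = 1.0
    for p in step_probs:
        ok *= (1 - p)
    return 1 - ok

# Hypothetical per-step estimates: frequent program failures, but workarounds
# usually succeed, so the effective step-failure probabilities are small.
steps = [step_failure(0.2, 0.01), step_failure(0.1, 0.05)]
p_sys = system_failure(steps)
print(round(p_sys, 4))
```

The workaround terms drive the system-failure probability far below the raw step-failure rates, mirroring the study's finding, while each workaround still costs processing time.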

  8. An evaluation of a real-time fault diagnosis expert system for aircraft applications

    NASA Technical Reports Server (NTRS)

    Schutte, Paul C.; Abbott, Kathy H.; Palmer, Michael T.; Ricks, Wendell R.

    1987-01-01

A fault monitoring and diagnosis expert system called Faultfinder was conceived and developed to detect and diagnose in-flight failures in an aircraft. Faultfinder is an automated intelligent aid whose purpose is to assist the flight crew in fault monitoring, fault diagnosis, and recovery planning. The present implementation of this concept performs monitoring and diagnosis for a generic aircraft's propulsion and hydraulic subsystems. This implementation is capable of detecting and diagnosing failures of known and unknown (i.e., unforeseeable) type in a real-time environment. Faultfinder uses both rule-based and model-based reasoning strategies which operate on causal, temporal, and qualitative information. A preliminary evaluation is made of the diagnostic concepts implemented in Faultfinder. The evaluation used actual aircraft accident and incident cases which were simulated to assess the effectiveness of Faultfinder in detecting and diagnosing failures. Results of this evaluation, together with the description of the current Faultfinder implementation, are presented.

  9. Fault-tolerant Greenberger-Horne-Zeilinger paradox based on non-Abelian anyons.

    PubMed

    Deng, Dong-Ling; Wu, Chunfeng; Chen, Jing-Ling; Oh, C H

    2010-08-06

    We propose a scheme to test the Greenberger-Horne-Zeilinger paradox based on braidings of non-Abelian anyons, which are exotic quasiparticle excitations of topological states of matter. Because topological ordered states are robust against local perturbations, this scheme is in some sense "fault-tolerant" and might close the detection inefficiency loophole problem in previous experimental tests of the Greenberger-Horne-Zeilinger paradox. In turn, the construction of the Greenberger-Horne-Zeilinger paradox reveals the nonlocal property of non-Abelian anyons. Our results indicate that the non-Abelian fractional statistics is a pure quantum effect and cannot be described by local realistic theories. Finally, we present a possible experimental implementation of the scheme based on the anyonic interferometry technologies.

  10. Fault-tolerant quantum error detection

    PubMed Central

    Linke, Norbert M.; Gutierrez, Mauricio; Landsman, Kevin A.; Figgatt, Caroline; Debnath, Shantanu; Brown, Kenneth R.; Monroe, Christopher

    2017-01-01

    Quantum computers will eventually reach a size at which quantum error correction becomes imperative. Quantum information can be protected from qubit imperfections and flawed control operations by encoding a single logical qubit in multiple physical qubits. This redundancy allows the extraction of error syndromes and the subsequent detection or correction of errors without destroying the logical state itself through direct measurement. We show the encoding and syndrome measurement of a fault-tolerantly prepared logical qubit via an error detection protocol on four physical qubits, represented by trapped atomic ions. This demonstrates the robustness of a logical qubit to imperfections in the very operations used to encode it. The advantage persists in the face of large added error rates and experimental calibration errors. PMID:29062889

  11. Modeling the evolution of a ramp-flat-ramp thrust system: A geological application of DynEarthSol2D

    NASA Astrophysics Data System (ADS)

    Feng, L.; Choi, E.; Bartholomew, M. J.

    2013-12-01

DynEarthSol2D (available at http://bitbucket.org/tan2/dynearthsol2) is a robust, adaptive, two-dimensional finite element code that solves the momentum balance and the heat equation in Lagrangian form using unstructured meshes. Verified in a number of benchmark problems, this solver uses contingent mesh adaptivity in places where shear strain is focused (localization) and a conservative mapping assisted by marker particles to preserve phase and facies boundaries during remeshing. We apply this cutting-edge geodynamic modeling tool to the evolution of a thrust fault with a ramp-flat-ramp geometry. The overall geometry of the fault is constrained by observations in the northern part of the southern Appalachian fold and thrust belt. Brittle crust is treated as a Mohr-Coulomb plastic material. The thrust fault is a zone of finite thickness but has a lower cohesion and friction angle than its surrounding rocks. When an intervening flat separates two distinct sequential ramps crossing different stratigraphic intervals, the thrust system will experience more complex deformations than those from a single thrust-fault ramp. The resultant deformations associated with sequential ramps exhibit a spectrum of styles, of which two end members correspond to 'overprinting' and 'interference'. Reproducing these end-member styles as well as intermediate ones, our models show that the relative importance of overprinting versus interference is a sensitive function of initial fault geometry and hanging-wall displacement. We further present stress and strain histories extracted from the models. If clearly distinguishable, they will guide the interpretation of field observations on thrust faults.

  12. Strong ground motions generated by earthquakes on creeping faults

    USGS Publications Warehouse

    Harris, Ruth A.; Abrahamson, Norman A.

    2014-01-01

    A tenet of earthquake science is that faults are locked in position until they abruptly slip during the sudden strain-relieving events that are earthquakes. Whereas it is expected that locked faults when they finally do slip will produce noticeable ground shaking, what is uncertain is how the ground shakes during earthquakes on creeping faults. Creeping faults are rare throughout much of the Earth's continental crust, but there is a group of them in the San Andreas fault system. Here we evaluate the strongest ground motions from the largest well-recorded earthquakes on creeping faults. We find that the peak ground motions generated by the creeping fault earthquakes are similar to the peak ground motions generated by earthquakes on locked faults. Our findings imply that buildings near creeping faults need to be designed to withstand the same level of shaking as those constructed near locked faults.

  13. Fault latency in the memory - An experimental study on VAX 11/780

    NASA Technical Reports Server (NTRS)

    Chillarege, Ram; Iyer, Ravishankar K.

    1986-01-01

Fault latency is the time between the physical occurrence of a fault and its corruption of data, causing an error. This time is difficult to measure because the moment of occurrence of a fault and the exact moment of generation of an error are not known. This paper describes an experiment to accurately study fault latency in the memory subsystem. The experiment employs real memory data from a VAX 11/780 at the University of Illinois. Fault latency distributions are generated for s-a-0 and s-a-1 permanent fault models. Results show that the mean fault latency of an s-a-0 fault is nearly 5 times that of an s-a-1 fault. Large variations in fault latency are found for different regions in memory. An analysis-of-variance model quantifying the relative influence of various workload measures on the evaluated latency is also given.
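The latency definition above can be sketched on a toy access trace: a stuck-at fault becomes an error only when the correct value of the bit differs from the stuck value. The mostly-zero trace below is a hypothetical illustration of why s-a-0 latencies exceed s-a-1 latencies in memory dominated by zeros:

```python
def fault_latency(trace, fault_time, stuck_value):
    """Latency of a stuck-at fault: time from injection until the first
    access whose correct bit differs from the stuck value (error activation)."""
    for t, bit in trace:
        if t >= fault_time and bit != stuck_value:
            return t - fault_time
    return None  # fault never activated within the observed trace

bits = [0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 1]   # mostly-zero memory bit over time
trace = list(enumerate(bits))
lat_sa0 = fault_latency(trace, fault_time=2, stuck_value=0)  # waits for a 1
lat_sa1 = fault_latency(trace, fault_time=2, stuck_value=1)  # waits for a 0
print(lat_sa0, lat_sa1)
```

In this toy trace the s-a-0 fault stays latent longer than the s-a-1 fault, the same asymmetry the experiment quantifies with real memory data.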

  14. Dealing with completeness, structural hierarchy, and seismic coupling issues: three major challenges for #Fault2SHA

    NASA Astrophysics Data System (ADS)

    Valensise, Gianluca; Barba, Salvatore; Basili, Roberto; Bonini, Lorenzo; Burrato, Pierfrancesco; Carafa, Michele; Kastelic, Vanja; Fracassi, Umberto; Maesano, Francesco Emanuele; Tarabusi, Gabriele; Tiberti, Mara Monica; Vannoli, Paola

    2016-04-01

The vast majority of active faulting studies are performed at the scale of individual, presumably seismogenic faults or fault strands. Most SHA approaches and models, however, require homogeneous information on potential earthquake sources over the entire tectonic domain encompassing the site(s) of interest. Although it is beyond question that accurate SHA must rely on robust investigations of individual potential earthquake sources, it is only by gathering this information in regionally extensive databases that one can address some of the most outstanding issues in the use of #Fault2SHA. We will briefly recall three issues that are particularly relevant in the investigation of seismogenic faulting in southern Europe. A fundamental challenge is the completeness of the geologic record of active faulting. In most tectonic environments many potential seismogenic faults are blind or hidden, or deform the lower crust without leaving a discernible signal at the surface, or occur offshore, or slip so slowly that nontectonic erosional-depositional processes easily outpace their surface effects. Investigating only well-expressed faults is scientifically rewarding but also potentially misleading, as it draws attention to the least insidious faults, leading to a potential underestimation of the regional earthquake potential. A further issue concerns the hierarchy of fault systems. Most active faults do not comprise seismogenic sources per se but are part of larger systems, and slip only in conjunction with the master fault of each system. In the most insidious cases, only secondary faults are expressed at the surface while the master fault lies hidden beneath them. This may result in an overestimation of the true number of seismogenic sources in each region and in a biased identification of the characteristics of the main player in each system.
    Recent comparisons of geologic and geodetic deformation budgets with earthquake moment release have shown that the "seismic coupling", which quantifies the fraction of tectonic fault slip that is converted into earthquake moment release, may be significantly smaller than 100%, particularly in contractional tectonic settings. This especially elusive circumstance, too, may result in an overestimation of the true earthquake potential of specific areas. All these circumstances are sources of fundamental epistemic uncertainty that are extremely difficult to deal with using standard approaches, which normally focus on the variability of the parameters of major faults whose seismogenic nature is well established. In summary, the current generation of earthquake geologists should make a decisive turn toward #Fault2SHA and contribute their data to improving current seismic hazard models. To achieve this goal, however, they should first (a) step back from the surface fault(s) and adopt a broader tectonic, geomorphic and three-dimensional perspective that encompasses at least the entire fault system being investigated; (b) make more extensive use of subsurface evidence, focusing on the nature and geometry of depositional bodies rather than simply on brittle faulting; and (c) broaden their perspective of the seismic cycle, comparing the (often incomplete) geological and geomorphic evidence with the (similarly incomplete) seismicity and geodetic records.

  15. Self adaptive multi-scale morphology AVG-Hat filter and its application to fault feature extraction for wheel bearing

    NASA Astrophysics Data System (ADS)

    Deng, Feiyue; Yang, Shaopu; Tang, Guiji; Hao, Rujiang; Zhang, Mingliang

    2017-04-01

    Wheel bearings are essential mechanical components of trains, and fault detection of the wheel bearing is of great significance for avoiding economic losses and casualties. However, under realistic operating conditions, detecting and extracting fault features hidden in the heavy noise of the vibration signal is a challenging task. Therefore, a novel method called the adaptive multi-scale AVG-Hat morphology filter (MF) is proposed to address this problem. The morphology AVG-Hat operator can not only strongly suppress the interference of background noise but also enhance the extraction of fault features. The improved envelope spectrum sparsity (IESS) is proposed as a new evaluation index to select the optimal filtered signal produced by the multi-scale AVG-Hat MF; it comprehensively evaluates the intensity of the fault impulses relative to the background noise. The weighted coefficients of the different-scale structural elements (SEs) in the multi-scale MF are adaptively determined by the particle swarm optimization (PSO) algorithm. The effectiveness of the method is validated on real wheel-bearing fault vibration signals (outer race, inner race and rolling element faults). The results show that the proposed method extracts fault features more effectively than the multi-scale combined morphological filter (CMF) and multi-scale morphology gradient filter (MGF) methods.
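    The multi-scale filtering idea can be sketched in a few lines. The exact AVG-Hat definition is the paper's own; here it is interpreted (an assumption, not the published formula) as the mean of the white and black top-hats, combined across scales with weights that the paper tunes via PSO:

```python
import numpy as np
from scipy.ndimage import grey_opening, grey_closing

def avg_hat(signal, scale):
    """AVG-Hat morphology operator at one structural-element scale.

    Interpreted here as the mean of the white top-hat (f - opening) and the
    black top-hat (closing - f), which enhances both positive and negative
    impulses while suppressing the slowly varying background.
    """
    se = np.ones(scale, dtype=bool)      # flat structural element
    opening = grey_opening(signal, footprint=se)
    closing = grey_closing(signal, footprint=se)
    white_hat = signal - opening
    black_hat = closing - signal
    return 0.5 * (white_hat + black_hat)

def multiscale_avg_hat(signal, scales, weights):
    """Weighted sum over scales; in the paper the weights come from PSO."""
    weights = np.asarray(weights, dtype=float)
    weights = weights / weights.sum()
    return sum(w * avg_hat(signal, s) for w, s in zip(weights, scales))
```

    In practice the scale set would span the expected fault-impulse widths, and an index such as IESS would rank candidate outputs.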

  16. Uncertainty quantification for evaluating the impacts of fracture zone on pressure build-up and ground surface uplift during geological CO₂ sequestration

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bao, Jie; Hou, Zhangshuan; Fang, Yilin

    2015-06-01

    A series of numerical test cases reflecting broad and realistic ranges of geological formation and preexisting fault properties was developed to systematically evaluate the impacts of preexisting faults on pressure buildup and ground surface uplift during CO₂ injection. The test cases were run with a coupled hydro-geomechanical simulator, eSTOMP (extreme-scale Subsurface Transport over Multiple Phases). For efficient sensitivity analysis and reliable construction of a reduced-order model, a quasi-Monte Carlo sampling method was applied to effectively sample the high-dimensional input parameter space and explore uncertainties associated with hydrologic, geologic, and geomechanical properties. The uncertainty quantification results show that the impacts of preexisting faults on the geomechanical response depend mainly on reservoir and fault permeability. When the fault permeability is two to three orders of magnitude smaller than the reservoir permeability, the fault acts as an impermeable block that resists fluid transport in the reservoir, causing a pressure increase near the fault. When the fault permeability is close to the reservoir permeability, or higher than 10⁻¹⁵ m² in this study, the fault acts as a conduit that penetrates the caprock, connecting fluid flow between the reservoir and the upper rock.
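    The quasi-Monte Carlo sampling step is easy to reproduce with off-the-shelf tools. A minimal sketch using SciPy's scrambled Sobol sampler; the parameter names and ranges below are illustrative placeholders, not the study's actual values:

```python
import numpy as np
from scipy.stats import qmc

# Hypothetical parameter ranges (illustrative only):
# reservoir permeability (log10 m^2), fault permeability (log10 m^2),
# Young's modulus (GPa), Biot coefficient.
bounds_lo = [-14.0, -18.0, 5.0, 0.4]
bounds_hi = [-12.0, -13.0, 40.0, 1.0]

# Low-discrepancy points fill the 4-D unit cube far more evenly than
# pseudo-random sampling, which is what makes the sensitivity analysis
# and reduced-order-model fitting efficient.
sampler = qmc.Sobol(d=4, scramble=True, seed=42)
unit_samples = sampler.random_base2(m=7)        # 2**7 = 128 points in [0,1)^4
samples = qmc.scale(unit_samples, bounds_lo, bounds_hi)
```

    Each row of `samples` would then define one forward simulation in the ensemble.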

  17. Integrated petrographic - rock mechanic borecore study from the metamorphic basement of the Pannonian Basin, Hungary

    NASA Astrophysics Data System (ADS)

    Molnár, László; Vásárhelyi, Balázs; Tóth, Tivadar M.; Schubert, Félix

    2015-01-01

    The integrated evaluation of borecores from the Mezősas-Furta fractured metamorphic hydrocarbon reservoir reveals significantly distinct microstructural and rock mechanical features among the analysed fault rock samples. Statistical evaluation of the clast geometries revealed the dominantly cataclastic nature of the samples. The damage zone of the fault is characterised by an extremely brittle nature and low uniaxial compressive strength, coupled with a predominantly coarse fault breccia composition. In contrast, increasing microstructural deformation coupled with higher uniaxial compressive strength, a strain-hardening character and low brittleness indicates a transitional interval between the weakly fragmented damage zone and the strongly comminuted fault core; these attributes suggest this unit is mechanically the strongest part of the fault zone. Gouge-rich cataclasites mark the core zone of the fault, with their widespread plastic nature and locally pseudo-ductile microstructure. Strain localization is strongly linked with the presence of fault gouge ribbons. The fault zone, with a total thickness of ~15 m, can act as a significant migration pathway inside the fractured crystalline reservoir. Moreover, as a consequence of the distributed nature of the fault core, it may play a key role in the compartmentalisation of the local hydraulic system.

  18. Fault detection and accommodation testing on an F100 engine in an F-15 airplane

    NASA Technical Reports Server (NTRS)

    Myers, L. P.; Baer-Riedhart, J. L.; Maxwell, M. D.

    1985-01-01

    The fault detection and accommodation (FDA) methodology for digital engine-control systems may range from simple comparisons of redundant parameters to the more complex and sophisticated observer models of the entire engine system. Evaluations of the various FDA schemes are done using analytical methods, simulation, and limited-altitude-facility testing. Flight testing of the FDA logic has been minimal because of the difficulty of inducing realistic faults in flight. A flight program was conducted to evaluate the fault detection and accommodation capability of a digital electronic engine control in an F-15 aircraft. The objective of the flight program was to induce selected faults and evaluate the resulting actions of the digital engine controller. Comparisons were made between the flight results and predictions. Several anomalies were found in flight and during the ground test. Simulation results showed that the inducement of dual pressure failures was not feasible since the FDA logic was not designed to accommodate these types of failures.

  19. Geological and seismological survey for new design-basis earthquake ground motion of Kashiwazaki-Kariwa NPS

    NASA Astrophysics Data System (ADS)

    Takao, M.; Mizutani, H.

    2009-05-01

    At about 10:13 on July 16, 2007, a strong earthquake, the 'Niigata-ken Chuetsu-oki Earthquake' of Mj6.8 on the Japan Meteorological Agency's scale, occurred offshore Niigata prefecture in Japan. However, all of the nuclear reactors at Kashiwazaki-Kariwa Nuclear Power Station (KKNPS) in Niigata prefecture, operated by Tokyo Electric Power Company, shut down safely: the automatic safety functions of shutdown, cooling and containment worked as designed immediately after the earthquake. During the earthquake the peak acceleration of the ground motion exceeded the design-basis ground motion (DBGM), but the force applied by the earthquake to safety-significant facilities was about the same as or less than the static seismic force assumed in the design basis. In order to reassess the safety of the plant, we evaluated a new DBGM after conducting geomorphological, geological, geophysical and seismological surveys and analyses. [Geomorphological, geological and geophysical surveys] On land, aerial photograph interpretation was performed within at least a 30 km radius to identify landforms that could be tectonic reliefs. Geological reconnaissance was then conducted to confirm whether the extracted landforms are in fact tectonic reliefs. In particular, we carefully investigated the Nagaoka Plain Western Boundary Fault Zone (NPWBFZ), which consists of the Kakuda-Yahiko, Kihinomiya and Katakai faults, because the NPWBFZ is one of the active fault zones in Japan with Mj8-class potential. In addition to the geological survey, seismic reflection profiling of approximately 120 km in total length was completed to evaluate the geological structure of the faults and to assess the continuity of the component faults of the NPWBFZ.
    As a result of the geomorphological, geological and geophysical surveys, we evaluated the three component faults of the NPWBFZ as structurally independent of each other; nevertheless, to cover this uncertainty we decided to consider simultaneous rupture of the three faults, 91 km in total length, in the seismic design. In the sea area, we conducted seismic reflection profiling with sonic waves over an area stretching about 140 km along the coastline and 50 km perpendicular to it. In analyzing the seismic profiles we evaluated the activity of faults and folds carefully on the basis of the 'fault-related fold' concept, because the sedimentary layers offshore Niigata prefecture are very thick and the geological structures are characterized by folding. As a result of the seismic reflection survey and analyses, we assessed that five active faults (folds) should be taken into consideration in the seismic design of the sea area, and we evaluated that the 36 km long F-B fault will have the largest impact on the KKNPS. [Seismological survey] Analyses of the geological survey, data from the NCOE and data from the 2004 Chuetsu Earthquake made it clear that there are factors that intensify seismic motions in this area. For each of the two selected earthquake sources, the NPWBFZ and the F-B fault, we calculated seismic ground motions on the free surface of the base stratum as the design-basis ground motion (DBGM) Ss, using both empirical and numerical ground-motion evaluation methods. The PGA value of the DBGM is 2,300 Gal for units 1 to 4 in the southern part of the KKNPS and 1,050 Gal for units 5 to 7 in the northern part of the site.

  20. Numerical simulations of earthquakes and the dynamics of fault systems using the Finite Element method.

    NASA Astrophysics Data System (ADS)

    Kettle, L. M.; Mora, P.; Weatherley, D.; Gross, L.; Xing, H.

    2006-12-01

    Simulations using the Finite Element method are widely used in many engineering applications and for the solution of partial differential equations (PDEs). Computational models based on the solution of PDEs play a key role in earth systems simulations. We present numerical modelling of crustal fault systems in which the dynamic elastic wave equation is solved using the Finite Element method. This is achieved using a high-level computational modelling language, escript, available as open-source software from ACcESS (Australian Computational Earth Systems Simulator) at the University of Queensland. Escript is an advanced geophysical simulation software package developed at ACcESS which includes parallel equation solvers, data visualisation and data analysis software. The escript library was used to develop a flexible Finite Element model which reliably simulates the mechanism of faulting and the physics of earthquakes. Both 2D and 3D elastodynamic models are being developed to study the dynamics of crustal fault systems. Our final goal is to build a flexible model which can be applied to any fault system with user-defined geometry and input parameters. To study the physics of earthquake processes, two different time scales must be modelled: first the quasi-static loading phase, which gradually increases stress in the system (~100 years), and second the dynamic rupture process, which rapidly redistributes stress in the system (~100 s). We will discuss the solution of the time-dependent elastic wave equation for an arbitrary fault system using escript. This involves prescribing the correct initial stress distribution in the system to simulate the quasi-static loading of faults to failure; determining a suitable frictional constitutive law which accurately reproduces the dynamics of the stick/slip instability at the faults; and using a robust time integration scheme. These dynamic models generate data and information that can be used for earthquake forecasting.
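    escript assembles and solves such problems as finite-element systems; as a stand-in illustration of the explicit time integration involved, here is a minimal finite-difference step for the 1-D scalar wave equation (this is not escript's API, only the underlying time-stepping idea under simplified assumptions):

```python
import numpy as np

def wave_step(u_prev, u_curr, c, dx, dt):
    """One explicit time step of the 1-D scalar wave equation u_tt = c^2 u_xx.

    Second-order central differences in space and time, clamped (zero)
    boundaries; stability requires the CFL condition c*dt/dx <= 1.
    """
    r2 = (c * dt / dx) ** 2
    u_next = np.empty_like(u_curr)
    u_next[1:-1] = (2.0 * u_curr[1:-1] - u_prev[1:-1]
                    + r2 * (u_curr[2:] - 2.0 * u_curr[1:-1] + u_curr[:-2]))
    u_next[0] = u_next[-1] = 0.0      # clamped ends
    return u_next
```

    A real elastodynamic fault model replaces this scalar update with the vector elastic equations, fault boundary conditions and a frictional constitutive law, which is where an FEM framework like escript earns its keep.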

  1. Geophysical expression of the Ghost Dance fault, Yucca Mountain, Nevada

    USGS Publications Warehouse

    Ponce, D.A.; Langenheim, V.E.; ,

    1995-01-01

    Gravity and ground magnetic data collected along surveyed traverses across Antler and Live Yucca Ridges, on the eastern flank of Yucca Mountain, Nevada, reveal small-scale faulting associated with the Ghost Dance and possibly other faults. These studies are part of an effort to evaluate faulting in the vicinity of a potential nuclear waste repository at Yucca Mountain.

  2. Geophysical expression of the Ghost Dance Fault, Yucca Mountain, Nevada

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ponce, D.A.; Langenheim, V.E.

    1995-12-01

    Gravity and ground magnetic data collected along surveyed traverses across Antler and Live Yucca Ridges, on the eastern flank of Yucca Mountain, Nevada, reveal small-scale faulting associated with the Ghost Dance and possibly other faults. These studies are part of an effort to evaluate faulting in the vicinity of a potential nuclear waste repository at Yucca Mountain.

  3. Implanted component faults and their effects on gas turbine engine performance

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    MacLeod, J.D.; Taylor, V.; Laflamme, J.C.G.

    Under the sponsorship of the Canadian Department of National Defence, the Engine Laboratory of the National Research Council of Canada (NRCC) has established a program for the evaluation of component deterioration effects on gas turbine engine performance. The effort is aimed at investigating the effects of typical in-service faults on the performance characteristics of each individual engine component. The objective of the program is the development of a generalized fault library which, used with fault identification techniques in the field, will reduce unscheduled maintenance. To evaluate the effects of implanted faults on the performance of a single-spool engine, in this case an Allison T56 turboprop engine, a series of faulted parts was installed. For this paper the following faults were analyzed: (a) first-stage turbine nozzle erosion damage; (b) first-stage turbine rotor blade untwist; (c) compressor seal wear; (d) first- and second-stage compressor blade tip clearance increase. This paper describes the project objectives, the experimental installation, and the results of the fault implantation on engine performance. Performance variations in both engine and component characteristics are discussed. As the performance changes were significant, a rigorous measurement uncertainty analysis is included.

  4. Data-Centric Situational Awareness and Management in Intelligent Power Systems

    NASA Astrophysics Data System (ADS)

    Dai, Xiaoxiao

    The rapid development of technology and society has made the current power system far more complicated than ever, and the need for big-data-based situational awareness and management is urgent. In this dissertation, two data-centric power system situational awareness and management approaches are proposed, addressing the security problems in the transmission/distribution grids and the augmentation of social benefits at the distribution-customer level, respectively. To address the security problem in the transmission/distribution grids, the first approach provides a fault analysis solution based on characterization and analytics of synchrophasor measurements. Specifically, an optimal synchrophasor measurement device selection algorithm (OSMDSA) and a matching pursuit decomposition (MPD) based spatial-temporal synchrophasor data characterization method were developed to reduce data volume while preserving comprehensive information for the analyses. A weighted Granger causality (WGC) method was investigated to conduct fault-impact causal analysis during system disturbances for fault localization. Numerical results and comparison with other methods demonstrate the effectiveness and robustness of this analytic approach. As social effects become increasingly important in power system management, the goal of situational awareness should be expanded to include social benefits. The second approach investigates the concept and application of social energy on the University of Denver campus grid to provide management improvement solutions that optimize social cost. A social element, human working productivity cost, and an economic element, electricity consumption cost, are both considered in the evaluation of overall social cost.
    Moreover, power system simulation, numerical experiments in smart-building modeling, distribution-level real-time pricing, and social responses to the pricing signals are studied to implement the interactive artificial-physical management scheme.
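    The causal analysis underlying WGC builds on plain Granger causality, which can be sketched as a comparison of two least-squares predictions; the dissertation's weighting scheme is not reproduced here, only the bivariate core idea:

```python
import numpy as np

def granger_improvement(x, y, lags=2):
    """Variance-reduction form of Granger causality from x to y.

    Fits y[t] on its own past (restricted model) and on the past of both
    y and x (full model), then returns 1 - RSS_full / RSS_restricted:
    near 0 when x adds no predictive power, larger when x 'Granger-causes' y.
    """
    n = len(y)
    target = np.array([y[t] for t in range(lags, n)])
    restricted = np.array([y[t - lags:t] for t in range(lags, n)])
    full = np.array([np.concatenate([y[t - lags:t], x[t - lags:t]])
                     for t in range(lags, n)])
    rss = {}
    for key, design in (("restricted", restricted), ("full", full)):
        coef, *_ = np.linalg.lstsq(design, target, rcond=None)
        resid = target - design @ coef
        rss[key] = float(resid @ resid)
    return 1.0 - rss["full"] / rss["restricted"]
```

    Applied pairwise to synchrophasor channels during a disturbance, asymmetries in such scores are what point toward the fault location.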

  5. Predeployment validation of fault-tolerant systems through software-implemented fault insertion

    NASA Technical Reports Server (NTRS)

    Czeck, Edward W.; Siewiorek, Daniel P.; Segall, Zary Z.

    1989-01-01

    The fault-injection-based automated testing (FIAT) environment, which can be used to experimentally characterize and evaluate distributed real-time systems under fault-free and faulted conditions, is described. A survey of validation methodologies is presented, and the need for fault insertion within them is demonstrated. The origins and models of faults, and the motivation for the FIAT concept, are reviewed. FIAT employs a validation methodology which builds confidence in the system by first providing a baseline of fault-free performance data and then characterizing the behavior of the system with faults present. Fault insertion is accomplished through software and allows faults, or the manifestations of faults, to be inserted either by seeding faults into memory or by triggering error-detection mechanisms. FIAT is capable of emulating a variety of fault-tolerant strategies and architectures, can monitor system activity, and can automatically orchestrate experiments involving the insertion of faults. A common system interface eases use and decreases experiment development and run time. Fault models chosen for experiments on FIAT have generated system responses which parallel those observed in real systems under faulty conditions. These capabilities are demonstrated by two example experiments, each using a different fault-tolerance strategy.
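    The memory-seeding style of fault insertion can be illustrated with a toy model. This is not FIAT's actual interface; it only shows the idea of corrupting state in software and then exercising an error-detection mechanism against the corrupted image:

```python
import random

def inject_bit_flip(memory: bytearray, rng: random.Random) -> tuple[int, int]:
    """Seed a single-bit fault at a random location in an emulated memory.

    Returns the (address, bit) that was flipped so the experiment can be
    logged and, if needed, reversed.
    """
    addr = rng.randrange(len(memory))
    bit = rng.randrange(8)
    memory[addr] ^= 1 << bit
    return addr, bit

def detect_by_checksum(memory: bytearray, reference_sum: int) -> bool:
    """A trivial error-detection mechanism: compare against a known checksum."""
    return sum(memory) % 65536 != reference_sum
```

    A FIAT-style campaign would repeat this inject-then-observe loop many times, recording which faults each detection mechanism catches and how the system responds.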

  6. Object-oriented fault tree evaluation program for quantitative analyses

    NASA Technical Reports Server (NTRS)

    Patterson-Hine, F. A.; Koen, B. V.

    1988-01-01

    Object-oriented programming can be combined with fault tree techniques to give a significantly improved environment for evaluating the safety and reliability of large complex systems for space missions. Deep knowledge about system components and interactions, available from reliability studies and other sources, can be described using objects that make up a knowledge base. This knowledge base can be interrogated throughout the design process, during system testing, and during operation, and can be easily modified to reflect design changes in order to maintain a consistent information source. An object-oriented environment for reliability assessment has been developed on a Texas Instruments (TI) Explorer LISP workstation. The program, which directly evaluates system fault trees, utilizes the object-oriented extension to LISP called Flavors that is available on the Explorer. The object representation of a fault tree facilitates the storage and retrieval of information associated with each event in the tree, including tree structural information and intermediate results obtained during the tree reduction process. Reliability data associated with each basic event are stored in the fault tree objects. The object-oriented environment on the Explorer also includes a graphical tree editor, which was modified to display and edit the fault trees.
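    The gate-by-gate evaluation that an object-oriented fault tree enables can be sketched with a few classes. Python stands in here for the Flavors/LISP objects, and independence of basic events is assumed; the event names and probabilities are illustrative:

```python
class Event:
    """Basic event with a known failure probability, stored on the object."""
    def __init__(self, name, p):
        self.name, self.p = name, p
    def probability(self):
        return self.p

class AndGate:
    """Fails only if all inputs fail (independence assumed)."""
    def __init__(self, name, children):
        self.name, self.children = name, children
    def probability(self):
        prob = 1.0
        for c in self.children:
            prob *= c.probability()
        return prob

class OrGate:
    """Fails if any input fails (independence assumed)."""
    def __init__(self, name, children):
        self.name, self.children = name, children
    def probability(self):
        surviving = 1.0
        for c in self.children:
            surviving *= 1.0 - c.probability()
        return 1.0 - surviving
```

    Because every node answers `probability()` itself, intermediate results and per-event reliability data live on the objects, which is exactly the storage-and-retrieval convenience the abstract describes.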

  7. Use of Terrestrial Laser Scanner for Rigid Airport Pavement Management.

    PubMed

    Barbarella, Maurizio; D'Amico, Fabrizio; De Blasiis, Maria Rosaria; Di Benedetto, Alessandro; Fiani, Margherita

    2017-12-26

    The evaluation of the structural efficiency of airport infrastructure is a complex task. Faulting is one of the most important indicators of rigid pavement performance. The aim of our study is to provide a new method for faulting detection and computation on jointed concrete pavements. Today, the assessment of faulting relies on laborious and time-consuming measurements that strongly hinder aircraft traffic. We propose a field procedure for Terrestrial Laser Scanner data acquisition and a computation flow chart to identify and quantify the fault size at each joint of the apron slabs. The least-squares best-fit plane was computed both for the total point cloud and for each individual slab. The attitude of each slab plane with respect to both the adjacent slabs and the apron reference plane was determined from the normal vectors to the surfaces. Faulting was evaluated as the difference in elevation between the slab planes along chosen sections. For a more accurate evaluation of the faulting value, we then considered strips of data covering rectangular areas of different sizes across the joints. The accuracy of the estimated quantities was also assessed.
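    The plane-fitting and elevation-difference computation can be sketched directly. A minimal version, assuming each slab's points have already been segmented out of the cloud (the function names are illustrative, not from the paper):

```python
import numpy as np

def fit_plane(points):
    """Least-squares plane through an N x 3 point cloud.

    Returns (centroid, unit normal); the normal is the right singular
    vector of the centred cloud with the smallest singular value.
    """
    centroid = points.mean(axis=0)
    _, _, vt = np.linalg.svd(points - centroid)
    normal = vt[-1]
    if normal[2] < 0:                 # orient normals consistently upward
        normal = -normal
    return centroid, normal

def plane_z(centroid, normal, xy):
    """Elevation of the plane at horizontal position xy = (x, y)."""
    cx, cy, cz = centroid
    nx, ny, nz = normal
    return cz - (nx * (xy[0] - cx) + ny * (xy[1] - cy)) / nz

def faulting(slab_a, slab_b, joint_xy):
    """Difference in elevation between two slab planes at the joint."""
    return plane_z(*fit_plane(slab_a), joint_xy) - plane_z(*fit_plane(slab_b), joint_xy)
```

    Restricting the fit to narrow strips along each joint, as the authors do, simply means passing smaller point subsets to `fit_plane`.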

  8. Study of a phase-to-ground fault on a 400 kV overhead transmission line

    NASA Astrophysics Data System (ADS)

    Iagăr, A.; Popa, G. N.; Diniş, C. M.

    2018-01-01

    Power utilities need to supply their consumers with a high level of power quality. Because faults on high-voltage and extra-high-voltage transmission lines can cause serious damage to underlying transmission and distribution systems, it is important to examine each fault in detail. In this work we studied a phase-to-ground fault (on phase 1) of the 400 kV Mintia-Arad overhead transmission line. The Indactic® 650 fault-analyzing system was used to record the history of the fault. The signals (analog and digital) recorded by the Indactic® 650 were visualized and analyzed with the Focus program. The fault report summary allowed evaluation of the behavior of the control and protection equipment and determination of the cause and location of the fault.

  9. New Insight into the Role of Tectonics versus Gravitational Deformation in Development of Surface Ruptures along the Ragged Mountain Fault, Katalla, Alaska USA: Applications of High-Resolution Three-Dimensional Terrain Models

    NASA Astrophysics Data System (ADS)

    Heinlein, S. N.; Pavlis, T. L.; Bruhn, R. L.; McCalpin, J. P.

    2017-12-01

    This study evaluates a surface structure using 3D visualization of LiDAR and aerial photography and then analyzes these datasets using structure mapping techniques. The results provide new insight into the role of tectonics versus gravitational deformation. The study area is in southern Alaska at the western edge of the St. Elias Orogen, where the Yakutat microplate is colliding with Alaska. Computer applications were used to produce 3D terrain models for a kinematic assessment of the Ragged Mountain fault, which trends along the length of the east flank of Ragged Mountain. The area contains geomorphic and structural features that are utilized to determine the type of displacement on the fault. Previous studies described the Ragged Mountain fault as a very shallow (8°), west-dipping thrust fault that reactivated in the Late Holocene by westward-directed gravity sliding, and inferred at least 180 m of normal slip, in a direction opposite to the (relative) eastward thrust transport of the structure inferred from stratigraphic juxtaposition. More recently this gravity-sliding hypothesis has been questioned, and this study evaluates an alternative: that uphill-facing normal fault scarps along the Ragged Mountain fault trace represent extension above a buried ramp in a thrust, tested with a fault-parallel flow model of hanging-wall folding and extension. Profiles across the scarp trace were used to illustrate the curvature of the topographic surfaces adjacent to the scarp system and to evaluate their origin. This simple kinematic model tests the hypothesis that extensional fault scarps at the surface are produced by flexure above a deeper ramp in a largely blind thrust system. In the context of this model, the data imply that the extensional scarp structures represent a combination of erosionally modified features overprinted by flexural extension above a thrust system.
    Analyses of scarp heights along the structure, combined with the model, suggest a decrease in Holocene slip along the Ragged Mountain fault from 11.3 m in the south to 0.2 m in the north.

  10. High level organizing principles for display of systems fault information for commercial flight crews

    NASA Technical Reports Server (NTRS)

    Rogers, William H.; Schutte, Paul C.

    1993-01-01

    Advanced fault management aiding concepts for commercial pilots are being developed in a research program at NASA Langley Research Center. One aim of this program is to re-evaluate current design principles for display of fault information to the flight crew: (1) from a cognitive engineering perspective and (2) in light of the availability of new types of information generated by advanced fault management aids. The study described in this paper specifically addresses principles for organizing fault information for display to pilots based on their mental models of fault management.

  11. Fault zone structure and kinematics from lidar, radar, and imagery: revealing new details along the creeping San Andreas Fault

    NASA Astrophysics Data System (ADS)

    DeLong, S.; Donnellan, A.; Pickering, A.

    2017-12-01

    Aseismic fault creep, coseismic fault displacement, distributed deformation, and the relative contribution of each have important bearing on infrastructure resilience, risk reduction, and the study of earthquake physics. Furthermore, the role of interseismic fault creep in rupture propagation scenarios, and consequently its impact on fault segmentation and maximum earthquake magnitudes, is poorly resolved in current rupture forecast models. The creeping section of the San Andreas Fault (SAF) in Central California is an outstanding area for establishing methodology for future scientific response to damaging earthquakes and for characterizing the fine details of crustal deformation. Here, we describe how data from airborne and terrestrial laser scanning, airborne interferometric radar (UAVSAR), and optical data from satellites and UAVs can be used to characterize rates and map patterns of deformation within fault zones of varying complexity and geomorphic expression. We are evaluating laser point-cloud processing, photogrammetric structure from motion, radar interferometry, sub-pixel correlation, and other techniques to characterize the relative ability of each to measure crustal deformation in two and three dimensions through time. We are collecting new data, and synthesizing existing data, from the zone of highest interseismic creep rates along the SAF, where a transition from a single main fault trace to a 1-km-wide extensional stepover occurs. In the stepover region, creep measurements from 100-m-long alignment arrays across the main fault trace reveal lower rates than those in adjacent, geomorphically simpler parts of the fault. This indicates that deformation is distributed across the en echelon subsidiary faults, by creep and/or stick-slip behavior.
Our objectives are to better understand how deformation is partitioned across a fault damage zone, how it is accommodated in the shallow subsurface, and to better characterize the relative amounts of fault creep and potential stick-slip fault behavior across the plate boundary at these sites in order to evaluate the potential for rupture propagation in large earthquakes.
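    Sub-pixel correlation, one of the techniques mentioned, can be illustrated in 1-D: an integer-lag cross-correlation peak refined by parabolic interpolation. This is a sketch of the general idea, not the authors' processing chain:

```python
import numpy as np

def subpixel_offset(a, b):
    """Estimate the shift of signal b relative to signal a.

    Finds the integer lag that maximizes the cross-correlation, then
    refines it by fitting a parabola through the peak and its neighbours.
    """
    a = a - a.mean()
    b = b - b.mean()
    corr = np.correlate(b, a, mode="full")
    k = int(np.argmax(corr))
    # Parabola through the peak sample and its neighbours refines the lag.
    if 0 < k < len(corr) - 1:
        y0, y1, y2 = corr[k - 1], corr[k], corr[k + 1]
        denom = y0 - 2.0 * y1 + y2
        frac = 0.5 * (y0 - y2) / denom if denom != 0 else 0.0
    else:
        frac = 0.0
    return (k - (len(a) - 1)) + frac
```

    Image-based deformation mapping applies the same principle in 2-D, patch by patch, turning pixel-scale offsets between acquisitions into displacement fields.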

  12. Experimental Evaluation of a Structure-Based Connectionist Network for Fault Diagnosis of Helicopter Gearboxes

    NASA Technical Reports Server (NTRS)

    Jammu, V. B.; Danai, K.; Lewicki, D. G.

    1998-01-01

    This paper presents the experimental evaluation of the Structure-Based Connectionist Network (SBCN) fault diagnostic system introduced in the preceding article. For this, vibration data from two different helicopter gearboxes, the OH-58A and the S-61, are used. A salient feature of SBCN is its reliance on knowledge of the gearbox structure and the type of features obtained from processed vibration signals as a substitute for training. To formulate this knowledge, approximate vibration transfer models are developed for the two gearboxes and utilized to derive the connection weights representing the influence of component faults on vibration features. The validity of the structural influences is evaluated by comparing them with those obtained from experimental RMS values, and also by comparing them with the weights of a connectionist network trained through supervised learning. The results indicate general agreement between the modeled and experimentally obtained influences. The vibration data from the two gearboxes are also used to evaluate the performance of SBCN in fault diagnosis. The diagnostic results indicate that the SBCN is effective in detecting the presence of faults and isolating them within gearbox subsystems based on structural influences, but its performance in isolating faulty components is not as good, mainly due to the lack of appropriate vibration features.
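    The structure-based idea, using influence weights derived from the gearbox layout in place of trained ones, can be caricatured in a few lines. The scoring rule below is hypothetical and only illustrates how structural influences map feature anomalies to component fault scores:

```python
import numpy as np

def fault_scores(influence, feature_flags):
    """Score candidate faulty components from vibration-feature anomalies.

    'influence' is a (components x features) matrix encoding how strongly
    a fault in each component is expected to affect each vibration feature
    (in SBCN these weights come from structural models, not training);
    'feature_flags' holds per-feature anomaly levels in [0, 1]. Scores are
    normalized so they can be read as relative fault likelihoods.
    """
    influence = np.asarray(influence, dtype=float)
    flags = np.asarray(feature_flags, dtype=float)
    scores = influence @ flags
    total = scores.sum()
    return scores / total if total > 0 else scores
```

    The paper's finding that subsystem-level isolation works better than component-level isolation corresponds to rows of such a matrix being too similar to separate individual components.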

  13. Role of Geomechanics in Assessing the Feasibility of CO2 Sequestration in Depleted Hydrocarbon Sandstone Reservoirs

    NASA Astrophysics Data System (ADS)

    Fang, Zhi; Khaksar, Abbas

    2013-05-01

    Carbon dioxide (CO2) sequestration in depleted sandstone hydrocarbon reservoirs can be complicated by a number of geomechanical problems associated with well drilling, completions, and CO2 injection. The initial production of hydrocarbons (gas or oil), the resulting pressure depletion, and the associated reduction in horizontal stresses (e.g., fracture gradient) narrow the operational drilling mud weight window, which can exacerbate wellbore instabilities during infill drilling. Well completions (casing, liners, etc.) may experience solids flowback into the injector wells when injection is interrupted by CO2 supply issues or required system maintenance. CO2 injection alters the pressure and temperature in the near-wellbore region, which can cause fault reactivation or thermal fracturing. In addition, the injection pressure may exceed the maximum sustainable storage pressure and cause fracturing and fault reactivation within the reservoirs or bounding formations. A systematic approach has been developed for geomechanical assessments of CO2 storage in depleted reservoirs. The approach requires a robust field geomechanical model with its components derived from drilling and production data as well as from wireline logs of historical wells. This approach is described in detail in this paper, together with a recent study on a depleted gas field in the North Sea considered for CO2 sequestration. The case study shows that there is a limit on maximum allowable well inclination, 45° if aligned with the maximum horizontal stress direction and 65° if aligned with the minimum horizontal stress direction, beyond which wellbore failure would become critical while drilling. Evaluation of sanding risks indicates that no sand control installations would be needed for the injector wells.
Fracturing and faulting assessments confirm that the fracturing pressure of caprock is significantly higher than the planned CO2 injection and storage pressures for an ideal case, in which the total field horizontal stresses increase with the reservoir re-pressurization in a manner opposite to their reduction with the reservoir depletion. However, as the most pessimistic case of assuming the total horizontal stresses staying the same over the CO2 injection, faulting could be reactivated on a fault with the least favorable geometry once the reservoir pressure reaches approximately 7.7 MPa. In addition, the initial CO2 injection could lead to a high risk that a fault with a cohesion of less than 5.1 MPa could be activated due to the significant effect of reduced temperature on the field stresses around the injection site.
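    The fault-reactivation screening described above rests on the Coulomb failure criterion. As a hedged illustration (not the study's actual model), the check can be sketched as follows; only the 5.1 MPa cohesion figure comes from the abstract, while the stresses and friction coefficient are invented for the example:

```python
# Hedged sketch of a Coulomb fault-reactivation screen. All stress values
# and the friction coefficient are illustrative assumptions; only the
# 5.1 MPa cohesion figure comes from the abstract above.

def fault_reactivates(shear_stress, normal_stress, pore_pressure,
                      cohesion, friction=0.6):
    """Mohr-Coulomb criterion (all values in MPa): slip occurs when the
    resolved shear stress reaches cohesion plus friction times the
    effective normal stress."""
    effective_normal = normal_stress - pore_pressure
    return shear_stress >= cohesion + friction * effective_normal

# Raising reservoir pressure lowers the effective normal stress and can
# bring a critically oriented fault to failure.
for pressure in (2.0, 5.0, 7.7, 10.0):  # illustrative reservoir pressures
    print(pressure, fault_reactivates(shear_stress=13.0, normal_stress=20.0,
                                      pore_pressure=pressure, cohesion=5.1))
```

Lowering the effective normal stress by raising pore pressure moves the fault toward the failure envelope, which is why a maximum sustainable storage pressure exists.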

  14. DEPEND: A simulation-based environment for system level dependability analysis

    NASA Technical Reports Server (NTRS)

    Goswami, Kumar; Iyer, Ravishankar K.

    1992-01-01

    The design and evaluation of highly reliable computer systems is a complex task. Designers mostly develop such systems based on prior knowledge and experience, and occasionally from analytical evaluations of simplified designs. A simulation-based environment called DEPEND, geared especially toward the design and evaluation of fault-tolerant architectures, is presented. DEPEND is unique in that it exploits the properties of object-oriented programming to provide a flexible framework with which a user can rapidly model and evaluate various fault-tolerant systems. The key features of the DEPEND environment are described, and its capabilities are illustrated with a detailed analysis of a real design. In particular, DEPEND is used to simulate the Unix-based Tandem Integrity fault-tolerant system and to evaluate how well it handles near-coincident errors caused by correlated and latent faults. Issues such as memory scrubbing, re-integration policies, and workload-dependent repair times, which affect how the system handles near-coincident errors, are also evaluated. The method used by DEPEND to simulate error latency and the time-acceleration technique that provides enormous simulation speed-up are also discussed. Unlike other simulation-based dependability studies, the use of these approaches and the accuracy of the simulation model are validated by comparing simulation results with measurements obtained from fault injection experiments conducted on a production Tandem Integrity machine.

  15. A Unified Nonlinear Adaptive Approach for Detection and Isolation of Engine Faults

    NASA Technical Reports Server (NTRS)

    Tang, Liang; DeCastro, Jonathan A.; Zhang, Xiaodong; Farfan-Ramos, Luis; Simon, Donald L.

    2010-01-01

    A challenging problem in aircraft engine health management (EHM) system development is to detect and isolate faults in system components (e.g., compressor, turbine), actuators, and sensors. Existing nonlinear EHM methods often deal with component faults, actuator faults, and sensor faults separately, which may potentially lead to incorrect diagnostic decisions and unnecessary maintenance. Therefore, it would be ideal to address sensor faults, actuator faults, and component faults under one unified framework. This paper presents a systematic and unified nonlinear adaptive framework for detecting and isolating sensor faults, actuator faults, and component faults for aircraft engines. The fault detection and isolation (FDI) architecture consists of a parallel bank of nonlinear adaptive estimators. Adaptive thresholds are appropriately designed such that, in the presence of a particular fault, all components of the residual generated by the adaptive estimator corresponding to the actual fault type remain below their thresholds. If the faults are sufficiently different, then at least one component of the residual generated by each remaining adaptive estimator should exceed its threshold. Therefore, based on the specific response of the residuals, sensor faults, actuator faults, and component faults can be isolated. The effectiveness of the approach was evaluated using the NASA C-MAPSS turbofan engine model, and simulation results are presented.
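    The isolation logic of the estimator bank can be sketched in a few lines. This is an illustrative simplification, not the paper's adaptive-estimator implementation; the residual values, threshold values, and fault names are hypothetical:

```python
# Illustrative sketch of the isolation rule described above: a bank of
# estimators, one per hypothesized fault, each producing a residual vector
# with matching thresholds. A fault is isolated when exactly one estimator's
# residuals all remain below threshold while every other estimator has at
# least one component above threshold. All numbers here are hypothetical.

def isolate_fault(residuals, thresholds):
    """residuals, thresholds: dicts mapping fault hypothesis -> list of floats."""
    consistent = [
        fault for fault in residuals
        if all(r < t for r, t in zip(residuals[fault], thresholds[fault]))
    ]
    return consistent[0] if len(consistent) == 1 else None  # None = ambiguous

# Hypothetical residuals from three estimators (sensor / actuator / component):
residuals = {
    "sensor_fault":    [0.2, 0.1],   # all components below threshold
    "actuator_fault":  [1.5, 0.3],   # first component exceeds threshold
    "component_fault": [0.4, 2.1],   # second component exceeds threshold
}
thresholds = {fault: [1.0, 1.0] for fault in residuals}
print(isolate_fault(residuals, thresholds))  # -> sensor_fault
```

In the paper the thresholds are adaptive rather than constant, but the decision structure — one consistent estimator, all others violated — is the same.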

  16. The determination of the stacking fault energy in copper-nickel alloys

    NASA Technical Reports Server (NTRS)

    Leighly, H. P., Jr.

    1982-01-01

    Methods for determining the stacking fault energies of a series of nickel-copper alloys to gain an insight into the embrittling effect of hydrogen are evaluated. Plans for employing weak beam dark field electron microscopy to determine stacking fault energies are outlined.

  17. Preservation of amorphous ultrafine material: A proposed proxy for slip during recent earthquakes on active faults

    PubMed Central

    Hirono, Tetsuro; Asayama, Satoru; Kaneki, Shunya; Ito, Akihiro

    2016-01-01

    The criteria for designating an “Active Fault” are important not only for understanding regional tectonics, but also a paramount issue for assessing the earthquake risk of faults near important structures such as nuclear power plants. Here we propose a proxy, based on the preservation of amorphous ultrafine particles, to assess fault activity within the last millennium. X-ray diffraction data and electron microscope observations of samples from an active fault demonstrated the preservation of large amounts of amorphous ultrafine particles in two slip zones that last ruptured in 1596 and 1999, respectively. A chemical kinetic evaluation of the dissolution process indicated that such particles could survive for centuries, which is consistent with the observations. Thus, preservation of amorphous ultrafine particles in a fault may be valuable for assessing the fault’s latest activity, aiding efforts to evaluate faults that may damage critical facilities in tectonically active zones. PMID:27827413

  18. Evaluation of fault-tolerant parallel-processor architectures over long space missions

    NASA Technical Reports Server (NTRS)

    Johnson, Sally C.

    1989-01-01

    The impact of a five-year space mission environment on fault-tolerant parallel processor architectures is examined. The target application is a Strategic Defense Initiative (SDI) satellite requiring 256 parallel processors to provide the computational throughput. The reliability requirements are that the system still be operational after five years with 0.99 probability and that the probability of system failure during one-half hour of full operation be less than 10^-7. The fault tolerance features an architecture must possess to meet these reliability requirements are presented, many potential architectures are briefly evaluated, and one candidate architecture, the Charles Stark Draper Laboratory's Fault-Tolerant Parallel Processor (FTPP), is evaluated in detail. A methodology for designing a preliminary system configuration to meet the reliability and performance requirements of the mission is then presented and demonstrated by designing an FTPP configuration.

  19. Implementing a real time reasoning system for robust diagnosis

    NASA Technical Reports Server (NTRS)

    Hill, Tim; Morris, William; Robertson, Charlie

    1993-01-01

    The objective of the Thermal Control System Automation Project (TCSAP) is to develop an advanced fault detection, isolation, and recovery (FDIR) capability for use on the Space Station Freedom (SSF) External Active Thermal Control System (EATCS). Real-time monitoring, control, and diagnosis of the EATCS will be performed with a knowledge based system (KBS). Implementation issues for the current version of the KBS are discussed.

  20. Hardening digital systems with distributed functionality: robust networks

    NASA Astrophysics Data System (ADS)

    Vaskova, Anna; Portela-Garcia, Marta; Garcia-Valderas, Mario; López-Ongil, Celia; Portilla, Jorge; Valverde, Juan; de la Torre, Eduardo; Riesgo, Teresa

    2013-05-01

    Collaborative hardening and hardware redundancy are nowadays the most interesting solutions in terms of fault tolerance achieved and the low extra cost imposed on the project budget. Thanks to the powerful and cheap digital devices available on the market, extra processing capabilities can be used for redundant tasks, not only in early data processing (sensed data) but also in routing and interfacing.

  1. An Efficient, Scalable and Robust P2P Overlay for Autonomic Communication

    NASA Astrophysics Data System (ADS)

    Li, Deng; Liu, Hui; Vasilakos, Athanasios

    The term Autonomic Communication (AC) refers to self-managing systems which are capable of supporting self-configuration, self-healing and self-optimization. However, information reflection and collection, lack of centralized control, non-cooperation and so on are just some of the challenges within AC systems. Since many self-* properties (e.g. self-configuration, self-optimization, self-healing, and self-protection) are achieved by a group of autonomous entities that coordinate in a peer-to-peer (P2P) fashion, the door has opened to migrating research techniques from P2P systems. P2P's meaning can be better understood through a set of key characteristics similar to those of AC: decentralized organization, self-organizing nature (i.e. adaptability), resource sharing and aggregation, and fault tolerance. However, not all P2P systems are compatible with AC. Unstructured systems are designed more specifically than structured systems for the heterogeneous Internet environment, where the nodes' persistence and availability are not guaranteed. Motivated by the challenges in AC and based on a comprehensive analysis of popular P2P applications, three correlative standards for evaluating the compatibility of a P2P system with AC are presented in this chapter. According to these standards, a novel Efficient, Scalable and Robust (ESR) P2P overlay is proposed. Differing from current structured and unstructured, or meshed and tree-like, P2P overlays, the ESR is a wholly new three-dimensional structure that improves routing efficiency, while information exchange takes place among immediate neighbors using local information, making the system scalable and fault-tolerant. Furthermore, rather than a complex game theory or incentive mechanism, a simple but effective punishment mechanism is presented, based on a new ID structure that guarantees the continuity of each node's record in order to discourage negative behavior in an autonomous environment such as AC.

  2. Evaluation of the Location and Recency of Faulting Near Prospective Surface Facilities in Midway Valley, Nye County, Nevada

    USGS Publications Warehouse

    Swan, F.H.; Wesling, J.R.; Angell, M.M.; Thomas, A.P.; Whitney, J.W.; Gibson, J.D.

    2001-01-01

    Evaluation of surface faulting that may pose a hazard to prospective surface facilities is an important element of the tectonic studies for the potential Yucca Mountain high-level radioactive waste repository in southwestern Nevada. For this purpose, a program of detailed geologic mapping and trenching was done to obtain surface and near-surface geologic data that are essential for determining the location and recency of faults at a prospective surface-facilities site located east of Exile Hill in Midway Valley, near the eastern base of Yucca Mountain. The dominant tectonic features in the Midway Valley area are the north- to northeast-trending, west-dipping normal faults that bound the Midway Valley structural block: the Bow Ridge fault on the west side of Exile Hill and the Paintbrush Canyon fault on the east side of the valley. Trenching of Quaternary sediments has exposed evidence of displacements demonstrating that these block-bounding faults repeatedly ruptured the surface during the middle to late Quaternary. Geologic mapping, subsurface borehole and geophysical data, and the results of trenching activities indicate the presence of north- to northeast-trending faults and northwest-trending faults in Tertiary volcanic rocks beneath alluvial and colluvial sediments near the prospective surface-facilities site. North- to northeast-trending faults include the Exile Hill fault along the eastern base of Exile Hill and faults to the east beneath the surficial deposits of Midway Valley. These faults have no geomorphic expression, but two north- to northeast-trending zones of fractures exposed in excavated profiles of middle to late Pleistocene deposits at the prospective surface-facilities site appear to be associated with these faults. Northwest-trending faults include the West Portal and East Portal faults, but no disruption of Quaternary deposits by these faults is evident. The western zone of fractures is associated with the Exile Hill fault. 
The eastern zone of fractures is within Quaternary alluvial sediments, but no bedrock was encountered in trenches and soil pits in this part of the prospective surface facilities site; thus, the direct association of this zone with one or more bedrock faults is uncertain. No displacement of lithologic contacts and soil horizons could be detected in the fractured Quaternary deposits. The results of these investigations imply the absence of any appreciable late Quaternary faulting activity at the prospective surface-facilities site.

  3. Multi-Scale Structure and Earthquake Properties in the San Jacinto Fault Zone Area

    NASA Astrophysics Data System (ADS)

    Ben-Zion, Y.

    2014-12-01

    I review multi-scale multi-signal seismological results on structure and earthquake properties within and around the San Jacinto Fault Zone (SJFZ) in southern California. The results are based on data of the southern California and ANZA networks covering scales from a few km to over 100 km, additional near-fault seismometers and linear arrays with instrument spacing 25-50 m that cross the SJFZ at several locations, and a dense rectangular array with >1100 vertical-component nodes separated by 10-30 m centered on the fault. The structural studies utilize earthquake data to image the seismogenic sections and ambient noise to image the shallower structures. The earthquake studies use waveform inversions and additional time domain and spectral methods. We observe pronounced damage regions with low seismic velocities and anomalous Vp/Vs ratios around the fault, and clear velocity contrasts across various sections. The damage zones and velocity contrasts produce fault zone trapped and head waves at various locations, along with time delays, anisotropy and other signals. The damage zones follow a flower shape with depth; in places with velocity contrast they are offset to the stiffer side at depth, as expected for bimaterial ruptures with persistent propagation direction. Analysis of PGV and PGA indicates clear persistent directivity at given fault sections and overall motion amplification within several km around the fault. Clear temporal changes of velocities, probably involving primarily the shallow material, are observed in response to seasonal, earthquake and other loadings. Full source tensor properties of M>4 earthquakes in the complex trifurcation area include a statistically robust small isotropic component, likely reflecting dynamic generation of rock damage in the source volumes. 
The dense fault zone instruments record seismic "noise" at frequencies >200 Hz that can be used for imaging and monitoring the shallow material with high space and time details, and numerous minute local earthquakes that contribute to the high frequency "noise". Updated results will be presented at the meeting. *The studies have been done in collaboration with Frank Vernon, Amir Allam, Dimitri Zigone, Zach Ross, Gregor Hillers, Ittai Kurzon, Michel Campillo, Philippe Roux, Lupei Zhu, Dan Hollis, Mitchell Barklage and others.

  4. Investigation of fault modes in permanent magnet synchronous machines for traction applications

    NASA Astrophysics Data System (ADS)

    Choi, Gilsu

    Over the past few decades, electric motor drives have been more widely adopted to power the transportation sector to reduce our dependence on foreign oil and carbon emissions. Permanent magnet synchronous machines (PMSMs) are popular in many applications in the aerospace and automotive industries that require high power density and high efficiency. However, the presence of magnets that cannot be turned off in the event of a fault has always been an issue that hinders adoption of PMSMs in these demanding applications. This work investigates the design and analysis of PMSMs for automotive traction applications with particular emphasis on fault-mode operation caused by faults appearing at the terminals of the machine. New models and analytical techniques are introduced for evaluating the steady-state and dynamic response of PMSM drives to various fault conditions. Attention is focused on modeling the PMSM drive including nonlinear magnetic behavior under several different fault conditions, evaluating the risks of irreversible demagnetization caused by the large fault currents, as well as developing fault mitigation techniques in terms of both the fault currents and demagnetization risks. Of the major classes of machine terminal faults that can occur in PMSMs, short-circuit (SC) faults produce much more dangerous fault currents than open-circuit faults. The impact of different PMSM topologies and parameters on their responses to symmetrical and asymmetrical short-circuit (SSC & ASC) faults has been investigated. A detailed investigation on both the SSC and ASC faults is presented including both closed-form and numerical analysis. The demagnetization characteristics caused by high fault-mode stator currents (i.e., armature reaction) for different types of PMSMs are investigated. A thorough analysis and comparison of the relative demagnetization vulnerability for different types of PMSMs is presented. 
This analysis includes design guidelines and recommendations for minimizing the demagnetization risks while examining corresponding trade-offs. Two PM machines have been tested to validate the predicted fault currents and braking torque as well as demagnetization risks in PMSM drives. The generality and scalability of key results have also been demonstrated by analyzing several PM machines with a variety of stator, rotor, and winding configurations for various power ratings.
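    The short-circuit discussion above connects to a classical first-cut estimate: at high speed, neglecting resistance, the steady-state symmetrical short-circuit current of a PMSM approaches the magnet flux linkage divided by the d-axis inductance. The sketch below is a hedged illustration with invented parameter values, not the paper's closed-form or numerical analysis:

```python
# Hedged first-cut screening estimate for PMSM short-circuit risk (not the
# thesis's detailed model): in steady state at high speed, neglecting stator
# resistance, the symmetrical short-circuit current approaches
#   I_sc ~= psi_m / L_d
# (magnet flux linkage over d-axis inductance), and the demagnetizing d-axis
# armature reaction scales with this current. Parameter values are invented.

def steady_state_ssc_current(psi_m_wb, l_d_henry):
    """Approximate steady-state symmetrical short-circuit current (A)."""
    return psi_m_wb / l_d_henry

# Illustrative machine: 0.1 Wb magnet flux linkage, 1 mH d-axis inductance.
i_sc = steady_state_ssc_current(0.1, 0.001)
print(i_sc)
```

The estimate makes the design trade-off visible: lowering I_sc (for fault safety) means raising L_d or lowering magnet flux, both of which interact with torque density — the kind of trade-off the thesis examines in detail.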

  5. Sensor Data Fusion with Z-Numbers and Its Application in Fault Diagnosis

    PubMed Central

    Jiang, Wen; Xie, Chunhe; Zhuang, Miaoyan; Shou, Yehang; Tang, Yongchuan

    2016-01-01

    Sensor data fusion technology is widely employed in fault diagnosis. The information in a sensor data fusion system is characterized not only by fuzziness, but also by partial reliability. Uncertain information from sensors, including randomness, fuzziness, etc., has been extensively studied recently. However, the reliability of a sensor is often overlooked or cannot be analyzed adequately. A Z-number, Z = (A, B), can represent the fuzziness and the reliability of information simultaneously, where the first component A represents a fuzzy restriction on the values of uncertain variables and the second component B is a measure of the reliability of A. In order to model and process the uncertainties in a sensor data fusion system reasonably, this paper proposes a novel method combining the Z-number and Dempster–Shafer (D-S) evidence theory, in which the Z-number is used to model the fuzziness and reliability of the sensor data and the D-S evidence theory is used to fuse the uncertain information of Z-numbers. The main advantages of the proposed method are that it provides a more robust measure of reliability for the sensor data, and that the complementary information from multiple sensors reduces the uncertainty of fault recognition, thus enhancing the reliability of fault detection. PMID:27649193
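    A minimal sketch of the fusion step, under stated assumptions: the reliability component B of each Z-number discounts the sensor's mass function before Dempster's rule combines the discounted evidence. The fault labels and numbers are illustrative, and treating B as a crisp discount factor is a simplification of the paper's fuzzy treatment:

```python
# Simplified sketch of Z-number-weighted evidence fusion (illustrative; the
# paper treats B as fuzzy, here it is a crisp discount factor). Frame of
# discernment: two hypothetical fault classes {"F1", "F2"}.

def discount(masses, reliability):
    """Shift (1 - reliability) of each focal element's mass to ignorance."""
    out = {focal: m * reliability for focal, m in masses.items()}
    theta = frozenset({"F1", "F2"})  # total ignorance
    out[theta] = out.get(theta, 0.0) + (1.0 - reliability)
    return out

def dempster(m1, m2):
    """Dempster's rule of combination with conflict renormalization."""
    combined, conflict = {}, 0.0
    for a, ma in m1.items():
        for b, mb in m2.items():
            inter = a & b
            if inter:
                combined[inter] = combined.get(inter, 0.0) + ma * mb
            else:
                conflict += ma * mb
    return {k: v / (1.0 - conflict) for k, v in combined.items()}

# Two sensors reporting on the same fault, with different reliabilities:
s1 = discount({frozenset({"F1"}): 0.9, frozenset({"F2"}): 0.1}, reliability=0.8)
s2 = discount({frozenset({"F1"}): 0.7, frozenset({"F2"}): 0.3}, reliability=0.9)
fused = dempster(s1, s2)
print(max(fused, key=fused.get))  # fused belief concentrates on F1
```

Discounting keeps an unreliable sensor from dominating the fusion: as reliability drops toward zero, its mass migrates to total ignorance and contributes nothing to the combined verdict.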

  6. Tremor-tide correlations and near-lithostatic pore pressure on the deep San Andreas fault.

    PubMed

    Thomas, Amanda M; Nadeau, Robert M; Bürgmann, Roland

    2009-12-24

    Since its initial discovery nearly a decade ago, non-volcanic tremor has provided information about a region of the Earth that was previously thought incapable of generating seismic radiation. A thorough explanation of the geologic process responsible for tremor generation has, however, yet to be determined. Owing to their location at the plate interface, temporal correlation with geodetically measured slow-slip events and dominant shear wave energy, tremor observations in southwest Japan have been interpreted as a superposition of many low-frequency earthquakes that represent slip on a fault surface. Fluids may also be fundamental to the failure process in subduction zone environments, as teleseismic and tidal modulation of tremor in Cascadia and Japan and high Poisson ratios in both source regions are indicative of pressurized pore fluids. Here we identify a robust correlation between extremely small, tidally induced shear stress parallel to the San Andreas fault and non-volcanic tremor activity near Parkfield, California. We suggest that this tremor represents shear failure on a critically stressed fault in the presence of near-lithostatic pore pressure. There are a number of similarities between tremor in subduction zone environments, such as Cascadia and Japan, and tremor on the deep San Andreas transform, suggesting that the results presented here may also be applicable in other tectonic settings.

  7. A Hybrid Generalized Hidden Markov Model-Based Condition Monitoring Approach for Rolling Bearings

    PubMed Central

    Liu, Jie; Hu, Youmin; Wu, Bo; Wang, Yan; Xie, Fengyun

    2017-01-01

    The operating condition of rolling bearings affects productivity and quality in rotating machine processes. Developing an effective rolling bearing condition monitoring approach is critical to accurately identifying the operating condition. In this paper, a hybrid generalized hidden Markov model-based condition monitoring approach for rolling bearings is proposed, in which interval-valued features are used to efficiently recognize and classify machine states. In the proposed method, vibration signals are decomposed into multiple modes with variational mode decomposition (VMD). Parameters of the VMD, in the form of generalized intervals, provide a concise representation of aleatory and epistemic uncertainty and improve the robustness of identification. The multi-scale permutation entropy method is applied to extract state features from the decomposed signals under different operating conditions. Traditional principal component analysis is adopted to reduce feature size and computational cost. With the extracted feature information, the generalized hidden Markov model, based on generalized interval probability, is used to recognize and classify fault types and fault severity levels. Finally, the experimental results show that the proposed method is effective at recognizing and classifying the fault types and fault severity levels of rolling bearings. The monitoring method is also efficient enough to quantify the two uncertainty components. PMID:28524088
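    The permutation-entropy feature used above can be illustrated with a single-scale sketch (the paper applies a multi-scale variant to the VMD-decomposed signals; the order and delay parameters here are illustrative choices):

```python
# Single-scale permutation entropy, a sketch of the feature-extraction step
# described above (the paper uses a multi-scale variant on VMD modes).
import math

def permutation_entropy(signal, order=3, delay=1):
    """Normalized Shannon entropy of ordinal patterns, in [0, 1].
    Low values mean regular structure; high values mean disorder."""
    counts = {}
    n = len(signal) - (order - 1) * delay
    for i in range(n):
        window = signal[i:i + order * delay:delay]
        # Ordinal pattern: the ranking of sample positions by value.
        pattern = tuple(sorted(range(order), key=window.__getitem__))
        counts[pattern] = counts.get(pattern, 0) + 1
    probs = [c / n for c in counts.values()]
    h = sum(p * math.log(1.0 / p) for p in probs)
    return h / math.log(math.factorial(order))  # normalize by max entropy

# A monotonic signal has a single ordinal pattern, hence entropy 0.
print(permutation_entropy([1, 2, 3, 4, 5, 6]))
```

A healthy bearing's vibration tends to produce different entropy values than a faulted one, which is what makes the quantity usable as a state feature for the downstream classifier.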

  8. ALLIANCE: An architecture for fault tolerant, cooperative control of heterogeneous mobile robots

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Parker, L.E.

    1995-02-01

    This research addresses the problem of achieving fault tolerant cooperation within small- to medium-sized teams of heterogeneous mobile robots. The author describes a novel behavior-based, fully distributed architecture, called ALLIANCE, that utilizes adaptive action selection to achieve fault tolerant cooperative control in robot missions involving loosely coupled, largely independent tasks. The robots in this architecture possess a variety of high-level functions that they can perform during a mission, and must at all times select an appropriate action based on the requirements of the mission, the activities of other robots, the current environmental conditions, and their own internal states. Since such cooperative teams often work in dynamic and unpredictable environments, the software architecture allows the team members to respond robustly and reliably to unexpected environmental changes and modifications in the robot team that may occur due to mechanical failure, the learning of new skills, or the addition or removal of robots from the team by human intervention. After presenting ALLIANCE, the author describes in detail experimental results of an implementation of this architecture on a team of physical mobile robots performing a cooperative box pushing demonstration. These experiments illustrate the ability of ALLIANCE to achieve adaptive, fault-tolerant cooperative control amidst dynamic changes in the capabilities of the robot team.

  9. Framework for a space shuttle main engine health monitoring system

    NASA Technical Reports Server (NTRS)

    Hawman, Michael W.; Galinaitis, William S.; Tulpule, Sharayu; Mattedi, Anita K.; Kamenetz, Jeffrey

    1990-01-01

    A framework developed for a health management system (HMS) which is directed at improving the safety of operation of the Space Shuttle Main Engine (SSME) is summarized. An emphasis was placed on near term technology through requirements to use existing SSME instrumentation and to demonstrate the HMS during SSME ground tests within five years. The HMS framework was developed through an analysis of SSME failure modes, fault detection algorithms, sensor technologies, and hardware architectures. A key feature of the HMS framework design is that a clear path from the ground test system to a flight HMS was maintained. Fault detection techniques based on time series, nonlinear regression, and clustering algorithms were developed and demonstrated on data from SSME ground test failures. The fault detection algorithms exhibited 100 percent detection of faults, had an extremely low false alarm rate, and were robust to sensor loss. These algorithms were incorporated into a hierarchical decision making strategy for overall assessment of SSME health. A preliminary design for a hardware architecture capable of supporting real time operation of the HMS functions was developed. Utilizing modular, commercial off-the-shelf components produced a reliable low cost design with the flexibility to incorporate advances in algorithm and sensor technology as they become available.

  10. Aircraft applications of fault detection and isolation techniques

    NASA Astrophysics Data System (ADS)

    Marcos Esteban, Andres

    In this thesis, the problems of fault detection and isolation and fault-tolerant systems are studied from the perspective of LTI frequency-domain, model-based techniques. Emphasis is placed on the applicability of these LTI techniques to nonlinear models, especially aerospace systems. Two applications of H-infinity LTI fault diagnosis are given using an open-loop (no controller) design approach: one for the longitudinal motion of a Boeing 747-100/200 aircraft, the other for a turbofan jet engine. An algorithm formalizing a robust identification approach based on model-validation ideas is also given and applied to the same jet engine. A general linear fractional transformation formulation is given in terms of the Youla and dual Youla parameterizations for the integrated (control and diagnosis filter) approach. This formulation provides better insight into the trade-off between the control and diagnosis objectives. It also provides the basic groundwork for the development of nested schemes for the integrated approach. These nested structures allow iterative improvements of the control/filter Youla parameters based on successive identification of the system uncertainty (as given by the dual Youla parameter). The thesis concludes with an application of H-infinity LTI techniques to the integrated design for the longitudinal motion of the Boeing 747-100/200 model.

  11. On-line experimental validation of a model-based diagnostic algorithm dedicated to a solid oxide fuel cell system

    NASA Astrophysics Data System (ADS)

    Polverino, Pierpaolo; Esposito, Angelo; Pianese, Cesare; Ludwig, Bastian; Iwanschitz, Boris; Mai, Andreas

    2016-02-01

    In the current energy scenario, Solid Oxide Fuel Cells (SOFCs) exhibit appealing features which make them suitable for environmentally friendly power production, especially in stationary applications. An example is represented by micro combined heat and power (μ-CHP) generation units based on SOFC stacks, which are able to produce electric and thermal power with high efficiency and low pollutant and greenhouse gas emissions. However, the main limitations to their diffusion into the mass market are high maintenance and production costs and short lifetime. To improve these aspects, current research focuses on the development of robust and generalizable diagnostic techniques aimed at detecting and isolating faults within the entire system (i.e., SOFC stack and balance of plant). Coupled with appropriate recovery strategies, diagnosis can prevent undesired system shutdowns during faulty conditions, with a consequent increase in lifetime and reduction in maintenance costs. This paper deals with the on-line experimental validation of a model-based diagnostic algorithm applied to a pre-commercial SOFC system. The proposed algorithm exploits a Fault Signature Matrix based on a Fault Tree Analysis and improved through fault simulations. The algorithm is characterized on the considered system and validated by means of experimental induction of faulty states in controlled conditions.

  12. Strain Accumulation and Release of the Gorkha, Nepal, Earthquake (Mw 7.8, 25 April 2015)

    NASA Astrophysics Data System (ADS)

    Morsut, Federico; Pivetta, Tommaso; Braitenberg, Carla; Poretti, Giorgio

    2017-08-01

    The near-fault GNSS records of strong ground motion are the most sensitive for defining the fault rupture. Here, two unpublished GNSS records are studied: a near-fault strong-motion station (NAGA) and a distant station in a poorly covered area (PYRA). The station NAGA, located above the Gorkha fault, sensed a southward displacement of almost 1.7 m. The PYRA station, positioned at a distance of about 150 km from the fault near the Pyramid station in the Everest region, showed static displacements on the order of a few millimeters. The observed displacements were compared with those calculated for a finite-fault model in an elastic halfspace. We evaluated two fault-slip models, one derived from seismological and one from geodetic studies: the comparison of the observed and modelled fields reveals that our displacements agree better with the geodetically derived fault model than with the seismological one. Finally, we evaluate the yearly strain rate at four GNSS stations in the area that recorded the deformation field continuously for at least 5 years. The strain rate is then compared with the strain released by the Gorkha earthquake, leading to an interval of 235 years to store a comparable amount of elastic energy. The three near-fault GNSS stations require a slightly wider fault than published, in the case of an equivalent homogeneous rupture, with an average uniform slip of 3.5 m occurring over an area of 150 km × 60 km.
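    The 235-year figure is a strain-budget estimate: the time needed to re-accumulate the released deformation at the interseismic loading rate. A back-of-the-envelope sketch follows, with an assumed loading rate chosen only to reproduce the order of magnitude (it is not a value from the study):

```python
# Back-of-the-envelope sketch of the recurrence estimate discussed above:
# time to re-accumulate the coseismic slip at the interseismic loading rate.
# The 15 mm/yr rate below is an illustrative assumption, not a study value.

def recurrence_interval_years(coseismic_slip_m, loading_rate_mm_per_yr):
    """Years needed to accumulate the given coseismic slip at the given rate."""
    return coseismic_slip_m * 1000.0 / loading_rate_mm_per_yr

# ~3.5 m of average slip at an assumed ~15 mm/yr of elastic loading:
print(round(recurrence_interval_years(3.5, 15.0)))  # ~233 years
```

The same-order agreement with the paper's 235-year figure is expected by construction here; the study derives its interval from measured GNSS strain rates rather than an assumed slip rate.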

  13. The European FP7 ULTimateCO2 project: A comprehensive approach to study the long term fate of CO2 geological storage sites

    NASA Astrophysics Data System (ADS)

    Audigane, P.; Brown, S.; Dimier, A.; Pearce, J.; Frykman, P.; Maurand, N.; Le Gallo, Y.; Spiers, C. J.; Cremer, H.; Rutters, H.; Yalamas, T.

    2013-12-01

    The European FP7 ULTimateCO2 project aims at significantly advance our knowledge of specific processes that could influence the long-term fate of geologically stored CO2: i) trapping mechanisms, ii) fluid-rock interactions and effects on mechanical integrity of fractured caprock and faulted systems and iii) leakage due to mechanical and chemical damage in the well vicinity, iv) brine displacement and fluid mixing at regional scale. A realistic framework is ensured through collaboration with two demonstration sites in deep saline sandstone formations: the onshore former NER300 West Lorraine candidate in France (ArcelorMittal GeoLorraine) and the offshore EEPR Don Valley (former Hatfield) site in UK operated by National Grid. Static earth models have been generated at reservoir and basin scale to evaluate both trapping mechanisms and fluid displacement at short (injection) and long (post injection) time scales. Geochemical trapping and reservoir behaviour is addressed through experimental approaches using sandstone core materials in batch reactive mode with CO2 and impurities at reservoir pressure and temperature conditions and through geochemical simulations. Collection of data has been generated from natural and industrial (oil industry) analogues on the fluid flow and mechanical properties, structure, and mineralogy of faults and fractures that could affect the long-term storage capacity of underground CO2 storage sites. 
Three inter-related lines of laboratory experiments investigate the long-term evolution of the mechanical properties and sealing integrity of fractured and faulted caprocks using Opalinus Clay from the Mont Terri gallery (Switzerland) (OPA), a caprock analogue well investigated in the past for nuclear waste disposal purposes: - characterization of elastic parameters in intact samples by measuring strain during an axial experiment; - recording of hydraulic fracture flow properties by loading and shearing samples to create a 'realistic' fracture, followed by gas injection into the fracture plane; - assessment of the influence of temperature on carbonate and water content, which affects carbonate-bearing fault gouge, using shear experiments at 20 °C and 120 °C on simulated fault gouges prepared from crushed OPA samples. To evaluate the interactions between CO2 (and formation fluids) and the well environment (formation, cement, casing), and to assess the consequences of these interactions on the transport properties of well materials, a 1:1 scale experiment has been set up in the OPA to reproduce classical well objects (cemented annulus, casing and cement plug) perforating caprock formations (OPA). Innovative probabilistic modelling tools are also under development in order to build robust calibration methods for the uncertainty management of the simulated long-term scenarios.

  14. Design study of Software-Implemented Fault-Tolerance (SIFT) computer

    NASA Technical Reports Server (NTRS)

    Wensley, J. H.; Goldberg, J.; Green, M. W.; Kutz, W. H.; Levitt, K. N.; Mills, M. E.; Shostak, R. E.; Whiting-Okeefe, P. M.; Zeidler, H. M.

    1982-01-01

    Software-implemented fault-tolerant (SIFT) computer design for commercial aviation is reported. A SIFT design concept is addressed. Alternate strategies for physical implementation are considered. Hardware and software design correctness is addressed. System modeling and effectiveness evaluation are considered from a fault-tolerance point of view.

  15. Evaluating Fault Management Operations Concepts for Next-Generation Spacecraft: What Eye Movements Tell Us

    NASA Technical Reports Server (NTRS)

    Hayashi, Miwa; Ravinder, Ujwala; McCann, Robert S.; Beutter, Brent; Spirkovska, Lily

    2009-01-01

    Performance enhancements associated with selected forms of automation were quantified in a recent human-in-the-loop evaluation of two candidate operational concepts for fault management on next-generation spacecraft. The baseline concept, called Elsie, featured a full suite of "soft" fault management interfaces. However, operators were forced to diagnose malfunctions with minimal assistance from the standalone caution and warning (C&W) system. The other concept, called Besi, incorporated a more capable C&W system with an automated fault diagnosis capability. Results from analyses of participants' eye movements indicate that the greatest empirical benefit of the automation stemmed from eliminating the need for text processing on cluttered, text-rich displays.

  16. Development of Monitoring and Diagnostic Methods for Robots Used In Remediation of Waste Sites - Final Report

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Martin, M.

    2000-04-01

    This project is the first application of model-based diagnostics to hydraulic robot systems. A greater understanding of fault detection for hydraulic robots has been gained, and a new theoretical fault detection model has been developed and evaluated.

  17. Slip distribution, strain accumulation and aseismic slip on the Chaman Fault system

    NASA Astrophysics Data System (ADS)

    Amelug, F.

    2015-12-01

    The Chaman fault system is a transcurrent fault system that developed due to the oblique convergence of the India and Eurasia plates at the western boundary of the India plate. To evaluate the contemporary rates of strain accumulation along and across the Chaman fault system, we use 2003-2011 Envisat SAR imagery and InSAR time-series methods to obtain a ground velocity field in the radar line-of-sight (LOS) direction. We correct the InSAR data for different sources of systematic bias, including phase unwrapping errors, local oscillator drift, topographic residuals and stratified tropospheric delay, and evaluate the uncertainty due to the residual delay using time series of MODIS observations of precipitable water vapor. The InSAR velocity field and modeling demonstrate the distribution of deformation across the Chaman fault system. In the central Chaman fault system, the InSAR velocity shows clear strain localization on the Chaman and Ghazaband faults, and modeling suggests a total slip rate of ~24 mm/yr distributed over the two faults at rates of 8 and 16 mm/yr, respectively. This corresponds to 80% of the total ~3 cm/yr plate motion between India and Eurasia at these latitudes and is consistent with the kinematic models that have predicted a slip rate of ~17-24 mm/yr for the Chaman fault. In the northern Chaman fault system (north of 30.5°N), ~6 mm/yr of the relative plate motion is accommodated across the Chaman fault. North of 30.5°N, where the topographic expression of the Ghazaband fault vanishes, its slip does not transfer to the Chaman fault but rather distributes among different faults in the Kirthar range and Sulaiman lobe. Observed surface creep on the southern Chaman fault between Nushki and north of the city of Chaman indicates that the fault is partially locked, consistent with the M<7 earthquakes recorded on this segment in the last century.
The Chaman fault between north of the city of Chaman and north of Kabul does not show an increase in the rate of strain accumulation; however, the lack of seismicity on this segment presents a significant hazard to Kabul. The high rate of strain accumulation on the Ghazaband fault, and the lack of evidence that this fault ruptured during the 1935 Quetta earthquake, present a growing earthquake hazard to Balochistan and to populated areas such as the city of Quetta.
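
    Strain localization of the kind described above is commonly interpreted with a simple elastic screw-dislocation model (Savage & Burford, 1973), in which the interseismic surface velocity across a locked strike-slip fault follows v(x) = (s/pi)·arctan(x/D). A minimal sketch with illustrative values for slip rate and locking depth (not estimates from this study):

```python
import math

def interseismic_velocity(x_km, slip_rate_mm_yr, locking_depth_km):
    """Screw-dislocation profile for a locked strike-slip fault:
    v(x) = (s / pi) * arctan(x / D), with x the fault-perpendicular distance."""
    return (slip_rate_mm_yr / math.pi) * math.atan(x_km / locking_depth_km)

# Illustrative numbers (assumed, not from the study): 16 mm/yr deep slip
# rate with a 15 km locking depth.
for x in (-50, -10, 0, 10, 50):
    v = interseismic_velocity(x, 16.0, 15.0)
    print(f"x = {x:4d} km  v = {v:6.2f} mm/yr")
```

    Far from the fault the profile approaches ±half the deep slip rate, which is how slip rates are read off InSAR velocity transects like the ones described.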

  18. Probabilistic Risk Assessment of Hydraulic Fracturing in Unconventional Reservoirs by Means of Fault Tree Analysis: An Initial Discussion

    NASA Astrophysics Data System (ADS)

    Rodak, C. M.; McHugh, R.; Wei, X.

    2016-12-01

    The development and combination of horizontal drilling and hydraulic fracturing has unlocked unconventional hydrocarbon reserves around the globe. These advances have triggered a number of concerns regarding aquifer contamination and over-exploitation, leading to scientific studies investigating the potential risks posed by directional hydraulic fracturing activities. These studies, balanced with the potential economic benefits of energy production, are a crucial source of information for communities considering the development of unconventional reservoirs. However, probabilistic quantification of the overall risk posed by hydraulic fracturing at the system level is rare. Here we present the concept of fault tree analysis to determine the overall probability of groundwater contamination or over-exploitation, broadly referred to as the probability of failure. The potential utility of fault tree analysis for the quantification and communication of risks is approached with a general application. However, the fault tree design is robust and can handle various combinations of region-specific data pertaining to relevant spatial scales, geological conditions, and industry practices where available. All available data are grouped into quantity- and quality-based impacts and sub-divided based on the stage of the hydraulic fracturing process to which the data are relevant, as described by the USEPA. Each stage is broken down into the unique basic events required for failure; for example, to quantify the risk of an on-site spill we must consider the likelihood, magnitude, composition, and subsurface transport of the spill. The fault tree structure described above can be used to render a highly complex system of variables into a straightforward equation for risk calculation based on Boolean logic. This project shows the utility of fault tree analysis for the visual communication of the potential risks of hydraulic fracturing activities to groundwater resources.
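
    The Boolean logic mentioned above reduces, for independent events, to products and complements of basic-event probabilities: AND gates multiply probabilities, OR gates combine them as 1 - ∏(1 - p). A minimal sketch with a hypothetical two-pathway tree and made-up probabilities (none of these numbers come from the study):

```python
# Minimal fault-tree sketch: OR gates combine independent failure paths,
# AND gates require all basic events to occur.
def gate_or(*probs):
    # P(A or B or ...) = 1 - prod(1 - p_i) for independent events
    out = 1.0
    for p in probs:
        out *= (1.0 - p)
    return 1.0 - out

def gate_and(*probs):
    # P(A and B and ...) = prod(p_i) for independent events
    out = 1.0
    for p in probs:
        out *= p
    return out

# Hypothetical basic events for an on-site spill reaching groundwater:
p_spill_occurs         = 0.05
p_containment_fails    = 0.10
p_transport_to_aquifer = 0.02

p_spill_pathway = gate_and(p_spill_occurs, p_containment_fails,
                           p_transport_to_aquifer)
# Top event: failure via the spill pathway OR a hypothetical
# well-integrity pathway with probability 0.001.
p_failure = gate_or(p_spill_pathway, 0.001)
print(f"P(failure) = {p_failure:.6f}")
```

    Real trees also handle dependent events and common-cause failures, which this independence-based sketch ignores.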

  19. Implementing a C++ Version of the Joint Seismic-Geodetic Algorithm for Finite-Fault Detection and Slip Inversion for Earthquake Early Warning

    NASA Astrophysics Data System (ADS)

    Smith, D. E.; Felizardo, C.; Minson, S. E.; Boese, M.; Langbein, J. O.; Guillemot, C.; Murray, J. R.

    2015-12-01

    The earthquake early warning (EEW) systems in California and elsewhere can greatly benefit from algorithms that generate estimates of finite-fault parameters. These estimates could significantly improve real-time shaking calculations and yield important information for immediate disaster response. Minson et al. (2015) determined that combining FinDer's seismic-based algorithm (Böse et al., 2012) with BEFORES' geodetic-based algorithm (Minson et al., 2014) yields a more robust and informative joint solution than using either algorithm alone. FinDer examines the distribution of peak ground accelerations from seismic stations and determines the best finite-fault extent and strike by template matching. BEFORES employs a Bayesian framework to search for the best slip inversion over all possible fault geometries in terms of strike and dip. Using FinDer and BEFORES together generates estimates of finite-fault extent, strike, dip, preferred slip, and magnitude. To yield the quickest, most flexible, and open-source version of the joint algorithm, we translated BEFORES and FinDer from Matlab into C++. We are now developing a C++ application programming interface for these two algorithms to be connected to the seismic and geodetic data flowing from the EEW system. The interface under development will also enable communication between the two algorithms to generate the joint solution of finite-fault parameters. Once this interface is developed and implemented, the next step will be to run test seismic and geodetic data through the system via the Earthworm module Tank Player. This will allow us to examine algorithm performance on simulated data and past real events.

  20. The 2011 Mw 7.1 Van (Eastern Turkey) earthquake

    USGS Publications Warehouse

    Elliot, John R.; Copley, Alex C.; Holley, R.; Scharer, Katherine M.; Parsons, Barry

    2013-01-01

    We use interferometric synthetic aperture radar (InSAR), body wave seismology, satellite imagery, and field observations to constrain the fault parameters of the Mw 7.1 2011 Van (Eastern Turkey) reverse-slip earthquake in the Turkish-Iranian plateau. Distributed slip models from elastic dislocation modeling of the InSAR surface displacements from ENVISAT and COSMO-SkyMed interferograms indicate up to 9 m of reverse and oblique slip on a pair of en echelon fault planes dipping 40°–54° NW, whose surface extensions project to just 10 km north of the city of Van. The slip remained buried and relatively deep, with a centroid depth of 14 km and the rupture reaching only within 8–9 km of the surface, consistent with the lack of significant ground rupture. The up-dip extension of this modeled WSW-striking fault plane coincides with field observations of weak ground deformation seen on the western of the two fault segments, and has a dip consistent with that observed at the surface in fault gouge exposed in Quaternary sediments. No significant coseismic slip is found in the upper 8 km of the crust above the main slip patches, except for a small region on the eastern segment potentially resulting from the Mw 5.9 aftershock on the same day. We perform extensive resolution tests on the data to confirm the robustness of the observed slip deficit in the shallow crust. We resolve a steep gradient in displacement where the ends of the two fault-segment planes are inferred to abut at depth, possibly exerting some structural control on the rupture extent.

  1. Expedited Holonomic Quantum Computation via Net Zero-Energy-Cost Control in Decoherence-Free Subspace.

    PubMed

    Pyshkin, P V; Luo, Da-Wei; Jing, Jun; You, J Q; Wu, Lian-Ao

    2016-11-25

    Holonomic quantum computation (HQC) may not show its full potential in quantum speedup due to the prerequisite of a long coherent runtime imposed by the adiabatic condition. Here we show that the conventional HQC can be dramatically accelerated by using external control fields, of which the effectiveness is exclusively determined by the integral of the control fields in the time domain. This control scheme can be realized with net zero energy cost and it is fault-tolerant against fluctuation and noise, significantly relaxing the experimental constraints. We demonstrate how to realize the scheme via decoherence-free subspaces. In this way we unify quantum robustness merits of this fault-tolerant control scheme, the conventional HQC and decoherence-free subspace, and propose an expedited holonomic quantum computation protocol.

  2. Expedited Holonomic Quantum Computation via Net Zero-Energy-Cost Control in Decoherence-Free Subspace

    PubMed Central

    Pyshkin, P. V.; Luo, Da-Wei; Jing, Jun; You, J. Q.; Wu, Lian-Ao

    2016-01-01

    Holonomic quantum computation (HQC) may not show its full potential in quantum speedup due to the prerequisite of a long coherent runtime imposed by the adiabatic condition. Here we show that the conventional HQC can be dramatically accelerated by using external control fields, of which the effectiveness is exclusively determined by the integral of the control fields in the time domain. This control scheme can be realized with net zero energy cost and it is fault-tolerant against fluctuation and noise, significantly relaxing the experimental constraints. We demonstrate how to realize the scheme via decoherence-free subspaces. In this way we unify quantum robustness merits of this fault-tolerant control scheme, the conventional HQC and decoherence-free subspace, and propose an expedited holonomic quantum computation protocol. PMID:27886234

  3. Application of a Resource Theory for Magic States to Fault-Tolerant Quantum Computing.

    PubMed

    Howard, Mark; Campbell, Earl

    2017-03-03

    Motivated by their necessity for most fault-tolerant quantum computation schemes, we formulate a resource theory for magic states. First, we show that robustness of magic is a well-behaved magic monotone that operationally quantifies the classical simulation overhead for a Gottesman-Knill-type scheme using ancillary magic states. Our framework subsequently finds immediate application in the task of synthesizing non-Clifford gates using magic states. When magic states are interspersed with Clifford gates, Pauli measurements, and stabilizer ancillas (the most general synthesis scenario), the class of synthesizable unitaries is hard to characterize. Our techniques can place nontrivial lower bounds on the number of magic states required for implementing a given target unitary. Guided by these results, we have found new and optimal examples of such synthesis.

  4. Evaluation of potential surface rupture and review of current seismic hazards program at the Los Alamos National Laboratory. Final report

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Not Available

    1991-12-09

    This report summarizes the authors' review and evaluation of the existing seismic hazards program at Los Alamos National Laboratory (LANL). The report recommends that the original program be augmented with a probabilistic analysis of seismic hazards involving the assignment of weighted probabilities of occurrence to all potential sources. This approach yields a more realistic evaluation of the likelihood of large earthquake occurrence, particularly in regions where seismic sources may have recurrence intervals of several thousand years or more. The report reviews the locations and geomorphic expressions of identified fault lines, along with the known displacements of these faults and the last known occurrence of seismic activity. Faults are mapped and categorized by their potential for actual movement. Based on geologic site characterization, recommendations are made for increased seismic monitoring; age-dating studies of faults and geomorphic features; increased use of remote sensing and aerial photography for surface mapping of faults; the development of a landslide susceptibility map; and the development of seismic design standards for all existing and proposed facilities at LANL.

  5. Probabilistic seismic hazard analyses for ground motions and fault displacement at Yucca Mountain, Nevada

    USGS Publications Warehouse

    Stepp, J.C.; Wong, I.; Whitney, J.; Quittmeyer, R.; Abrahamson, N.; Toro, G.; Young, S.R.; Coppersmith, K.; Savy, J.; Sullivan, T.

    2001-01-01

    Probabilistic seismic hazard analyses were conducted to estimate both the ground motion and fault displacement hazards at the potential geologic repository for spent nuclear fuel and high-level radioactive waste at Yucca Mountain, Nevada. The study is believed to be the largest and most comprehensive analysis ever conducted for ground-shaking hazard and is a first-of-a-kind assessment of probabilistic fault displacement hazard. The major emphasis of the study was on the quantification of epistemic uncertainty. Six teams of three experts performed seismic source and fault displacement evaluations, and seven individual experts provided ground motion evaluations. State-of-the-practice expert elicitation processes were implemented, involving structured workshops, consensus identification of parameters and issues to be evaluated, common sharing of data and information, and open exchanges about the basis for preliminary interpretations. Ground-shaking hazard was computed for a hypothetical rock outcrop at -300 m, the depth of the potential waste emplacement drifts, at the designated design annual exceedance probabilities of 10^-3 and 10^-4. The fault displacement hazard was calculated at design annual exceedance probabilities of 10^-4 and 10^-5.
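
    A design ground motion at a target annual exceedance probability is typically read off a hazard curve, which is close to linear in log-log space. A sketch with an illustrative, made-up curve (not the Yucca Mountain results):

```python
import math

# Illustrative hazard curve: peak ground acceleration (g) versus annual
# exceedance probability. Values are invented for demonstration only.
pga_g    = [0.1, 0.2, 0.4, 0.8, 1.6]
annual_p = [1e-2, 2e-3, 4e-4, 8e-5, 1.5e-5]

def motion_at(target_p):
    """Interpolate the ground motion at a target annual exceedance
    probability, working in log-log space."""
    for i in range(len(annual_p) - 1):
        if annual_p[i] >= target_p >= annual_p[i + 1]:
            t = (math.log(target_p) - math.log(annual_p[i])) / (
                math.log(annual_p[i + 1]) - math.log(annual_p[i]))
            return math.exp(math.log(pga_g[i])
                            + t * (math.log(pga_g[i + 1]) - math.log(pga_g[i])))
    raise ValueError("target outside tabulated curve")

print(f"design motion at 1e-4/yr: {motion_at(1e-4):.3f} g")
```

    In a full PSHA the curve itself comes from integrating over all sources and ground-motion models, with epistemic uncertainty carried as a family of such curves.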

  6. Simulated fault injection - A methodology to evaluate fault tolerant microprocessor architectures

    NASA Technical Reports Server (NTRS)

    Choi, Gwan S.; Iyer, Ravishankar K.; Carreno, Victor A.

    1990-01-01

    A simulation-based fault-injection method for validating fault-tolerant microprocessor architectures is described. The approach uses mixed-mode simulation (electrical/logic analysis), and injects transient errors in run-time to assess the resulting fault impact. As an example, a fault-tolerant architecture which models the digital aspects of a dual-channel real-time jet-engine controller is used. The level of effectiveness of the dual configuration with respect to single and multiple transients is measured. The results indicate 100 percent coverage of single transients. Approximately 12 percent of the multiple transients affect both channels; none result in controller failure since two additional levels of redundancy exist.

  7. Robust real-time fault tracking for the 2011 Mw 9.0 Tohoku earthquake based on the phased-array-interference principle

    NASA Astrophysics Data System (ADS)

    Zhang, Yong; Wang, Rongjiang; Parolai, Stefano; Zschau, Jochen

    2013-04-01

    Based on the principle of phased-array interference, we have developed an Iterative Deconvolution and Stacking (IDS) method for real-time kinematic source inversion using near-field strong-motion and GPS networks. In this method, the seismic and GPS stations work like an array radar. The whole potential fault area is scanned patch by patch by stacking the apparent source time functions, which are obtained through deconvolution between the recorded seismograms and synthetic Green's functions. Whenever and wherever significant source signals are detected, their signatures are removed from the observed seismograms. The procedure is repeated until the cumulative seismic moment converges and the residual seismograms are reduced below the noise level. The new approach does not need any of the artificial constraints used in source parameterization, such as fixing the hypocentre or restricting the rupture velocity and rise time, and can therefore be used for automatic real-time source inversion. In the application to the 2011 Tohoku earthquake, the IDS method proved robust and reliable for fast estimation of the moment magnitude, fault area, rupture direction and maximum slip. At about 100 s after the rupture initiation, we can obtain the information that the rupture propagates mainly along the up-dip direction and causes a maximum slip of 17 m, which is enough to issue a tsunami early warning. About two minutes after the earthquake occurrence, the maximum slip is found to be 31 m, and the moment magnitude reaches Mw 8.9, very close to the final moment magnitude (Mw 9.0) of this earthquake.

  8. Advanced cloud fault tolerance system

    NASA Astrophysics Data System (ADS)

    Sumangali, K.; Benny, Niketa

    2017-11-01

    Cloud computing has become a prevalent on-demand service on the internet to store, manage and process data. A pitfall that accompanies cloud computing is the failures that can be encountered in the cloud. To overcome these failures, we require a fault tolerance mechanism to abstract faults from users. We have proposed a fault tolerant architecture, which is a combination of proactive and reactive fault tolerance. This architecture essentially increases the reliability and the availability of the cloud. In the future, we would like to compare evaluations of our proposed architecture with existing architectures and further improve it.

  9. Use of Terrestrial Laser Scanner for Rigid Airport Pavement Management

    PubMed Central

    Di Benedetto, Alessandro; Fiani, Margherita

    2017-01-01

    The evaluation of the structural efficiency of airport infrastructures is a complex task. Faulting is one of the most important indicators of rigid pavement performance. The aim of our study is to provide a new method for faulting detection and computation on jointed concrete pavements. Nowadays, the assessment of faulting is performed with laborious and time-consuming measurements that strongly hinder aircraft traffic. We proposed a field procedure for Terrestrial Laser Scanner data acquisition and a computation flow chart to identify and quantify the fault size at each joint of apron slabs. The total point cloud has been used to compute the least-squares plane fitting those points, and the best-fit plane for each slab has been computed as well. The attitude of each slab plane with respect to both the adjacent ones and the apron reference plane has been determined from the normal vectors to the surfaces. Faulting has been evaluated as the difference in elevation between the slab planes along chosen sections. For a more accurate evaluation of the faulting value, we then considered a few strips of data covering rectangular areas of different sizes across the joints. The accuracy of the estimated quantities has also been computed. PMID:29278386
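
    The two core computations described, least-squares plane fitting per slab and the elevation difference across a joint, can be sketched as follows with synthetic point clouds (the geometry, noise level, and 4 mm step are assumptions for illustration, not the authors' data):

```python
import numpy as np

def fit_plane(points):
    """points: (N, 3) array; least-squares fit of z = a*x + b*y + c,
    returning (a, b, c)."""
    A = np.c_[points[:, 0], points[:, 1], np.ones(len(points))]
    coeffs, *_ = np.linalg.lstsq(A, points[:, 2], rcond=None)
    return coeffs

rng = np.random.default_rng(0)
xy = rng.uniform(0, 5, size=(200, 2))
# Slab A: flat at z = 0; slab B (x shifted by 5 m): 4 mm higher, with
# 0.1 mm measurement noise on both.
slab_a = np.c_[xy, 0.000 + rng.normal(0, 1e-4, 200)]
slab_b = np.c_[xy + [5, 0], 0.004 + rng.normal(0, 1e-4, 200)]

a1, b1, c1 = fit_plane(slab_a)
a2, b2, c2 = fit_plane(slab_b)
# Faulting at the joint (x = 5 m), sampled along a section at y = 2.5 m:
x_j, y_j = 5.0, 2.5
faulting = (a2 * x_j + b2 * y_j + c2) - (a1 * x_j + b1 * y_j + c1)
print(f"faulting ~ {faulting * 1000:.2f} mm")
```

    The recovered step is close to the simulated 4 mm; in practice the paper's strip-based sampling across the joint refines this estimate.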

  10. Evaluation of Earthquake-Induced Effects on Neighbouring Faults and Volcanoes: Application to the 2016 Pedernales Earthquake

    NASA Astrophysics Data System (ADS)

    Bejar, M.; Alvarez Gomez, J. A.; Staller, A.; Luna, M. P.; Perez Lopez, R.; Monserrat, O.; Chunga, K.; Herrera, G.; Jordá, L.; Lima, A.; Martínez-Díaz, J. J.

    2017-12-01

    It has long been recognized that earthquakes change the stress in the upper crust around the fault rupture and can influence the short-term behaviour of neighbouring faults and volcanoes. Rapid estimates of these stress changes can provide the authorities managing the post-disaster situation with a useful tool to identify and monitor potential threats and to update the estimates of seismic and volcanic hazard in a region. Space geodesy is now routinely used following an earthquake to image the displacement of the ground and estimate the rupture geometry and the distribution of slip. Using the obtained source model, it is possible to evaluate the remaining moment deficit and to infer the stress changes on nearby faults and volcanoes produced by the earthquake, which can be used to identify which faults and volcanoes are brought closer to failure or activation. Although these procedures are commonly used today, the transfer of these results to the authorities managing the post-disaster situation is not straightforward, and thus their usefulness is reduced in practice. Here we propose a methodology to evaluate the potential influence of an earthquake on nearby faults and volcanoes and create easy-to-understand maps for decision-making support after an earthquake. We apply this methodology to the Mw 7.8, 2016 Ecuador earthquake. Using Sentinel-1 SAR and continuous GPS data, we measure the coseismic ground deformation and estimate the distribution of slip. We then use this model to evaluate the moment deficit on the subduction interface and the changes of stress on the surrounding faults and volcanoes. The results are compared with the seismic and volcanic events that have occurred after the earthquake. We discuss the potential and limits of the methodology and the lessons learnt from discussion with local authorities.

  11. Strength reduction factors for seismic analyses of buildings exposed to near-fault ground motions

    NASA Astrophysics Data System (ADS)

    Qu, Honglue; Zhang, Jianjing; Zhao, J. X.

    2011-06-01

    To estimate near-fault inelastic response spectra, the accuracy of six existing strength reduction factor (R) expressions proposed by different investigators was evaluated using a suite of near-fault earthquake records with directivity-induced pulses. In the evaluation, the force-deformation relationship is modelled by elastic-perfectly-plastic, bilinear and stiffness-degrading models, and two site conditions, rock and soil, are considered. The R-value ratio (the ratio of the R value obtained from the existing R-expressions, or R-µ-T relationships, to that from inelastic analyses) is used as a measurement parameter. Results show that the R-expressions proposed by Ordaz & Perez-Rocha are the most suitable for near-fault ground motions, followed by the Newmark & Hall and the Berrill et al. relationships. Based on an analysis using the near-fault ground motion dataset, new expressions for R that consider the effects of site conditions are presented and verified.
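
    As one example of the R-µ-T relationships being compared, a simplified Newmark & Hall style expression switches between no reduction, an equal-energy rule, and an equal-displacement rule as the period grows. The period limits below are illustrative round numbers, not the exact published ones:

```python
import math

def strength_reduction_factor(mu, T):
    """Simplified Newmark & Hall style R-mu-T relationship.
    mu: displacement ductility; T: period in seconds."""
    if T < 0.03:            # very short period: no reduction, R = 1
        return 1.0
    elif T < 0.5:           # short/intermediate: equal-energy rule
        return math.sqrt(2.0 * mu - 1.0)
    else:                   # long period: equal-displacement rule, R = mu
        return mu

for T in (0.02, 0.2, 1.0):
    print(f"T = {T:4.2f} s  mu = 4  R = {strength_reduction_factor(4, T):.2f}")
```

    The study's point is that near-fault pulse-like records violate the assumptions behind such period-independent rules, which is why site-dependent R expressions are proposed.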

  12. Event-triggered fault detection for a class of discrete-time linear systems using interval observers.

    PubMed

    Zhang, Zhi-Hui; Yang, Guang-Hong

    2017-05-01

    This paper provides a novel event-triggered fault detection (FD) scheme for discrete-time linear systems. First, an event-triggered interval observer is proposed to generate the upper and lower residuals by taking into account the influence of the disturbances and the event error. Second, the robustness of the residual interval against the disturbances and the sensitivity to faults are improved by introducing l1 and H∞ performances. Third, dilated linear matrix inequalities are used to decouple the Lyapunov matrices from the system matrices. The nonnegativity conditions for the estimation error variables are presented with the aid of slack matrix variables. This technique allows a more general Lyapunov function to be considered. Furthermore, the FD decision scheme is proposed by monitoring whether the zero value belongs to the residual interval. It is shown that the information communication burden is reduced by designing the event-triggering mechanism, while the FD performance can still be guaranteed. Finally, simulation results demonstrate the effectiveness of the proposed method.
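
    The decision rule itself is simple: a fault is declared whenever zero falls outside the interval spanned by the lower and upper residuals. A minimal sketch with made-up residual bounds (the interval observer that produces these bounds is the paper's contribution and is not reproduced here):

```python
# Interval-observer FD decision rule: in the fault-free case the true
# residual is guaranteed to lie in [r_lower, r_upper], and that interval
# contains zero; a fault is declared when zero leaves the interval.
def fault_detected(r_lower, r_upper):
    return not (r_lower <= 0.0 <= r_upper)

# Hypothetical residual bounds at four sample times: the last two
# intervals exclude zero, so a fault is flagged there.
samples = [(-0.3, 0.4), (-0.1, 0.2), (0.05, 0.6), (0.12, 0.9)]
flags = [fault_detected(lo, hi) for lo, hi in samples]
print(flags)  # -> [False, False, True, True]
```

    The threshold is thus implicit in the interval width, which shrinks as disturbance bounds tighten, rather than being a separately tuned constant.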

  13. Advanced model-based FDIR techniques for aerospace systems: Today challenges and opportunities

    NASA Astrophysics Data System (ADS)

    Zolghadri, Ali

    2012-08-01

    This paper discusses some trends and recent advances in model-based Fault Detection, Isolation and Recovery (FDIR) for aerospace systems. The FDIR challenges range from the pre-design and design stages of upcoming and new programs to improving the performance of in-service flying systems. For space missions, optimization of flight conditions and safe operation is intrinsically related to the spacecraft's GNC (Guidance, Navigation & Control) system and includes sensor and actuator monitoring. Many future space missions will require autonomous proximity operations, including fault diagnosis and the subsequent control and guidance recovery actions. For upcoming and future aircraft, one of the main issues is how early and robust diagnosis of small and subtle faults could contribute to the overall optimization of aircraft design. This will be an important factor in anticipating the increasingly stringent requirements that will come into force for future, more environmentally friendly programs. The paper underlines the reasons for a widening gap between the advanced scientific FDIR methods being developed by the academic community and the technological solutions demanded by the aerospace industry.

  14. Probabilistic Approach to Enable Extreme-Scale Simulations under Uncertainty and System Faults. Final Technical Report

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Knio, Omar

    2017-05-05

    The current project develops a novel approach that uses a probabilistic description to capture the current state of knowledge about the computational solution. To effectively spread the computational effort over multiple nodes, the global computational domain is split into many subdomains. Computational uncertainty in the solution translates into uncertain boundary conditions for the equation system to be solved on those subdomains, and many independent, concurrent subdomain simulations are used to account for this boundary condition uncertainty. By relying on the fact that solutions on neighboring subdomains must agree with each other, a more accurate estimate for the global solution can be achieved. Statistical approaches in this update process make it possible to account for the effect of system faults in the probabilistic description of the computational solution, and the associated uncertainty is reduced through successive iterations. By combining all of these elements, the probabilistic reformulation allows splitting the computational work over very many independent tasks for good scalability, while being robust to system faults.

  15. A Diagnostic Approach for Electro-Mechanical Actuators in Aerospace Systems

    NASA Technical Reports Server (NTRS)

    Balaban, Edward; Saxena, Abhinav; Bansal, Prasun; Goebel, Kai Frank; Stoelting, Paul; Curran, Simon

    2009-01-01

    Electro-mechanical actuators (EMAs) are finding increasing use in aerospace applications, especially with the trend towards all-electric aircraft and spacecraft designs. However, electro-mechanical actuators still lack the knowledge base accumulated for other fielded actuator types, particularly with regard to fault detection and characterization. This paper presents a thorough analysis of some of the critical failure modes documented for EMAs and describes experiments conducted on detecting and isolating a subset of them. The list of failures has been prepared through an extensive Failure Modes, Effects, and Criticality Analysis (FMECA) reference, a literature review, and accessible industry experience. Methods for data acquisition and for validation of algorithms on EMA test stands are described. A variety of condition indicators were developed that enabled detection, identification, and isolation among the various fault modes. A diagnostic algorithm based on an artificial neural network is shown to operate successfully using these condition indicators; furthermore, the robustness of these diagnostic routines to sensor faults is demonstrated by showing their ability to distinguish between sensor faults and component failures. The paper concludes with a roadmap leading from this effort towards the development of successful prognostic algorithms for electro-mechanical actuators.

  16. Characterize kinematic rupture history of large earthquakes with Multiple Haskell sources

    NASA Astrophysics Data System (ADS)

    Jia, Z.; Zhan, Z.

    2017-12-01

Earthquakes are often regarded as continuous rupture along a single fault, but the occurrence of complex large events involving multiple faults and dynamic triggering challenges this view. Such rupture complexities cause difficulties for existing finite fault inversion algorithms, because they rely on specific parameterizations and regularizations to obtain physically meaningful solutions. Furthermore, it is difficult to assess the reliability and uncertainty of the obtained rupture models. Here we develop a Multi-Haskell Source (MHS) method to estimate the rupture process of large earthquakes as a series of sub-events of varying location, timing and directivity. Each sub-event is characterized by a Haskell rupture model with uniform dislocation and constant unilateral rupture velocity. This flexible yet simple source parameterization allows us to constrain first-order rupture complexity of large earthquakes robustly. Additionally, the relatively small number of parameters in the inverse problem enables improved uncertainty analysis based on Markov chain Monte Carlo sampling in a Bayesian framework. Synthetic tests and application of the MHS method to real earthquakes show that our method can capture the major features of large earthquake rupture processes and provide information for more detailed rupture history analysis.
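The Bayesian MCMC machinery the abstract relies on can be sketched for a single toy "sub-event": a plain Metropolis sampler recovering the onset time and amplitude of a boxcar source from a noisy synthetic trace. All data, priors, and proposal widths here are illustrative assumptions, far simpler than a real MHS inversion:

```python
import numpy as np

rng = np.random.default_rng(6)

# Toy stand-in for the inversion: recover onset time t0 and amplitude A
# of a single boxcar "sub-event" from a noisy synthetic trace.
t = np.linspace(0.0, 50.0, 500)

def boxcar(t0, A):
    return A * ((t >= t0) & (t < t0 + 5.0))

obs = boxcar(20.0, 3.0) + rng.normal(0.0, 0.5, t.size)

def log_post(theta):
    t0, A = theta
    if not (0.0 < t0 < 45.0 and 0.0 < A < 10.0):   # flat priors
        return -np.inf
    return -0.5 * np.sum((obs - boxcar(t0, A)) ** 2) / 0.25

# Plain Metropolis sampler over (t0, A).
theta = np.array([18.0, 1.0])                       # starting guess
lp = log_post(theta)
samples = []
for _ in range(5000):
    prop = theta + rng.normal(0.0, [0.5, 0.1])
    lp_prop = log_post(prop)
    if np.log(rng.uniform()) < lp_prop - lp:        # accept/reject
        theta, lp = prop, lp_prop
    samples.append(theta.copy())

post = np.array(samples[2000:])                     # discard burn-in
print("posterior mean (t0, A):", post.mean(axis=0))
```

The posterior samples cluster around the true onset time and amplitude, and their spread is the uncertainty estimate that a small parameter count makes tractable.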

  17. Spatial correlation analysis of cascading failures: Congestions and Blackouts

    PubMed Central

    Daqing, Li; Yinan, Jiang; Rui, Kang; Havlin, Shlomo

    2014-01-01

Cascading failures have become major threats to network robustness due to their potential catastrophic consequences, where local perturbations can induce global propagation of failures. Unlike failures spreading via direct contacts due to structural interdependencies, overload failures usually propagate through collective interactions among system components. Despite the critical need for protection and mitigation strategies in networks such as power grids and transportation, the propagation behavior of cascading failures is essentially unknown. Here we find, by analyzing our collected data, that jams in city traffic and faults in the power grid are spatially long-range correlated, with correlations decaying slowly with distance. Moreover, we find in daily traffic that the correlation length increases dramatically and reaches a maximum as the morning or evening rush hour approaches. Our study can impact all efforts towards actively improving system resilience, ranging from the evaluation of design schemes and the development of protection strategies to the implementation of mitigation programs. PMID:24946927
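How a correlation length is extracted from spatially correlated failure data can be sketched on a synthetic 1-D field. The AR(1) field model, its true correlation length, and the 1/e threshold criterion are illustrative assumptions, not the paper's datasets or estimator:

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic 1-D "load" field with exponential spatial correlation,
# built as an AR(1) process in space with true correlation length 10.
n, xi_true = 20000, 10.0
rho = np.exp(-1.0 / xi_true)
field = np.zeros(n)
noise = rng.normal(size=n)
for i in range(1, n):
    field[i] = rho * field[i - 1] + np.sqrt(1.0 - rho**2) * noise[i]

def corr_at(r):
    # Empirical correlation between sites a distance r apart.
    return np.corrcoef(field[:-r], field[r:])[0, 1]

lags = np.arange(1, 60)
c = np.array([corr_at(int(r)) for r in lags])

# Correlation length estimate: first distance where C(r) < 1/e.
xi_est = int(lags[np.argmax(c < np.exp(-1.0))])
print("estimated correlation length:", xi_est)
```

The estimate recovers the built-in correlation length; on real traffic or grid data the same distance-binned correlation curve is what reveals the slow, long-range decay the abstract reports.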

  18. Shallow subsurface imaging of the Piano di Pezza active normal fault (central Italy) by high-resolution refraction and electrical resistivity tomography coupled with time domain electromagnetic data

    NASA Astrophysics Data System (ADS)

    Villani, Fabio; Tulliani, Valerio; Fierro, Elisa; Sapia, Vincenzo; Civico, Riccardo

    2015-04-01

The Piano di Pezza fault is the north-westernmost segment of the >20 km long Ovindoli-Pezza active normal fault-system (central Italy). Although existing paleoseismic data document high vertical Holocene slip rates (~1 mm/yr) and a remarkable seismogenic potential of this fault, its subsurface setting and Pleistocene cumulative displacement are still poorly known. We investigated the shallow subsurface of a key section of the Piano di Pezza fault for the first time by means of high-resolution seismic and electrical resistivity tomography coupled with time domain electromagnetic (TDEM) measurements. Our surveys cross a ~5 m-high fault scarp that was generated by repeated surface-rupturing earthquakes displacing some Late Holocene alluvial fans. We provide 2-D Vp and resistivity images which clearly show significant details of the fault structure and the geometry of the shallow basin infill material down to 50 m depth. We can estimate the dip (~50°) and the Holocene vertical displacement of the master fault (~10 m). We also recognize in the hangingwall some low-velocity/low-resistivity regions that we relate to packages of colluvial wedges derived from scarp degradation, which may represent the record of several paleo-earthquakes older than the Late Holocene events previously recognized by paleoseismic trenching. Conversely, the limited investigation depth of seismic and electrical tomography hampers the estimation of the cumulative amount of Pleistocene throw. Therefore, to increase the depth of investigation, we performed 7 TDEM measurements along the electrical profile using a 50 m loop size in both central and offset configurations. The recovered 1-D resistivity models show a good match with the 2-D resistivity images in the near surface. Moreover, TDEM inversion results indicate that in the hangingwall, ~200 m away from the surface fault trace, the carbonate pre-Quaternary basement may be found at ~90-100 m depth.
The combined approach of electrical and seismic data coupled with TDEM measurements provides a robust constraint to the Piano di Pezza fault cumulative offset. Our data are useful for better reconstructing the deep structural setting of the Piano di Pezza basin and assessing the role played by extensional tectonics in its Quaternary evolution.

  19. The Terminology of Fault Zones in the Brittle Regime: Making Field Observations More Useful to the End User

    NASA Astrophysics Data System (ADS)

    Shipton, Z.; Caine, J. S.; Lunn, R. J.

    2013-12-01

Geologists are tiny creatures living on the 2-and-a-bit-D surface of a sphere who observe essentially 1D, vanishingly small portions (boreholes, roadcuts, stream and beach sections) of complex, 4D tectonic-scale structures. Field observations of fault zones are essential to understand the processes of fault growth and to make predictions of fault zone mechanical and hydraulic properties at depth. Here, we argue that a failure of geologists to communicate their knowledge effectively to other scientists and engineers can lead to unrealistic assumptions being made about fault properties, and may result in poor economic performance and a lack of robustness in industrial safety cases. Fault zones are composed of many heterogeneously distributed deformation-related elements. Low-permeability features include regions of intense grain-size reduction, pressure solution, cementation and shale smears. Other elements are likely to have enhanced permeability through fractures and breccias. Slip surfaces can have either enhanced or reduced permeability depending on whether they are open or closed, and on the local stress state. The highly variable nature of 1) the architecture of faults and 2) the properties of deformation-related elements demonstrates that there are many factors controlling the evolution of fault zone internal structures (fault architecture). The aim of many field studies of faults is to provide data to constrain predictions at depth. For these data to be useful, pooling of data from multiple sites is usually necessary. This effort is frequently hampered by variability in the usage of fault terminologies. In addition, these terms are often used in ways that make it easy for 'end-users' such as petroleum reservoir engineers, mining geologists, and seismologists to misinterpret or over-simplify the implications of field studies. 
Field geologists are comfortable knowing that if you walk along strike or up dip of a fault zone you will find variations in fault rock type, number and orientations of slip surfaces, variation in fracture density, relays, asperities, variable juxtaposition relationships, etc. Problems can arise when "users" of structural geology try to apply models to general cases without understanding that these are simplified models. For example, a section like the one in Chester and Logan (1996) may be projected infinitely into the third dimension along a fault the size of the San Andreas (seismology), or Shale Gouge Ratios may be blindly applied to an Allan diagram without recognising that sub-seismic scale relays may provide "hidden" juxtapositions resulting in fluids bypassing low-permeability fault cores. Phrases like 'low-permeability fault core and high-permeability damage zone' fail to appreciate fault zone complexity. Internecine arguments over the details of terminology that baffle the "end users" can make detailed field studies that characterise fault heterogeneity seem irrelevant. We argue that the field geology community needs to consider ways to educate end-users in appropriate and cautious use of the data we provide, with an appreciation of the uncertainties inherent in our limited ability to characterize 4D tectonic structures, while also conveying the value of carefully collected field data.

  20. Fault Slip and GPS Velocities Across the Shan Plateau Define a Curved Southwestward Crustal Motion Around the Eastern Himalayan Syntaxis

    NASA Astrophysics Data System (ADS)

    Shi, Xuhua; Wang, Yu; Sieh, Kerry; Weldon, Ray; Feng, Lujia; Chan, Chung-Han; Liu-Zeng, Jing

    2018-03-01

Characterizing the 700 km wide system of active faults on the Shan Plateau, southeast of the eastern Himalayan syntaxis, is critical to understanding the geodynamics and seismic hazard of the large region that straddles neighboring China, Myanmar, Thailand, Laos, and Vietnam. Here we evaluate the fault styles and slip rates over multiple timescales, reanalyze previously published short-term Global Positioning System (GPS) velocities, and evaluate slip-rate gradients to interpret the regional kinematics and geodynamics that drive the crustal motion. Relative to the Sunda plate, GPS velocities across the Shan Plateau define a broad arcuate tongue-like crustal motion with a progressively northwestward increase in sinistral shear over a distance of 700 km, followed by a decrease over the final 100 km to the syntaxis. The cumulative GPS slip rate across the entire sinistral-slip fault system on the Shan Plateau is 12 mm/year. Our observations of the fault geometry, slip rates, and arcuate southwesterly directed tongue-like patterns of GPS velocities across the region suggest that the fault kinematics is characterized by regional southwestward distributed shear across the Shan Plateau, in contrast to the more block-like rotation and indentation north of the Red River fault. The fault geometry, kinematics, and regional GPS velocities are difficult to reconcile with regional bookshelf faulting between the Red River and Sagaing faults or with localized lower crustal channel flows beneath this region. The crustal motion and fault kinematics can be driven by a combination of basal traction of a clockwise, southwestward asthenospheric flow around the eastern Himalayan syntaxis and gravitation or shear-driven indentation from north of the Shan Plateau.

  1. Re-Evaluation of Event Correlations in Virtual California Using Statistical Analysis

    NASA Astrophysics Data System (ADS)

    Glasscoe, M. T.; Heflin, M. B.; Granat, R. A.; Yikilmaz, M. B.; Heien, E.; Rundle, J.; Donnellan, A.

    2010-12-01

Fusing the results of simulation tools with statistical analysis methods has contributed to a better understanding of the earthquake process. In a previous study, we used a statistical method to investigate emergent phenomena in data produced by the Virtual California earthquake simulator. The analysis indicated that there were some interesting fault interactions and possible triggering and quiescence relationships between events. We have converted the original code from Matlab to python/C++ and are now evaluating data from the most recent version of Virtual California in order to analyze and compare any new behavior exhibited by the model. The Virtual California earthquake simulator can be used to study fault and stress interaction scenarios for realistic California earthquakes. The simulation generates a synthetic earthquake catalog of events with a minimum size of ~M 5.8 that can be evaluated using statistical analysis methods. Virtual California utilizes realistic fault geometries and a simple Amontons-Coulomb stick-slip friction law to drive the earthquake process by means of a back-slip model, where loading of each segment occurs due to the accumulation of a slip deficit at the prescribed slip rate of the segment. Like any complex system, Virtual California may generate emergent phenomena unexpected even by its designers. To investigate this, we have developed a statistical method that analyzes the interactions between Virtual California fault elements and thereby determines whether events on any given fault elements show correlated behavior. Our method examines events on one fault element and then determines whether there is an associated event within a specified time window on a second fault element. Note that an event in our analysis is defined as any time an element slips, rather than any particular “earthquake” along the entire fault length. 
Results are tabulated and then differenced with an expected correlation, calculated by assuming a uniform distribution of events in time. We generate a correlation score matrix, which indicates how weakly or strongly correlated each fault element is to every other over the course of the VC simulation. We calculate correlation scores by summing the difference between the actual and expected correlations over all time window lengths and normalizing by the time window size. The correlation score matrix can focus attention on the most interesting areas for more in-depth analysis of event correlation vs. time. The previous study included 59 faults (639 elements) in the model, comprising all the faults save the creeping section of the San Andreas. The analysis spanned 40,000 yrs of Virtual California-generated earthquake data. The newly revised VC model includes 70 faults and 8720 fault elements, and spans 110,000 years. Due to computational considerations, we will evaluate the elements comprising the southern California region, which our previous study indicated showed interesting fault interaction and event triggering/quiescence relationships.
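The observed-minus-expected correlation score described above can be sketched with synthetic event catalogs. The window length, catalog sizes, and normalization below are illustrative assumptions, not Virtual California's actual parameters:

```python
import numpy as np

rng = np.random.default_rng(2)
T = 10000.0           # catalog duration (years)
w = 5.0               # correlation time window (years)

# Synthetic slip-event catalogs for three fault elements: element 1 is
# triggered shortly after element 0, element 2 slips independently.
ev0 = np.sort(rng.uniform(0.0, T, 200))
ev1 = np.sort(ev0 + rng.uniform(0.0, 2.0, ev0.size))   # correlated
ev2 = np.sort(rng.uniform(0.0, T, 200))                # independent

def correlation_score(a, b, w, T):
    # Observed: fraction of events on `a` followed by an event on `b`
    # within the window w.
    observed = sum(np.any((b > t) & (b <= t + w)) for t in a)
    # Expected under a uniform distribution of b's events in time.
    expected = 1.0 - (1.0 - w / T) ** b.size
    return observed / a.size - expected

s01 = correlation_score(ev0, ev1, w, T)
s02 = correlation_score(ev0, ev2, w, T)
print(f"score 0->1: {s01:.2f}   score 0->2: {s02:.2f}")
```

The triggered pair scores near one while the independent pair scores near zero; computing this score for every element pair yields the correlation score matrix.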

  2. Transpressional Rupture Cascade of the 2016 Mw 7.8 Kaikoura Earthquake, New Zealand

    NASA Astrophysics Data System (ADS)

    Xu, Wenbin; Feng, Guangcai; Meng, Lingsen; Zhang, Ailin; Ampuero, Jean Paul; Bürgmann, Roland; Fang, Lihua

    2018-03-01

    Large earthquakes often do not occur on a simple planar fault but involve rupture of multiple geometrically complex faults. The 2016 Mw 7.8 Kaikoura earthquake, New Zealand, involved the rupture of at least 21 faults, propagating from southwest to northeast for about 180 km. Here we combine space geodesy and seismology techniques to study subsurface fault geometry, slip distribution, and the kinematics of the rupture. Our finite-fault slip model indicates that the fault motion changes from predominantly right-lateral slip near the epicenter to transpressional slip in the northeast with a maximum coseismic surface displacement of about 10 m near the intersection between the Kekerengu and Papatea faults. Teleseismic back projection imaging shows that rupture speed was overall slow (1.4 km/s) but faster on individual fault segments (approximately 2 km/s) and that the conjugate, oblique-reverse, north striking faults released the largest high-frequency energy. We show that the linking Conway-Charwell faults aided in propagation of rupture across the step over from the Humps fault zone to the Hope fault. Fault slip cascaded along the Jordan Thrust, Kekerengu, and Needles faults, causing stress perturbations that activated two major conjugate faults, the Hundalee and Papatea faults. Our results shed important light on the study of earthquakes and seismic hazard evaluation in geometrically complex fault systems.

  3. Effects of lateral variations of crustal rheology on the occurrence of post-orogenic normal faults: The Alto Tiberina Fault (Northern Apennines, Central Italy)

    NASA Astrophysics Data System (ADS)

    Pauselli, Cristina; Ranalli, Giorgio

    2017-11-01

The Northern Apennines (NA) are characterized by formerly compressive structures partly overprinted by subsequent extensional structures. The area of extensional tectonics has migrated eastward since the Miocene. The youngest and easternmost major expression of extension is the Alto Tiberina Fault (ATF). We estimate 2D rheological profiles across the NA and conclude that lateral rheological crustal variations have played an important role in the formation of the ATF and of similar previously active faults to the west. Lithospheric delamination and mantle degassing resulted in an easterly-migrating extension-compression boundary, coinciding at present with the ATF, where (i) the thickness of the upper crust brittle layer reaches a maximum; (ii) the critical stress difference required to initiate faulting at the base of the brittle layer is at a minimum; and (iii) the total strengths of both the brittle layer and the whole lithosphere are at a minimum. Although the location of the fault is correlated with lithospheric rheological properties, the rheology by itself does not account for the low dip (~20°) of the ATF. Two hypotheses are considered: (a) the low dip of the ATF is related to a rotation of the stress tensor at the time of initiation of the fault, caused by a basal shear stress (~100 MPa) possibly related to corner flow associated with delamination; or (b) the low dip is associated with low values of the friction coefficient (≤ 0.5) coupled with high pore pressures related to mantle degassing. Our results establishing the correlation between crustal rheology and the location of the ATF are relatively robust, as we have examined various possible compositions and rheological parameters. They also provide possible general indications on the mechanisms of localized extension in post-orogenic extensional settings. The hypotheses to account for the low dip of the ATF, on the other hand, are intended simply to suggest possible solutions worthy of further study.

  4. A Robust and Resilient Network Design Paradigm for Region-Based Faults Inflicted by WMD Attack

    DTIC Science & Technology

    2016-04-01

We investigated big data processing of PMU measurements for grid monitoring and control against possible WMD attacks, including big data processing and analytics of synchrophasor measurements collected from multiple locations of power grids…

  5. Designing application software in wide area network settings

    NASA Technical Reports Server (NTRS)

    Makpangou, Mesaac; Birman, Ken

    1990-01-01

    Progress in methodologies for developing robust local area network software has not been matched by similar results for wide area settings. The design of application software spanning multiple local area environments is examined. For important classes of applications, simple design techniques are presented that yield fault tolerant wide area programs. An implementation of these techniques as a set of tools for use within the ISIS system is described.

  6. Software reliability through fault-avoidance and fault-tolerance

    NASA Technical Reports Server (NTRS)

    Vouk, Mladen A.; Mcallister, David F.

    1992-01-01

    Accomplishments in the following research areas are summarized: structure based testing, reliability growth, and design testability with risk evaluation; reliability growth models and software risk management; and evaluation of consensus voting, consensus recovery block, and acceptance voting. Four papers generated during the reporting period are included as appendices.

  7. Object-Oriented Algorithm For Evaluation Of Fault Trees

    NASA Technical Reports Server (NTRS)

    Patterson-Hine, F. A.; Koen, B. V.

    1992-01-01

This algorithm for direct evaluation of fault trees incorporates techniques of object-oriented programming. It reduces the number of calls needed to solve trees with repeated events and provides a significantly improved software environment for computations such as quantitative analyses of the safety and reliability of complicated systems of equipment (e.g., spacecraft or factories).
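The call-reduction idea for repeated events can be sketched with a small object-oriented fault tree whose evaluation shares a cache. The tree and probabilities are hypothetical, and the OR gate here assumes independent branches (which a repeated event strictly violates), so the sketch shows only the caching mechanism, not the paper's full algorithm:

```python
import math

class Node:
    """Fault-tree node; a shared cache ensures each repeated event or
    sub-tree is evaluated only once per top-event computation."""
    def __init__(self, name, kind=None, children=(), p=None):
        self.name, self.kind = name, kind
        self.children, self.p = children, p

    def probability(self, cache):
        if self.name not in cache:
            if self.p is not None:            # basic event
                cache[self.name] = self.p
            elif self.kind == "AND":          # all inputs must fail
                cache[self.name] = math.prod(
                    c.probability(cache) for c in self.children)
            else:                             # OR: at least one fails
                cache[self.name] = 1.0 - math.prod(
                    1.0 - c.probability(cache) for c in self.children)
        return cache[self.name]

# The pump failure appears under two branches (a repeated event).
pump = Node("pump", p=0.1)
valve = Node("valve", p=0.05)
sensor = Node("sensor", p=0.02)
branch_a = Node("A", kind="AND", children=(pump, valve))
branch_b = Node("B", kind="AND", children=(pump, sensor))
top = Node("top", kind="OR", children=(branch_a, branch_b))

p_top = top.probability({})
print(f"top event probability: {p_top:.5f}")
```

Because each node memoizes by name, the repeated `pump` event is resolved once however many branches reference it, which is the source of the call-count saving.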

  8. Gravity and magnetic investigations of the Ghost Dance and Solitario Canyon faults, Yucca Mountain, Nevada

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ponce, D.A.; Langenheim, V.E.

    1995-12-31

Ground magnetic and gravity data collected along traverses across the Ghost Dance and Solitario Canyon faults on the eastern and western flanks, respectively, of Yucca Mountain in southwest Nevada are interpreted. These data were collected as part of an effort to evaluate faulting in the vicinity of a potential nuclear waste repository at Yucca Mountain. Gravity and magnetic data and models along traverses across the Ghost Dance and Solitario Canyon faults show prominent anomalies associated with known faults and reveal a number of possible concealed faults beneath the eastern flank of Yucca Mountain. The central part of the eastern flank of Yucca Mountain is characterized by several small amplitude anomalies that probably reflect small scale faulting.

  10. Development of Fault Models for Hybrid Fault Detection and Diagnostics Algorithm: October 1, 2014 -- May 5, 2015

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Cheung, Howard; Braun, James E.

    2015-12-31

This report describes models of building faults created for OpenStudio to support the ongoing development of fault detection and diagnostic (FDD) algorithms at the National Renewable Energy Laboratory. Building faults are operating abnormalities that degrade building performance, such as using more energy than normal operation, failing to maintain building temperatures according to the thermostat set points, etc. Models of building faults in OpenStudio can be used to estimate fault impacts on building performance and to develop and evaluate FDD algorithms. The aim of the project is to develop fault models of typical heating, ventilating and air conditioning (HVAC) equipment in the United States, and the fault models in this report are grouped as control faults, sensor faults, packaged and split air conditioner faults, water-cooled chiller faults, and other uncategorized faults. The control fault models simulate impacts of inappropriate thermostat control schemes such as an incorrect thermostat set point in unoccupied hours and manual changes of thermostat set point due to extreme outside temperature. Sensor fault models focus on the modeling of sensor biases including economizer relative humidity sensor bias, supply air temperature sensor bias, and water circuit temperature sensor bias. Packaged and split air conditioner fault models simulate refrigerant undercharging, condenser fouling, condenser fan motor efficiency degradation, non-condensable entrainment in refrigerant, and liquid line restriction. Other fault models that are uncategorized include duct fouling, excessive infiltration into the building, and blower and pump motor degradation.
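The effect of a sensor-bias fault of the kind catalogued above can be sketched with a crude single-zone heating loop. The thermal model, control gain, and bias value are illustrative assumptions; the report's actual fault models are OpenStudio measures, not this toy simulation:

```python
# Crude single-zone sketch of a sensor-bias fault: the thermostat acts
# on a biased temperature reading, so the zone is over-heated.
def simulate_heating(setpoint, outdoor, bias=0.0, hours=24):
    temp, energy = 20.0, 0.0
    for _ in range(hours):
        measured = temp + bias                    # faulted sensor
        heat = max(0.0, 2.0 * (setpoint - measured))
        temp += 0.3 * (outdoor - temp) + 0.1 * heat
        energy += heat
    return energy

baseline = simulate_heating(21.0, 5.0)
faulted = simulate_heating(21.0, 5.0, bias=-2.0)  # reads 2 degC low
print(f"energy penalty from a -2 degC sensor bias: {faulted / baseline:.2f}x")
```

A sensor that reads low makes the controller over-heat the zone, so the faulted run uses more energy than the baseline, which is the kind of fault impact these models let an FDD algorithm learn to detect.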

  11. A Fault Alarm and Diagnosis Method Based on Sensitive Parameters and Support Vector Machine

    NASA Astrophysics Data System (ADS)

    Zhang, Jinjie; Yao, Ziyun; Lv, Zhiquan; Zhu, Qunxiong; Xu, Fengtian; Jiang, Zhinong

    2015-08-01

The extraction of fault features and diagnostic techniques for reciprocating compressors are among the most active research topics in the field of reciprocating machinery fault diagnosis. A large number of feature extraction and classification methods have been widely applied in related research, but practical fault alarming and diagnostic accuracy have not been effectively improved. Developing feature extraction and classification methods that meet the requirements of typical fault alarming and automatic diagnosis in practical engineering is therefore an urgent task. The typical mechanical faults of reciprocating compressors are presented in this paper, and data from an existing online monitoring system are used to extract 15 types of fault feature parameters in total; the sensitive connections between faults and feature parameters are clarified using the distance evaluation technique, and the characteristic parameters sensitive to different faults are obtained. On this basis, a method based on fault feature parameters and a support vector machine (SVM) is developed and applied to practical fault diagnosis. An improved early fault warning capability is demonstrated by experiments and practical fault cases, and automatic classification of fault alarm data by the SVM achieves better diagnostic accuracy.
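The distance-evaluation step for ranking fault-sensitive feature parameters can be sketched on synthetic data; a nearest-centroid classifier stands in for the SVM stage to keep the sketch dependency-free. The feature count, class shifts, and scatter-ratio criterion are illustrative assumptions, not the paper's actual parameters:

```python
import numpy as np

rng = np.random.default_rng(3)

# Synthetic monitoring data: 3 fault classes x 50 samples x 6 feature
# parameters; only features 0 and 1 actually separate the classes.
X = rng.normal(size=(3, 50, 6))
X[0, :, 0] += 3.0
X[1, :, 1] += 3.0

# Distance evaluation: ratio of between-class to within-class scatter
# per feature; a large ratio marks a fault-sensitive parameter.
centroids = X.mean(axis=1)                    # (classes, features)
between = centroids.var(axis=0)
within = X.var(axis=1).mean(axis=0)
sensitivity = between / within
ranked = np.argsort(sensitivity)[::-1]
print("most sensitive features:", ranked[:2])

# Nearest-centroid classification on the two selected features
# (standing in for the SVM stage of the paper).
sel = ranked[:2]
samples = X[:, :, sel].reshape(-1, 2)
labels = np.repeat(np.arange(3), 50)
dists = ((samples[:, None, :] - centroids[:, sel][None]) ** 2).sum(-1)
pred = np.argmin(dists, axis=1)
acc = float((pred == labels).mean())
print(f"training accuracy with 2 of 6 features: {acc:.2f}")
```

The scatter ratio correctly ranks the two informative parameters first, and classifying on those two alone already separates the fault classes well, which is the point of selecting sensitive parameters before the classifier.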

  12. Application of a Bank of Kalman Filters for Aircraft Engine Fault Diagnostics

    NASA Technical Reports Server (NTRS)

    Kobayashi, Takahisa; Simon, Donald L.

    2003-01-01

    In this paper, a bank of Kalman filters is applied to aircraft gas turbine engine sensor and actuator fault detection and isolation (FDI) in conjunction with the detection of component faults. This approach uses multiple Kalman filters, each of which is designed for detecting a specific sensor or actuator fault. In the event that a fault does occur, all filters except the one using the correct hypothesis will produce large estimation errors, thereby isolating the specific fault. In the meantime, a set of parameters that indicate engine component performance is estimated for the detection of abrupt degradation. The proposed FDI approach is applied to a nonlinear engine simulation at nominal and aged conditions, and the evaluation results for various engine faults at cruise operating conditions are given. The ability of the proposed approach to reliably detect and isolate sensor and actuator faults is demonstrated.
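The bank-of-filters idea can be sketched with a scalar plant and three hypothesized sensor-bias faults: each Kalman filter removes its hypothesized bias from the measurement, and the filter whose hypothesis matches the data accumulates the smallest normalized residual energy. The plant parameters and fault hypotheses are illustrative assumptions, not the paper's engine model:

```python
import numpy as np

rng = np.random.default_rng(4)

# Scalar plant x' = a*x + u + w, measurement y = x + bias + v.
a, q, r, u = 0.9, 0.01, 0.04, 1.0
true_bias = 2.0                  # the actual sensor fault

# One Kalman filter per fault hypothesis (assumed measurement bias).
hypotheses = {"no fault": 0.0, "bias +2": 2.0, "bias -2": -2.0}
est = {h: 0.0 for h in hypotheses}
P = {h: 1.0 for h in hypotheses}
score = {h: 0.0 for h in hypotheses}

x = 0.0
for _ in range(200):
    x = a * x + u + rng.normal(0.0, np.sqrt(q))
    y = x + true_bias + rng.normal(0.0, np.sqrt(r))
    for h, b in hypotheses.items():
        xp = a * est[h] + u                  # predict
        Pp = a * a * P[h] + q
        resid = (y - b) - xp                 # innovation under hypothesis
        S = Pp + r
        K = Pp / S
        est[h] = xp + K * resid              # update
        P[h] = (1.0 - K) * Pp
        score[h] += resid * resid / S        # normalized residual energy

best = min(score, key=score.get)
print("isolated fault:", best)
```

Filters built on wrong hypotheses produce persistently large innovations, so the minimum-score filter isolates the fault, mirroring the paper's "all filters except the one using the correct hypothesis produce large estimation errors."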

  13. A fault isolation method based on the incidence matrix of an augmented system

    NASA Astrophysics Data System (ADS)

    Chen, Changxiong; Chen, Liping; Ding, Jianwan; Wu, Yizhong

    2018-03-01

This paper proposes a new approach for isolating faults and quickly identifying the redundant sensors of a system. By introducing fault signals as additional state variables, an augmented system model is constructed from the original system model, the fault signals, and the sensor measurement equations. The structural properties of the augmented system model are analyzed. From the viewpoint of evaluating fault variables, the calculation dependencies among the fault variables in the system can be found, which imply the fault isolation properties of the system. Compared with previous isolation approaches, the highlights of the new approach are that it can quickly find the faults that can be isolated using exclusive residuals and, at the same time, identify the redundant sensors in the system, both of which are useful for the design of a diagnosis system. A simulation of a four-tank system is reported to validate the proposed method.
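The residual-signature view of fault isolability that underlies incidence-matrix approaches of this kind can be sketched as follows. The residuals, fault variables, and matrix entries are hypothetical, not the paper's four-tank example:

```python
import numpy as np

# Hypothetical structural incidence matrix of an augmented system:
# rows are residuals, columns are fault variables; a 1 means the fault
# enters the computation of that residual.
faults = ["f_actuator", "f_sensor1", "f_sensor2", "f_leak"]
incidence = np.array([
    [1, 1, 0, 0],   # residual r1
    [1, 0, 1, 0],   # residual r2
    [0, 1, 1, 1],   # residual r3
])

# Two faults are isolable from each other iff their residual
# signatures (the columns of the incidence matrix) differ.
groups = {}
for j, f in enumerate(faults):
    groups.setdefault(tuple(incidence[:, j]), []).append(f)

for fs in groups.values():
    tag = "isolable" if len(fs) == 1 else "not mutually isolable"
    print(fs, "->", tag)
```

Faults sharing a column signature cannot be told apart by these residuals; a sensor whose removal leaves all signatures distinct is a candidate redundant sensor in this structural sense.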

  14. Model-Based Fault Tolerant Control

    NASA Technical Reports Server (NTRS)

    Kumar, Aditya; Viassolo, Daniel

    2008-01-01

The Model Based Fault Tolerant Control (MBFTC) task was conducted under the NASA Aviation Safety and Security Program. The goal of MBFTC is to develop and demonstrate real-time strategies to diagnose and accommodate anomalous aircraft engine events such as sensor faults, actuator faults, or turbine gas-path component damage that can lead to in-flight shutdowns, aborted takeoffs, asymmetric thrust/loss of thrust control, or engine surge/stall events. A suite of model-based fault detection algorithms was developed and evaluated. Based on the performance and maturity of the developed algorithms, two approaches were selected for further analysis: (i) multiple-hypothesis testing, and (ii) neural networks; both used residuals from an Extended Kalman Filter to detect the occurrence of the selected faults. A simple fusion algorithm was implemented to combine the results from each algorithm to obtain an overall estimate of the identified fault type and magnitude. The identification of the fault type and magnitude enabled the use of an online fault accommodation strategy to correct for the adverse impact of these faults on engine operability, thereby enabling continued engine operation in their presence. The performance of the fault detection and accommodation algorithm was extensively tested in a simulation environment.

  15. Virtual Sensor of Surface Electromyography in a New Extensive Fault-Tolerant Classification System.

    PubMed

    de Moura, Karina de O A; Balbinot, Alexandre

    2018-05-01

    A few prosthetic control systems in the scientific literature obtain pattern recognition algorithms adapted to changes that occur in the myoelectric signal over time and, frequently, such systems are not natural and intuitive. These are some of the several challenges for myoelectric prostheses for everyday use. The concept of the virtual sensor, which has as its fundamental objective to estimate unavailable measures based on other available measures, is being used in other fields of research. The virtual sensor technique applied to surface electromyography can help to minimize these problems, typically related to the degradation of the myoelectric signal that usually leads to a decrease in the classification accuracy of the movements characterized by computational intelligent systems. This paper presents a virtual sensor in a new extensive fault-tolerant classification system to maintain the classification accuracy after the occurrence of the following contaminants: ECG interference, electrode displacement, movement artifacts, power line interference, and saturation. The Time-Varying Autoregressive Moving Average (TVARMA) and Time-Varying Kalman filter (TVK) models are compared to define the most robust model for the virtual sensor. Results of movement classification were presented comparing the usual classification techniques with the method of the degraded signal replacement and classifier retraining. The experimental results were evaluated for these five noise types in 16 surface electromyography (sEMG) channel degradation case studies. The proposed system without using classifier retraining techniques recovered of mean classification accuracy was of 4% to 38% for electrode displacement, movement artifacts, and saturation noise. The best mean classification considering all signal contaminants and channel combinations evaluated was the classification using the retraining method, replacing the degraded channel by the virtual sensor TVARMA model. 
This method recovered the classification accuracy after the degradations, reaching an average only 5.7% below the classification accuracy of the clean signal, i.e., the original signal without contaminants. Moreover, the proposed intelligent technique minimizes the impact on motion classification caused by signal contamination from degrading events over time. Improvements in the virtual sensor model and in algorithm optimization still require further development to broaden the clinical application of myoelectric prostheses, but the system already presents results robust enough to enable research with virtual sensors on biological signals with stochastic behavior.

  16. Virtual Sensor of Surface Electromyography in a New Extensive Fault-Tolerant Classification System

    PubMed Central

    Balbinot, Alexandre

    2018-01-01

Few prosthetic control systems in the scientific literature employ pattern recognition algorithms that adapt to the changes that occur in the myoelectric signal over time, and such systems are frequently neither natural nor intuitive. These are some of the several challenges facing myoelectric prostheses for everyday use. The concept of the virtual sensor, whose fundamental objective is to estimate unavailable measurements from other available measurements, is already used in other fields of research. Applied to surface electromyography, the virtual sensor technique can help to minimize these problems, which are typically related to degradation of the myoelectric signal that usually decreases the accuracy with which computational intelligence systems classify movements. This paper presents a virtual sensor in a new extensive fault-tolerant classification system that maintains classification accuracy after the occurrence of the following contaminants: ECG interference, electrode displacement, movement artifacts, power line interference, and saturation. The Time-Varying Autoregressive Moving Average (TVARMA) and Time-Varying Kalman filter (TVK) models are compared to define the most robust model for the virtual sensor. Movement classification results are presented comparing the usual classification techniques with the methods of degraded-signal replacement and classifier retraining. The experimental results were evaluated for these five noise types in 16 surface electromyography (sEMG) channel degradation case studies. Without classifier retraining, the proposed system recovered mean classification accuracy by 4% to 38% for electrode displacement, movement artifacts, and saturation noise. The best mean classification over all signal contaminants and channel combinations evaluated was obtained with the retraining method, replacing the degraded channel by the TVARMA virtual sensor.
This method recovered the classification accuracy after the degradations, reaching an average only 5.7% below the classification accuracy of the clean signal, i.e., the original signal without contaminants. Moreover, the proposed intelligent technique minimizes the impact on motion classification caused by signal contamination from degrading events over time. Improvements in the virtual sensor model and in algorithm optimization still require further development to broaden the clinical application of myoelectric prostheses, but the system already presents results robust enough to enable research with virtual sensors on biological signals with stochastic behavior. PMID:29723994
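As a rough illustration of the virtual-sensor idea described above, the sketch below uses synthetic data and a plain linear estimator standing in for the paper's TVARMA/TVK models: a degraded sEMG channel is replaced by an estimate computed from the remaining healthy channels.

```python
import numpy as np

# Virtual-sensor sketch on synthetic data: a plain linear estimator stands in
# for the paper's TVARMA/TVK models. A degraded sEMG channel is replaced by an
# estimate computed from the remaining healthy channels.
rng = np.random.default_rng(0)
source = rng.standard_normal(1000)                 # shared muscle activity (simulated)
clean = np.stack([source + 0.1 * rng.standard_normal(1000) for _ in range(4)])

train, test = slice(0, 800), slice(800, 1000)
X = clean[1:, train].T                             # healthy channels as predictors
y = clean[0, train]                                # channel 0 before degradation
coef, *_ = np.linalg.lstsq(X, y, rcond=None)       # fit on clean history

# Once channel 0 degrades, substitute the virtual-sensor estimate for it.
estimate = clean[1:, test].T @ coef
error = np.sqrt(np.mean((estimate - clean[0, test]) ** 2))
print(f"virtual-sensor RMS error: {error:.3f}")
```

The estimate can then be fed to the movement classifier in place of the contaminated channel, which is the replacement strategy the abstract compares against retraining.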

  17. Is the useful field of view a good predictor of at-fault crash risk in elderly Japanese drivers?

    PubMed

    Sakai, Hiroyuki; Uchiyama, Yuji; Takahara, Miwa; Doi, Shun'ichi; Kubota, Fumiko; Yoshimura, Takayoshi; Tachibana, Atsumichi; Kurahashi, Tetsuo

    2015-05-01

Although age-related decline in the useful field of view (UFOV) is well recognized as a risk factor for at-fault crash involvement in elderly drivers, its applicability to elderly Japanese drivers remains to be established. In the current study, we therefore examined the relationship between UFOV and at-fault crash history in an elderly Japanese population. We also explored whether potential factors that create awareness of reduced driving fitness could trigger self-regulation of driving in elderly drivers. We measured UFOV and at-fault crash history in 151 community-dwelling Japanese adults aged 60 years or older, and compared the UFOV of at-fault crash-free and crash-involved drivers. We also measured self-evaluated driving style using a questionnaire. UFOV performance in crash-involved drivers was significantly lower than in crash-free drivers. No significant difference was found in self-evaluated driving style between crash-free and crash-involved drivers, and there was no significant association between UFOV and self-evaluated driving style. The present study showed that UFOV is a good predictor of at-fault crash risk in elderly Japanese drivers. Furthermore, our data imply that it might be difficult for elderly drivers to adopt driving strategies commensurate with their current driving competence. © 2014 Japan Geriatrics Society.

  18. Investigation of possibility of surface rupture derived from PFDHA and calculation of surface displacement based on dislocation

    NASA Astrophysics Data System (ADS)

    Inoue, N.; Kitada, N.; Irikura, K.

    2013-12-01

The probability of surface rupture is important for configuring the seismic source, such as area sources or fault models, in a seismic hazard evaluation. In Japan, Takemura (1998) estimated this probability from historical earthquake data, and Kagawa et al. (2004) evaluated it with numerical simulations of surface displacement. The estimated probability follows a sigmoid curve and increases between Mj (the local magnitude defined and calculated by the Japan Meteorological Agency) = 6.5 and Mj = 7.0. The probability of surface rupture is also used in probabilistic fault displacement hazard analysis (PFDHA). There, the probability is determined from an earthquake catalog in which events are classified into two categories, with or without surface rupture, and logistic regression is performed on the classified data. Youngs et al. (2003), Ross and Moss (2011), and Petersen et al. (2011) present logistic curves of the probability of surface rupture for normal, reverse, and strike-slip faults, respectively. Takao et al. (2013) show a logistic curve derived solely from Japanese earthquake data; this curve increases sharply over a narrow magnitude range compared with the other curves. In this study, we estimated the probability of surface rupture by applying logistic analysis to surface displacements derived from displacement calculations. A source fault was defined according to the procedure of Kagawa et al. (2004), which determines a seismic moment from a magnitude and estimates the asperity area and the amount of slip. Strike-slip and reverse faults were considered as source faults, and the calculations followed Wang et al. (2003). Surface displacements for the defined source faults were calculated while varying the depth of the fault, and a threshold of 5 cm of surface displacement was used to decide whether a rupture reaches the surface. We then carried out logistic regression on the calculated displacements classified by this threshold. The estimated probability curve shows a trend similar to the result of Takao et al. (2013), with the probability for reverse faults larger than that for strike-slip faults. PFDHA results, on the other hand, show a different trend: the probability for reverse faults at higher magnitudes is lower than that for strike-slip and normal faults. Ross and Moss (2011) suggested that the sediment and/or rock above a reverse fault compresses, preventing the displacement from fully reaching the surface. The numerical theory applied in this study cannot handle complex initial situations such as topography.
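The logistic-regression step described above can be sketched as follows on a synthetic catalog (illustrative magnitudes and rupture labels, not the study's displacement-derived data):

```python
import numpy as np

# Fit P(surface rupture | Mj) = 1 / (1 + exp(-(a + b * (Mj - 6.5))))
# to a synthetic catalog with a sharp transition near Mj ~ 6.6.
rng = np.random.default_rng(1)
M = rng.uniform(5.5, 7.5, 500)
true_p = 1.0 / (1.0 + np.exp(-(M - 6.6) / 0.15))
ruptured = (rng.random(500) < true_p).astype(float)

Mc = M - 6.5                                    # center magnitudes for conditioning
a = b = 0.0
for _ in range(3000):                           # plain gradient ascent on the log-likelihood
    p = 1.0 / (1.0 + np.exp(-(a + b * Mc)))
    a += 1.0 * np.mean(ruptured - p)
    b += 1.0 * np.mean((ruptured - p) * Mc)

p65 = 1.0 / (1.0 + np.exp(-a))                  # Mc = 0 at Mj = 6.5
p70 = 1.0 / (1.0 + np.exp(-(a + 0.5 * b)))
print(f"P(rupture|Mj=6.5) = {p65:.2f}, P(rupture|Mj=7.0) = {p70:.2f}")
```

The fitted curve rises steeply between Mj 6.5 and 7.0, the same qualitative shape the Japanese catalogs produce.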

  19. Soft error evaluation and vulnerability analysis in Xilinx Zynq-7010 system-on chip

    NASA Astrophysics Data System (ADS)

    Du, Xuecheng; He, Chaohui; Liu, Shuhuan; Zhang, Yao; Li, Yonghong; Xiong, Ceng; Tan, Pengkang

    2016-09-01

Radiation-induced soft errors are an increasingly important threat to the reliability of modern electronic systems. In order to evaluate a system-on-chip's reliability and soft errors, the fault tree analysis method was used in this work. The system fault tree was constructed for the Xilinx Zynq-7010 All Programmable SoC, and the soft error rates of its different components were tested with an americium-241 alpha radiation source. Furthermore, parameters used to evaluate the system's reliability and safety, such as failure rate, unavailability and mean time to failure (MTTF), were calculated using Isograph Reliability Workbench 11.0. Through qualitative and quantitative fault tree analysis, the critical blocks and the overall reliability of the system-on-chip were evaluated.
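For a simple OR-gate fault tree with constant component failure rates, the reliability figures the abstract mentions (failure rate, unavailability, MTTF) follow from elementary formulas. The sketch below uses hypothetical rates and block names, not the measured Zynq-7010 values:

```python
import math

# Fault-tree sketch with hypothetical failure rates (not measured Zynq-7010
# values). For an OR gate (any component failure fails the system), constant
# failure rates add, and MTTF = 1 / (system rate).
rates_per_hour = {
    "processing_system": 2e-6,
    "programmable_logic": 5e-6,
    "on_chip_memory": 1e-6,
}
system_rate = sum(rates_per_hour.values())
mttf_hours = 1.0 / system_rate

# Unavailability of a non-repairable system at time t: 1 - exp(-rate * t).
unavailability_1yr = 1.0 - math.exp(-system_rate * 8760)
print(f"MTTF = {mttf_hours:.0f} h, unavailability after 1 year = {unavailability_1yr:.3f}")
```

AND gates (redundant blocks) would instead multiply the component unavailabilities, which is how redundancy lowers the top-event probability.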

  20. Dynamic modeling of gearbox faults: A review

    NASA Astrophysics Data System (ADS)

    Liang, Xihui; Zuo, Ming J.; Feng, Zhipeng

    2018-01-01

Gearboxes are widely used in industrial and military applications. Due to high service loads, harsh operating conditions or inevitable fatigue, faults may develop in gears. If gear faults are not detected early, gear health will continue to degrade, potentially causing heavy economic loss or even catastrophe. Early fault detection and diagnosis allows properly scheduled shutdowns that prevent catastrophic failure, resulting in safer operation and lower costs. Recently, many studies have developed gearbox dynamic models with faults, aiming to understand the gear fault generation mechanism and then develop effective fault detection and diagnosis methods. This paper focuses on dynamics-based gearbox fault modeling, detection and diagnosis. The state of the art and open challenges are reviewed and discussed. This detailed literature review is limited to the following fundamental yet key aspects: gear mesh stiffness evaluation, gearbox damage modeling and fault diagnosis techniques, gearbox transmission path modeling, and method validation. A summary and some research prospects are presented at the end.

  1. Evaluation of passenger health risk assessment of sustainable indoor air quality monitoring in metro systems based on a non-Gaussian dynamic sensor validation method.

    PubMed

    Kim, MinJeong; Liu, Hongbin; Kim, Jeong Tai; Yoo, ChangKyoo

    2014-08-15

Sensor faults in metro systems provide incorrect information to indoor air quality (IAQ) ventilation systems, resulting in mis-operation of the ventilation systems and adverse effects on passenger health. In this study, a new sensor validation method is proposed to (1) detect, identify and repair sensor faults and (2) evaluate the influence of sensor reliability on passenger health risk. To address the dynamic non-Gaussianity of IAQ data, dynamic independent component analysis (DICA) is used. To detect and identify sensor faults, the DICA-based squared prediction error and sensor validity index are used, respectively. To restore faulty measurements to normal ones, a DICA-based iterative reconstruction algorithm is proposed. The comprehensive indoor air-quality index (CIAI), which evaluates the influence of the current IAQ on passenger health, is then compared between the faulty and reconstructed IAQ data sets. Experimental results from a metro station showed that the DICA-based method can produce an improved IAQ level in the metro station and reduce passenger health risk, since it validates sensor faults more accurately than conventional methods. Copyright © 2014 Elsevier B.V. All rights reserved.
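The squared-prediction-error (SPE) residual test described above can be illustrated with plain PCA standing in for dynamic ICA (a deliberate simplification, on synthetic IAQ-like data):

```python
import numpy as np

# Residual-based sensor fault detection sketch. The paper uses dynamic ICA
# (DICA); here plain PCA illustrates the same squared-prediction-error (SPE)
# idea on synthetic IAQ-like data.
rng = np.random.default_rng(2)
latent = rng.standard_normal((500, 2))         # two underlying pollution drivers
mixing = rng.standard_normal((2, 5))           # five correlated sensors
train = latent @ mixing + 0.05 * rng.standard_normal((500, 5))

mean = train.mean(axis=0)
_, _, vt = np.linalg.svd(train - mean, full_matrices=False)
P = vt[:2].T                                   # retained principal directions

def spe(X):
    resid = (X - mean) - (X - mean) @ P @ P.T  # part not explained by the model
    return np.sum(resid ** 2, axis=1)

limit = np.percentile(spe(train), 99)          # empirical control limit

normal = rng.standard_normal((100, 2)) @ mixing + 0.05 * rng.standard_normal((100, 5))
faulty = normal.copy()
faulty[:, 3] += 3.0                            # bias fault injected on sensor 3
print("alarms on normal data:", int(np.sum(spe(normal) > limit)),
      "| alarms on faulty data:", int(np.sum(spe(faulty) > limit)))
```

A contribution analysis of the residual (akin to the paper's sensor validity index) would then point to sensor 3 as the faulty one.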

  2. Reliability and availability evaluation of Wireless Sensor Networks for industrial applications.

    PubMed

    Silva, Ivanovitch; Guedes, Luiz Affonso; Portugal, Paulo; Vasques, Francisco

    2012-01-01

Wireless Sensor Networks (WSN) currently represent the best candidate to be adopted as the communication solution for the last-mile connection in process control and monitoring applications in industrial environments. Most of these applications have stringent dependability (reliability and availability) requirements, as a system failure may result in economic losses, put people in danger or lead to environmental damage. Among the different types of faults that can lead to a system failure, permanent faults on network devices have a major impact. They can hamper communications over long periods of time and consequently disturb, or even disable, control algorithms. The lack of a structured approach to evaluating permanent faults prevents system designers from optimizing decisions that would minimize these occurrences. In this work we propose a methodology based on the automatic generation of a fault tree to evaluate the reliability and availability of Wireless Sensor Networks when permanent faults occur on network devices. The proposal supports any topology, different levels of redundancy, network reconfigurations, criticality of devices and arbitrary failure conditions. The proposed methodology is particularly suitable for the design and validation of Wireless Sensor Networks when trying to optimize their reliability and availability requirements.

  3. Reliability and Availability Evaluation of Wireless Sensor Networks for Industrial Applications

    PubMed Central

    Silva, Ivanovitch; Guedes, Luiz Affonso; Portugal, Paulo; Vasques, Francisco

    2012-01-01

Wireless Sensor Networks (WSN) currently represent the best candidate to be adopted as the communication solution for the last-mile connection in process control and monitoring applications in industrial environments. Most of these applications have stringent dependability (reliability and availability) requirements, as a system failure may result in economic losses, put people in danger or lead to environmental damage. Among the different types of faults that can lead to a system failure, permanent faults on network devices have a major impact. They can hamper communications over long periods of time and consequently disturb, or even disable, control algorithms. The lack of a structured approach to evaluating permanent faults prevents system designers from optimizing decisions that would minimize these occurrences. In this work we propose a methodology based on the automatic generation of a fault tree to evaluate the reliability and availability of Wireless Sensor Networks when permanent faults occur on network devices. The proposal supports any topology, different levels of redundancy, network reconfigurations, criticality of devices and arbitrary failure conditions. The proposed methodology is particularly suitable for the design and validation of Wireless Sensor Networks when trying to optimize their reliability and availability requirements. PMID:22368497

  4. Distribution and nature of fault architecture in a layered sandstone and shale sequence: An example from the Moab fault, Utah

    USGS Publications Warehouse

    Davatzes, N.C.; Aydin, A.

    2005-01-01

We examined the distribution of fault rock and damage zone structures in sandstone and shale along the Moab fault, a basin-scale normal fault with nearly 1 km (0.62 mi) of throw, in southeast Utah. We find that fault rock and damage zone structures vary along strike and dip. Variations are related to changes in fault geometry, fault slip, lithology, and the mechanism of faulting. In sandstone, we differentiated two structural assemblages: (1) deformation bands, zones of deformation bands, and polished slip surfaces and (2) joints, sheared joints, and breccia. These structural assemblages result from a deformation-band-based mechanism and a joint-based mechanism, respectively. Along the Moab fault, where both types of structures are present, joint-based deformation is always younger. Where shale is juxtaposed against the fault, a third faulting mechanism occurs: smearing of shale by ductile deformation and the formation of associated shale fault rocks. Based on knowledge of these three mechanisms, we projected the distribution of their structural products in three dimensions along idealized fault surfaces and evaluated the potential effect on fluid and hydrocarbon flow. We contend that these mechanisms could be used to facilitate predictions of fault and damage zone structures and their permeability from limited data sets. Copyright © 2005 by The American Association of Petroleum Geologists.

  5. Spatial Patterns of Geomorphic Surface Features and Fault Morphology Based on Diffusion Equation Modeling of the Kumroch Fault Kamchatka Peninsula, Russia

    NASA Astrophysics Data System (ADS)

    Heinlein, S. N.

    2013-12-01

Remote sensing data sets are widely used to evaluate surface manifestations of active tectonics. This study utilizes ASTER GDEM and Landsat ETM+ data sets with Google Earth images draped over terrain models to evaluate (1) the surface geomorphology of the study area and (2) the morphology of the Kumroch Fault, using diffusion modeling to estimate a constant diffusivity (κ) and slip rates from ground data measured across fault scarps by Kozhurin et al. (2006). Models of the evolution of fault scarp morphology provide the time elapsed since slip initiated on a fault's surface and may therefore give more accurate slip-rate estimates than simply dividing scarp offset by the age of the ruptured surface. Profiles of scarps collected by Kozhurin et al. (2006), formed by several events distributed through time, were evaluated using a constant slip rate (CSR) solution, which yields the value A/κ (A being half the slip rate). The time elapsed since slip initiated on the fault is determined by establishing a value for κ and measuring total scarp offset. CSR nonlinear modeling estimates of κ range from 8 m²/ka to 14 m²/ka on the Kumroch Fault, which indicates slip rates of 0.6-1.0 mm/yr since 3.4-3.7 ka. This method provides a quick and inexpensive way to gather data for a regional tectonic study and to establish estimated rates of tectonic activity, and analyses of the remote sensing data are providing new insight into the role of active tectonics within the region. Fault scarp diffusion rates were calibrated against the diffusion models of Mattson and Bruhn (2001) and DuRoss and Bruhn (2004) and against trench profiles of the Kumroch Fault from Kozhurin et al. (2006), Kozhurin (2007), Kozhurin et al. (2008) and Pinegina et al. (2012); a dash (-) indicates that no data could be determined.
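For a single-event scarp, the diffusion model above has a closed-form profile, h(x, t) = (a/2) erf(x / (2√(κt))), so an assumed κ and a measured mid-scarp slope yield an age estimate. The sketch below uses illustrative numbers, not the paper's multi-event CSR inversion:

```python
import math

# Single-event diffusion scarp sketch (illustrative numbers, not the paper's
# multi-event constant-slip-rate inversion). The scarp profile is
#   h(x, t) = (a / 2) * erf(x / (2 * sqrt(kappa * t)))
# so the maximum mid-scarp slope is a / (2 * sqrt(pi * kappa * t)), and a
# measured slope plus an assumed kappa give the time since faulting.
a = 2.0        # total vertical offset, m (assumed)
kappa = 11.0   # diffusivity, m^2/ka, mid-range of the 8-14 m^2/ka estimates
slope = 0.35   # measured maximum scarp slope, dimensionless (assumed)

t_ka = (a / (2.0 * slope)) ** 2 / (math.pi * kappa)
print(f"estimated scarp age: {t_ka:.2f} ka")
```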

  6. Hayward Fault rate constraints at Berkeley: Evaluation of the 335-meter Strawberry Creek offset

    NASA Astrophysics Data System (ADS)

    Williams, P. L.

    2007-12-01

At UC Berkeley, the active channel of Strawberry Creek is offset 335 meters by the Hayward fault, and two abandoned channels of Strawberry Creek are laterally offset 580 and 730 meters. These relationships record the displacement of the northern Hayward fault at Berkeley over tens of millennia. The Strawberry Creek site has a geometry similar to the central San Andreas fault's Wallace Creek site, which arguably provides the best geological evidence of "millennial" fault kinematics in California (Sieh and Jahns, 1984). Slip rate determinations are an essential component of overall hazard evaluation for the Hayward fault, and this site is ripe to disclose a long-term form of this parameter, to contrast with geodetic and other geological rate evidence. The large offsets at the site may lower uncertainty in the rate equation relative to younger sites, because the effect of stream abandonment age, generally the greatest source of rate uncertainty, is greatly reduced. This is helpful here because it more than offsets the uncertainties arising from piercing-point projections to the fault. Strawberry Creek and its ancestral channels suggest west-side-up vertical deformation across the Hayward fault at this location. The development of the vertical deformation parameter will complement ongoing geodetic measurements, particularly InSAR, and motivate testing of other geological constraints. Up-to-the-west motion across the Hayward fault at Berkeley has important implications for the partitioning of strain and the kinematics of the northern Hayward fault, and may explain anomalous up-on-the-west landforms elsewhere along the fault. For example, geological features of the western Berkeley Hills are consistent with rapid and recent uplift west of the fault. On the basis of a preliminary analysis of the offset channels of Strawberry Creek, up-to-the-west uplift is about 0.5 mm/yr across the Hayward fault at Berkeley. If this is in fact the long-term rate, the 150 m height of the Hills northwest of the Strawberry Creek site was produced over approximately the past 300,000 years by a significant dip-slip (thrust) component of Hayward fault motion. Rapid and recent uplift of some portions of the East Bay Hills has important implications for fault geometries and slope stability, and should strongly influence the investigation of fault hazards in areas that are more complexly deformed.

  7. The role of thin, mechanical discontinuities on the propagation of reverse faults: insights from analogue models

    NASA Astrophysics Data System (ADS)

    Bonanno, Emanuele; Bonini, Lorenzo; Basili, Roberto; Toscani, Giovanni; Seno, Silvio

    2016-04-01

Fault-related folding kinematic models are widely used to explain how crustal shortening is accommodated. These models, however, include simplifications such as the assumption of a constant fault growth rate. This rate is sometimes not constant even in isotropic materials, and is even more variable in naturally anisotropic geological systems, which means that these simplifications can lead to incorrect interpretations of reality. In this study, we use analogue models to evaluate how thin mechanical discontinuities, such as bedding or thin weak layers, influence the propagation of reverse faults and related folds. The experiments are performed with two different settings to simulate initially-blind master faults dipping at 30° and 45°: the 30° dip represents one of the Andersonian conjugate faults, and a 45° dip is very frequent in positive reactivation of normal faults. The experimental apparatus consists of a clay layer placed above two plates: one plate, the footwall, is fixed; the other, the hanging wall, is mobile. Motor-controlled sliding of the hanging-wall plate along an inclined plane reproduces the reverse fault movement. We ran thirty-six experiments: eighteen with a dip of 30° and eighteen with a dip of 45°. For each dip-angle setting, we first ran isotropic experiments that serve as a reference, and then ran the other experiments with one or two discontinuities (horizontal precuts made in the clay layer). We monitored the experiments by collecting side photographs every 1.0 mm of displacement of the master fault. These images were analyzed with the PIVlab software, a tool based on the Digital Image Correlation method. With the displacement field analysis (one of the PIVlab tools), we evaluated the variation of the trishear zone shape and how the master-fault tip and newly formed faults propagate into the clay medium. With the strain distribution analysis, we observed the amount of on-fault and off-fault deformation with respect to the faulting pattern and its evolution. Secondly, using the MOVE software, we extracted the positions of fault tips and folds every 5 mm of displacement on the master fault. Analyzing these positions in all of the experiments, we found that the growth rate of the faults and the related fold shape vary depending on the number of discontinuities in the clay medium. Other results can be summarized as follows: 1) the fault growth rate is not constant, but varies especially while the new faults interact with the precuts; 2) the new faults tend to crosscut the discontinuities when the angle between them is approximately 90°; 3) the trishear zone changes its shape during the experiments, especially when the main fault interacts with the discontinuities.

  8. A hybrid fault diagnosis approach based on mixed-domain state features for rotating machinery.

    PubMed

    Xue, Xiaoming; Zhou, Jianzhong

    2017-01-01

To further improve diagnosis accuracy and efficiency, a hybrid fault diagnosis approach based on mixed-domain state features, which systematically blends statistical analysis with artificial intelligence techniques, is proposed in this work for rolling element bearings. To simplify the fault diagnosis problem, the proposed method is executed in three steps: preliminary fault detection, fault type recognition and fault degree identification. In the first step, a preliminary judgment about the health status of the equipment is made by a statistical analysis method based on permutation entropy theory. If a fault exists, two subsequent processes based on artificial intelligence are performed to recognize the fault type and then identify the fault degree. For these two steps, mixed-domain state features comprising time-domain, frequency-domain and multi-scale features are extracted to represent the fault characteristics under different working conditions. As a powerful time-frequency analysis method, fast EEMD was employed to obtain the multi-scale features. Furthermore, because of information redundancy and the submergence of the original feature space, a novel manifold learning method (modified LGPCA) is introduced to obtain low-dimensional representations of the high-dimensional feature space. Finally, two cases with 12 working conditions each were used to evaluate the performance of the proposed method, with vibration signals measured from an experimental rolling element bearing test bench. The analysis results showed the effectiveness and superiority of the proposed method, whose diagnostic approach is well suited to practical application. Copyright © 2016 ISA. Published by Elsevier Ltd. All rights reserved.
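The permutation-entropy statistic used in the preliminary detection step can be sketched as follows (order m = 3, delay 1; synthetic input series, not the bench vibration data):

```python
import math
import random
from collections import Counter

# Minimal permutation-entropy sketch (order m, delay 1): count ordinal patterns
# of consecutive samples and compute the normalized Shannon entropy of their
# distribution. Low values indicate regular dynamics, high values disorder.
def permutation_entropy(x, m=3):
    patterns = Counter(
        tuple(sorted(range(m), key=lambda k: x[i + k])) for i in range(len(x) - m + 1)
    )
    n = sum(patterns.values())
    h = -sum((c / n) * math.log(c / n) for c in patterns.values())
    return h / math.log(math.factorial(m))     # normalize to [0, 1]

regular = [math.sin(0.2 * i) for i in range(500)]   # smooth, predictable signal
random.seed(0)
noisy = [random.random() for _ in range(500)]       # disordered signal
print(f"PE(regular) = {permutation_entropy(regular):.2f}, "
      f"PE(noisy) = {permutation_entropy(noisy):.2f}")
```

A healthy/faulty decision then reduces to comparing the statistic against a threshold learned from baseline data.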

  9. Identification of faulty sensor using relative partial decomposition via independent component analysis

    NASA Astrophysics Data System (ADS)

    Wang, Z.; Quek, S. T.

    2015-07-01

Performance of any structural health monitoring algorithm relies heavily on good measurement data. Hence, it is necessary to employ robust faulty-sensor detection approaches to isolate sensors with abnormal behaviour and to exclude their highly inaccurate data from subsequent analysis. Independent component analysis (ICA) is implemented to detect the presence of sensors showing abnormal behaviour, and a normalized form of the relative partial decomposition contribution (rPDC) is proposed to identify the faulty sensor. Both additive and multiplicative types of faults are addressed, and detectability is illustrated using a numerical and an experimental example. An empirical method to establish control limits for detecting and identifying the type of fault is also proposed. The results show the effectiveness of the ICA and rPDC method in identifying faulty sensors, assuming that baseline cases are available.

  10. Characterization of the faulted behavior of digital computers and fault tolerant systems

    NASA Technical Reports Server (NTRS)

    Bavuso, Salvatore J.; Miner, Paul S.

    1989-01-01

    A development status evaluation is presented for efforts conducted at NASA-Langley since 1977, toward the characterization of the latent fault in digital fault-tolerant systems. Attention is given to the practical, high speed, generalized gate-level logic system simulator developed, as well as to the validation methodology used for the simulator, on the basis of faultable software and hardware simulations employing a prototype MIL-STD-1750A processor. After validation, latency tests will be performed.

  11. Structural Controls of the Emerson Pass Geothermal System, Washoe County, Nevada

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Anderson, Ryan B; Faulds, James E

We have conducted a detailed geologic study to better characterize a blind geothermal system in Emerson Pass on the Pyramid Lake Paiute Tribe Reservation, western Nevada. A thermal anomaly was discovered in Emerson Pass by use of 2 m temperature surveys deployed within a structurally favorable setting and proximal to surface features indicative of geothermal activity. The anomaly lies at the western edge of a broad left step at the northeast end of Pyramid Lake between the north- to north-northeast-striking, west-dipping Fox and Lake Range normal faults. The 2-m temperature surveys have defined a N-S elongate thermal anomaly that has a maximum recorded temperature of ~60°C and resides on a north- to north-northeast-striking fault. Travertine mounds, chalcedonic silica veins, and silica-cemented Pleistocene lacustrine gravels in Emerson Pass indicate a robust geothermal system active at the surface in the recent past. Structural complexity and spatial heterogeneities of the strain and stress field have developed in the step-over region, but kinematic data suggest a WNW-trending (~280° azimuth) extension direction. The geothermal system is likely hosted in Emerson Pass as a result of enhanced permeability generated by the intersection of two oppositely dipping, southward-terminating faults: the north- to north-northwest-striking Fox Range fault and a north-northeast-striking fault.

  12. From Fault-Diagnosis and Performance Recovery of a Controlled System to Chaotic Secure Communication

    NASA Astrophysics Data System (ADS)

    Hsu, Wen-Teng; Tsai, Jason Sheng-Hong; Guo, Fang-Cheng; Guo, Shu-Mei; Shieh, Leang-San

Chaotic systems are often applied to encryption in secure communication, but they may not provide a high degree of security. To improve communication security, chaotic systems may need additional secure signals, but these may cause the system to diverge. In this paper, we redesign a communication scheme that achieves secure communication with additional secure signals while keeping the system convergent. First, we introduce the universal state-space adaptive observer-based fault diagnosis/estimator and the high-performance tracker for the sampled-data linear time-varying system with unanticipated decay factors in actuators/system states. Robustness, convergence in the mean, and tracking ability are also established. A residual generation scheme and a mechanism for auto-tuning the switched gain are also presented, so that the introduced methodology is applicable to fault detection and diagnosis (FDD) of actuator and state faults, yielding high tracking performance recovery. The evolutionary-programming-based adaptive observer is then applied to the problem of secure communication. Whenever the tracker induces a large control input that might not conform to the input constraint of some physical systems, the proposed modified linear quadratic optimal tracker (LQT) can effectively restrict the control input within the specified constraint interval while preserving acceptable tracking performance. The effectiveness of the proposed design methodology is illustrated through tracking control simulation examples.

  13. An optimized time varying filtering based empirical mode decomposition method with grey wolf optimizer for machinery fault diagnosis

    NASA Astrophysics Data System (ADS)

    Zhang, Xin; Liu, Zhiwen; Miao, Qiang; Wang, Lei

    2018-03-01

    A time varying filtering based empirical mode decomposition (TVF-EMD) method was proposed recently to solve the mode mixing problem of the classical EMD method. Compared with classical EMD, TVF-EMD was proven to improve frequency separation performance and to be robust to noise interference. However, the decomposition parameters (i.e., bandwidth threshold and B-spline order) significantly affect the decomposition results of this method. In the original TVF-EMD method, the parameter values are assigned in advance, which makes it difficult to achieve satisfactory analysis results. To solve this problem, this paper develops an optimized TVF-EMD method based on the grey wolf optimizer (GWO) algorithm for fault diagnosis of rotating machinery. Firstly, a measurement index termed the weighted kurtosis index is constructed from the kurtosis index and the correlation coefficient. Subsequently, the optimal TVF-EMD parameters that match the input signal can be obtained by the GWO algorithm using the maximum weighted kurtosis index as the objective function. Finally, fault features can be extracted by analyzing the sensitive intrinsic mode function (IMF) with the maximum weighted kurtosis index. Simulations and comparisons highlight the performance of the TVF-EMD method for signal decomposition and verify that the bandwidth threshold and B-spline order are critical to the decomposition results. Two case studies on rotating machinery fault diagnosis demonstrate the effectiveness and advantages of the proposed method.
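
The abstract says only that the weighted kurtosis index combines the kurtosis index and the correlation coefficient; the product form below is therefore an assumption, shown to illustrate how a sensitive, impulse-bearing IMF would be selected:

```python
import numpy as np

def weighted_kurtosis_index(imf, raw):
    """Hypothetical weighted kurtosis index: kurtosis of the IMF scaled by
    its absolute correlation with the raw signal. The paper's exact
    combination rule may differ; this is an illustrative assumption."""
    x = imf - imf.mean()
    kurt = np.mean(x**4) / np.mean(x**2) ** 2      # plain (non-excess) kurtosis
    corr = abs(np.corrcoef(imf, raw)[0, 1])        # similarity to the raw signal
    return kurt * corr

# Synthetic stand-ins for decomposed IMFs: a smooth harmonic and sparse impulses.
rng = np.random.default_rng(0)
n = 2000
t = np.linspace(0, 1, n)
smooth = np.sin(2 * np.pi * 5 * t)
impulses = np.where(rng.random(n) < 0.01, 1.0, 0.0)   # fault-like spikes
raw = smooth + impulses
imfs = [smooth, impulses]
# The sensitive IMF is the one maximizing the index (here, the impulsive one).
best = max(range(len(imfs)), key=lambda i: weighted_kurtosis_index(imfs[i], raw))
```

The impulsive component wins because kurtosis rewards spikiness while the correlation factor suppresses components unrelated to the measured signal.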

  14. Design and evaluation of a fault-tolerant multiprocessor using hardware recovery blocks

    NASA Technical Reports Server (NTRS)

    Lee, Y. H.; Shin, K. G.

    1982-01-01

    A fault-tolerant multiprocessor with a rollback recovery mechanism is discussed. The rollback mechanism is based on the hardware recovery block which is a hardware equivalent to the software recovery block. The hardware recovery block is constructed by consecutive state-save operations and several state-save units in every processor and memory module. When a fault is detected, the multiprocessor reconfigures itself to replace the faulty component and then the process originally assigned to the faulty component retreats to one of the previously saved states in order to resume fault-free execution. A mathematical model is proposed to calculate both the coverage of multi-step rollback recovery and the risk of restart. A performance evaluation in terms of task execution time is also presented.
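
The consecutive state-save and rollback idea can be sketched in software as a toy analogue (the actual mechanism is implemented in hardware with dedicated state-save units; names and the single-step recovery policy here are simplifications):

```python
class RollbackProcessor:
    """Toy analogue of a hardware recovery block: the processor keeps its
    most recent saved states and, on a detected fault, retreats to the last
    good state so the step can be re-executed fault-free."""

    def __init__(self, n_saves=3):
        self.state = 0
        self.saved = []              # most recent saved states
        self.n_saves = n_saves

    def checkpoint(self):
        self.saved.append(self.state)
        self.saved = self.saved[-self.n_saves:]   # bounded state-save units

    def step(self, delta, fault=False):
        self.checkpoint()
        self.state += delta
        if fault:                    # fault detected: roll back to saved state
            self.state = self.saved[-1]
            return False             # caller must re-execute the step
        return True

p = RollbackProcessor()
p.step(5)                    # state = 5
ok = p.step(7, fault=True)   # corrupted step is rolled back; state stays 5
p.step(7)                    # re-execution succeeds; state = 12
```

A real multi-step rollback would also reconfigure around the faulty component, which this sketch omits.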

  15. An Integrated Architecture for Aircraft Engine Performance Monitoring and Fault Diagnostics: Engine Test Results

    NASA Technical Reports Server (NTRS)

    Rinehart, Aidan W.; Simon, Donald L.

    2015-01-01

    This paper presents a model-based architecture for performance trend monitoring and gas path fault diagnostics designed for analyzing streaming transient aircraft engine measurement data. The technique analyzes residuals between sensed engine outputs and model-predicted outputs for fault detection and isolation purposes. Diagnostic results from the application of the approach to test data acquired from an aircraft turbofan engine are presented. The approach is found to avoid false alarms when presented with nominal fault-free data. Additionally, the approach is found to successfully detect and isolate seeded gas path faults under steady-state operating scenarios, although some fault misclassifications are noted during engine transients. Recommendations for follow-on maturation and evaluation of the technique are also presented.
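
The residual-based detection step described above amounts to thresholding the difference between sensed and model-predicted outputs. A minimal sketch, assuming a fixed scalar threshold (the paper's architecture is considerably richer):

```python
import numpy as np

def detect_faults(sensed, predicted, threshold):
    """Flag samples whose residual magnitude exceeds a fixed threshold.
    Names and the scalar threshold are illustrative assumptions."""
    residuals = np.asarray(sensed) - np.asarray(predicted)
    return np.abs(residuals) > threshold

# Hypothetical data: model predicts zero; a bias fault appears at samples 3-4.
predicted = np.zeros(6)
sensed = np.array([0.1, -0.2, 0.05, 2.5, 2.4, 0.1])
flags = detect_faults(sensed, predicted, threshold=1.0)
```

Samples within the noise band produce no alarm; only the biased samples are flagged.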

  16. An Integrated Architecture for Aircraft Engine Performance Monitoring and Fault Diagnostics: Engine Test Results

    NASA Technical Reports Server (NTRS)

    Rinehart, Aidan W.; Simon, Donald L.

    2014-01-01

    This paper presents a model-based architecture for performance trend monitoring and gas path fault diagnostics designed for analyzing streaming transient aircraft engine measurement data. The technique analyzes residuals between sensed engine outputs and model-predicted outputs for fault detection and isolation purposes. Diagnostic results from the application of the approach to test data acquired from an aircraft turbofan engine are presented. The approach is found to avoid false alarms when presented with nominal fault-free data. Additionally, the approach is found to successfully detect and isolate seeded gas path faults under steady-state operating scenarios, although some fault misclassifications are noted during engine transients. Recommendations for follow-on maturation and evaluation of the technique are also presented.

  17. Investigation of Air Transportation Technology at Princeton University, 1989-1990

    NASA Technical Reports Server (NTRS)

    Stengel, Robert F.

    1990-01-01

    The Air Transportation Technology Program at Princeton University proceeded along six avenues during the past year: microburst hazards to aircraft; machine-intelligent, fault tolerant flight control; computer aided heuristics for piloted flight; stochastic robustness for flight control systems; neural networks for flight control; and computer aided control system design. These topics are briefly discussed, and an annotated bibliography of publications that appeared between January 1989 and June 1990 is given.

  18. A 10 cm Dual Frequency Doppler Weather Radar. Part I. The Radar System.

    DTIC Science & Technology

    1982-10-25

    Evaluation System (RAMCES)". The step attenuator required for this calibration can be programmed remotely, has low power and temperature coefficients, and...Control and Evaluation System". The Quality Assurance/Fault Location Network makes use of fault location techniques at critical locations in the radar and...quasi-continuous monitoring of radar performance. The Radar Monitor, Control and Evaluation System provides for automated system calibration and

  19. A testing-coverage software reliability model considering fault removal efficiency and error generation.

    PubMed

    Li, Qiuying; Pham, Hoang

    2017-01-01

    In this paper, we propose a software reliability model that considers not only error generation but also fault removal efficiency combined with testing coverage information, based on a nonhomogeneous Poisson process (NHPP). During the past four decades, many software reliability growth models (SRGMs) based on NHPP have been proposed to estimate software reliability measures, most of which share the following assumptions: 1) the fault detection rate changes during the testing phase; 2) as a result of imperfect debugging, fault removal is accompanied by a fault re-introduction rate. However, few SRGMs in the literature differentiate between fault detection and fault removal, i.e., they seldom consider imperfect fault removal efficiency. In the practical software development process, fault removal efficiency cannot always be perfect: the detected failures might not be removed completely, the original faults might still exist, and new faults might be introduced meanwhile, which is referred to as the imperfect debugging phenomenon. In this study, a model incorporating the fault introduction rate, fault removal efficiency, and testing coverage into software reliability evaluation is developed, using testing coverage to express the fault detection rate and using fault removal efficiency to account for fault repair. We compare the performance of the proposed model with several existing NHPP SRGMs on three sets of real failure data using five criteria. The results show that the model gives better fitting and predictive performance.
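
The imperfect-debugging idea (a detection rate, a removal efficiency below one, and re-introduction of faults during repair) can be illustrated with a discrete-time simulation. The parameter names and update rule below are assumptions for illustration, not the paper's NHPP formulation:

```python
def simulate_debugging(a0=100.0, b=0.1, p=0.9, beta=0.05, steps=50):
    """Discrete-time sketch of imperfect debugging:
    b    - per-interval fault detection rate,
    p    - fault removal efficiency (< 1: not every detected fault is fixed),
    beta - new-fault introduction rate per removed fault.
    All names and the update rule are illustrative assumptions."""
    remaining, removed_total = a0, 0.0
    for _ in range(steps):
        detected = b * remaining          # faults detected this interval
        removed = p * detected            # imperfect removal
        introduced = beta * removed       # debugging introduces new faults
        remaining += introduced - removed
        removed_total += removed
    return remaining, removed_total

remaining, removed_total = simulate_debugging()   # remaining decays from 100
```

Because repairs introduce new faults, the total number of removals can exceed the initial fault content even as the remaining count decays.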

  20. Postseismic deformation following the 2013 Mw 7.7 Balochistan (Pakistan) earthquake observed with Sentinel-1 Interferometry

    NASA Astrophysics Data System (ADS)

    Wang, K.; Fialko, Y. A.

    2017-12-01

    The Mw 7.7 Balochistan earthquake occurred on September 24th, 2013 in southwestern Pakistan. The earthquake rupture was characterized by mostly left-lateral strike slip, with a limited thrust component, on a system of curved, non-vertical (dip angle of 45-75 deg.) faults, including the Hoshab fault, and the Chaman fault at the North-East end of the rupture. We used Interferometric Synthetic Aperture Radar (InSAR) data from the Sentinel-1 mission to derive the time series of postseismic displacements due to the 2013 Balochistan earthquake. Data from one ascending and two descending satellite tracks reveal robust post-seismic deformation during the observation period (October 2014 to April 2017). The postseismic InSAR observations are characterized by line-of-sight (LOS) displacements primarily on the hanging-wall side of the fault. The LOS displacements have different signs in data from the ascending and descending tracks (decreases and increases in the radar range, respectively), indicating that the postseismic deformation following the 2013 Balochistan earthquake was dominated by horizontal motion with the same sense as the coseismic motion. Kinematic inversions show that the observed InSAR LOS displacements are well explained by left-lateral afterslip downdip of the high coseismic slip area. Contributions from viscoelastic relaxation and poroelastic rebound appear to be negligible during the observation period. We also observe a sharp discontinuity in the postseismic displacement field on the North-East continuation of the 2013 rupture, along the Chaman fault. We verify that this discontinuity is not due to aftershocks, as the relative LOS velocities across this discontinuity show a gradually decelerating motion throughout the observation period. These observations are indicative of a creeping fault segment at the North-East end of the 2013 earthquake rupture that likely acted as a barrier to the rupture propagation. 
Analysis of Envisat data acquired prior to the 2013 event (2004-2010) confirms creep on the respective fault segment at a rate of 5-6 mm/yr. The creep rate has increased by more than an order of magnitude after the 2013 event. The inferred along-strike variations in the degree of fault locking may be analogous to those on the central section of the San Andreas fault in California.

  1. Operator Performance Evaluation of Fault Management Interfaces for Next-Generation Spacecraft

    NASA Technical Reports Server (NTRS)

    Hayashi, Miwa; Ravinder, Ujwala; Beutter, Brent; McCann, Robert S.; Spirkovska, Lilly; Renema, Fritz

    2008-01-01

    In the cockpit of NASA's next generation of spacecraft, most vehicle commanding will be carried out via electronic interfaces instead of hard cockpit switches. Checklists will also be displayed and completed on electronic procedure viewers rather than on paper. Transitioning to electronic cockpit interfaces opens up opportunities for more automated assistance, including automated root-cause diagnosis capability. The paper reports an empirical study evaluating two potential concepts for fault management interfaces incorporating two different levels of automation. The operator performance benefits produced by automation were assessed. Also, some design recommendations for spacecraft fault management interfaces are discussed.

  2. Development of Murray Loop Bridge for High Induced Voltage

    NASA Astrophysics Data System (ADS)

    Isono, Shigeki; Kawasaki, Katsutoshi; Kobayashi, Shin-Ichi; Ishihara, Hayato; Chiyajo, Kiyonobu

    For cable faults in which the ground-fault resistance is less than 10 MΩ, the Murray Loop Bridge is an excellent fault locator in terms of location accuracy and convenience. However, when an induced voltage of several hundred volts is picked up from an adjoining single-core cable, fault location with the high-voltage Murray Loop Bridge becomes difficult. Therefore, we developed a Murray Loop Bridge that can be applied even when an induced voltage of several hundred volts occurs in the measured cable. The fault location accuracy was evaluated with the developed prototype on an actual line and on training equipment.
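
The classical Murray loop balance condition locates the fault from the ratio of the two bridge arms. A sketch assuming two equal-length, uniform cores looped at the far end (the developed instrument additionally copes with high induced voltage, which this formula alone does not address):

```python
def murray_fault_distance(arm_a, arm_b, core_length):
    """Distance to the fault from the Murray loop balance condition
    R_a / R_b = (2L - x) / x  =>  x = 2L * R_b / (R_a + R_b),
    assuming two equal-length cores of uniform resistance per unit length,
    looped together at the far end."""
    return 2 * core_length * arm_b / (arm_a + arm_b)

# Example: 1000 m cores; the bridge balances with arms of 300 and 100 ohms.
x = murray_fault_distance(300.0, 100.0, 1000.0)   # fault 500 m from the test end
```

Note the result depends only on the arm ratio and loop length, which is why the method is insensitive to the absolute fault resistance up to the stated limit.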

  3. Rupture Dynamics and Ground Motion from Earthquakes on Rough Faults in Heterogeneous Media

    NASA Astrophysics Data System (ADS)

    Bydlon, S. A.; Kozdon, J. E.; Duru, K.; Dunham, E. M.

    2013-12-01

    Heterogeneities in the material properties of Earth's crust scatter propagating seismic waves. The effects of scattered waves are reflected in the seismic coda and depend on the amplitude of the heterogeneities, their spatial arrangement, and the distance from source to receiver. In the vicinity of the fault, scattered waves influence the rupture process by introducing fluctuations in the stresses driving propagating ruptures. Further variability in the rupture process is introduced by naturally occurring geometric complexity of fault surfaces, and the stress changes that accompany slip on rough surfaces. Our goal is to better understand the origin of complexity in the earthquake source process, and to quantify the relative importance of source complexity and scattering along the propagation path in causing incoherence of high frequency ground motion. Using a 2D high order finite difference rupture dynamics code, we nucleate ruptures on either flat or rough faults that obey strongly rate-weakening friction laws. These faults are embedded in domains with spatially varying material properties characterized by Von Karman autocorrelation functions and their associated power spectral density functions, with variations in wave speed of approximately 5 to 10%. Flat fault simulations demonstrate that off-fault material heterogeneity, at least with this particular form and amplitude, has only a minor influence on the rupture process (i.e., fluctuations in slip and rupture velocity). In contrast, rupture histories on rough faults in both homogeneous and heterogeneous media include much larger short-wavelength fluctuations in slip and rupture velocity. We therefore conclude that source complexity is dominantly influenced by fault geometric complexity. 
To examine contributions of scattering versus fault geometry on ground motions, we compute spatially averaged root-mean-square (RMS) acceleration values as a function of fault perpendicular distance for a homogeneous medium and several heterogeneous media characterized by different statistical properties. We find that at distances less than ~6 km from the fault, RMS acceleration values from simulations with homogeneous and heterogeneous media are similar, but at greater distances the RMS values associated with heterogeneous media are larger than those associated with homogeneous media. The magnitude of this divergence increases with the amplitude of the heterogeneities. For instance, for a heterogeneous medium with a 10% standard deviation in material property values relative to mean values, RMS accelerations are ~50% larger than for a homogeneous medium at distances greater than 6 km. This finding is attributed to the scattering of coherent pulses into multiple pulses of decreased amplitude that subsequently arrive at later times. In order to understand the robustness of these results, an extension of our dynamic rupture and wave propagation code to 3D is underway.

  4. Tsunami simulation using submarine displacement calculated from simulation of ground motion due to seismic source model

    NASA Astrophysics Data System (ADS)

    Akiyama, S.; Kawaji, K.; Fujihara, S.

    2013-12-01

    Since fault fracturing due to an earthquake can simultaneously cause ground motion and a tsunami, it is appropriate to evaluate both with a single fault model. In practice, however, separate source models are used independently in ground motion simulation and in tsunami simulation because of the difficulty of evaluating both phenomena simultaneously. Many source models for the 2011 off the Pacific coast of Tohoku Earthquake have been proposed from inversion analyses of seismic observations or of tsunami observations. Most of these models show similar features, in which a large amount of slip is located at the shallower part of the fault area near the Japan Trench. This indicates that the ground motion and the tsunami can be evaluated by a single source model. Therefore, we examine the possibility of tsunami prediction using a fault model estimated from seismic observation records. In this study, we carry out a tsunami simulation using the displacement field of oceanic crustal movements calculated from a ground motion simulation of the 2011 off the Pacific coast of Tohoku Earthquake. We use two fault models by Yoshida et al. (2011), based on the teleseismic body wave and on the strong ground motion records, respectively. Although the two fault models share a common feature, the amount of slip near the Japan Trench is larger in the fault model from the strong ground motion records than in that from the teleseismic body wave. First, large-scale ground motion simulations applying those fault models are performed for the whole of eastern Japan using a voxel-type finite element method. The synthetic waveforms computed from the simulations are generally consistent with the observation records of the K-NET (Kinoshita (1998)) and KiK-net stations (Aoi et al. (2000)) deployed by the National Research Institute for Earth Science and Disaster Prevention (NIED). 
Next, the tsunami simulations are performed by the finite difference calculation based on the shallow water theory. The initial wave height for tsunami generation is estimated from the vertical displacement of ocean bottom due to the crustal movements, which is obtained from the ground motion simulation mentioned above. The results of tsunami simulations are compared with the observations of the GPS wave gauges to evaluate the validity for the tsunami prediction using the fault model based on the seismic observation records.

  5. Verifying Digital Components of Physical Systems: Experimental Evaluation of Test Quality

    NASA Astrophysics Data System (ADS)

    Laputenko, A. V.; López, J. E.; Yevtushenko, N. V.

    2018-03-01

    This paper continues the study of high-quality test derivation for verifying digital components used in various physical systems, such as sensors and data transfer components. For the experimental evaluation we have used logic circuits b01-b10 of the ITC'99 benchmark package (Second Release), which, as stated before, describe digital components of physical systems designed for various applications. Test sequences are derived for detecting the most well-known faults of the reference logic circuit using three different approaches to test derivation. Three widely used fault types, namely stuck-at faults, bridges, and faults that slightly modify the behavior of a single gate, are considered as possible faults of the reference behavior. The most interesting test sequences are short ones that can provide appropriate guarantees after testing; thus, we experimentally study various approaches to the derivation of so-called complete test suites, which detect all fault types. In the first series of experiments, we compare two approaches for deriving complete test suites. In the first approach, a shortest test sequence is derived for testing each fault. In the second approach, a test sequence is pseudo-randomly generated using appropriate software for logic synthesis and verification (the ABC system in our study) and thus can be longer. However, after deleting sequences that detect the same set of faults, the test suite returned by the second approach is shorter. The latter underlines the fact that in many cases it is useless to spend time and effort deriving a shortest distinguishing sequence; it is better to apply test minimization afterwards. The performed experiments also show that using only randomly generated test sequences is not very efficient, since such sequences do not detect all faults of every type. After the fault coverage reaches around 70%, saturation is observed, and the coverage cannot be increased further. 
    For deriving high-quality short test suites, an approach that combines randomly generated sequences with sequences aimed at detecting the faults not covered by the random tests reaches good fault coverage with the shortest test sequences.
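
The post-generation deletion of redundant sequences is essentially greedy set cover over the fault sets each test detects. A minimal sketch with hypothetical test and fault identifiers:

```python
def minimize_test_suite(detects):
    """Greedy test-suite minimization: repeatedly keep the test that detects
    the most still-uncovered faults, and drop the rest. `detects` maps a
    test id to the set of faults it detects (illustrative names)."""
    uncovered = set().union(*detects.values())
    kept = []
    while uncovered:
        # pick the test covering the most remaining faults
        best = max(detects, key=lambda t: len(detects[t] & uncovered))
        if not detects[best] & uncovered:
            break                      # remaining faults are undetectable
        kept.append(best)
        uncovered -= detects[best]
    return kept

detects = {
    "t1": {"f1", "f2"},
    "t2": {"f2", "f3", "f4"},
    "t3": {"f1"},            # redundant: f1 is covered by t1
    "t4": {"f4"},            # redundant: f4 is covered by t2
}
suite = minimize_test_suite(detects)   # two tests suffice for all four faults
```

Greedy set cover is not guaranteed optimal, but it captures why minimizing after generation beats searching for shortest individual sequences.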

  6. Intraplate seismicity along the Gedi Fault in Kachchh rift basin of western India

    NASA Astrophysics Data System (ADS)

    Joshi, Vishwa; Rastogi, B. K.; Kumar, Santosh

    2017-11-01

    The Kachchh rift basin is located on the western continental margin of India and has a history of experiencing large to moderate intraplate earthquakes with M ≥ 5. During the past two centuries, two large earthquakes of Mw 7.8 (1819) and Mw 7.7 (2001) have occurred in the Kachchh region, the latter with an epicenter near Bhuj. The aftershock activity of the 2001 Bhuj earthquake is still ongoing with migration of seismicity. Initially, epicenters migrated towards the east and northeast within the Kachchh region but, since 2007, it has also migrated to the south. The triggered faults are mostly within 100 km and some up to 200 km distance from the epicentral area of the mainshock. Most of these faults are trending in E-W direction, and some are transverse. It was noticed that some faults generate earthquakes down to the Moho depth whereas some faults show earthquake activity within the upper crustal volume. The Gedi Fault, situated about 50 km northeast of the 2001 mainshock epicenter, triggered the largest earthquake of Mw 5.6 in 2006. We have carried out detailed seismological studies to evaluate the seismic potential of the Gedi Fault. We have relocated 331 earthquakes by HypoDD to improve upon location errors. Further, the relocated events are used to estimate the b value, p value, and fractal correlation dimension Dc of the fault zone. The present study indicates that all the events along the Gedi Fault are shallow in nature, with focal depths less than 20 km. The estimated b value shows that the Gedi aftershock sequence could be classified as Mogi's type 2 sequence, and the p value suggests a relatively slow decay of aftershocks. The fault plane solutions of some selected events of Mw > 3.5 are examined, and activeness of the Gedi Fault is assessed from the results of active fault studies as well as GPS and InSAR results. 
All these results are critically examined to evaluate the material properties and seismic potential of the Gedi Fault that may be useful for seismic hazard assessment in the region.

  7. Identification of Geomorphic Conditions Favoring Preservation of Multiple Individual Displacements Across Transform Faults

    NASA Astrophysics Data System (ADS)

    Williams, P. L.; Phillips, D. A.; Bowles-Martinez, E.; Masana, E.; Stepancikova, P.

    2010-12-01

    Terrestrial and airborne LiDAR data, and low altitude aerial photography have been utilized in conjunction with field work to identify and map single and multiple-event stream-offsets along all strands of the San Andreas fault in the Coachella Valley. Goals of the work are characterizing the range of displacements associated with the fault’s prehistoric surface ruptures, evaluating patterns of along-fault displacement, and disclosing processes associated with the prominent Banning-Mission Creek fault junction. Preservation of offsets is associated with landscape conditions including: (1) well-confined and widely spaced source streams up-slope of the fault; (2) persistent geomorphic surfaces below the fault; (3) slope directions oriented approximately perpendicular to the fault. Notably, a pair of multiple-event offset sites has been recognized in coarse fan deposits below the Mission Creek fault near 1000 Palms oasis. Each of these sites is associated with a single source drainage oriented approximately perpendicular to the fault, and preserves a record of individual fault displacements affecting the southern portion of the Mission Creek branch of the San Andreas fault. The two sites individually record long (>10 event) slip-per-event histories. Documentation of the sites indicates a prevalence of moderate displacements and a small number of large offsets. This is consistent with evidence developed in systematic mapping of individual and multiple event stream offsets in the area extending 70 km south to Durmid Hill. Challenges to site interpretation include the presence of closely spaced en echelon fault branches and indications of stream avulsion in the area of the modern fault crossing. Conversely, strong bar and swale topography produce high quality offset indicators that can be identified across en echelon branches in most cases. 
To accomplish the detailed mapping needed to fully recover the complex yet well-preserved geomorphic features under investigation, a program of terrestrial laser scanning (TLS) was conducted at the 1000 Palms oasis stream offset sites. Data products and map interpretations will be presented along with initial applications of the study to characterizing San Andreas fault rupture hazard. Continuing work will seek to more fully populate the dataset of larger offsets, evaluate means to objectively date the larger offsets, and, as completely as possible, to characterize magnitudes of past surface ruptures of the San Andreas fault in the Coachella Valley.

  8. Interim reliability-evaluation program: analysis of the Browns Ferry, Unit 1, nuclear plant. Appendix B - system descriptions and fault trees

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Mays, S.E.; Poloski, J.P.; Sullivan, W.H.

    1982-07-01

    This report describes a risk study of the Browns Ferry, Unit 1, nuclear plant. The study is one of four such studies sponsored by the NRC Office of Research, Division of Risk Assessment, as part of its Interim Reliability Evaluation Program (IREP), Phase II. This report is contained in four volumes: a main report and three appendixes. Appendix B provides a description of Browns Ferry, Unit 1, plant systems and the failure evaluation of those systems as they apply to accidents at Browns Ferry. Information is presented concerning front-line system fault analysis; support system fault analysis; human error models and probabilities; and generic control circuit analyses.

  9. Semi-Markov adjunction to the Computer-Aided Markov Evaluator (CAME)

    NASA Technical Reports Server (NTRS)

    Rosch, Gene; Hutchins, Monica A.; Leong, Frank J.; Babcock, Philip S., IV

    1988-01-01

    The rule-based Computer-Aided Markov Evaluator (CAME) program was expanded in its ability to incorporate the effect of fault-handling processes into the construction of a reliability model. The fault-handling processes are modeled as semi-Markov events, and CAME constructs an appropriate semi-Markov model. To solve the model, the program outputs it in a form which can be directly solved with the Semi-Markov Unreliability Range Evaluator (SURE) program. As a means of evaluating the alterations made to the CAME program, the program is used to model the reliability of portions of the Integrated Airframe/Propulsion Control System Architecture (IAPSA 2) reference configuration. The reliability predictions are compared with a previous analysis. The results bear out the feasibility of utilizing CAME to generate appropriate semi-Markov models to model fault-handling processes.

  10. Fault Identification by Unsupervised Learning Algorithm

    NASA Astrophysics Data System (ADS)

    Nandan, S.; Mannu, U.

    2012-12-01

    Contemporary fault identification techniques predominantly rely on the surface expression of the fault. This biased observation is inadequate to yield detailed fault structures in areas with surface cover (e.g., cities, deserts, vegetation) and to capture the changes in fault patterns with depth. Furthermore, it is difficult to estimate the structure of faults that do not generate any surface rupture; many disastrous events have been attributed to such blind faults. Faults and earthquakes are very closely related, as earthquakes occur on faults and faults grow by accumulation of coseismic rupture. For a better seismic risk evaluation it is imperative to recognize and map these faults. We implement a novel approach to identify seismically active fault planes from the three-dimensional hypocenter distribution by making use of unsupervised learning algorithms. We employ the K-means clustering algorithm and the Expectation Maximization (EM) algorithm, modified to identify planar structures in the spatial distribution of hypocenters after filtering out isolated events. We examine the difference between faults reconstructed by the deterministic assignment of K-means and the probabilistic assignment of the EM algorithm. The method is conceptually identical to methodologies developed by Ouillon et al. (2008, 2010) and has been extensively tested on synthetic data. We determined the sensitivity of the methodology to uncertainties in hypocenter location, density of clustering, and cross-cutting fault structures. The method has been applied to datasets from two contrasting regions: while Kumaon Himalaya is a convergent plate boundary, Koyna-Warna lies in the middle of the Indian Plate but has a history of triggered seismicity. The reconstructed faults were validated by examining the orientations of mapped faults and the focal mechanisms of these events determined through waveform inversion. 
The reconstructed faults could be used to solve the fault plane ambiguity in focal mechanism determination and constrain the fault orientations for finite source inversions. The faults produced by the method exhibited good correlation with the fault planes obtained by focal mechanism solutions and previously mapped faults.
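
Once events are assigned to a cluster, the plane-identification step can be sketched as a least-squares plane fit: the fault-plane normal is the direction of least variance of the clustered hypocenters. The synthetic hypocenters below are illustrative, and the K-means/EM assignment itself is omitted for brevity:

```python
import numpy as np

def fit_fault_plane(hypocenters):
    """Fit a plane to 3-D hypocenters of one cluster: the plane normal is
    the least-variance direction, i.e. the right singular vector of the
    centered point cloud with the smallest singular value."""
    pts = np.asarray(hypocenters, dtype=float)
    centroid = pts.mean(axis=0)
    _, _, vt = np.linalg.svd(pts - centroid)
    normal = vt[-1]                          # least-variance direction
    return centroid, normal / np.linalg.norm(normal)

# Synthetic cluster: a vertical, north-striking fault (plane x = 0, small scatter).
rng = np.random.default_rng(1)
pts = np.column_stack([
    0.01 * rng.standard_normal(200),   # across-fault scatter (km)
    10 * rng.random(200),              # along strike, north (km)
    5 * rng.random(200),               # depth (km)
])
centroid, normal = fit_fault_plane(pts)    # normal ~ +/- x direction
```

The recovered normal immediately gives the strike and dip of the reconstructed plane, which is what resolves the focal-mechanism plane ambiguity mentioned above.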

  11. Large transient fault current test of an electrical roll ring

    NASA Technical Reports Server (NTRS)

    Yenni, Edward J.; Birchenough, Arthur G.

    1992-01-01

    The space station uses precision rotary gimbals to provide for sun tracking of its photoelectric arrays. Electrical power, command signals and data are transferred across the gimbals by roll rings. Roll rings have been shown to be capable of highly efficient electrical transmission and long life, through tests conducted at the NASA Lewis Research Center and Honeywell's Satellite and Space Systems Division in Phoenix, AZ. Large potential fault currents inherent to the power system's DC distribution architecture have brought about the need to evaluate the effects of large transient fault currents on roll rings. A test recently conducted at Lewis subjected a roll ring to a simulated worst-case space station electrical fault. The system model used to obtain the fault profile is described, along with details of the reduced-order circuit that was used to simulate the fault. Test results comparing roll ring performance before and after the fault are also presented.

  12. Analysis and selection of magnitude relations for the Working Group on Utah Earthquake Probabilities

    USGS Publications Warehouse

    Duross, Christopher; Olig, Susan; Schwartz, David

    2015-01-01

    Prior to calculating time-independent and -dependent earthquake probabilities for faults in the Wasatch Front region, the Working Group on Utah Earthquake Probabilities (WGUEP) updated a seismic-source model for the region (Wong and others, 2014) and evaluated 19 historical regressions on earthquake magnitude (M). These regressions relate M to fault parameters for historical surface-faulting earthquakes, including linear fault length (e.g., surface-rupture length [SRL] or segment length), average displacement, maximum displacement, rupture area, seismic moment (Mo ), and slip rate. These regressions show that significant epistemic uncertainties complicate the determination of characteristic magnitude for fault sources in the Basin and Range Province (BRP). For example, we found that M estimates (as a function of SRL) span about 0.3–0.4 units (figure 1) owing to differences in the fault parameter used; age, quality, and size of historical earthquake databases; and fault type and region considered.
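
Regressions of this family relate magnitude to a fault parameter through a log-linear form, M = a + b·log10(SRL). The coefficients below are the Wells and Coppersmith (1994) strike-slip values, used here only as one example of the 19 regressions the WGUEP evaluated:

```python
import math

def magnitude_from_srl(srl_km, a=5.16, b=1.12):
    """Moment magnitude from surface-rupture length via M = a + b*log10(SRL).
    Default coefficients are the Wells and Coppersmith (1994) strike-slip
    regression; other regressions use different a, b, and fault parameters."""
    return a + b * math.log10(srl_km)

m = magnitude_from_srl(40.0)   # ~6.95 for a 40 km surface rupture
```

Swapping in coefficients from a different regression shifts M by a few tenths of a unit for the same SRL, which is exactly the epistemic spread (0.3-0.4 units) noted above.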

  13. Fault-tolerant dynamic task graph scheduling

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kurt, Mehmet C.; Krishnamoorthy, Sriram; Agrawal, Kunal

    2014-11-16

    In this paper, we present an approach to fault tolerant execution of dynamic task graphs scheduled using work stealing. In particular, we focus on selective and localized recovery of tasks in the presence of soft faults. We elicit from the user the basic task graph structure in terms of successor and predecessor relationships. The work stealing-based algorithm to schedule such a task graph is augmented to enable recovery when the data and meta-data associated with a task get corrupted. We use this redundancy, and the knowledge of the task graph structure, to selectively recover from faults with low space and time overheads. We show that the fault tolerant design retains the essential properties of the underlying work stealing-based task scheduling algorithm, and that the fault tolerant execution is asymptotically optimal when task re-execution is taken into account. Experimental evaluation demonstrates the low cost of recovery under various fault scenarios.
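    The selective-recovery idea can be sketched in miniature: cache task results keyed by the graph's predecessor relationships, and on a detected corruption re-execute only the affected task, reusing cached predecessor outputs. This is an illustrative toy (the class and method names are invented here), not the paper's work-stealing implementation:

```python
from typing import Callable, Dict, List, Tuple

class TaskGraph:
    """Toy DAG executor with result caching and selective re-execution."""
    def __init__(self):
        self.funcs: Dict[str, Callable] = {}
        self.preds: Dict[str, List[str]] = {}
        self.results: Dict[str, object] = {}

    def add(self, name: str, func: Callable, preds: Tuple[str, ...] = ()):
        self.funcs[name] = func
        self.preds[name] = list(preds)

    def run(self, name: str):
        # Execute predecessors first; reuse any cached results.
        if name not in self.results:
            args = [self.run(p) for p in self.preds[name]]
            self.results[name] = self.funcs[name](*args)
        return self.results[name]

    def recover(self, corrupted: str):
        # Selective, localized recovery: drop only the corrupted result;
        # cached predecessor outputs are reused, so one task re-executes.
        self.results.pop(corrupted, None)
        return self.run(corrupted)

g = TaskGraph()
g.add("a", lambda: 2)
g.add("b", lambda: 3)
g.add("c", lambda x, y: x + y, ("a", "b"))
print(g.run("c"))        # builds a, b, then c -> 5
g.results["c"] = None    # simulate a soft fault corrupting c's data
print(g.recover("c"))    # only c re-executes -> 5
```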

  14. Human-centered design (HCD) of a fault-finding application for mobile devices and its impact on the reduction of time in fault diagnosis in the manufacturing industry.

    PubMed

    Kluge, Annette; Termer, Anatoli

    2017-03-01

    The present article describes the design process of a fault-finding application for mobile devices, which was built to support workers' performance by guiding them through a systematic strategy to stay focused during a fault-finding process. In collaboration with a project partner in the manufacturing industry, a fault diagnosis application was conceptualized based on a human-centered design approach (ISO 9241-210:2010). A field study with 42 maintenance workers was conducted for the purpose of evaluating the performance enhancement of fault finding in three different scenarios as well as for assessing the workers' acceptance of the technology. Workers using the mobile device application were twice as fast at fault finding as the control group without the application and perceived the application as very useful. The results indicate a vast potential of the mobile application for fault diagnosis in contemporary manufacturing systems. Copyright © 2016 Elsevier Ltd. All rights reserved.

  15. Robustness and percolation of holes in complex networks

    NASA Astrophysics Data System (ADS)

    Zhou, Andu; Maletić, Slobodan; Zhao, Yi

    2018-07-01

    The robustness and fault tolerance of a complex network are significantly influenced by its connectivity, commonly modeled by the structure of pairwise relations between network elements, i.e., nodes. Nevertheless, aggregations of nodes build higher-order structures embedded in a complex network, and these may be more vulnerable when a fraction of nodes is removed. The structure of higher-order aggregations of nodes can be naturally modeled by simplicial complexes, whereas the removal of nodes affects the values of topological invariants, such as the number of higher-dimensional holes quantified by Betti numbers. Following the methodology of percolation theory, as the fraction of removed nodes grows, new holes appear, which act as mergers between already present holes. In the present article, the relationship between the robustness and the homological properties of a complex network is studied by relating graph-theoretical signatures of robustness to quantities derived from topological invariants. Simulation results for random failures and intentional attacks on networks suggest that changes in the graph-theoretical signatures of robustness are accompanied by differences in the distribution of the number of holes per cluster under different attack strategies. In the broader sense, the results indicate the importance of research on topological invariants for further insight into the dynamics taking place over complex networks.

  16. Measurement and analysis of operating system fault tolerance

    NASA Technical Reports Server (NTRS)

    Lee, I.; Tang, D.; Iyer, R. K.

    1992-01-01

    This paper demonstrates a methodology to model and evaluate the fault tolerance characteristics of operational software. The methodology is illustrated through case studies on three different operating systems: the Tandem GUARDIAN fault-tolerant system, the VAX/VMS distributed system, and the IBM/MVS system. Measurements are made on these systems for substantial periods to collect software error and recovery data. In addition to investigating basic dependability characteristics such as major software problems and error distributions, we develop two levels of models to describe error and recovery processes inside an operating system and on multiple instances of an operating system running in a distributed environment. Based on the models, reward analysis is conducted to evaluate the loss of service due to software errors and the effect of the fault-tolerance techniques implemented in the systems. Software error correlation in multicomputer systems is also investigated.

  17. Sensor placement for diagnosability in space-borne systems - A model-based reasoning approach

    NASA Technical Reports Server (NTRS)

    Chien, Steve; Doyle, Richard; Rouquette, Nicolas

    1992-01-01

    This paper presents an approach to evaluating sensor placements on the basis of how well they discriminate between a given fault mode and normal operating modes and/or other fault modes. In this approach, a model of the system in both normal and fault modes is used to evaluate possible sensor placements against three criteria. Discriminability measures how large a divergence in expected sensor readings the two system modes can be expected to produce. Accuracy measures confidence in the particular model predictions. Timeliness measures how long after the fault occurrence the expected divergence will take place. These three metrics can then be combined into a recommendation for a sensor placement. This paper describes how these measures can be computed and illustrates the methods with a brief example.
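    Two of these criteria lend themselves to a direct numerical sketch. Assuming, hypothetically, that the model yields discrete time series of expected readings for the normal and fault modes (the function names and threshold below are illustrative, not from the paper), discriminability and timeliness might be scored as:

```python
def discriminability(normal, fault):
    # Largest divergence in expected readings between the two modes.
    return max(abs(n - f) for n, f in zip(normal, fault))

def timeliness(normal, fault, threshold, dt=1.0):
    # Time after fault onset until the divergence first exceeds threshold.
    for i, (n, f) in enumerate(zip(normal, fault)):
        if abs(n - f) > threshold:
            return i * dt
    return float("inf")  # the fault never becomes visible at this sensor

normal = [1.0, 1.0, 1.0, 1.0]
fault  = [1.0, 1.1, 1.6, 2.0]
print(discriminability(normal, fault))   # 1.0
print(timeliness(normal, fault, 0.5))    # 2.0
```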

  18. Development, Interaction and Linkage of Normal Fault Segments along the 100-km Bilila-Mtakataka Fault, Malawi

    NASA Astrophysics Data System (ADS)

    Fagereng, A.; Hodge, M.; Biggs, J.; Mdala, H. S.; Goda, K.

    2016-12-01

    Faults grow through the interaction and linkage of isolated fault segments. Continuous fault systems are those whose segments interact, link, and may slip synchronously, whereas non-continuous fault systems comprise isolated faults. As seismic moment is related to fault length (Wells and Coppersmith, 1994), understanding whether a fault system is continuous is critical in evaluating seismic hazard. Maturity may be a control on fault continuity: immature, low-displacement faults are typically assumed to be non-continuous. Here, we study two overlapping, 20 km long, normal fault segments of the N-S striking Bilila-Mtakataka fault, Malawi, in the southern section of the East African Rift System. Despite the fault's relative immaturity, previous studies concluded that the Bilila-Mtakataka fault is continuous for its entire 100 km length, with the most recent event equating to an Mw 8.0 earthquake (Jackson and Blenkinsop, 1997). We explore whether segment geometry and the relationship to a pre-existing high-grade metamorphic foliation have influenced segment interaction and fault development. Fault geometry and scarp height are constrained by DEMs derived from SRTM, Pleiades, and 'Structure from Motion' photogrammetry using a UAV, alongside direct field observations. The segment strikes differ on average by 10°, but by up to 55° at their adjacent tips. The southern segment is sub-parallel to the foliation, whereas the northern segment is highly oblique to it. Geometrical surface discontinuities suggest two isolated faults; however, displacement-length profiles and Coulomb stress change models suggest segment interaction, with potential for linkage at depth. Further work must be undertaken on other segments to assess the continuity of the entire fault and conclude whether an earthquake larger than the maximum instrumentally recorded event (1910 M7.4 Rukwa) is possible.

  19. Regional Survey of Structural Properties and Cementation Patterns of Fault Zones in the Northern Part of the Albuquerque Basin, New Mexico - Implications for Ground-Water Flow

    USGS Publications Warehouse

    Minor, Scott A.; Hudson, Mark R.

    2006-01-01

    Motivated by the need to document and evaluate the types and variability of fault zone properties that potentially affect aquifer systems in basins of the middle Rio Grande rift, we systematically characterized structural and cementation properties of exposed fault zones at 176 sites in the northern Albuquerque Basin. A statistical analysis of measurements and observations evaluated four aspects of the fault zones: (1) attitude and displacement, (2) cement, (3) lithology of the host rock or sediment, and (4) character and width of distinctive structural architectural components at the outcrop scale. Three structural architectural components of the fault zones were observed: (1) outer damage zones related to fault growth; these zones typically contain deformation bands, shear fractures, and open extensional fractures, which strike subparallel to the fault and may promote ground-water flow along the fault zone; (2) inner mixed zones composed of variably entrained, disrupted, and dismembered blocks of host sediment; and (3) central fault cores that accommodate most shear strain and in which persistent low-permeability clay-rich rocks likely impede the flow of water across the fault. The lithology of the host rock or sediment influences the structure of the fault zone and the width of its components. Different grain-size distributions and degrees of induration of the host materials produce differences in material strength that lead to variations in width, degree, and style of fracturing and other fault-related deformation. In addition, lithology of the host sediment appears to strongly control the distribution of cement in fault zones. Most faults strike north to north-northeast and dip 55°-77° east or west, toward the basin center. Most faults exhibit normal slip, and many of these faults have been reactivated by normal-oblique and strike slip.
Although measured fault displacements have a broad range, from 0.9 to 4,000 m, most are <100 m, and fault zones appear to have formed mainly at depths less than 1,000 m. Fault zone widths do not exceed 40 m (median width = 15.5 m). The mean width of fault cores (0.1 m) is nearly one order of magnitude less than that of mixed zones (0.75 m) and two orders of magnitude less than that of damage zones (9.7 m). Cements, a proxy for localized flow of ancient ground water, are common along fault zones in the basin. Silica cements are limited to faults that are near and strike north to northwest toward the Jemez volcanic field north of the basin, whereas carbonate fault cements are widely distributed. Coarse sediments (gravel and sand) host the greatest concentrations of cement within fault zones. Cements fill some extension fractures and, to a lesser degree, are concentrated along shear fractures and deformation bands within inner damage zones. Cements are commonly concentrated in mixed zones and inner damage zones on one side of a fault and thus are asymmetrically distributed within a fault zone, but cement does not consistently lie on the basinward side of faults. From observed spatial patterns of asymmetrically distributed fault zone cements, we infer that ancient ground-water flow was commonly localized along, and bounded by, faults in the basin. It is apparent from our study that the Albuquerque Basin contains a high concentration of faults. The geometry of, internal structure of, and cement and clay distribution in fault zones have created and will continue to create considerable heterogeneity of permeability within the basin aquifers. The characteristics and statistical range of fault zone features appear to be predictable and consistent throughout the basin; this predictability can be used in ground-water flow simulations that consider the influence of faults.

  20. Multi-version software reliability through fault-avoidance and fault-tolerance

    NASA Technical Reports Server (NTRS)

    Vouk, Mladen A.; Mcallister, David F.

    1989-01-01

    A number of experimental and theoretical issues associated with the practical use of multi-version software to provide run-time tolerance to software faults were investigated. A specialized tool was developed and evaluated for measuring the testing coverage provided by a variety of metrics. The tool was used to collect information on the relationships between software faults and the coverage provided by the testing process as measured by different metrics (including data flow metrics). Considerable correlation was found between the coverage provided by some higher metrics and the elimination of faults in the code. Back-to-back testing was further studied as an efficient mechanism for removal of uncorrelated faults and common-cause faults of variable span. Work also continued on software reliability estimation methods based on non-random sampling, and on the relationship between software reliability and the code coverage provided through testing. New fault tolerance models were formulated. Simulation studies of the Acceptance Voting and Multi-stage Voting algorithms were completed, and it was found that these two schemes for software fault tolerance are superior in many respects to some commonly used schemes. Particularly encouraging are the safety properties of the Acceptance Voting scheme.

  1. A testing-coverage software reliability model considering fault removal efficiency and error generation

    PubMed Central

    Li, Qiuying; Pham, Hoang

    2017-01-01

    In this paper, we propose a software reliability model that considers not only error generation but also fault removal efficiency, combined with testing coverage information, based on a nonhomogeneous Poisson process (NHPP). During the past four decades, many NHPP-based software reliability growth models (SRGMs) have been proposed to estimate software reliability measures, most of which share the following assumptions: 1) during the testing phase, the fault detection rate commonly changes; 2) as a result of imperfect debugging, fault removal is accompanied by a fault re-introduction rate. But few SRGMs in the literature differentiate between fault detection and fault removal, i.e., they seldom consider imperfect fault removal efficiency. In the practical software development process, however, fault removal efficiency cannot always be perfect: detected failures might not be removed completely, the original faults might still exist, and new faults might be introduced meanwhile, which is referred to as the imperfect debugging phenomenon. In this study, a model incorporating the fault introduction rate, fault removal efficiency, and testing coverage into software reliability evaluation is developed, using testing coverage to express the fault detection rate and using fault removal efficiency to model fault repair. We compare the performance of the proposed model with several existing NHPP SRGMs using three sets of real failure data based on five criteria. The results show that the model gives better fitting and predictive performance. PMID:28750091
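    The interplay of the three ingredients (detection rate, removal efficiency, error generation) can be sketched numerically. The mean-value dynamics below are a generic illustration under assumed forms (constant detection rate b, removal efficiency p, error-generation rate beta), not the specific model proposed in the paper:

```python
def nhpp_mean_detected(a0=100.0, b=0.1, p=0.9, beta=0.05,
                       t_end=100.0, dt=0.01):
    """Euler integration of dm/dt = b * (content - p*m), where the total
    fault content grows by beta new faults per removed fault."""
    m = 0.0        # expected number of detected faults
    removed = 0.0  # expected number of removed faults
    t = 0.0
    while t < t_end:
        content = a0 + beta * removed     # error generation inflates content
        dm = b * (content - p * m) * dt   # detection of remaining faults
        m += dm
        removed += p * dm                 # imperfect removal: only p removed
        t += dt
    return m

# With imperfect removal (p < 1) and error generation (beta > 0), m(t)
# approaches a0 / (p * (1 - beta)), here about 117, rather than a0 = 100.
print(round(nhpp_mean_detected(), 1))
```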

  2. Structural Analysis and Evolution of the Kashan (Qom-Zefreh) Fault, Central Iran

    NASA Astrophysics Data System (ADS)

    Safaei, H.; Taheri, A.; Vaziri-Moghaddam, H.

    The main objectives of this research were to identify the geometry and structure of the Qom-Zefreh fault and to determine the extent of its effects on stratigraphy and facies changes. Identifying the movement mechanisms of major basement faults, and the extent and timing of their activity, is important for evaluating the paleogeography of the Iranian plateau. In the Orumieh-Dokhtar volcanic belt, there are faults nearly parallel to the Zagros Zone. These faults were formed during closure of the Neotethys and collision of the Arabian plate with the Iranian crust. The Qom-Zefreh fault is one of these faults and has been described as comprising four faults with different trends. Our results indicate that the fault is not divided into four segments with different trends; rather, the major trend is that of the central section, the Kashan segment, trending 140° (AZ140), and the other segments are merely related faults. Thus the name Kashan fault is recommended for this fault. The mechanism of the Kashan fault is dextral transpression, and the other related faults in the region correlate well with fractures in a dextral transpression system. Stratigraphic studies of the present formations show the effect of fault movements on the Upper Cretaceous sedimentary basin. The lack of noticeable changes in Lower Cretaceous and older sediments indicates that activity of the fault system began in the Upper Cretaceous. Thus, based on these results, the effect of the Neotethys sea closure in this region could be considered to extend back at least to the Upper Cretaceous.

  3. Strain distribution across magmatic margins during the breakup stage: Seismicity patterns in the Afar rift zone

    NASA Astrophysics Data System (ADS)

    Brown, C.; Ebinger, C. J.; Belachew, M.; Gregg, T.; Keir, D.; Ayele, A.; Aronovitz, A.; Campbell, E.

    2008-12-01

    Fault patterns record the strain history along passive continental margins, but geochronological constraints are, in general, too sparse to evaluate these patterns in 3D. The Afar depression in Ethiopia provides a unique setting to evaluate the time and space relations between faulting and magmatism across an incipient passive margin that formed above a mantle plume. The margin comprises a high-elevation flood basalt province with thick, underplated continental crust, a narrow fault-line escarpment underlain by stretched and intruded crust, and a broad zone of highly intruded, mafic crust lying near sea level. We analyze fault and seismicity patterns across and along the length of the Afar rift zone to determine the spatial distribution of strain during the final stages of continental breakup, and its relation to active magmatism and dike intrusions. Seismicity data include historic data and 2005-2007 data from the collaborative US-UK-Ethiopia Afar Geodynamics Project, which spans the 2005-present Dabbahu rift episode. Earthquake epicenters cluster within discrete, 50 km long magmatic segments that lack any fault linkage. Swarms also cluster along the fault-line scarp between the unstretched and highly stretched Afar rift zone; these earthquakes may signal release of stresses generated by large lateral density contrasts. We compare Coulomb static stress models with focal mechanisms and fault kinematics to discriminate between segmented magma intrusion and crank-arm models for the central Afar rift zone.

  4. The influence of geologic structures on deformation due to ground water withdrawal.

    PubMed

    Burbey, Thomas J

    2008-01-01

    A 62 day controlled aquifer test was conducted in thick alluvial deposits at Mesquite, Nevada, for the purpose of monitoring horizontal and vertical surface deformations using a high-precision global positioning system (GPS) network. Initial analysis of the data indicated an anisotropic aquifer system on the basis of the observed radial and tangential deformations. However, new InSAR data seem to indicate that the site may be bounded by an oblique normal fault as the subsidence bowl is both truncated to the northwest and offset from the pumping well to the south. A finite-element numerical model was developed using ABAQUS to evaluate the potential location and hydromechanical properties of the fault based on the observed horizontal deformations. Simulation results indicate that for the magnitude and direction of motion at the pumping well and at other GPS stations, which is toward the southeast (away from the inferred fault), the fault zone (5 m wide) must possess a very high permeability and storage coefficient and cross the study area in a northeast-southwest direction. Simulated horizontal and vertical displacements that include the fault zone closely match observed displacements and indicate the likelihood of the presence of the inferred fault. This analysis shows how monitoring horizontal displacements can provide valuable information about faults, and boundary conditions in general, in evaluating aquifer systems during an aquifer test.

  5. Fault region localization (FRL): collaborative product and process improvement based on field performance

    NASA Astrophysics Data System (ADS)

    Mannar, Kamal; Ceglarek, Darek

    2005-11-01

    Customer feedback in the form of warranty/field performance is an important and direct indicator of the quality and robustness of a product. Linking warranty information to manufacturing measurements can identify key design parameters and process variables (DPs and PVs) that are related to warranty failures. Warranty data has traditionally been used in reliability studies to determine failure distributions and warranty cost. This paper proposes a novel Fault Region Localization (FRL) methodology to map warranty failures to manufacturing measurements (and hence to DPs/PVs), to diagnose warranty failures, and to perform tolerance re-evaluation. The FRL methodology consists of two parts: 1. Identifying relations between warranty failures and DPs/PVs using the Generalized Rough Set (GRS) method. GRS is a supervised learning technique that identifies the specific DPs and PVs related to given warranty failures and then determines the corresponding Warranty Fault Region (WFR), Normal Region (NR), and Boundary Region (BND). GRS extends the traditional Rough Set method by allowing for the noise and uncertainty of warranty data classes. 2. Re-evaluating the original tolerances of DPs/PVs based on the identified WFR and BND regions. The FRL methodology is illustrated using case studies based on two warranty failures from the electronics industry.

  6. Motion-Based System Identification and Fault Detection and Isolation Technologies for Thruster Controlled Spacecraft

    NASA Technical Reports Server (NTRS)

    Wilson, Edward; Sutter, David W.; Berkovitz, Dustin; Betts, Bradley J.; Kong, Edmund; delMundo, Rommel; Lages, Christopher R.; Mah, Robert W.; Papasin, Richard

    2003-01-01

    By analyzing the motions of a thruster-controlled spacecraft, it is possible to provide on-line (1) thruster fault detection and isolation (FDI), and (2) vehicle mass- and thruster-property identification (ID). Technologies developed recently at NASA Ames have significantly improved the speed and accuracy of these ID and FDI capabilities, making them feasible for application to a broad class of spacecraft. Since these technologies use existing sensors, the improved system robustness and performance that come with thruster fault tolerance and system ID can be achieved through a software-only implementation. This contrasts with the added cost, mass, and hardware complexity commonly required by hardware-based FDI. Originally developed in partnership with NASA Johnson Space Center to provide thruster FDI capability for the X-38 during re-entry, these technologies are most recently being applied to the MIT SPHERES experimental spacecraft to fly on the International Space Station in 2004. The model-based FDI uses a maximum-likelihood calculation at its core, while the ID is based upon recursive least squares estimation. Flight test results from the SPHERES implementation, as flown aboard the NASA KC-135A 0-g simulator aircraft in November 2003, are presented.
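    As a hedged illustration of the recursive least squares step mentioned above (a generic textbook scalar form, not the flight code), consider estimating an unknown spacecraft mass m from force-acceleration pairs F = m*a; all data below are hypothetical:

```python
def rls_scalar(phis, ys, lam=1.0):
    """Textbook scalar recursive least squares for y = theta * phi."""
    theta, P = 0.0, 1e6   # initial estimate and (large) covariance
    for phi, y in zip(phis, ys):
        k = P * phi / (lam + phi * P * phi)   # gain
        theta += k * (y - phi * theta)        # innovation update
        P = (1.0 - k * phi) * P / lam         # covariance update
    return theta

# Hypothetical data: true mass 500 kg; accelerations (m/s^2) and forces (N).
accels = [0.02, 0.04, 0.01, 0.03]
forces = [500.0 * a for a in accels]
print(rls_scalar(accels, forces))   # converges toward 500
```

With the large initial covariance, the estimate tracks the data almost immediately; the forgetting factor `lam` (here 1.0) would be reduced below 1 to track slowly varying parameters such as propellant mass.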

  7. A Comprehensive Overview of the Duvernay Induced Seismicity near Fox Creek, Alberta

    NASA Astrophysics Data System (ADS)

    Schultz, R.; Wang, R.; Gu, Y. J.; Haug, K.; Atkinson, G. M.

    2016-12-01

    In this work we summarize the current state of understanding of the induced seismicity related to Duvernay hydraulic fracturing operations in central Alberta, near the town of Fox Creek. Earthquakes in this region cluster into distinct sequences in time, space, and focal mechanism. To corroborate this point, we use cross-correlation detection methods to delineate transient temporal relationships, double-difference relocations to confirm spatial clustering, and moment tensor determinations to show fault motion consistency. The spatiotemporal clustering of sequences is strongly related to nearby hydraulic fracturing operations. In addition, we identify a strong preference for subvertical strike-slip motion with a roughly 45° P-axis orientation, consistent with ambient stress field considerations. The hypocentral geometries in two traffic light protocol "red light" cases, which are robustly constrained by local array data, provide compelling evidence for planar features starting at Duvernay Formation depths and extending into the shallow Precambrian basement. We interpret these features as faults oriented approximately north-south and subvertically, consistent with the moment tensor determinations. Finally, we conclude that the primary sequences are best explained as induced events occurring in response to effective stress changes caused by pore-pressure increases along pre-existing faults during hydraulic fracturing stimulations.

  8. Structural style and hydrocarbon trap of Karbasi anticline, in the Interior Fars region, Zagros, Iran

    NASA Astrophysics Data System (ADS)

    Maleki, Z.; Arian, M.; Solgi, A.

    2014-07-01

    The Karbasi anticline, west-northwest of the town of Jahrom, is located 40 km northwest of the Aghar gas anticline in the Interior Fars region. This anticline has an asymmetric structure, and several faults with large strike separation are observed within it. The activity of the Nezamabad sinistral strike-slip fault in the western part of the anticline caused the fold plunge to change in this region. Because of the increasing structural complexity in the Fars region and the need for exploration of deeper horizons, especially Paleozoic ones, the analysis of fold style elements, one of the main components of structural studies, is necessary. In this paper, because of the structural complexity of the Karbasi anticline and the importance of drilling and hydrocarbon exploration in the Fars region, we analyze and evaluate the fold style elements and geometry, with emphasis on the role of the Nezamabad fault in the Interior Fars region. The analysis of fold style elements shows that the fold is horizontal and moderately inclined in the eastern part of the anticline and upright and moderately plunging in the western part, so the western part of the anticline has been affected by more deformation. In this research, the relationship of the existing faults, especially the Nezamabad sinistral strike-slip fault, to the folding, and its effect on the Dehram horizon and the Bangestan Group, was modeled. Based on the results, the Nezamabad fault may be located between structural sections G-G' and E-E', where it appears to have operated as a fault zone. In different parts of the Karbasi anticline, the Dashtak Formation, as a middle detachment unit, plays an important role in the folding geometry, which may be affected by the main Nezamabad fault.

  9. Beyond-laboratory-scale prediction for channeling flows through subsurface rock fractures with heterogeneous aperture distributions revealed by laboratory evaluation

    NASA Astrophysics Data System (ADS)

    Ishibashi, Takuya; Watanabe, Noriaki; Hirano, Nobuo; Okamoto, Atsushi; Tsuchiya, Noriyoshi

    2015-01-01

    The present study evaluates aperture distributions and fluid flow characteristics for variously sized laboratory-scale granite fractures under confining stress. As a significant result of the laboratory investigation, the contact area in the fracture plane was found to be virtually independent of scale. By combining this characteristic with the self-affine fractal nature of fracture surfaces, a novel method for predicting fracture aperture distributions beyond laboratory scale is developed. The validity of this method is demonstrated by reproducing the results of the laboratory investigation and the maximum aperture-fracture length relations reported in the literature for natural fractures. The present study finally predicts conceivable scale dependencies of fluid flow through joints (fractures without shear displacement) and faults (fractures with shear displacement). Both joint and fault aperture distributions are characterized by a scale-independent contact area, a scale-dependent geometric mean, and a scale-independent geometric standard deviation of aperture. The contact areas for joints and faults are approximately 60% and 40%, respectively. Changes in the geometric means of joint and fault apertures (µm), e_m,joint and e_m,fault, with fracture length (m), l, are approximated by e_m,joint = 1 × 10^2 l^0.1 and e_m,fault = 1 × 10^3 l^0.7, whereas the geometric standard deviations of both joint and fault apertures are approximately 3. Fluid flows through both joints and faults are characterized by the formation of preferential flow paths (i.e., channeling flows) with scale-independent flow areas of approximately 10%, whereas the joint and fault permeabilities (m^2), k_joint and k_fault, are scale dependent and are approximated as k_joint = 1 × 10^-12 l^0.2 and k_fault = 1 × 10^-8 l^1.1.
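    The scaling relations quoted above translate directly into a small calculator (units as in the abstract: geometric-mean aperture in micrometers, fracture length in meters, permeability in m^2):

```python
def joint_props(length_m: float) -> dict:
    """Geometric-mean aperture (µm) and permeability (m^2) for a joint."""
    return {"e_mean_um": 1e2 * length_m ** 0.1,
            "k_m2": 1e-12 * length_m ** 0.2}

def fault_props(length_m: float) -> dict:
    """Geometric-mean aperture (µm) and permeability (m^2) for a fault."""
    return {"e_mean_um": 1e3 * length_m ** 0.7,
            "k_m2": 1e-8 * length_m ** 1.1}

# Faults open up (and gain permeability) with length much faster than joints.
for l in (1.0, 10.0, 100.0):
    print(l, joint_props(l), fault_props(l))
```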

  10. A fault-tolerant strategy based on SMC for current-controlled converters

    NASA Astrophysics Data System (ADS)

    Azer, Peter M.; Marei, Mostafa I.; Sattar, Ahmed A.

    2018-05-01

    Sliding mode control (SMC) is used to control variable-structure systems such as power electronics converters. This paper presents a fault-tolerant strategy based on SMC for current-controlled AC-DC converters. The proposed SMC is based on three sliding surfaces, one for each leg of the AC-DC converter. Since the three-phase input currents are balanced, two sliding surfaces suffice to control the phase currents; the third sliding surface is therefore an extra degree of freedom, which is utilised to control the neutral voltage. This action is used to enhance the performance of the converter during open-switch faults. The proposed fault-tolerant strategy reallocates the sliding surface of the faulty leg to control the neutral voltage, and the current waveform is improved as a consequence. The behaviour of the current-controlled converter during different types of open-switch faults is analysed. Double-switch faults include three cases: two upper switches; an upper and a lower switch on different legs; and both switches of the same leg. The dynamic performance of the proposed system is evaluated during healthy and open-switch fault operation. Simulation results demonstrate the various merits of the proposed SMC-based fault-tolerant strategy.

  11. Performance Evaluation of Cloud Service Considering Fault Recovery

    NASA Astrophysics Data System (ADS)

    Yang, Bo; Tan, Feng; Dai, Yuan-Shun; Guo, Suchang

    In cloud computing, cloud service performance is an important issue. To improve cloud service reliability, fault recovery may be used. However, the use of fault recovery can affect the performance of the cloud service. In this paper, we conduct a preliminary study of this issue. Cloud service performance is quantified by service response time, whose probability density function and mean are derived.
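    A minimal sketch of the trade-off, under assumptions not taken from the paper (exponentially distributed service times; each request independently suffers a fault with probability p_fault and then pays a fixed recovery delay):

```python
import random

def mean_response_time(mean_service=1.0, p_fault=0.2, recovery=0.5,
                       n=200_000, seed=1):
    """Monte Carlo estimate of E[response time] with fault recovery."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(n):
        t = rng.expovariate(1.0 / mean_service)   # base service time
        if rng.random() < p_fault:
            t += recovery                          # recovery delay on a fault
        total += t
    return total / n

# Analytically E[T] = mean_service + p_fault * recovery = 1.1 here;
# the estimate should land close to that value.
print(mean_response_time())
```

The same simulation makes the reliability-performance tension explicit: raising p_fault or the recovery delay raises the mean response time linearly under these assumptions.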

  12. Fault zone structure and inferences on past activities of the active Shanchiao Fault in the Taipei metropolis, northern Taiwan

    NASA Astrophysics Data System (ADS)

    Chen, C.; Lee, J.; Chan, Y.; Lu, C.

    2010-12-01

    The Taipei Metropolis, home to around 10 million people, is subject to seismic hazard originating not only from distant faults and sources scattered throughout the Taiwan region, but also from active faults lying directly underneath. Northern Taiwan, including the Taipei region, is currently affected by post-orogenic (Penglai arc-continent collision) processes related to backarc extension of the Ryukyu subduction system. The Shanchiao Fault, an active normal fault outcropping along the western boundary of the Taipei Basin and dipping to the east, is investigated here for its subsurface structure and activities. Borehole records in the central portion of the fault were analyzed to document the stacking of post-Last Glacial Maximum (LGM) growth sediments, and a tulip flower structure is illuminated, with an average vertical slip rate of about 3 mm/yr. Similar fault zone architecture and post-LGM tectonic subsidence rates are also found in the northern portion of the fault. A correlation between geomorphology and structural geology in the Shanchiao Fault zone demonstrates that an array of subtle geomorphic scarps corresponds to the branch fault, while the surface trace of the main fault seems to be completely erased by erosion and sedimentation. Such constraints and knowledge are crucial for earthquake hazard evaluation and mitigation in the Taipei Metropolis, and for understanding the kinematics of transtensional tectonics in northern Taiwan. (Figure: schematic 3D diagram of the fault zone in the central portion of the Shanchiao Fault, displaying regional subsurface geology and its relation to topographic features.)

  13. Summary: Experimental validation of real-time fault-tolerant systems

    NASA Technical Reports Server (NTRS)

    Iyer, R. K.; Choi, G. S.

    1992-01-01

    Testing and validation of real-time systems is always difficult to perform, since neither the error generation process nor the fault propagation problem is easy to comprehend. There is no substitute for results based on actual measurements and experimentation; such results are essential for developing a rational basis for evaluating and validating real-time systems. However, with physical experimentation, controllability and observability are limited to the external instrumentation that can be hooked up to the system under test, and this is a difficult, if not impossible, task for a complex system. Also, physical hardware must exist before such measurement experiments can be set up. A simulation approach, on the other hand, allows flexibility that is unequaled by any other existing method for system evaluation. A simulation methodology for system evaluation was successfully developed and implemented, and the environment was demonstrated using existing real-time avionic systems. The research was oriented toward evaluating the impact of permanent and transient faults in aircraft control computers. Results were obtained for the Bendix BDX 930 system and the Hamilton Standard EEC131 jet engine controller. The studies showed that simulated fault injection is valuable, in the design stage, for evaluating the susceptibility of computing systems to different types of failures.

  14. Neural adaptive observer-based sensor and actuator fault detection in nonlinear systems: Application in UAV.

    PubMed

    Abbaspour, Alireza; Aboutalebi, Payam; Yen, Kang K; Sargolzaei, Arman

    2017-03-01

    A new online detection strategy is developed to detect faults in sensors and actuators of unmanned aerial vehicle (UAV) systems. In this design, the weighting parameters of the Neural Network (NN) are updated by using the Extended Kalman Filter (EKF). Online adaptation of these weighting parameters helps to detect abrupt, intermittent, and incipient faults accurately. We apply the proposed fault detection system to a nonlinear dynamic model of the WVU YF-22 unmanned aircraft for its evaluation. The simulation results show that the new method has better performance in comparison with conventional recurrent neural network-based fault detection strategies. Copyright © 2016 ISA. Published by Elsevier Ltd. All rights reserved.

  15. The design and implementation of on-line monitoring system for UHV compact shunt capacitors

    NASA Astrophysics Data System (ADS)

    Tao, Weiliang; Ni, Xuefeng; Lin, Hao; Jiang, Shengbao

    2017-08-01

    Because of the large capacity and compact structure of the UHV compact shunt capacitor, it is difficult to take effective measures to detect and prevent faults. If a faulty capacitor is not maintained in a timely manner, it poses a threat to the safe operation of the system and to the safety of maintenance personnel. The UHV compact shunt capacitor on-line monitoring system developed here detects and records the on-line operation information of UHV compact shunt capacitors, analyzes and evaluates early fault warning signs, and identifies faulty capacitors or capacitors with fault symptoms, to ensure safe and reliable operation of the system.

  16. Development and evaluation of a fault-tolerant multiprocessor (FTMP) computer. Volume 4: FTMP executive summary

    NASA Technical Reports Server (NTRS)

    Smith, T. B., III; Lala, J. H.

    1984-01-01

    The FTMP architecture is a high-reliability computer concept modeled after a homogeneous multiprocessor architecture. Elements of the FTMP are operated in tight synchronism with one another, and hardware fault detection and fault masking are provided transparently to the software. Operating system design and user software design are thus greatly simplified. Performance of the FTMP is also comparable to that of a simplex equivalent, due to the efficiency of the fault-handling hardware. The FTMP project constructed an engineering module of the FTMP, programmed the machine, and extensively tested the architecture through fault injection and other stress testing. This testing confirmed the soundness of the FTMP concepts.

  17. Performance and evaluation of real-time multicomputer control systems

    NASA Technical Reports Server (NTRS)

    Shin, K. G.

    1985-01-01

    Three experiments on fault-tolerant multiprocessors (FTMP) were begun: (1) measurement of fault latency in FTMP; (2) validation and analysis of FTMP synchronization protocols; and (3) investigation of error propagation in FTMP.

  18. Repetitive transient extraction for machinery fault diagnosis using multiscale fractional order entropy infogram

    NASA Astrophysics Data System (ADS)

    Xu, Xuefang; Qiao, Zijian; Lei, Yaguo

    2018-03-01

    The presence of repetitive transients in vibration signals is a typical symptom of local faults of rotating machinery. The infogram was developed to extract repetitive transients from vibration signals based on Shannon entropy. Unfortunately, Shannon entropy is maximized for random processes and is unable to quantify repetitive transients buried in heavy random noise. In addition, vibration signals always contain multiple intrinsic oscillatory modes due to interaction and coupling effects between machine components. Under these circumstances, high Shannon entropy values appear in several frequency bands, or no high value appears in the optimal frequency band, and the infogram becomes difficult to interpret. It thus also becomes difficult to select the optimal frequency band for extracting the repetitive transients. To solve these problems, the multiscale fractional order entropy (MSFE) infogram is proposed in this paper. With the help of the MSFE infogram, the complexity and nonlinear signatures of vibration signals can be evaluated by quantifying spectral entropy over a range of scales in the fractional domain. Moreover, the similarity tolerance of the MSFE infogram is helpful for assessing the regularity of signals. A simulation and two experiments, concerning a locomotive bearing and a wind turbine gear, are used to validate the MSFE infogram. The results demonstrate that the MSFE infogram is more robust to heavy noise than the infogram, and that high values appear only in the optimal frequency band for repetitive transient extraction.
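    The core contrast the infogram exploits, namely that noise-like spectra maximize Shannon entropy while transient-rich bands concentrate energy into few components, can be shown with a minimal spectral-entropy sketch. This uses plain Shannon entropy, not the fractional-order multiscale version proposed in the paper, and the band energies are invented for illustration:

```python
import math

def shannon_spectral_entropy(band_energies):
    """Normalized Shannon entropy of a band-energy distribution (0..1).

    Values near 1 indicate a noise-like (flat) spectrum; low values indicate
    energy concentrated in few bands, as with repetitive fault transients.
    """
    total = sum(band_energies)
    probs = [e / total for e in band_energies if e > 0]
    h = -sum(p * math.log(p) for p in probs)
    return h / math.log(len(band_energies))

flat = shannon_spectral_entropy([1.0] * 8)    # flat spectrum: entropy = 1.0
peaked = shannon_spectral_entropy(            # energy concentrated in one band
    [8.0, 0.1, 0.1, 0.1, 0.1, 0.1, 0.1, 0.1])
```

    The flat (noise-like) spectrum attains the maximum entropy, while the peaked spectrum scores much lower, which is the property the infogram uses to rank frequency bands.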

  19. Extended Testability Analysis Tool

    NASA Technical Reports Server (NTRS)

    Melcher, Kevin; Maul, William A.; Fulton, Christopher

    2012-01-01

    The Extended Testability Analysis (ETA) Tool is a software application that supports fault management (FM) by performing testability analyses on the fault propagation model of a given system. Fault management includes the prevention of faults through robust design margins and quality assurance methods, or the mitigation of system failures. Fault management requires an understanding of the system design and operation, potential failure mechanisms within the system, and the propagation of those potential failures through the system. The purpose of the ETA Tool software is to process the testability analysis results from a commercial software program called TEAMS Designer in order to provide a detailed set of diagnostic assessment reports. The ETA Tool is a command-line process with several user-selectable report output options. The ETA Tool also extends the COTS testability analysis and enables variation studies with sensor sensitivity impacts on system diagnostics and component isolation using a single testability output. The ETA Tool can also provide extended analyses from a single set of testability output files. The following analysis reports are available to the user: (1) the Detectability Report provides a breakdown of how each tested failure mode was detected; (2) the Test Utilization Report identifies all the failure modes that each test detects; (3) the Failure Mode Isolation Report demonstrates the system's ability to discriminate between failure modes; (4) the Component Isolation Report demonstrates the system's ability to discriminate between failure modes relative to the components containing the failure modes; (5) the Sensor Sensitivity Analysis Report shows the diagnostic impact due to loss of sensor information; and (6) the Effect Mapping Report identifies failure modes that result in specified system-level effects.
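    The detectability- and isolation-style reports can be illustrated with a tiny detection-matrix analysis. The failure modes and tests below are hypothetical, and the real ETA Tool parses TEAMS Designer output rather than an in-memory dictionary:

```python
def isolation_report(d_matrix):
    """Sketch of a testability analysis over a detection (D) matrix.

    d_matrix maps failure-mode name -> set of tests that detect it.
    Modes sharing an identical test signature cannot be discriminated
    from one another and form an ambiguity group.
    """
    groups = {}
    for mode, tests in d_matrix.items():
        groups.setdefault(frozenset(tests), []).append(mode)
    detected = [m for m, t in d_matrix.items() if t]
    ambiguity_groups = [sorted(g) for g in groups.values() if len(g) > 1]
    return {"detected": sorted(detected), "ambiguity_groups": ambiguity_groups}

report = isolation_report({
    "valve_stuck":   {"t1", "t2"},
    "sensor_bias":   {"t2"},
    "pump_degraded": {"t1", "t2"},   # same signature as valve_stuck
    "seal_leak":     set(),          # undetected failure mode
})
```

    The report shows `seal_leak` is undetectable with the given test set, and that `valve_stuck` and `pump_degraded` cannot be isolated from each other, which is the kind of finding the Failure Mode Isolation Report surfaces.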

  20. Non-Pilot Protection of the HVDC Grid

    NASA Astrophysics Data System (ADS)

    Badrkhani Ajaei, Firouz

    This thesis develops a non-pilot protection system for the next-generation power transmission system, the High-Voltage Direct Current (HVDC) grid. The HVDC grid protection system is required to be (i) fast enough to prevent damage and/or converter blocking and (ii) reliable enough to minimize the impacts of faults. This study focuses on the Modular Multilevel Converter (MMC)-based HVDC grid, since the MMC is considered the building block of future HVDC systems. The studies reported in this thesis include (i) developing an enhanced equivalent model of the MMC to enable accurate representation of its DC-side fault response, (ii) developing a realistic HVDC-AC test system that includes a five-terminal MMC-based HVDC grid embedded in a large interconnected AC network, (iii) investigating the transient response of the developed test system to AC-side and DC-side disturbances in order to determine the HVDC grid protection requirements, (iv) investigating fault surge propagation in the HVDC grid to determine the impacts of DC-side fault location on the measured signals at each relay location, (v) designing a protection algorithm that detects and locates DC-side faults reliably and fast enough to prevent relay malfunction and unnecessary blocking of the converters, and (vi) performing hardware-in-the-loop tests on the designed relay to verify its potential for hardware implementation. The results of the off-line time-domain transient studies in the PSCAD software platform and the real-time hardware-in-the-loop tests using an enhanced version of the RTDS platform indicate that the developed HVDC grid relay meets all technical requirements, including speed, dependability, security, selectivity, and robustness. Moreover, the developed protection algorithm does not impose a considerable computational burden on the hardware.

  1. Quality assessment of reservoirs by means of outcrop data and "discrete fracture network" models: The case history of Rosario de La Frontera (NW Argentina) geothermal system

    NASA Astrophysics Data System (ADS)

    Maffucci, R.; Bigi, S.; Corrado, S.; Chiodi, A.; Di Paolo, L.; Giordano, G.; Invernizzi, C.

    2015-04-01

    We report the results of a systematic study carried out on the fracture systems exposed in the Sierra de La Candelaria anticline, in the central Andean retrowedge of northwestern Argentina. The aim was to elaborate a kinematic model of the anticline and to assess the dimensional and spatial properties of the fracture network characterizing the Cretaceous sandstone reservoir of the geothermal system of Rosario de La Frontera. Special attention was devoted to exploring how tectonics may affect fluid circulation at depth and control the natural upwelling of fluids at the surface. With this aim we built a Discrete Fracture Network (DFN) model in order to evaluate the potential of the reservoir of the studied geothermal system. The results show that the Sierra de La Candelaria regional anticline developed according to a kinematic model of transpressional inversion compatible with the latest Andean regional WNW-ESE shortening, acting on a pre-orogenic N-S normal fault. A push-up geometry developed during positive inversion, controlling the development of two minor anticlines, Termas and Balboa, separated by a further NNW-SSE oblique-slip fault in the northern sector of the regional anticline. Brittle deformation recorded at the outcrop scale is robustly consistent with the extensional and transpressional events recognized at the regional scale. In terms of fluid circulation, the NNW-SSE and NE-SW fault planes, associated with the late stage of the positive inversion, are considered the main structures controlling the migration paths of hot fluids from the reservoir to the surface. The fracture modeling shows that fractures related to the same deformation stage are characterized by the highest values of secondary permeability. Moreover, the DFN models of the reservoir volume indicate that the fracture network enhances permeability: secondary permeability is about 49 mD and the fractured portion represents 0.03% of the total volume.

  2. Technical Reference Suite Addressing Challenges of Providing Assurance for Fault Management Architectural Design

    NASA Technical Reports Server (NTRS)

    Fitz, Rhonda; Whitman, Gerek

    2016-01-01

    Research into complexities of software systems Fault Management (FM) and how architectural design decisions affect safety, preservation of assets, and maintenance of desired system functionality has coalesced into a technical reference (TR) suite that advances the provision of safety and mission assurance. The NASA Independent Verification and Validation (IV&V) Program, with Software Assurance Research Program support, extracted FM architectures across the IV&V portfolio to evaluate robustness, assess visibility for validation and test, and define software assurance methods applied to the architectures and designs. This investigation spanned IV&V projects with seven different primary developers, a wide range of sizes and complexities, and encompassed Deep Space Robotic, Human Spaceflight, and Earth Orbiter mission FM architectures. The initiative continues with an expansion of the TR suite to include Launch Vehicles, adding the benefit of investigating differences intrinsic to model-based FM architectures and insight into complexities of FM within an Agile software development environment, in order to improve awareness of how nontraditional processes affect FM architectural design and system health management. The identification of particular FM architectures, visibility, and associated IV&V techniques provides a TR suite that enables greater assurance that critical software systems will adequately protect against faults and respond to adverse conditions. Additionally, the role FM has with regard to strengthened security requirements, with potential to advance overall asset protection of flight software systems, is being addressed with the development of an adverse conditions database encompassing flight software vulnerabilities. Capitalizing on the established framework, this TR suite provides assurance capability for a variety of FM architectures and varied development approaches. 
Research results are being disseminated across NASA, other agencies, and the software community. This paper discusses the findings and TR suite informing the FM domain in best practices for FM architectural design, visibility observations, and methods employed for IV&V and mission assurance.

  3. Adaptive control of 5 DOF upper-limb exoskeleton robot with improved safety.

    PubMed

    Kang, Hao-Bo; Wang, Jian-Hui

    2013-11-01

    This paper studies an adaptive control strategy for a class of 5 DOF upper-limb exoskeleton robots with a special safety consideration. The safety requirement plays a critical role in clinical treatment when assisting patients with shoulder, elbow, and wrist joint movements. With the objective of assuring the tracking performance of pre-specified operations, the proposed adaptive controller is first designed to be robust to model uncertainties. To further improve safety and fault tolerance in the presence of unknown large parameter variations or even actuator faults, the adaptive controller is updated on-line according to information provided by an adaptive observer, without additional sensors. Output tracking performance is achieved with a tunable error bound. An experimental example verifies the effectiveness of the proposed control scheme. © 2013 ISA. Published by ISA. All rights reserved.

  4. Sensitivity of seafloor bathymetry to climate-driven fluctuations in mid-ocean ridge magma supply.

    PubMed

    Olive, J-A; Behn, M D; Ito, G; Buck, W R; Escartín, J; Howell, S

    2015-10-16

    Recent studies have proposed that the bathymetric fabric of the seafloor formed at mid-ocean ridges records rapid (23,000 to 100,000 years) fluctuations in ridge magma supply caused by sea-level changes that modulate melt production in the underlying mantle. Using quantitative models of faulting and magma emplacement, we demonstrate that, in fact, seafloor-shaping processes act as a low-pass filter on variations in magma supply, strongly damping fluctuations shorter than about 100,000 years. We show that the systematic decrease in dominant seafloor wavelengths with increasing spreading rate is best explained by a model of fault growth and abandonment under a steady magma input. This provides a robust framework for deciphering the footprint of mantle melting in the fabric of abyssal hills, the most common topographic feature on Earth. Copyright © 2015, American Association for the Advancement of Science.
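    The low-pass-filter argument can be made concrete with the amplitude response of a first-order filter. This is only an illustrative analogue with an assumed ~100,000-year cutoff, not the authors' faulting and magma-emplacement model:

```python
import math

def lowpass_attenuation(period_yr, cutoff_period_yr):
    """Amplitude gain of a first-order low-pass filter, as a minimal analogue
    of seafloor-shaping processes damping magma-supply fluctuations.

    Gain = 1 / sqrt(1 + (cutoff_period / period)**2): fluctuations much
    shorter than the cutoff are strongly damped; much longer ones pass.
    """
    return 1.0 / math.sqrt(1.0 + (cutoff_period_yr / period_yr) ** 2)

short = lowpass_attenuation(23_000, 100_000)    # rapid sea-level-scale cycle
long_ = lowpass_attenuation(400_000, 100_000)   # slow cycle, mostly preserved
```

    A 23,000-year fluctuation is attenuated to roughly a fifth of its amplitude under these assumptions, while a 400,000-year fluctuation passes nearly unchanged, mirroring the paper's damping of sub-100,000-year signals.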

  5. Doppler distortion correction based on microphone array and matching pursuit algorithm for a wayside train bearing monitoring system

    NASA Astrophysics Data System (ADS)

    Liu, Xingchen; Hu, Zhiyong; He, Qingbo; Zhang, Shangbin; Zhu, Jun

    2017-10-01

    Doppler distortion and background noise can reduce the effectiveness of wayside acoustic monitoring and fault diagnosis of train bearings. This paper proposes a method combining a microphone array with a matching pursuit algorithm to overcome these difficulties. First, a dictionary is constructed based on the characteristics and mechanism of a far-field assumption. Then, the angle of arrival of the train bearing is acquired by applying matching pursuit to the acoustic array signals. Finally, after obtaining the resampling time series, the Doppler distortion can be corrected, which is convenient for further diagnostic work. Compared with traditional single-microphone Doppler correction methods, the advantages of the presented array method are its robustness to background noise and the fact that it requires few pre-measured parameters. Simulation and experimental studies show that the proposed method is effective in performing wayside acoustic bearing fault diagnosis.
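    The greedy core of matching pursuit can be sketched in a few lines. The atoms below are generic unit-norm vectors for illustration; the paper's dictionary encodes far-field array geometry and angle of arrival, which is not reproduced here:

```python
def matching_pursuit(signal, atoms, n_iter=2):
    """Greedy matching pursuit sketch: at each step, pick the unit-norm atom
    with the largest inner product with the residual, record its coefficient,
    and subtract its projection from the residual.
    """
    residual = list(signal)
    picks = []
    for _ in range(n_iter):
        best_idx = max(
            range(len(atoms)),
            key=lambda j: abs(sum(r * a for r, a in zip(residual, atoms[j]))))
        coeff = sum(r * a for r, a in zip(residual, atoms[best_idx]))
        residual = [r - coeff * a for r, a in zip(residual, atoms[best_idx])]
        picks.append((best_idx, coeff))
    return picks, residual

# Toy example: an orthonormal basis of R^3; the signal is a mix of two atoms.
atoms = [[1.0, 0.0, 0.0], [0.0, 1.0, 0.0], [0.0, 0.0, 1.0]]
picks, residual = matching_pursuit([3.0, 0.0, -1.0], atoms)
```

    With an orthonormal dictionary the pursuit recovers the exact decomposition; in the paper's setting, the index of the best-matching atom plays the role of the estimated angle of arrival.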

  6. Using Bayesian Networks for Candidate Generation in Consistency-based Diagnosis

    NASA Technical Reports Server (NTRS)

    Narasimhan, Sriram; Mengshoel, Ole

    2008-01-01

    Consistency-based diagnosis relies heavily on the assumption that discrepancies between model predictions and sensor observations can be detected accurately. When sources of uncertainty such as sensor noise and model abstraction exist, robust schemes have to be designed to make a binary decision on whether predictions are consistent with observations. This risks false alarms and missed alarms when an erroneous decision is made. Moreover, when multiple sensors (with differing sensing properties) are available, the degree of match between predictions and observations can be used to guide the search for fault candidates. In this paper we propose a novel approach to this problem using Bayesian networks. In the consistency-based diagnosis formulation, automatically generated Bayesian networks are used to encode a probabilistic measure of fit between predictions and observations. A Bayesian network inference algorithm is then used to compute the most probable fault candidates.
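    The idea of ranking candidates by probability rather than making a binary call can be shown with a minimal Bayes-rule sketch. This flat model is only illustrative (the paper uses automatically generated Bayesian networks, not a single likelihood table), and the priors and likelihoods are invented:

```python
def rank_fault_candidates(priors, likelihoods):
    """Minimal Bayesian candidate ranking.

    priors: P(candidate); likelihoods: P(observed residual pattern | candidate).
    Returns (candidate, posterior) pairs sorted by posterior probability.
    """
    unnorm = {c: priors[c] * likelihoods[c] for c in priors}
    z = sum(unnorm.values())                      # normalizing constant
    posterior = {c: p / z for c, p in unnorm.items()}
    return sorted(posterior.items(), key=lambda kv: -kv[1])

ranked = rank_fault_candidates(
    {"nominal": 0.90, "sensor_fault": 0.05, "actuator_fault": 0.05},
    {"nominal": 0.01, "sensor_fault": 0.60, "actuator_fault": 0.20},
)
```

    Even with a strong nominal prior, a residual pattern far more likely under a sensor fault pushes that candidate to the top of the ranking, which is the behavior a binary threshold cannot express.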

  7. Observer synthesis for a class of Takagi-Sugeno descriptor system with unmeasurable premise variable. Application to fault diagnosis

    NASA Astrophysics Data System (ADS)

    López-Estrada, F. R.; Astorga-Zaragoza, C. M.; Theilliol, D.; Ponsart, J. C.; Valencia-Palomo, G.; Torres, L.

    2017-12-01

    This paper proposes a methodology to design a Takagi-Sugeno (TS) descriptor observer for a class of TS descriptor systems. Unlike the popular approach that considers measurable premise variables, this paper considers premise variables depending on unmeasurable vectors, e.g. the system states. This consideration covers a large class of nonlinear systems and represents a real challenge for observer synthesis. Sufficient conditions to guarantee robustness against the unmeasurable premise variables and asymptotic convergence of the TS descriptor observer are obtained based on the H∞ approach together with the Lyapunov method. As a result, the design conditions are given in terms of linear matrix inequalities (LMIs). In addition, sensor fault detection and isolation are performed by means of a generalised observer bank. Two numerical experiments, an electrical circuit and a rolling disc system, are presented to illustrate the effectiveness of the proposed method.

  8. A rapid calculation system for tsunami propagation in Japan by using the AQUA-MT/CMT solutions

    NASA Astrophysics Data System (ADS)

    Nakamura, T.; Suzuki, W.; Yamamoto, N.; Kimura, H.; Takahashi, N.

    2017-12-01

    We developed a rapid calculation system for geodetic deformation and tsunami propagation in and around Japan. The system automatically conducts forward calculations using the point-source parameters estimated by the AQUA system (Matsumura et al., 2006), which analyzes the magnitude, hypocenter, and moment tensor of an event occurring in Japan as early as 3 minutes after the origin time. An optimized calculation code developed by Nakamura and Baba (2016) is employed for the calculations, on a computer server with 12-core Intel Xeon 2.60 GHz processors. Assuming homogeneous slip on a single fault plane as the source, the developed system calculates geodetic deformation and tsunami propagation by numerically solving the 2D linear long-wave equations at a grid interval of 1 arc-min, for two fault orientations simultaneously, i.e. one fault plane and its conjugate. Because fault models based on moment tensor analyses of event data are used, the system appropriately evaluates tsunami propagation even for unexpected events, such as normal faulting in the subduction zone. This differs from evaluating tsunami arrivals and heights from a pre-calculated database built with fault models that assume typical types of faulting in anticipated source areas (e.g., Tatehata, 1998; Titov et al., 2005; Yamamoto et al., 2016). With complete automation from event detection to the output of graphical figures, the calculation results can be available via e-mail and a web site as early as 4 minutes after the origin time. For moderate-sized events, such as M5 to M6 events, the system helps us rapidly investigate whether tsunami amplitudes at nearshore and offshore stations exceed the noise level, and easily identify actual tsunamis at the stations by comparison with the synthetic waveforms.
    In the case of source models derived from GNSS data, such evaluations may be difficult because of the low resolution of the sources, due to the low signal-to-noise ratio at land stations. For large to huge events in offshore areas, the developed system may be useful for deciding whether to start or stop preparations and precautions against tsunami arrivals, because calculation results, including the arrival times and heights of the initial and maximum waves, can be available rapidly before the waves reach coastal areas.
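    The 2D linear long-wave equations the system solves have a compact 1-D analogue that can be sketched with a staggered-grid finite-difference scheme. The grid size, water depth, and Gaussian initial condition below are illustrative assumptions, not the system's configuration:

```python
import math

def simulate_linear_long_wave(nx=200, dx=1000.0, depth=4000.0, nsteps=150):
    """1-D linear long-wave (shallow-water) sketch on a staggered grid:
    d(eta)/dt = -h du/dx (continuity), du/dt = -g d(eta)/dx (momentum).
    """
    g = 9.81
    c = math.sqrt(g * depth)            # long-wave phase speed (~198 m/s here)
    dt = 0.5 * dx / c                   # CFL-stable time step (Courant 0.5)
    # Gaussian initial sea-surface elevation in mid-domain
    eta = [math.exp(-(((i - nx // 2) * dx) / 5000.0) ** 2) for i in range(nx)]
    u = [0.0] * (nx + 1)                # velocities on cell faces; walls closed
    for _ in range(nsteps):
        for i in range(1, nx):          # momentum update from surface gradient
            u[i] -= dt * g * (eta[i] - eta[i - 1]) / dx
        for i in range(nx):             # continuity update from flux divergence
            eta[i] -= dt * depth * (u[i + 1] - u[i]) / dx
    return eta, c
```

    With closed boundaries the scheme conserves the discrete water volume exactly, and the initial hump splits into two waves travelling at the long-wave speed sqrt(g h); the operational system solves the same equation family in 2-D with real bathymetry.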

  9. Fault Zone Resistivity Structure and Monitoring at the Taiwan Chelungpu Drilling Project from AMT data

    NASA Astrophysics Data System (ADS)

    Chiang, C.-W.; Unsworth, M. J.; Chen, C.-S.; Chen, C.-C.; Lin, A.-T.; Hsu, H.-L.

    2009-04-01

    The Chi-Chi earthquake occurred on September 21st, 1999, in the Western Foothills of central Taiwan. This Mw=7.6 earthquake produced a 90 km long surface rupture and caused severe damage across Taiwan. The coseismic displacement on the Chelungpu fault was one of the largest ever observed. The Taiwan Chelungpu Drilling Project (TCDP) began in 2003 with a 2,000 m well (A-hole) that recovered cores from the fault zone, and finished in 2005 with the completion of two boreholes (A-hole and B-hole). The Chelungpu fault that caused the Chi-Chi earthquake was observed in the core at a depth of 1,111 m (FAZ1111). Another fault zone (Sanyi Fault - FAZ1710) was observed at depths of 1,500~1,710 m. Since the electrical resistivity of rocks is sensitive to the presence of fluids, geophysical methods that remotely sense subsurface resistivity, such as magnetotellurics (MT), can be a powerful tool for investigating the fluid distribution in the shallow crust. The effectiveness of MT in imaging fault zones has been demonstrated by studies of the San Andreas Fault zone in California and elsewhere. In magnetotellurics, the depth of exploration increases as the signal frequency decreases; thus, for imaging the shallow fault zone structure at the TCDP site, the higher-frequency audio-magnetotelluric (AMT) method is the most suitable. In this paper, AMT data collected at the TCDP site from 2004 to 2006 are presented, and spatial and temporal variations are described and interpreted in terms of the tectonic setting. These data show a geoelectric strike direction of N15°E to N30°E. Inversion and forward modeling of the AMT data were used to generate a 1-D resistivity model that has a prominent low-resistivity zone (< 10 ohm-m) between depths of 1,100 and 1,500 m.
    When combined with porosity measurements, the AMT measurements imply that the groundwater has a resistivity of 0.55 ohm-m at the depth of the fault zone. Time variations in the measured AMT data were observed from 2004 to 2005, with maximum changes of 43% in apparent resistivity and 18° in phase. The change in apparent resistivity is greatest in the 1,000~100 Hz frequency band. These frequencies are sensitive to the resistivity structure of the upper 500 m of the hanging wall of the Chelungpu Fault. The decrease in resistivity over time appears to be robust and could be caused by an increase in porosity and a redistribution of the groundwater.
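    The frequency-depth relationship underlying the choice of AMT can be illustrated with the standard magnetotelluric skin-depth formula; the resistivity values below are assumptions chosen for illustration, not the site's measured values:

```python
import math

def mt_skin_depth(resistivity_ohm_m, freq_hz):
    """Magnetotelluric skin depth in metres: delta ≈ 503 * sqrt(rho / f).

    Shows why higher AMT frequencies sense shallower structure: the depth
    of investigation shrinks as signal frequency rises.
    """
    return 503.0 * math.sqrt(resistivity_ohm_m / freq_hz)

shallow = mt_skin_depth(100.0, 1000.0)   # ~159 m at 1000 Hz in 100 ohm-m rock
deeper = mt_skin_depth(100.0, 100.0)     # ~503 m at 100 Hz in the same rock
```

    For an assumed 100 ohm-m host rock, the 1,000~100 Hz band samples roughly the upper few hundred metres, consistent with that band's sensitivity to the top ~500 m of the hanging wall.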

  10. Ground Motions Due to Earthquakes on Creeping Faults

    NASA Astrophysics Data System (ADS)

    Harris, R.; Abrahamson, N. A.

    2014-12-01

    We investigate the peak ground motions from the largest well-recorded earthquakes on creeping strike-slip faults in active-tectonic continental regions. Our goal is to evaluate whether the strong ground motions from earthquakes on creeping faults are smaller than those from earthquakes on locked faults. Smaller ground motions might be expected from earthquakes on creeping faults if the fault sections that strongly radiate energy are surrounded by patches of fault that predominantly absorb energy. For our study we used the ground motion data available in the PEER NGA-West2 database, and the ground motion prediction equations developed from the PEER NGA-West2 dataset. We analyzed data for the eleven largest well-recorded creeping-fault earthquakes, which ranged in magnitude from M5.0 to M6.5. We find that these earthquakes produced peak ground motions that are statistically indistinguishable from the peak ground motions produced by similar-magnitude earthquakes on locked faults. These findings may be implemented in earthquake hazard estimates for moderate-size earthquakes in creeping-fault regions. Further investigation is necessary to determine whether this result also applies to larger earthquakes on creeping faults. Please also see: Harris, R.A., and N.A. Abrahamson (2014), Strong ground motions generated by earthquakes on creeping faults, Geophysical Research Letters, vol. 41, doi:10.1002/2014GL060228.

  11. Mantle uplift and exhumation caused by long-lived transpression at a major transform fault

    NASA Astrophysics Data System (ADS)

    Maia, Marcia; Sichel, Susanna; Briais, Anne; Brunelli, Daniele; Ligi, Marco; Campos, Thomas; Mougel, Bérengère; Hémond, Christophe

    2017-04-01

    Large portions of slow-spreading ridges have mantle-derived peridotites emplaced either on, or at shallow levels below, the sea floor. Mantle and deep rock exposure in such contexts results from extension through low-angle detachment faults at oceanic core complexes or, along transform faults, from transtension due to small changes in spreading geometry. In the Equatorial Atlantic, a large body of ultramafic rocks at the large-offset St. Paul transform fault forms the archipelago of St. Peter & St. Paul. These islets are emplaced near the axis of the Mid-Atlantic Ridge (MAR) and have intrigued geologists since Darwin's time. They are made of variably serpentinized and mylonitized peridotites, and are presently being uplifted at a rate of 1.5 mm/yr, which suggests tectonic stresses. The existence of an abnormally cold upper mantle or cold lithosphere in the Equatorial Atlantic was, until now, the preferred explanation for the origin of these ultramafics. High-resolution geophysical data and rock samples acquired in 2013 show that the origin of the St. Peter & St. Paul archipelago is linked to compressive stresses along the transform fault. The islets represent the summit of a large push-up ridge formed by deformed mantle rocks, located in the center of a positive flower structure, where large portions of mylonitized mantle are uplifted. The transpressive stress field can be explained by the propagation of the northern MAR segment into the transform domain. The latter induced the overlap of ridge segments, resulting in the migration and segmentation of the transform fault and the creation of a series of restraining step-overs. A counterclockwise change in plate motion at 11 Ma initially generated extensional stresses in the transform domain, forming a flexural transverse ridge. Shortly after the plate reorganization, the MAR segment located on the northern side of the transform fault started to propagate southwards, adjusting to the new spreading direction.
Enhanced melt supply at the ridge axis, possibly due to the Sierra Leone thermal anomaly, induced the robust response of this segment.

  12. Developing sub 5-m LiDAR DEMs for forested sections of the Alpine and Hope faults, South Island, New Zealand: Implications for structural interpretations

    NASA Astrophysics Data System (ADS)

    Langridge, R. M.; Ries, W. F.; Farrier, T.; Barth, N. C.; Khajavi, N.; De Pascale, G. P.

    2014-07-01

    Kilometre-wide airborne light detection and ranging (LiDAR) surveys were collected along portions of the Alpine and Hope faults in New Zealand to assess the potential for generating sub 5-m bare earth digital elevation models (DEMs) from ground return data in areas of dense rainforest (bush) cover as an aid to mapping these faults. The 34-km long Franz-Whataroa LiDAR survey was flown along the densely-vegetated central-most portion of the transpressive Alpine Fault. Six closely spaced flight lines (200 m apart) yielded survey coverage with double overlap of swath collection, which was considered necessary due to the low density of ground returns (0.16 m^-2, or a point every 6 m^2) under mature West Coast podocarp-broadleaf rainforest. This average point spacing (~2.5 m) allowed for the generation of a robust, high quality 3-m bare earth DEM. The DEM confirmed the zigzagged form of the surface trace of the Alpine Fault in this area, originally recognised by Norris and Cooper (1995, 1997), and highlights that the strike of the surface trace varies more than previously mapped. The 29-km long Hurunui-Hope LiDAR survey was flown east of the Main Divide of the Southern Alps along the dextral-slip Hope Fault, where the terrain is characterised by lower rainfall and more open beech forest. Flight line spacings of ~275 m were used to generate a DEM from the ground return data. The average ground return density under beech forest was 0.27 m^-2, yielding an estimated cell size suitable for a 2-m DEM. In both cases the LiDAR revealed unprecedented views of the surface geomorphology of these active faults. Lessons learned from our survey methodologies can be employed to plan cost-effective, high-gain airborne surveys that yield bare earth DEMs beneath vegetated terrain and multi-storeyed canopies in densely forested environments across New Zealand and worldwide.
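    The relation quoted above between ground-return density and achievable DEM cell size can be checked with a one-line calculation. The sketch below (plain Python, with the two survey densities taken from the abstract) converts points per square metre into a mean point spacing, which sets a lower bound on a sensible bare earth DEM cell size.

```python
import math

def mean_point_spacing(density_per_m2: float) -> float:
    """Mean ground-return spacing in metres for a given density (points/m^2)."""
    return math.sqrt(1.0 / density_per_m2)

# Franz-Whataroa survey: 0.16 ground returns per m^2 (one point every 6.25 m^2)
print(mean_point_spacing(0.16))          # 2.5 m -> supports a ~3-m DEM
# Hurunui-Hope survey: 0.27 ground returns per m^2 under open beech forest
print(round(mean_point_spacing(0.27), 2))  # 1.92 m -> supports a ~2-m DEM
```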

  13. Evaluating and extending user-level fault tolerance in MPI applications

    DOE PAGES

    Laguna, Ignacio; Richards, David F.; Gamblin, Todd; ...

    2016-01-11

    The user-level failure mitigation (ULFM) interface has been proposed to provide fault-tolerant semantics in the Message Passing Interface (MPI). Previous work presented performance evaluations of ULFM; yet questions related to its programmability and applicability, especially to non-trivial, bulk synchronous applications, remain unanswered. In this article, we present our experiences using ULFM in a case study with a large, highly scalable, bulk synchronous molecular dynamics application to shed light on the advantages and difficulties of this interface for programming fault-tolerant MPI applications. We found that, although ULFM is suitable for master-worker applications, it provides few benefits for more common bulk synchronous MPI applications. To address these limitations, we introduce a new, simpler fault-tolerant interface for complex, bulk synchronous MPI programs with better applicability and support than ULFM for application-level recovery mechanisms, such as global rollback.
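    The global-rollback recovery mentioned above can be illustrated without MPI at all. The toy Python loop below is a sketch under assumed behaviour, not the authors' interface: it checkpoints the application state periodically and, on a simulated process failure, rolls the whole computation back to the last checkpoint and redoes the lost steps.

```python
import random

def bulk_synchronous_run(n_steps, checkpoint_every=5, fail_prob=0.1, seed=1):
    """Toy bulk-synchronous loop with global rollback: on a simulated
    failure, the whole computation restarts from the last checkpoint."""
    random.seed(seed)
    state = 0        # stands in for the distributed application state
    checkpoint = 0
    step = 0
    rollbacks = 0
    while step < n_steps:
        if step % checkpoint_every == 0:
            checkpoint = state                    # take a global checkpoint
        if random.random() < fail_prob:           # a 'rank' fails this step
            state = checkpoint                    # global rollback...
            step = (step // checkpoint_every) * checkpoint_every
            rollbacks += 1                        # ...and redo lost steps
            continue
        state += 1                                # one bulk-synchronous step
        step += 1
    return state, rollbacks

final_state, n_rollbacks = bulk_synchronous_run(20)
print(final_state)   # 20: rollback preserves correctness despite failures
```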

  14. Architecture of buried reverse fault zone in the sedimentary basin: A case study from the Hong-Che Fault Zone of the Junggar Basin

    NASA Astrophysics Data System (ADS)

    Liu, Yin; Wu, Kongyou; Wang, Xi; Liu, Bo; Guo, Jianxun; Du, Yannan

    2017-12-01

    It is widely accepted that faults can act as conduits or barriers for oil and gas migration. Years of study suggest that the internal architecture of a fault zone is complicated and composed of distinct components with different physical features, which strongly influence the migration of oil and gas along the fault. Field observation is the most direct way of characterizing fault zone architecture; in petroleum exploration, however, the faults of concern are buried in the sedimentary basin. Moreover, most studies have focused on strike-slip or normal faults, while the architecture of reverse faults has attracted less attention. To address these issues, the Hong-Che Fault Zone in the northwest margin of the Junggar Basin, Xinjiang Province, is chosen as an example. Combining seismic data, well logs and drill core data, we put forward a comprehensive method to recognize the internal architecture of buried faults. High-precision seismic data show that the fault zone appears as a disturbed seismic reflection belt. Four types of well logs that are sensitive to fractures, together with a comprehensive discriminant parameter named the fault zone index, are used to identify the fault zone architecture. Drill core provides a direct way to identify the different components of the fault zone: the fault core is composed of breccia, gouge, and serpentinized or foliated fault rocks, while the damage zone develops multiple phases of fractures, which are usually cemented. Based on the recognition results, we found an obvious positive relationship between the width of the fault zone and the displacement, and a power-law relationship between the widths of the fault core and damage zone. The width of the damage zone in the hanging wall is not appreciably larger than that in the footwall of the reverse fault, in contrast to the behavior of normal faults. This study provides a comprehensive method for identifying the architecture of buried faults in the sedimentary basin and should be helpful in evaluating fault sealing behavior.
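    Power-law relationships like the width-displacement scaling reported above are conventionally estimated by a linear fit in log-log space. The snippet below is a generic NumPy sketch on hypothetical displacement-width pairs, not the Hong-Che data.

```python
import numpy as np

def fit_power_law(x, y):
    """Fit y = a * x**b by linear least squares in log-log space."""
    b, log_a = np.polyfit(np.log(x), np.log(y), 1)
    return np.exp(log_a), b

# Hypothetical displacement (m) vs damage-zone width (m) pairs
displacement = np.array([10.0, 50.0, 100.0, 500.0, 1000.0])
width = 2.0 * displacement**0.8      # synthetic data following a power law
a, b = fit_power_law(displacement, width)
print(round(a, 3), round(b, 3))      # recovers 2.0 and 0.8
```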

  15. A recent Mw 4.3 earthquake proving activity of a shallow strike-slip fault in the northern part of the Western Desert, Egypt

    NASA Astrophysics Data System (ADS)

    Ezzelarab, Mohamed; Ebraheem, Mohamed O.; Zahradník, Jiří

    2018-03-01

    The Mw 4.3 earthquake of September 2015 is the first felt earthquake since 1900 AD in the northern part of the Western Desert, Egypt, south of El-Alamein City. The available waveform data observed at epicentral distances of 52-391 km were collected and carefully evaluated. Nine broad-band stations were selected to invert full waveforms for the centroid position (horizontal and vertical) and for the focal mechanism solution. The first-arrival travel times, polarities and low-frequency full waveforms (0.03-0.08 Hz) are consistently explained in this paper as caused by a shallow source with a strike-slip mechanism. This finding indicates a causal relation of this earthquake to the W-E trending South El-Alamein fault, which developed in the Late Cretaceous as a dextral strike-slip fault. Recent activity of this fault, proven by this rare earthquake, is of fundamental importance for future seismic hazard evaluations, underlined by the proximity (~65 km) of the source zone to the planned site of the first nuclear power plant in Egypt. Safe exploration and possible future exploitation of the hydrocarbon reserves reported around the El-Alamein fault in the last decade cannot proceed without considering the seismic potential of this fault.

  16. Flight Tests of a Remaining Flying Time Prediction System for Small Electric Aircraft in the Presence of Faults

    NASA Technical Reports Server (NTRS)

    Hogge, Edward F.; Kulkarni, Chetan S.; Vazquez, Sixto L.; Smalling, Kyle M.; Strom, Thomas H.; Hill, Boyd L.; Quach, Cuong C.

    2017-01-01

    This paper addresses the problem of building trust in the online prediction of a battery powered aircraft's remaining flying time. A series of flight tests is described that makes use of a small electric powered unmanned aerial vehicle (eUAV) to verify the performance of the remaining flying time prediction algorithm. The estimate of remaining flying time is used to activate an alarm when the predicted remaining time reaches two minutes. This notifies the pilot to transition to the landing phase of the flight. A second alarm is activated when the battery charge falls below a specified limit threshold, the point at which the battery energy reserve would no longer safely support two repeated aborted landing attempts. During the test series, the motor system is operated with the same predefined timed airspeed profile for each test. To test the robustness of the prediction, half of the tests were performed with, and half without, a simulated powertrain fault: the pilot remotely engages a resistor bank at a specified time during the test flight to simulate a partial powertrain fault. The flying time prediction system is agnostic of the pilot's activation of the fault and must adapt to the vehicle's changed state. The time at which the limit threshold on battery charge is reached is then used to measure the accuracy of the remaining flying time predictions. Accuracy requirements for the alarms are considered and the results discussed.
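    The two-alarm logic described above reduces to a pair of threshold tests. The sketch below is a minimal model; the two-minute value comes from the abstract, while the reserve fraction representing two aborted landing attempts is a hypothetical parameter.

```python
def flight_alarms(predicted_remaining_min, battery_charge_frac, reserve_frac):
    """Evaluate the two alarms from the text: (1) predicted remaining flying
    time has reached two minutes; (2) battery charge has fallen below the
    reserve needed for two aborted landing attempts (reserve_frac assumed)."""
    return {
        "land_now": predicted_remaining_min <= 2.0,
        "reserve_breached": battery_charge_frac < reserve_frac,
    }

print(flight_alarms(5.0, 0.40, 0.25))   # both alarms inactive
print(flight_alarms(1.8, 0.20, 0.25))   # both alarms active
```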

  17. Natural roller bearing fault detection by angular measurement of true instantaneous angular speed

    NASA Astrophysics Data System (ADS)

    Renaudin, L.; Bonnardot, F.; Musy, O.; Doray, J. B.; Rémond, D.

    2010-10-01

    The challenge in many production activities involving large mechanical devices like power transmissions consists in reducing machine downtime, managing repairs and improving operating time. Most online monitoring systems are based on conventional vibration measurement devices for gear transmissions or bearings in mechanical components. In this paper, we propose an alternative way of bearing condition monitoring based on instantaneous angular speed measurement. With the help of a large experimental investigation on two different applications, we show that localized faults like pitting in bearings generate small angular speed fluctuations which are measurable with optical or magnetic encoders. We also emphasize the benefits of measuring instantaneous angular speed with the pulse timing method through an implicit angular sampling which ensures insensitivity to speed fluctuation. A wide range of operating conditions has been tested for the two applications, with varying speed, load, external excitations, gear ratio, etc. The tests performed on an automotive gearbox and on actual operating vehicle wheels also establish the robustness of the proposed methodology. By means of a conventional Fourier transform, angular frequency channels kinematically related to the fault periodicity show significant magnitude differences related to the damage severity. Sideband effects are clearly seen when the fault is located on rotating parts of the bearing, due to load modulation. Additionally, slip effects are suspected to be at the origin of the enlargement of spectrum peaks in the case of double row bearings loaded in a pure radial direction.
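    The pulse timing method mentioned above amounts to dividing the fixed encoder angle increment by each measured inter-pulse interval; because every sample then corresponds to a fixed angle rather than a fixed time, the series is implicitly sampled in the angular domain. A minimal NumPy sketch, with assumed encoder resolution and shaft speed:

```python
import numpy as np

def instantaneous_angular_speed(pulse_times, pulses_per_rev):
    """Estimate IAS (rad/s) from successive encoder pulse arrival times:
    each pulse spans a fixed angle increment of 2*pi/pulses_per_rev."""
    dtheta = 2.0 * np.pi / pulses_per_rev
    dt = np.diff(np.asarray(pulse_times))
    return dtheta / dt

# Assumed setup: constant 10 rev/s shaft, 360-line encoder -> a pulse every 1/3600 s
t = np.arange(0.0, 0.01, 1.0 / 3600.0)
ias = instantaneous_angular_speed(t, 360)
# Samples are equally spaced in angle, so an FFT of `ias` gives an order
# spectrum that is insensitive to speed fluctuation.
print(ias.mean())   # ~62.83 rad/s, i.e. 20*pi
```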

  18. Study on conditional probability of surface rupture: effect of fault dip and width of seismogenic layer

    NASA Astrophysics Data System (ADS)

    Inoue, N.

    2017-12-01

    The conditional probability of surface rupture is affected by various factors, such as shallow material properties, the earthquake process, ground motions and so on. Toda (2013) pointed out differences between the conditional probabilities of strike-slip and reverse faults by considering the fault dip and the width of the seismogenic layer. This study evaluated the conditional probability of surface rupture using the following procedure. Fault geometry was determined from randomly generated magnitudes based on the method of The Headquarters for Earthquake Research Promotion (2017). If the defined fault plane did not saturate the assumed width of the seismogenic layer, the fault plane depth was assigned randomly within the seismogenic layer. Logistic analysis was performed on two data sets: surface displacement calculated by dislocation methods (Wang et al., 2003) from the defined source fault, and the depth of the top of the defined source fault. The conditional probability estimated from surface displacement was higher for reverse faults than for strike-slip faults, in agreement with previous similar studies (i.e. Kagawa et al., 2004; Kataoka and Kusakabe, 2005). In contrast, the probability estimated from the depth of the source fault was higher for thrust faults than for strike-slip and reverse faults, a trend similar to the conditional probabilities obtained from PFDHA (Youngs et al., 2003; Moss and Ross, 2011). The combined simulated results for thrust and reverse faults also show low probability. The worldwide compiled reverse-fault data include earthquakes with low fault dip angles. For Japanese reverse faults, by contrast, there is a possibility that the conditional probability computed with fewer low-dip-angle earthquakes is low and similar to that of strike-slip faults (i.e. Takao et al., 2013). In the future, numerical simulations that consider the failure condition of the surface caused by the source fault should be performed in order to examine the amount of displacement and the conditional probability quantitatively.
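    The logistic analysis applied to the two data sets can be sketched generically. The snippet below fits a one-variable logistic model (probability of surface rupture versus depth of the fault top) by gradient descent on hypothetical data, purely to illustrate the procedure; it is not the study's actual regression.

```python
import numpy as np

def fit_logistic(x, y, lr=0.1, n_iter=5000):
    """Fit P(rupture | x) = 1 / (1 + exp(-(w*x + b))) by gradient descent."""
    w, b = 0.0, 0.0
    for _ in range(n_iter):
        p = 1.0 / (1.0 + np.exp(-(w * x + b)))
        w -= lr * np.mean((p - y) * x)   # gradient of the log-loss w.r.t. w
        b -= lr * np.mean(p - y)         # gradient of the log-loss w.r.t. b
    return w, b

# Hypothetical data: rupture (1) more likely when the fault top is shallow
depth_km = np.array([0.5, 1.0, 1.5, 2.0, 3.0, 4.0, 5.0, 6.0])
ruptured = np.array([1.0, 1.0, 1.0, 1.0, 0.0, 0.0, 0.0, 0.0])
w, b = fit_logistic(depth_km, ruptured)
p_shallow = 1.0 / (1.0 + np.exp(-(w * 1.0 + b)))
p_deep = 1.0 / (1.0 + np.exp(-(w * 5.0 + b)))
print(p_shallow > p_deep)   # True: shallower fault top, higher probability
```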

  19. Moment magnitude, local magnitude and corner frequency of small earthquakes nucleating along a low angle normal fault in the Upper Tiber valley (Italy)

    NASA Astrophysics Data System (ADS)

    Munafo, I.; Malagnini, L.; Chiaraluce, L.; Valoroso, L.

    2015-12-01

    The relation between moment magnitude (MW) and local magnitude (ML) is still a debated issue (Båth, 1966, 1981; Ristau et al., 2003, 2005). Theoretical considerations and empirical observations show that, in the magnitude range between 3 and 5, MW and ML scale 1:1, whilst for smaller magnitudes this 1:1 scaling breaks down (Bethmann et al. 2011). To address this issue we analyzed the source parameters of about 1500 well-located small earthquakes (30,000 waveforms) that occurred in the Upper Tiber Valley (Northern Apennines) in the range -1.5 ≤ ML ≤ 3.8. Among these earthquakes are 300 events that repeatedly ruptured the same fault patch, generally twice within a short time interval (less than 24 hours; Chiaraluce et al., 2007). We use high-resolution short period and broadband recordings acquired between 2010 and 2014 by 50 permanent seismic stations deployed to monitor the activity of a regional low angle normal fault (named the Alto Tiberina fault, ATF) in the framework of the Alto Tiberina Near Fault Observatory project (TABOO; Chiaraluce et al., 2014). For this study the direct determination of MW for small earthquakes is essential, but unfortunately the computation of MW for small earthquakes (MW < 3) is not a routine procedure in seismology. We apply the contributions of source, site, and crustal attenuation computed for this area in order to obtain precise spectral corrections to be used in the calculation of small-earthquake spectral plateaus. The aim of this analysis is to obtain moment magnitudes of small events through a procedure that uses our previously calibrated crustal attenuation parameters (geometrical spreading g(r), quality factor Q(f), and the residual parameter k) to correct for path effects. We determine the MW-ML relationships in two selected fault zones (on-fault and fault-hanging-wall) of the ATF by an orthogonal regression analysis, providing a semi-automatic and robust procedure for moment magnitude determination within a region characterized by small to moderate seismicity. Finally, we present, for a subset of data, corner frequency values computed by spectral analysis of S-waves, using data from three nearby shallow borehole stations sampled at 500 sps.
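    Orthogonal regression, unlike ordinary least squares, treats errors in both MW and ML symmetrically by minimizing perpendicular distances to the line. A minimal total-least-squares sketch follows; the coefficients in the example are hypothetical, not the paper's calibration.

```python
import numpy as np

def orthogonal_regression(x, y):
    """Total least squares line fit: take the principal direction of the
    centred data cloud (first right singular vector) as the line direction."""
    X = np.column_stack([x - x.mean(), y - y.mean()])
    _, _, vt = np.linalg.svd(X, full_matrices=False)
    vx, vy = vt[0]                     # direction of largest variance
    slope = vy / vx                    # invariant to the sign of the vector
    intercept = y.mean() - slope * x.mean()
    return slope, intercept

# Exact linear data with hypothetical coefficients: MW = 0.67 * ML + 1.0
ml = np.array([0.0, 1.0, 2.0, 3.0, 4.0])
mw = 0.67 * ml + 1.0
s, c = orthogonal_regression(ml, mw)
print(round(s, 2), round(c, 2))   # 0.67 1.0
```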

  20. Mapping apparent stress and energy radiation over fault zones of major earthquakes

    USGS Publications Warehouse

    McGarr, A.; Fletcher, Joe B.

    2002-01-01

    Using published slip models for five major earthquakes, 1979 Imperial Valley, 1989 Loma Prieta, 1992 Landers, 1994 Northridge, and 1995 Kobe, we produce maps of apparent stress and radiated seismic energy over their fault surfaces. The slip models, obtained by inverting seismic and geodetic data, entail the division of the fault surfaces into many subfaults for which the time histories of seismic slip are determined. To estimate the seismic energy radiated by each subfault, we measure the near-fault seismic-energy flux from the time-dependent slip there and then multiply by a function of rupture velocity to obtain the corresponding energy that propagates into the far-field. This function, the ratio of far-field to near-fault energy, is typically less than 1/3, inasmuch as most of the near-fault energy remains near the fault and is associated with permanent earthquake deformation. Adding the energy contributions from all of the subfaults yields an estimate of the total seismic energy, which can be compared with independent energy estimates based on seismic-energy flux measured in the far-field, often at teleseismic distances. Estimates of seismic energy based on slip models are robust, in that different models, for a given earthquake, yield energy estimates that are in close agreement. Moreover, the slip-model estimates of energy are generally in good accord with independent estimates by others, based on regional or teleseismic data. Apparent stress is estimated for each subfault by dividing the corresponding seismic moment into the radiated energy. Distributions of apparent stress over an earthquake fault zone show considerable heterogeneity, with peak values that are typically about double the whole-earthquake values (based on the ratio of seismic energy to seismic moment). 
The range of apparent stresses estimated for subfaults of the events studied here is similar to the range of apparent stresses for earthquakes in continental settings, with peak values of about 8 MPa in each case. For earthquakes in compressional tectonic settings, peak apparent stresses at a given depth are substantially greater than corresponding peak values from events in extensional settings; this suggests that crustal strength, inferred from laboratory measurements, may be a limiting factor. Lower bounds on shear stresses inferred from the apparent stress distribution of the 1995 Kobe earthquake are consistent with tectonic-stress estimates reported by Spudich et al. (1998), based partly on slip-vector rake changes.
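    Apparent stress is conventionally the crustal rigidity times the ratio of radiated energy to seismic moment, evaluated here per subfault. The sketch below applies that definition with an assumed rigidity and illustrative numbers only.

```python
MU = 3.0e10  # assumed crustal rigidity (shear modulus), Pa

def apparent_stress(radiated_energy_j, seismic_moment_nm, mu=MU):
    """Apparent stress sigma_a = mu * Es / M0, in Pa."""
    return mu * radiated_energy_j / seismic_moment_nm

# Hypothetical subfault: Es = 1e13 J radiated, M0 = 1e17 N*m
sigma = apparent_stress(1.0e13, 1.0e17)
print(sigma / 1.0e6)   # 3.0 MPa, comparable to the ~8 MPa peak values above
```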

  1. Modelling of 3D fractured geological systems - technique and application

    NASA Astrophysics Data System (ADS)

    Cacace, M.; Scheck-Wenderoth, M.; Cherubini, Y.; Kaiser, B. O.; Bloecher, G.

    2011-12-01

    All rocks in the earth's crust are fractured to some extent. Faults and fractures are important in different scientific and industrial fields comprising engineering, geotechnical and hydrogeological applications. Many petroleum, gas, geothermal and water supply reservoirs form in faulted and fractured geological systems. Additionally, faults and fractures may control the transport of chemical contaminants into and through the subsurface. Depending on their origin and orientation with respect to the recent and palaeo stress field, as well as on the overall kinematics of chemical processes occurring within them, faults and fractures can act either as hydraulic conductors providing preferential pathways for fluid flow or as barriers preventing flow across them. The main challenge in modelling processes occurring in fractured rocks is related to the way of describing the heterogeneities of such geological systems. Flow paths are controlled by the geometry of faults and their open void space. To correctly simulate these processes an adequate 3D mesh is a basic requirement. Unfortunately, the representation of realistic 3D geological environments is limited by the complexity of embedded fracture networks, often resulting in oversimplified models of the natural system. A technical description of an improved method to integrate generic dipping structures (representing faults and fractures) into a 3D porous medium is put forward. The automated mesh generation algorithm is composed of various existing routines from computational geometry (e.g. 2D-3D projection, interpolation, intersection, convex hull calculation) and meshing (e.g. triangulation in 2D and tetrahedralization in 3D). All routines have been combined in an automated software framework and the robustness of the approach has been tested and verified. These techniques and methods can be applied to fractured porous media including fault systems and therefore find wide application in different geo-energy related topics including CO2 storage in deep saline aquifers, shale gas extraction and geothermal heat recovery. The main advantage is that dipping structures can be integrated into a 3D body representing the porous medium, and the interaction between the discrete flow paths through and across faults and fractures and within the rock matrix can be correctly simulated. In addition, the complete workflow is captured by open-source software.

  2. Implementing a finite-state off-normal and fault response system for disruption avoidance in tokamaks

    NASA Astrophysics Data System (ADS)

    Eidietis, N. W.; Choi, W.; Hahn, S. H.; Humphreys, D. A.; Sammuli, B. S.; Walker, M. L.

    2018-05-01

    A finite-state off-normal and fault response (ONFR) system is presented that provides the supervisory logic for comprehensive disruption avoidance and machine protection in tokamaks. Robust event handling is critical for ITER and future large tokamaks, where plasma parameters will necessarily approach stability limits and many systems will operate near their engineering limits. Events can be classified as off-normal plasma events, e.g. neoclassical tearing modes or vertical displacement events, or faults, e.g. coil power supply failures. The ONFR system presented provides four critical features of a robust event handling system: sequential responses to cascading events, event recovery, simultaneous handling of multiple events and actuator prioritization. The finite-state logic is implemented in Matlab®/Stateflow® to allow rapid development and testing in an easily understood graphical format before automated export to the real-time plasma control system code. Experimental demonstrations of the ONFR algorithm on the DIII-D and KSTAR tokamaks are presented. In the most complex demonstration, the ONFR algorithm asynchronously applies a ‘catch and subdue’ electron cyclotron current drive (ECCD) injection scheme to suppress a virulent 2/1 neoclassical tearing mode, subsequently shuts down ECCD for machine protection when the plasma becomes over-dense, and enables rotating 3D field entrainment of the ensuing locked mode to allow a safe rampdown, all in the same discharge without user intervention. When multiple ONFR states are active simultaneously and requesting the same actuator (e.g. neutral beam injection or gyrotrons), actuator prioritization is accomplished by sorting the pre-assigned priority values of each active ONFR state and giving complete control of the actuator to the state with the highest priority.
This early experience makes evident that additional research is required to develop an improved actuator sharing protocol, as well as a methodology to minimize the number and topological complexity of states as the finite-state ONFR system is scaled to a large, highly constrained device like ITER.
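    The actuator-prioritization rule, granting a contested actuator to the active state with the highest pre-assigned priority, can be sketched in a few lines. The state names and priority values below are hypothetical, chosen only to mirror the scenario in the abstract.

```python
def grant_actuator(active_states, actuator):
    """Give the actuator to the active ONFR state with the highest
    pre-assigned priority (a toy model of the sorting rule in the text)."""
    requesters = [s for s in active_states if actuator in s["requests"]]
    if not requesters:
        return None
    return max(requesters, key=lambda s: s["priority"])["name"]

# Hypothetical active states and their actuator requests
states = [
    {"name": "NTM_suppression", "priority": 3, "requests": {"gyrotron"}},
    {"name": "machine_protect", "priority": 9, "requests": {"gyrotron", "nbi"}},
    {"name": "rampdown",        "priority": 5, "requests": {"nbi"}},
]
print(grant_actuator(states, "gyrotron"))  # machine_protect wins the gyrotrons
print(grant_actuator(states, "nbi"))       # machine_protect also wins the NBI
```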

  3. Implementing a finite-state off-normal and fault response system for disruption avoidance in tokamaks

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Eidietis, N. W.; Choi, W.; Hahn, S. H.

    A finite-state off-normal and fault response (ONFR) system is presented that provides the supervisory logic for comprehensive disruption avoidance and machine protection in tokamaks. Robust event handling is critical for ITER and future large tokamaks, where plasma parameters will necessarily approach stability limits and many systems will operate near their engineering limits. Events can be classified as off-normal plasma events, e.g. neoclassical tearing modes or vertical displacement events, or faults, e.g. coil power supply failures. The ONFR system presented provides four critical features of a robust event handling system: sequential responses to cascading events, event recovery, simultaneous handling of multiple events and actuator prioritization. The finite-state logic is implemented in Matlab®/Stateflow® to allow rapid development and testing in an easily understood graphical format before automated export to the real-time plasma control system code. Experimental demonstrations of the ONFR algorithm on the DIII-D and KSTAR tokamaks are presented. In the most complex demonstration, the ONFR algorithm asynchronously applies a “catch and subdue” electron cyclotron current drive (ECCD) injection scheme to suppress a virulent 2/1 neoclassical tearing mode, subsequently shuts down ECCD for machine protection when the plasma becomes over-dense, and enables rotating 3D field entrainment of the ensuing locked mode to allow a safe rampdown, all in the same discharge without user intervention. When multiple ONFR states are active simultaneously and requesting the same actuator (e.g. neutral beam injection or gyrotrons), actuator prioritization is accomplished by sorting the pre-assigned priority values of each active ONFR state and giving complete control of the actuator to the state with the highest priority. This early experience makes evident that additional research is required to develop an improved actuator sharing protocol, as well as a methodology to minimize the number and topological complexity of states as the finite-state ONFR system is scaled to a large, highly constrained device like ITER.

  4. Implementing a finite-state off-normal and fault response system for disruption avoidance in tokamaks

    DOE PAGES

    Eidietis, N. W.; Choi, W.; Hahn, S. H.; ...

    2018-03-29

    A finite-state off-normal and fault response (ONFR) system is presented that provides the supervisory logic for comprehensive disruption avoidance and machine protection in tokamaks. Robust event handling is critical for ITER and future large tokamaks, where plasma parameters will necessarily approach stability limits and many systems will operate near their engineering limits. Events can be classified as off-normal plasma events, e.g. neoclassical tearing modes or vertical displacement events, or faults, e.g. coil power supply failures. The ONFR system presented provides four critical features of a robust event handling system: sequential responses to cascading events, event recovery, simultaneous handling of multiple events and actuator prioritization. The finite-state logic is implemented in Matlab®/Stateflow® to allow rapid development and testing in an easily understood graphical format before automated export to the real-time plasma control system code. Experimental demonstrations of the ONFR algorithm on the DIII-D and KSTAR tokamaks are presented. In the most complex demonstration, the ONFR algorithm asynchronously applies a “catch and subdue” electron cyclotron current drive (ECCD) injection scheme to suppress a virulent 2/1 neoclassical tearing mode, subsequently shuts down ECCD for machine protection when the plasma becomes over-dense, and enables rotating 3D field entrainment of the ensuing locked mode to allow a safe rampdown, all in the same discharge without user intervention. When multiple ONFR states are active simultaneously and requesting the same actuator (e.g. neutral beam injection or gyrotrons), actuator prioritization is accomplished by sorting the pre-assigned priority values of each active ONFR state and giving complete control of the actuator to the state with the highest priority. This early experience makes evident that additional research is required to develop an improved actuator sharing protocol, as well as a methodology to minimize the number and topological complexity of states as the finite-state ONFR system is scaled to a large, highly constrained device like ITER.

  5. Electrical short circuit and current overload tests on aircraft wiring

    NASA Technical Reports Server (NTRS)

    Cahill, Patricia

    1995-01-01

    The findings of electrical short circuit and current overload tests performed on commercial aircraft wiring are presented. A series of bench-scale tests was conducted to evaluate circuit breaker response to overcurrent and to determine if the wire showed any visible signs of thermal degradation due to overcurrent. Three types of wire used in commercial aircraft were evaluated: MIL-W-22759/34 (150 C rated), MIL-W-81381/12 (200 C rated), and BMS 1360 (260 C rated). A second series of tests evaluated circuit breaker response to short circuits and ticking faults. These tests were also meant to determine whether the three test wires behaved differently under these conditions and whether a short circuit or ticking fault could start a fire. It is concluded that circuit breakers provide reliable overcurrent protection; they may not protect wire from ticking faults but can protect wire from direct shorts. These tests indicated that a wire subjected to a current that totally degrades the insulation looks identical to a wire subjected to a fire; however, the 'fire exposed' conductor was more brittle than the conductor degraded by overcurrent. Preliminary testing indicated that direct short circuits are not likely to start a fire and that they do not erode insulation and conductor to the extent that ticking faults do. Circuit breakers may not safeguard against the ignition of flammable materials by ticking faults, and the flammability of materials near ticking faults is far more important than the rating of the wire insulation material.

  6. Fuzzy model-based fault detection and diagnosis for a pilot heat exchanger

    NASA Astrophysics Data System (ADS)

    Habbi, Hacene; Kidouche, Madjid; Kinnaert, Michel; Zelmat, Mimoun

    2011-04-01

    This article addresses the design and real-time implementation of a fuzzy model-based fault detection and diagnosis (FDD) system for a pilot co-current heat exchanger. The design method is based on a three-step procedure which involves the identification of data-driven fuzzy rule-based models, the design of a fuzzy residual generator and the evaluation of the residuals for fault diagnosis using statistical tests. The fuzzy FDD mechanism has been implemented and validated on the real co-current heat exchanger, and has been proven to be efficient in detecting and isolating process, sensor and actuator faults.

  7. Sensor fault-tolerant control for gear-shifting engaging process of automated manual transmission

    NASA Astrophysics Data System (ADS)

    Li, Liang; He, Kai; Wang, Xiangyu; Liu, Yahui

    2018-01-01

    The angular displacement sensor on the actuator of an automated manual transmission (AMT) is prone to faults, and a sensor fault disturbs its normal control, which affects the entire gear-shifting process of the AMT and degrades riding comfort. To solve this problem, this paper proposes a fault-tolerant control method for the AMT gear-shifting engaging process. Using the measured current of the actuator motor and the angular displacement of the actuator, a gear-shifting engaging load torque table is built and updated before the occurrence of the sensor fault. Meanwhile, the residual between the estimated and measured angular displacements is used to detect the sensor fault: once the residual exceeds a determined fault threshold, the sensor fault is detected. Then, switch control is triggered, and the current observer and load torque table estimate the actual gear-shifting position, which replaces the measured one to continue controlling the gear-shifting process. Numerical and experimental tests are carried out to evaluate the reliability and feasibility of the proposed methods, and the results show that the performance of estimation and control is satisfactory.
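    The residual test that triggers switch control reduces to comparing abs(measured - estimated) against a fixed threshold at each sample. A minimal sketch with hypothetical angular-displacement traces (the threshold and the stuck-sensor pattern are assumptions for illustration):

```python
def first_fault_index(measured, estimated, threshold):
    """Return the index of the first sample whose residual exceeds the fault
    threshold (the switch point to the observer-based estimate), else None."""
    for i, (m, e) in enumerate(zip(measured, estimated)):
        if abs(m - e) > threshold:
            return i
    return None

# Hypothetical traces (deg): the sensor reading sticks after sample 3
estimated = [0.0, 2.0, 4.0, 6.0, 8.0, 10.0]
measured = [0.1, 2.1, 3.9, 6.0, 6.0, 6.0]
print(first_fault_index(measured, estimated, threshold=1.0))  # 4
```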

  8. Rupture history of 2008 May 12 Mw 8.0 Wen-Chuan earthquake: Evidence of slip interaction

    NASA Astrophysics Data System (ADS)

    Ji, C.; Shao, G.; Lu, Z.; Hudnut, K.; Jiu, J.; Hayes, G.; Zeng, Y.

    2008-12-01

We will present the rupture process of the May 12, 2008 Mw 8.0 Wenchuan earthquake using all available data. The current model, constrained jointly by teleseismic body and surface waves and interferometric LOS displacements, reveals an unprecedentedly complex rupture process that cannot be resolved using either dataset individually. Rupture of this earthquake involved both the low-angle Pengguan fault and the high-angle Beichuan fault, which intersect each other at depth and are separated by approximately 5-15 km at the surface. Rupture initiated on the Pengguan fault and triggered rupture on the Beichuan fault 10 s later. The two faults interacted dynamically and ruptured unilaterally over 270 km with an average rupture velocity of 3.0 km/s. The total seismic moment is 1.1×10^21 Nm (Mw 8.0), roughly equally partitioned between the two faults. However, the spatiotemporal evolutions of slip on the two faults are very different. This study will focus on the evidence for fault interactions and will analyze the corresponding uncertainties, in preparation for future dynamic studies of the same detailed nature.

  9. Intelligent Method for Diagnosing Structural Faults of Rotating Machinery Using Ant Colony Optimization

    PubMed Central

    Li, Ke; Chen, Peng

    2011-01-01

Structural faults such as unbalance, misalignment and looseness often occur in the shafts of rotating machinery. These faults may cause serious machine accidents and lead to great production losses. This paper proposes an intelligent method for diagnosing structural faults of rotating machinery using ant colony optimization (ACO) and relative ratio symptom parameters (RRSPs), in order to detect faults and distinguish fault types at an early stage. New symptom parameters called “relative ratio symptom parameters” are defined to reflect the features of the vibration signals measured in each state. A synthetic detection index (SDI) based on statistical theory is also defined to evaluate the applicability of the RRSPs; the SDI indicates the fitness of an RRSP for the ACO. Lastly, the paper compares the proposed method with the conventional neural network (NN) method. Practical examples of fault diagnosis for a centrifugal fan are provided to verify the effectiveness of the proposed method. The verification results show that the structural faults that often occur in the centrifugal fan, such as the unbalance, misalignment and looseness states, are effectively identified by the proposed method, whereas these faults are difficult to detect using conventional neural networks. PMID:22163833
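The SDI in this abstract scores how well a symptom parameter separates two machine states. A minimal sketch of one common detection-index form for two Gaussian-modelled states (the authors' exact SDI definition may differ):

```python
import math

def detection_index(mu_a, sigma_a, mu_b, sigma_b):
    # Separation between the symptom-parameter distributions of two
    # states; larger values mean the states are easier to distinguish.
    return abs(mu_a - mu_b) / math.sqrt(sigma_a**2 + sigma_b**2)
```

The fitness of each candidate RRSP for the ACO search could then be ranked by such an index computed over healthy versus faulty measurements.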

  10. Intelligent method for diagnosing structural faults of rotating machinery using ant colony optimization.

    PubMed

    Li, Ke; Chen, Peng

    2011-01-01

Structural faults such as unbalance, misalignment and looseness often occur in the shafts of rotating machinery. These faults may cause serious machine accidents and lead to great production losses. This paper proposes an intelligent method for diagnosing structural faults of rotating machinery using ant colony optimization (ACO) and relative ratio symptom parameters (RRSPs), in order to detect faults and distinguish fault types at an early stage. New symptom parameters called "relative ratio symptom parameters" are defined to reflect the features of the vibration signals measured in each state. A synthetic detection index (SDI) based on statistical theory is also defined to evaluate the applicability of the RRSPs; the SDI indicates the fitness of an RRSP for the ACO. Lastly, the paper compares the proposed method with the conventional neural network (NN) method. Practical examples of fault diagnosis for a centrifugal fan are provided to verify the effectiveness of the proposed method. The verification results show that the structural faults that often occur in the centrifugal fan, such as the unbalance, misalignment and looseness states, are effectively identified by the proposed method, whereas these faults are difficult to detect using conventional neural networks.

  11. A Novel Bearing Multi-Fault Diagnosis Approach Based on Weighted Permutation Entropy and an Improved SVM Ensemble Classifier.

    PubMed

    Zhou, Shenghan; Qian, Silin; Chang, Wenbing; Xiao, Yiyong; Cheng, Yang

    2018-06-14

Timely and accurate state detection and fault diagnosis of rolling element bearings are critical to ensuring the reliability of rotating machinery. This paper proposes a novel method of rolling bearing fault diagnosis based on a combination of ensemble empirical mode decomposition (EEMD), weighted permutation entropy (WPE) and an improved support vector machine (SVM) ensemble classifier. A hybrid voting (HV) strategy that combines SVM-based classifiers and cloud similarity measurement (CSM) was employed to improve the classification accuracy. First, the WPE value of the bearing vibration signal was calculated to detect the fault. Second, if a bearing fault had occurred, the vibration signal was decomposed into a set of intrinsic mode functions (IMFs) by EEMD, and the WPE values of the first several IMFs were calculated to form the fault feature vectors. Then, the SVM ensemble classifier, composed of binary SVMs combined through the HV strategy, was used to identify the bearing multi-fault types. Finally, the proposed model was fully evaluated by experiments and comparative studies. The results demonstrate that the proposed method can effectively detect bearing faults and maintain a high fault-recognition accuracy even when only a small number of training samples are available.
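As an illustration of the WPE feature above, here is a rough sketch following the common weighted-permutation-entropy definition (variance-weighted ordinal patterns, normalised to [0, 1]); the embedding dimension, delay and normalisation are assumptions, since the abstract does not give the authors' parameters:

```python
import math
import numpy as np

def weighted_permutation_entropy(x, m=3, tau=1):
    """Variance-weighted permutation entropy of a 1-D signal,
    normalised by log2(m!) so the result lies in [0, 1]."""
    x = np.asarray(x, dtype=float)
    weights = {}
    for i in range(len(x) - (m - 1) * tau):
        v = x[i:i + (m - 1) * tau + 1:tau]    # embedded vector
        pattern = tuple(np.argsort(v))        # ordinal pattern of v
        weights[pattern] = weights.get(pattern, 0.0) + np.var(v)
    total = sum(weights.values())
    if total == 0.0:                          # constant signal: no information
        return 0.0
    p = np.array(list(weights.values())) / total
    return float(-(p * np.log2(p)).sum() / math.log2(math.factorial(m)))
```

A monotonic signal collapses to a single ordinal pattern (entropy 0), while an irregular vibration signal spreads weight over many patterns and yields a higher value.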

  12. System for detecting and limiting electrical ground faults within electrical devices

    DOEpatents

    Gaubatz, Donald C.

    1990-01-01

An electrical ground fault detection and limitation system for use with a nuclear reactor employing a liquid-metal coolant. When the elongate electromagnetic pumps submerged within the liquid-metal coolant, or their electrical support equipment, experience an insulation breakdown, electrical ground-fault currents develop. Without some form of detection and control, these currents may build to damaging power levels, exposing the pump drive components to the liquid-metal coolant (such as sodium) with undesirable secondary effects. Such ground-fault currents are detected and controlled through an isolated power input to the pumps and a ground-fault control conductor providing a direct return path from the affected components to the power source. By incorporating a resistance arrangement in the ground-fault control conductor, the amount of fault current permitted to flow may be regulated to the extent that the reactor can remain in operation until maintenance can be performed, notwithstanding the existence of the fault. Monitors such as synchronous demodulators may be employed to identify and evaluate the fault currents on each phase of the polyphase power and control inputs to the submerged pumps and associated support equipment.

  13. GUI Type Fault Diagnostic Program for a Turboshaft Engine Using Fuzzy and Neural Networks

    NASA Astrophysics Data System (ADS)

    Kong, Changduk; Koo, Youngju

    2011-04-01

A helicopter operated in severe flight environmental conditions must have a highly reliable propulsion system. On-line condition monitoring and fault detection of the engine can improve the reliability and availability of the helicopter propulsion system. A hybrid health monitoring program using fuzzy logic and neural network algorithms is proposed. In this hybrid method, the fuzzy logic readily identifies the faulted components from changes in the engine measurement parameters, and the neural networks accurately quantify the identified faults. To make the fault diagnostic system easy to use, a GUI (Graphical User Interface) type program is newly proposed. This program is composed of a real-time monitoring part, an engine condition monitoring part and a fault diagnostic part. The real-time monitoring part displays measured parameters of the study turboshaft engine, such as power turbine inlet temperature, exhaust gas temperature, fuel flow, torque and gas generator speed. The engine condition monitoring part evaluates the engine condition by comparing the monitored performance parameters with the base performance parameters computed by the base performance analysis program using look-up tables. The fault diagnostic part identifies and quantifies single and multiple faults from the monitored parameters using the hybrid method.

  14. Fire safety in transit systems fault tree analysis

    DOT National Transportation Integrated Search

    1981-09-01

    Fire safety countermeasures applicable to transit vehicles are identified and evaluated. This document contains fault trees which illustrate the sequences of events which may lead to a transit-fire related casualty. A description of the basis for the...

  15. Airborne Advanced Reconfigurable Computer System (ARCS)

    NASA Technical Reports Server (NTRS)

    Bjurman, B. E.; Jenkins, G. M.; Masreliez, C. J.; Mcclellan, K. L.; Templeman, J. E.

    1976-01-01

    A digital computer subsystem fault-tolerant concept was defined, and the potential benefits and costs of such a subsystem were assessed when used as the central element of a new transport's flight control system. The derived advanced reconfigurable computer system (ARCS) is a triple-redundant computer subsystem that automatically reconfigures, under multiple fault conditions, from triplex to duplex to simplex operation, with redundancy recovery if the fault condition is transient. The study included criteria development covering factors at the aircraft's operation level that would influence the design of a fault-tolerant system for commercial airline use. A new reliability analysis tool was developed for evaluating redundant, fault-tolerant system availability and survivability; and a stringent digital system software design methodology was used to achieve design/implementation visibility.

  16. Corridors of crestal and radial faults linking salt diapirs in the Espírito Santo Basin, SE Brazil

    NASA Astrophysics Data System (ADS)

    Mattos, Nathalia H.; Alves, Tiago M.

    2018-03-01

    This work uses high-quality 3D seismic data to assess the geometry of fault families around salt diapirs in SE Brazil (Espírito Santo Basin). It aims at evaluating the timings of fault growth, and suggests the generation of corridors for fluid migration linking discrete salt diapirs. Three salt diapirs, one salt ridge, and five fault families were identified based on their geometry and relative locations. Displacement-length (D-x) plots, Throw-depth (T-z) data and structural maps indicate that faults consist of multiple segments that were reactivated by dip-linkage following a preferential NE-SW direction. This style of reactivation and linkage is distinct from other sectors of the Espírito Santo Basin where the preferential mode of reactivation is by upwards vertical propagation. Reactivation of faults above a Mid-Eocene unconformity is also scarce in the study area. Conversely, two halokinetic episodes dated as Cretaceous and Paleogene are interpreted below a Mid-Eocene unconformity. This work is important as it recognises the juxtaposition of permeable strata across faults as marking the generation of fault corridors linking adjacent salt structures. In such a setting, fault modelling shows that fluid will migrate towards the shallower salt structures along the fault corridors first identified in this work.

  17. Critical fault patterns determination in fault-tolerant computer systems

    NASA Technical Reports Server (NTRS)

    Mccluskey, E. J.; Losq, J.

    1978-01-01

    The method proposed tries to enumerate all the critical fault-patterns (successive occurrences of failures) without analyzing every single possible fault. The conditions for the system to be operating in a given mode can be expressed in terms of the static states. Thus, one can find all the system states that correspond to a given critical mode of operation. The next step consists in analyzing the fault-detection mechanisms, the diagnosis algorithm and the process of switch control. From them, one can find all the possible system configurations that can result from a failure occurrence. Thus, one can list all the characteristics, with respect to detection, diagnosis, and switch control, that failures must have to constitute critical fault-patterns. Such an enumeration of the critical fault-patterns can be directly used to evaluate the overall system tolerance to failures. Present research is focused on how to efficiently make use of these system-level characteristics to enumerate all the failures that verify these characteristics.

  18. Stacking fault density and bond orientational order of fcc ruthenium nanoparticles

    NASA Astrophysics Data System (ADS)

    Seo, Okkyun; Sakata, Osami; Kim, Jae Myung; Hiroi, Satoshi; Song, Chulho; Kumara, Loku Singgappulige Rosantha; Ohara, Koji; Dekura, Shun; Kusada, Kohei; Kobayashi, Hirokazu; Kitagawa, Hiroshi

    2017-12-01

We investigated crystal structure deviations of catalytic nanoparticles (NPs) using synchrotron powder X-ray diffraction. The samples were fcc ruthenium (Ru) NPs with diameters of 2.4, 3.5, 3.9, and 5.4 nm. We analyzed the average crystal structures by applying the line profile method to a stacking fault model, and the local crystal structures using bond orientational order (BOO) parameters. The reflection peaks shifted according to rules specific to each type of stacking fault. We quantitatively evaluated the stacking fault densities for the fcc Ru NPs, finding on the order of one stacking fault per 2-4 layers, which is quite large. Our analysis shows that the 2.4 nm-diameter fcc Ru NPs have a considerably high stacking fault density. The B factor tends to increase with increasing stacking fault density. A structural parameter that we define from the BOO parameters deviates significantly from the ideal value for the fcc structure. This indicates that the fcc Ru NPs are highly disordered.

  19. Research on Fault Characteristics and Line Protections Within a Large-scale Photovoltaic Power Plant

    NASA Astrophysics Data System (ADS)

    Zhang, Chi; Zeng, Jie; Zhao, Wei; Zhong, Guobin; Xu, Qi; Luo, Pandian; Gu, Chenjie; Liu, Bohan

    2017-05-01

Centralized photovoltaic (PV) systems have fault characteristics different from those of distributed PV systems, owing to their different system structures and controls. This makes the fault analysis and protection methods used in distribution networks with distributed PV unsuitable for a centralized PV power plant. Therefore, a consolidated expression for the fault current within a PV power plant under different controls was derived, taking into account the fault response of the PV array. Then, supported by the fault current analysis and on-site testing data, the overcurrent relay (OCR) performance was evaluated in the collection system of an 850 MW PV power plant. The analysis reveals that the OCRs on the downstream side of overhead lines may malfunction. A new relay scheme was therefore proposed using directional distance elements. A detailed PV system model was built in PSCAD/EMTDC and verified against the on-site testing data. Simulation results indicate that the proposed relay scheme effectively solves the problems under various fault scenarios and PV plant output levels.

  20. A review on data-driven fault severity assessment in rolling bearings

    NASA Astrophysics Data System (ADS)

    Cerrada, Mariela; Sánchez, René-Vinicio; Li, Chuan; Pacheco, Fannia; Cabrera, Diego; Valente de Oliveira, José; Vásquez, Rafael E.

    2018-01-01

Health condition monitoring of rotating machinery is a crucial task for guaranteeing reliability in industrial processes. In particular, bearings are mechanical components used in most rotating devices and represent the main source of faults in such equipment, which is why research on detecting and diagnosing their faults has increased. Fault detection aims at identifying whether or not the device is in a fault condition, and diagnosis is commonly oriented towards identifying the fault mode of the device after detection. An important step after fault detection and diagnosis is the analysis of the magnitude, or degradation level, of the fault, because this supports the decision-making process in condition-based maintenance. However, few works are devoted to analysing this problem, and some tackle it only from the fault diagnosis point of view. Roughly speaking, fault severity is associated with the magnitude of the fault; in bearings, it can be related to the physical size of the fault or to a general degradation of the component. Because the literature on the severity assessment of bearing damage is limited, this paper discusses recent methods and techniques used to achieve fault severity evaluation in the main components of rolling bearings: the inner race, outer race, and balls. The review focuses mainly on data-driven approaches, such as signal processing for extracting the fault signatures associated with damage degradation, and learning approaches used to identify degradation patterns with regard to health conditions. Finally, new challenges are highlighted in order to motivate new contributions in this field.

  1. Testing fault growth models with low-temperature thermochronology in the northwest Basin and Range, USA

    USGS Publications Warehouse

    Curry, Magdalena A. E.; Barnes, Jason B.; Colgan, Joseph P.

    2016-01-01

    Common fault growth models diverge in predicting how faults accumulate displacement and lengthen through time. A paucity of field-based data documenting the lateral component of fault growth hinders our ability to test these models and fully understand how natural fault systems evolve. Here we outline a framework for using apatite (U-Th)/He thermochronology (AHe) to quantify the along-strike growth of faults. To test our framework, we first use a transect in the normal fault-bounded Jackson Mountains in the Nevada Basin and Range Province, then apply the new framework to the adjacent Pine Forest Range. We combine new and existing cross sections with 18 new and 16 existing AHe cooling ages to determine the spatiotemporal variability in footwall exhumation and evaluate models for fault growth. Three age-elevation transects in the Pine Forest Range show that rapid exhumation began along the range-front fault between approximately 15 and 11 Ma at rates of 0.2–0.4 km/Myr, ultimately exhuming approximately 1.5–5 km. The ages of rapid exhumation identified at each transect lie within data uncertainty, indicating concomitant onset of faulting along strike. We show that even in the case of growth by fault-segment linkage, the fault would achieve its modern length within 3–4 Myr of onset. Comparison with the Jackson Mountains highlights the inadequacies of spatially limited sampling. A constant fault-length growth model is the best explanation for our thermochronology results. We advocate that low-temperature thermochronology can be further utilized to better understand and quantify fault growth with broader implications for seismic hazard assessments and the coevolution of faulting and topography.

  2. Assessment of the geodynamical setting around the main active faults at Aswan area, Egypt

    NASA Astrophysics Data System (ADS)

    Ali, Radwan; Hosny, Ahmed; Kotb, Ahmed; Khalil, Ahmed; Azza, Abed; Rayan, Ali

    2013-04-01

The proper evaluation of crustal deformation in the Aswan region, especially around the main active faults, is crucial due to the existence of one major artificial structure, the Aswan High Dam, whose construction created one of the largest artificial lakes, Lake Nasser. The Aswan area is considered a seismically active area of Egypt, since many recent and historical felt earthquakes have occurred there, such as the notable earthquake of November 14, 1981 on the Kalabsha fault, with a local magnitude ML = 5.7. More recently, on 26 December 2011, a moderate earthquake with a local magnitude ML = 4.1 also occurred in the Kalabsha area. The main target of this study is to evaluate the active geological structures that can potentially affect the Aswan High Dam and that are being monitored in detail. To this end, two geophysical tools (magnetic and seismic) were utilized in addition to the Global Positioning System (GPS). A detailed land magnetic survey of the total geomagnetic field component was carried out using two proton magnetometers. The magnetic results reveal three major parallel faults {F1 (Kalabsha), F2 (Seiyal) and F3} affecting the area; the dominant magnetic trend strikes these faults in the WNW-ESE direction. The seismicity and fault plane solutions of the 26 December 2011 earthquake and its two aftershocks were investigated. The source mechanisms of these events delineate two nodal planes. The ENE-WSW to E-W trend is consistent with the direction of the Kalabsha fault and its eastward extension for the events located on it, while the NNW-SSE to N-S trend is consistent with the N-S fault trend. The movement along the ENE-WSW plane is right-lateral, whereas it is left-lateral along the NNW-SSE plane. Based on the relative motions estimated from GPS, dextral strike-slip motion on the Kalabsha and Seiyal fault systems is clearly identified by a change in the velocity gradient between the southern and northern stations. In the area between the Kalabsha and Seiyal faults, however, the movement changes direction, consistent with the other set of faults (N-S).

  3. Sedimentary evidence of historical and prehistorical earthquakes along the Venta de Bravo Fault System, Acambay Graben (Central Mexico)

    NASA Astrophysics Data System (ADS)

    Lacan, Pierre; Ortuño, María; Audin, Laurence; Perea, Hector; Baize, Stephane; Aguirre-Díaz, Gerardo; Zúñiga, F. Ramón

    2018-03-01

The Venta de Bravo normal fault is one of the longest structures in the intra-arc fault system of the Trans-Mexican Volcanic Belt. It defines, together with the Pastores Fault, the 80 km long southern margin of the Acambay Graben. We focus on the westernmost segment of the Venta de Bravo Fault and provide new paleoseismological information, evaluate its earthquake history, and assess the related seismic hazard. We analyzed five trenches, distributed at three different sites, in which Holocene surface faulting offsets interbedded volcanoclastic, fluvio-lacustrine and colluvial deposits. Despite the lack of known historical destructive earthquakes along this fault, we found evidence of at least eight earthquakes during the late Quaternary. Our results indicate that this is one of the major seismic sources of the Acambay Graben, capable of producing by itself earthquakes with magnitudes (MW) up to 6.9, with a slip rate of 0.22-0.24 mm/yr and a recurrence interval between 1940 and 2390 years. In addition, a possible multi-fault rupture of the Venta de Bravo Fault together with other faults of the Acambay Graben could result in a MW > 7 earthquake. These new slip rates, earthquake recurrence rates, and estimates of slip per event help advance our understanding of the seismic hazard posed by the Venta de Bravo Fault and provide new parameters for further hazard assessment.

  4. Evaluating earthquake hazards in the Los Angeles region; an earth-science perspective

    USGS Publications Warehouse

    Ziony, Joseph I.

    1985-01-01

Potentially destructive earthquakes are inevitable in the Los Angeles region of California, but hazards prediction can provide a basis for reducing damage and loss. This volume identifies the principal geologically controlled earthquake hazards of the region (surface faulting, strong shaking, ground failure, and tsunamis), summarizes methods for characterizing their extent and severity, and suggests opportunities for their reduction. Two systems of active faults generate earthquakes in the Los Angeles region: northwest-trending, chiefly horizontal-slip faults, such as the San Andreas, and west-trending, chiefly vertical-slip faults, such as those of the Transverse Ranges. Faults in these two systems have produced more than 40 damaging earthquakes since 1800. Ninety-five faults have slipped in late Quaternary time (approximately the past 750,000 yr) and are judged capable of generating future moderate to large earthquakes and displacing the ground surface. Average rates of late Quaternary slip or separation along these faults provide an index of their relative activity. The San Andreas and San Jacinto faults have slip rates measured in tens of millimeters per year, but most other faults have rates of about 1 mm/yr or less. Intermediate rates of as much as 6 mm/yr characterize a belt of Transverse Ranges faults that extends from near Santa Barbara to near San Bernardino. The dimensions of late Quaternary faults provide a basis for estimating the maximum sizes of likely future earthquakes in the Los Angeles region: moment magnitude (M) 8 for the San Andreas, M 7 for the other northwest-trending elements of that fault system, and M 7.5 for the Transverse Ranges faults. Geologic and seismologic evidence along these faults, however, suggests that, for planning and designing noncritical facilities, appropriate sizes would be M 8 for the San Andreas, M 7 for the San Jacinto, M 6.5 for other northwest-trending faults, and M 6.5 to 7 for the Transverse Ranges faults.
The geologic and seismologic record indicates that parts of the San Andreas and San Jacinto faults have generated major earthquakes having recurrence intervals of several tens to a few hundred years. In contrast, the geologic evidence at points along other active faults suggests recurrence intervals measured in many hundreds to several thousands of years. The distribution and character of late Quaternary surface faulting permit estimation of the likely location, style, and amount of future surface displacements. An extensive body of geologic and geotechnical information is used to evaluate areal differences in future levels of shaking. Bedrock and alluvial deposits are differentiated according to the physical properties that control shaking response; maps of these properties are prepared by analyzing existing geologic and soils maps, the geomorphology of surficial units, and geotechnical data obtained from boreholes. The shear-wave velocities of near-surface geologic units must be estimated for some methods of evaluating shaking potential. Regional-scale maps of highly generalized shear-wave velocity groups, based on the age and texture of exposed geologic units and on a simple two-dimensional model of Quaternary sediment distribution, provide a first approximation of the areal variability in shaking response. More accurate depictions of near-surface shear-wave velocity useful for predicting ground-motion parameters take into account the thickness of the Quaternary deposits, vertical variations in sediment type, and the correlation of shear-wave velocity with standard penetration resistance of different sediments. A map of the upper Santa Ana River basin showing shear-wave velocities to depths equal to one-quarter wavelength of a 1-s shear wave demonstrates the three-dimensional mapping procedure. Four methods for predicting the distribution and strength of shaking from future earthquakes are presented. These techniques use different measures of strong-motion

  5. Experiments in fault tolerant software reliability

    NASA Technical Reports Server (NTRS)

    Mcallister, David F.; Tai, K. C.; Vouk, Mladen A.

    1987-01-01

    The reliability of voting was evaluated in a fault-tolerant software system for small output spaces. The effectiveness of the back-to-back testing process was investigated. Version 3.0 of the RSDIMU-ATS, a semi-automated test bed for certification testing of RSDIMU software, was prepared and distributed. Software reliability estimation methods based on non-random sampling are being studied. The investigation of existing fault-tolerance models was continued and formulation of new models was initiated.

  6. Evaluation of Cepstrum Algorithm with Impact Seeded Fault Data of Helicopter Oil Cooler Fan Bearings and Machine Fault Simulator Data

    DTIC Science & Technology

    2013-02-01

of a bearing must be put into practice. There are many potential methods, the most traditional being the use of statistical time-domain features...accelerate degradation to test multiple bearings to gain statistical relevance and extrapolate results to scale for field conditions. Temperature...as time statistics, frequency estimation to improve the fault frequency detection. For future investigations, one can further explore the

  7. Planetary Gearbox Fault Detection Using Vibration Separation Techniques

    NASA Technical Reports Server (NTRS)

    Lewicki, David G.; LaBerge, Kelsen E.; Ehinger, Ryan T.; Fetty, Jason

    2011-01-01

    Studies were performed to demonstrate the capability to detect planetary gear and bearing faults in helicopter main-rotor transmissions. The work supported the Operations Support and Sustainment (OSST) program with the U.S. Army Aviation Applied Technology Directorate (AATD) and Bell Helicopter Textron. Vibration data from the OH-58C planetary system were collected on a healthy transmission as well as with various seeded-fault components. Planetary fault detection algorithms were used with the collected data to evaluate fault detection effectiveness. Planet gear tooth cracks and spalls were detectable using the vibration separation techniques. Sun gear tooth cracks were not discernibly detectable from the vibration separation process. Sun gear tooth spall defects were detectable. Ring gear tooth cracks were only clearly detectable by accelerometers located near the crack location or directly across from the crack. Enveloping provided an effective method for planet bearing inner- and outer-race spalling fault detection.

  8. Finite-frequency wave propagation through outer rise fault zones and seismic measurements of upper mantle hydration

    USGS Publications Warehouse

    Miller, Nathaniel; Lizarralde, Daniel

    2016-01-01

    Effects of serpentine-filled fault zones on seismic wave propagation in the upper mantle at the outer rise of subduction zones are evaluated using acoustic wave propagation models. Modeled wave speeds depend on azimuth, with slowest speeds in the fault-normal direction. Propagation is fastest along faults, but, for fault widths on the order of the seismic wavelength, apparent wave speeds in this direction depend on frequency. For the 5–12 Hz Pn arrivals used in tomographic studies, joint-parallel wavefronts are slowed by joints. This delay can account for the slowing seen in tomographic images of the outer rise upper mantle. At the Middle America Trench, confining serpentine to fault zones, as opposed to a uniform distribution, reduces estimates of bulk upper mantle hydration from ~3.5 wt % to as low as 0.33 wt % H2O.

  9. Fast Fourier and discrete wavelet transforms applied to sensorless vector control induction motor for rotor bar faults diagnosis.

    PubMed

    Talhaoui, Hicham; Menacer, Arezki; Kessal, Abdelhalim; Kechida, Ridha

    2014-09-01

This paper presents new techniques to evaluate faults in the case of broken rotor bars in induction motors. The procedures are applied under closed-loop control. Electrical and mechanical variables are analyzed using the fast Fourier transform (FFT) and the discrete wavelet transform (DWT) at start-up and in steady state. The wavelet transform has proven to be an excellent mathematical tool for detecting faults, particularly of the broken-rotor-bar type. DWT can provide a local representation of the non-stationary current signals for both the healthy machine and the machine with a fault. For sensorless control, a Luenberger observer is applied; the estimated rotor speed is analyzed, the effect of the faults on the speed pulsation is compensated, and a quadratic current appears that is used for fault detection.
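For context, FFT-based broken-rotor-bar diagnosis classically inspects the (1 ± 2s)·f_s sidebands of the supply frequency f_s in the stator-current spectrum, where s is the slip. A minimal sketch under assumed signal parameters (this is the generic signature check, not the paper's exact procedure):

```python
import numpy as np

def sideband_amplitudes(current, fs, f_supply, slip):
    """Amplitudes at the (1 - 2s)f and (1 + 2s)f broken-rotor-bar
    sideband frequencies of a stator-current spectrum."""
    n = len(current)
    # Hann-windowed magnitude spectrum, scaled by record length
    spectrum = np.abs(np.fft.rfft(current * np.hanning(n))) / n
    freqs = np.fft.rfftfreq(n, d=1.0 / fs)

    def amplitude_at(f):
        return spectrum[np.argmin(np.abs(freqs - f))]  # nearest bin

    return (amplitude_at((1 - 2 * slip) * f_supply),
            amplitude_at((1 + 2 * slip) * f_supply))
```

Sideband amplitudes raised relative to a healthy baseline then indicate broken bars.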

  10. What does fault tolerant Deep Learning need from MPI?

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Amatya, Vinay C.; Vishnu, Abhinav; Siegel, Charles M.

    Deep Learning (DL) algorithms have become the de facto Machine Learning (ML) algorithms for large scale data analysis. DL algorithms are computationally expensive -- even distributed DL implementations which use MPI require days of training (model learning) time on commonly studied datasets. Long running DL applications become susceptible to faults -- requiring development of a fault tolerant system infrastructure, in addition to fault tolerant DL algorithms. This raises an important question: what is needed from MPI for designing fault tolerant DL implementations? In this paper, we address this problem for permanent faults. We motivate the need for a fault tolerant MPI specification by an in-depth consideration of recent innovations in DL algorithms and their properties, which drive the need for specific fault tolerance features. We present an in-depth discussion on the suitability of different parallelism types (model, data and hybrid); the need (or lack thereof) for check-pointing of any critical data structures; and, most importantly, several fault tolerance proposals in MPI (user-level fault mitigation (ULFM), Reinit) and their applicability to fault tolerant DL implementations. We leverage a distributed memory implementation of Caffe, currently available under the Machine Learning Toolkit for Extreme Scale (MaTEx). We implement our approaches by extending MaTEx-Caffe with a ULFM-based implementation. Our evaluation using the ImageNet dataset and the AlexNet neural network topology demonstrates the effectiveness of the proposed fault tolerant DL implementation using OpenMPI-based ULFM.
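
The ULFM-style recovery this record discusses (shrink the communicator and continue with the surviving ranks) can be illustrated without MPI. Below is a toy pure-Python sketch of data-parallel gradient averaging that simply drops a failed rank mid-training; the model, data, and failure point are all invented for illustration and stand in for the actual MaTEx-Caffe machinery:

```python
import random

random.seed(0)

def train_step(workers, shards, params):
    """One data-parallel step: each surviving worker computes the gradient of
    a squared-error loss on its shard; gradients from failed ranks are simply
    absent, and the average is taken over survivors (the 'shrink and continue'
    recovery style of ULFM, minus the actual MPI machinery)."""
    grads = []
    for rank in workers:                    # failed ranks have left `workers`
        shard = shards[rank]
        grads.append(sum(2.0 * (params - x) for x in shard) / len(shard))
    return params - 0.1 * (sum(grads) / len(grads))

# Four ranks, each holding a private data shard (values invented).
shards = {r: [random.gauss(3.0, 0.1) for _ in range(50)] for r in range(4)}
workers = [0, 1, 2, 3]
params = 0.0
for step in range(100):
    if step == 30:
        workers.remove(2)                   # simulate a permanent rank failure
    params = train_step(workers, shards, params)
```

Because the loss landscape is shared across shards, training still converges after the failure, which is the property that makes shrink-and-continue attractive for data-parallel DL.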

  11. Interactive Retro-Deformation of Terrain for Reconstructing 3D Fault Displacements.

    PubMed

    Westerteiger, R; Compton, T; Bernadin, T; Cowgill, E; Gwinner, K; Hamann, B; Gerndt, A; Hagen, H

    2012-12-01

    Planetary topography is the result of complex interactions between geological processes, of which faulting is a prominent component. Surface-rupturing earthquakes cut and move landforms which develop across active faults, producing characteristic surface displacements across the fault. Geometric models of faults and their associated surface displacements are commonly applied to reconstruct these offsets to enable interpretation of the observed topography. However, current 2D techniques are limited in their capability to convey both the three-dimensional kinematics of faulting and the incremental sequence of events required by a given reconstruction. Here we present a real-time system for interactive retro-deformation of faulted topography to enable reconstruction of fault displacement within a high-resolution (sub 1m/pixel) 3D terrain visualization. We employ geometry shaders on the GPU to intersect the surface mesh with fault-segments interactively specified by the user and transform the resulting surface blocks in realtime according to a kinematic model of fault motion. Our method facilitates a human-in-the-loop approach to reconstruction of fault displacements by providing instant visual feedback while exploring the parameter space. Thus, scientists can evaluate the validity of traditional point-to-point reconstructions by visually examining a smooth interpolation of the displacement in 3D. We show the efficacy of our approach by using it to reconstruct segments of the San Andreas fault, California as well as a graben structure in the Noctis Labyrinthus region on Mars.
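
The core geometric operation in this record, classifying terrain vertices by which side of a fault trace they lie on and moving one block by the slip vector, can be sketched in 2D. This is a much-simplified, purely translational stand-in for the paper's GPU-based kinematic model; the function names and values are hypothetical:

```python
def side_of_fault(p, a, b):
    """Sign of the 2D cross product: > 0 if p lies left of the trace a->b."""
    return (b[0] - a[0]) * (p[1] - a[1]) - (b[1] - a[1]) * (p[0] - a[0])

def retro_deform(points, fault_a, fault_b, slip):
    """Undo a uniform displacement: every vertex on the left (hanging-wall)
    side of the fault trace is moved back by the slip vector."""
    restored = []
    for p in points:
        if side_of_fault(p, fault_a, fault_b) > 0:
            restored.append((p[0] - slip[0], p[1] - slip[1]))
        else:
            restored.append(p)
    return restored

# Fault trace along the x-axis; the block above it slipped 0.5 to the right,
# so retro-deformation moves it 0.5 back to the left.
blocks = retro_deform([(0.0, 1.0), (0.0, -1.0)], (0.0, 0.0), (1.0, 0.0), (0.5, 0.0))
```

In the interactive system this classification and transform run per-vertex in geometry shaders, which is what makes sub-meter terrain retro-deformation feasible in real time.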

  12. Toward uniform probabilistic seismic hazard assessments for Southeast Asia

    NASA Astrophysics Data System (ADS)

    Chan, C. H.; Wang, Y.; Shi, X.; Ornthammarath, T.; Warnitchai, P.; Kosuwan, S.; Thant, M.; Nguyen, P. H.; Nguyen, L. M.; Solidum, R., Jr.; Irsyam, M.; Hidayati, S.; Sieh, K.

    2017-12-01

    Although most Southeast Asian countries have seismic hazard maps, differing methodologies and quality result in appreciable mismatches at national boundaries. We aim to conduct a uniform assessment across the region through standardized earthquake and fault databases, ground-shaking scenarios, and regional hazard maps. Our earthquake database contains earthquake parameters obtained from global and national seismic networks, harmonized by removing duplicate events and converting to moment magnitude. Our active-fault database includes fault parameters from previous studies and from the databases compiled for national seismic hazard maps. Another crucial input for seismic hazard assessment is a proper evaluation of ground-shaking attenuation. Since few ground-motion prediction equations (GMPEs) have used local observations from this region, we evaluated attenuation by comparing instrumental observations and felt intensities for recent earthquakes with the ground shaking predicted by published GMPEs. We then incorporated the best-fitting GMPEs and site conditions into our seismic hazard assessments. Based on these databases and GMPEs, we constructed regional probabilistic seismic hazard maps. The assessment shows the highest seismic hazard levels near faults with high slip rates, including the Sagaing Fault in central Myanmar, the Sumatran Fault in Sumatra, the Palu-Koro, Matano and Lawanopo Faults in Sulawesi, and the Philippine Fault across several islands of the Philippines. In addition, our assessment demonstrates that regions with low earthquake probability may well have a higher aggregate probability of future earthquakes, since they encompass much larger areas than the areas of high probability. The significant irony is that in areas of low to moderate probability, where building codes usually provide less seismic resilience, seismic risk is likely to be greater. Infrastructural damage in East Malaysia during the 2015 Sabah earthquake offers a case in point.
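
Selecting a best-fitting GMPE by comparing predictions against observed ground shaking, as described in this record, reduces to minimizing log-residuals. A hedged sketch: the functional form below is the common ln(PGA) = a + b·M − c·ln(R) template, with coefficients and "observations" invented purely for illustration:

```python
import math

def log_residual_rms(observations, gmpe):
    """RMS of ln(observed PGA / predicted PGA) over (magnitude, distance, pga)
    records -- the usual goodness-of-fit measure for GMPE selection."""
    res = [math.log(pga / gmpe(m, r)) for m, r, pga in observations]
    return math.sqrt(sum(x * x for x in res) / len(res))

# Two toy GMPEs of the common ln(PGA) = a + b*M - c*ln(R) form; the
# coefficients are invented (real GMPEs are far more elaborate).
gmpe_1 = lambda m, r: math.exp(-3.5 + 0.9 * m - 1.0 * math.log(r))
gmpe_2 = lambda m, r: math.exp(-2.0 + 0.6 * m - 1.2 * math.log(r))

# Synthetic "observations" scattered around gmpe_1's predictions.
obs = [(6.0, 20.0, gmpe_1(6.0, 20.0) * 1.1),
       (7.0, 50.0, gmpe_1(7.0, 50.0) * 0.9)]
best = min([gmpe_1, gmpe_2], key=lambda g: log_residual_rms(obs, g))
```

The same residual machinery extends to felt-intensity data once intensities are converted to an equivalent ground-motion measure.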

  13. On concentrated solute sources in faulted aquifers

    NASA Astrophysics Data System (ADS)

    Robinson, N. I.; Werner, A. D.

    2017-06-01

    Finite aperture faults and fractures within aquifers (collectively called 'faults' hereafter) theoretically enable flowing water to move through them but with refractive displacement, both on entry and exit. When a 2D or 3D point source of solute concentration is located upstream of the fault, the plume emanating from the source relative to one in a fault-free aquifer is affected by the fault, both before it and after it. Previous attempts to analyze this situation using numerical methods faced challenges in overcoming computational constraints that accompany requisite fine mesh resolutions. To address these, an analytical solution of this problem is developed and interrogated using statistical evaluation of solute distributions. The method of solution is based on novel spatial integral representations of the source with axes rotated from the direction of uniform water flow and aligning with fault faces and normals. Numerical exemplification is given for the case of a 2D steady state source, using various parameter combinations. Statistical attributes of solute plumes show the relative impact of parameters, the most important being fault rotation, aperture, and conductivity ratio. New general observations of fault-affected solute plumes are offered, including: (a) the plume's mode (i.e. peak concentration) on the downstream face of the fault is less displaced than the refracted groundwater flowline, but at some distance downstream of the fault, these realign; (b) porosities have no influence in steady state calculations; (c) previous numerical modeling results of barrier faults show significant boundary effects. The current solution adds to available benchmark problems involving fractures, faults and layered aquifers, in which grid resolution effects are often barriers to accurate simulation.
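
The refractive displacement of flowlines at the fault faces mentioned in this record follows the tangent law for streamline refraction at a hydraulic-conductivity contrast. A one-function sketch, with angles measured from the interface normal:

```python
import math

def refracted_angle(theta_in, k1, k2):
    """Tangent law for streamline refraction at a conductivity contrast:
    tan(theta_1) / tan(theta_2) = K1 / K2, angles from the interface normal.
    Entering a higher-conductivity fault (K2 > K1) bends flow away from
    the normal, producing the refractive displacement described above."""
    return math.atan(math.tan(theta_in) * k2 / k1)

# Flow hits a fault 10x more conductive than the aquifer at 30 degrees.
inside = refracted_angle(math.radians(30.0), 1.0, 10.0)
# Exiting through the parallel far face restores the original direction,
# leaving the flowline laterally offset -- the displacement the plume lags.
outside = refracted_angle(inside, 10.0, 1.0)
```

The entry/exit symmetry is why a finite-aperture fault offsets, rather than permanently rotates, the downstream plume.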

  14. Style and rate of quaternary deformation of the Hosgri Fault Zone, offshore south-central coastal California

    USGS Publications Warehouse

    Hanson, Kathryn L.; Lettis, William R.; McLaren, Marcia; Savage, William U.; Hall, N. Timothy; Keller, Mararget A.

    2004-01-01

    The Hosgri Fault Zone is the southernmost component of a complex system of right-slip faults in south-central coastal California that includes the San Gregorio, Sur, and San Simeon Faults. We have characterized the contemporary style of faulting along the zone on the basis of an integrated analysis of a broad spectrum of data, including shallow high-resolution and deep penetration seismic reflection data; geologic and geomorphic data along the Hosgri and San Simeon Fault Zones and the intervening San Simeon/Hosgri pull-apart basin; the distribution and nature of near-coast seismicity; regional tectonic kinematics; and comparison of the Hosgri Fault Zone with worldwide strike-slip, oblique-slip, and reverse-slip fault zones. These data show that the modern Hosgri Fault Zone is a convergent right-slip (transpressional) fault having a late Quaternary slip rate of 1 to 3 mm/yr. Evidence supporting predominantly strike-slip deformation includes (1) a long, narrow, linear zone of faulting and associated deformation; (2) the presence of asymmetric flower structures; (3) kinematically consistent localized extensional and compressional deformation at releasing and restraining bends or steps, respectively, in the fault zone; (4) changes in the sense and magnitude of vertical separation both along trend of the fault zone and vertically within the fault zone; (5) strike-slip focal mechanisms along the fault trace; (6) a distribution of seismicity that delineates a high-angle fault extending through the seismogenic crust; (7) high ratios of lateral to vertical slip along the fault zone; and (8) the separation by the fault of two tectonic domains (offshore Santa Maria Basin, onshore Los Osos domain) that are undergoing contrasting styles of deformation and orientations of crustal shortening. The convergent component of slip is evidenced by the deformation of the early-late Pliocene unconformity. 
In characterizing the style of faulting along the Hosgri Fault Zone, we assessed alternative tectonic models by evaluating (1) the cumulative effects of multiple deformational episodes, which can produce complex, difficult-to-interpret fault geometries, patterns, and senses of displacement; (2) the difficulty of imaging high-angle fault planes and horizontal fault separations in seismic reflection data; and (3) the effects of strain partitioning that yield coeval strike-slip faults and associated fold and thrust belts.

  15. Influence of mineralogy and microstructures on strain localization and fault zone architecture of the Alpine Fault, New Zealand

    NASA Astrophysics Data System (ADS)

    Ichiba, T.; Kaneki, S.; Hirono, T.; Oohashi, K.; Schuck, B.; Janssen, C.; Schleicher, A.; Toy, V.; Dresen, G.

    2017-12-01

    The Alpine Fault on New Zealand's South Island is an oblique, dextral strike-slip fault that accommodates the majority of displacement between the Pacific and Australian Plates and presents the biggest seismic hazard in the region. Along its central segment, the hanging wall comprises greenschist and amphibolite facies Alpine Schists. Exhumation from 35 km depth, along a SE-dipping detachment, led to mylonitization, which was subsequently overprinted by brittle deformation and finally resulted in the fault's 1 km wide damage zone. The geomechanical behavior of a fault is affected by the internal structure of its fault zone. Consequently, studying the processes controlling fault zone architecture allows assessment of the seismic hazard of a fault. Here we present the results of a combined microstructural (SEM and TEM), mineralogical (XRD) and geochemical (XRF) investigation of outcrop samples originating from several locations along the Alpine Fault, the aim of which is to evaluate the influence of mineralogical composition, alteration and pre-existing fabric on strain localization and to identify the controls on the fault zone architecture, particularly the locus of brittle deformation in P, T and t space. Field observations reveal that the fault's principal slip zone (PSZ) is either a thin (< 1 cm to < 7 cm) layered structure or a relatively thick (10s cm) package lacking a detectable macroscopic fabric. Lithological and related rheological contrasts are widely assumed to govern strain localization. However, our preliminary results suggest that qualitative mineralogical composition has only minor impact on fault zone architecture. Quantities of individual mineral phases differ markedly between fault damage zone and fault core at specific sites, but the quantitative composition of identical structural units, such as the fault core, is similar in all samples. This indicates that the degree of strain localization at the Alpine Fault might be controlled by small initial heterogeneities in texture and fabric, or a combination of these, rather than by mineralogy. Further microstructural investigations are needed to test this hypothesis.

  16. Fault tolerant control based on interval type-2 fuzzy sliding mode controller for coaxial trirotor aircraft.

    PubMed

    Zeghlache, Samir; Kara, Kamel; Saigaa, Djamel

    2015-11-01

    In this paper, a robust controller for a six-degrees-of-freedom (6 DOF) coaxial trirotor helicopter is proposed in the presence of defects in the system. A control strategy based on coupling interval type-2 fuzzy logic control with the sliding mode control technique is used to design the controller. The main purpose of this work is to eliminate the chattering phenomenon while guaranteeing the stability and robustness of the system. To achieve this goal, interval type-2 fuzzy logic control is used to generate the discontinuous control signal. The simulation results show that the proposed control strategy can greatly alleviate the chattering effect and achieves good reference tracking in the presence of defects in the system. Copyright © 2015 ISA. Published by Elsevier Ltd. All rights reserved.
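
The chattering this record targets comes from the discontinuous sign term in sliding mode control. The sketch below contrasts a pure sign switch with a boundary-layer (saturation) switch on a toy first-order plant; the paper instead generates the smoothed term with interval type-2 fuzzy logic, so this is only a simplified illustration of the mechanism, with all gains and the disturbance invented:

```python
import math

def simulate(switch, steps=2000, dt=0.005):
    """First-order plant x' = u + d under sliding-mode control u = -k*switch(s)
    with sliding surface s = x and a bounded disturbance |d| < k."""
    x, k, traj = 1.0, 2.0, []
    for i in range(steps):
        d = 0.5 * math.sin(0.1 * i * dt)          # slowly varying disturbance
        x += (-k * switch(x) + d) * dt
        traj.append(x)
    return traj

sign = lambda s: float((s > 0) - (s < 0))             # discontinuous switch
sat = lambda s, eps=0.05: max(-1.0, min(1.0, s / eps))  # boundary-layer switch

chatter = simulate(sign)
smooth = simulate(sat)

# Total variation of the trajectory tail: large under the discontinuous
# switch (chattering), small once the boundary layer smooths the switching.
tail_var = lambda tr: sum(abs(b - a) for a, b in zip(tr[-500:-1], tr[-499:]))
```

Both controllers drive the state to a neighborhood of the sliding surface, but only the smoothed switch does so without high-frequency oscillation, which is the behavior the fuzzy-generated control signal is designed to reproduce while preserving robustness.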

  17. New Madrid seismotectonic study. Activities during fiscal year 1982

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Buschbach, T.C.

    1984-04-01

    The New Madrid Seismotectonic Study is a coordinated program of geological, geophysical, and seismological investigations of the area within a 200-mile radius of New Madrid, Missouri. The study is designed to define the structural setting and tectonic history of the area in order to realistically evaluate earthquake risks in the siting of nuclear facilities. Fiscal year 1982 included geological and geophysical studies aimed at better definition of the east-west trending fault systems - the Rough Creek and Cottage Grove systems - and the northwest-trending Ste. Genevieve faulting. A prime objective was to determine the nature and history of faulting and to establish the relationship between that faulting and the northeast-trending faults of the Wabash Valley and New Madrid areas. 27 references, 61 figures.

  18. Fault Tolerant Frequent Pattern Mining

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Shohdy, Sameh; Vishnu, Abhinav; Agrawal, Gagan

    The FP-Growth algorithm is a Frequent Pattern Mining (FPM) algorithm that has been extensively used to study correlations and patterns in large scale datasets. While several researchers have designed distributed memory FP-Growth algorithms, it is pivotal to consider fault tolerant FP-Growth, which can address the increasing fault rates in large scale systems. In this work, we propose a novel parallel, algorithm-level fault-tolerant FP-Growth algorithm. We leverage algorithmic properties and advanced MPI features to guarantee O(1) space complexity, achieved by using the dataset memory space itself for checkpointing. We also propose a recovery algorithm that can use in-memory and disk-based checkpointing, though in many cases the recovery can be completed without any disk access, incurring no memory overhead for checkpointing. We evaluate our fault-tolerant algorithm on a large scale InfiniBand cluster with several large datasets using up to 2K cores. Our evaluation demonstrates excellent efficiency for checkpointing and recovery in comparison to the disk-based approach. We have also observed a 20x average speed-up in comparison to Spark, establishing that a well designed algorithm can easily outperform a solution based on a general fault-tolerant programming model.
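
The key idea in this record, keeping the checkpoint in memory alongside the data so recovery needs no disk access, can be miniaturized to a single counting pass (the first phase of FP-Growth). A toy single-process sketch with a simulated failure; the checkpoint interval and failure point are arbitrary, and the real algorithm checkpoints inside the dataset's own memory region across MPI ranks:

```python
def count_frequent(transactions, min_support, fail_at=None):
    """Item-counting pass (the first phase of FP-Growth) with an in-memory
    checkpoint kept alongside the data, so recovery needs no disk access --
    a toy analogue of the paper's dataset-resident checkpointing."""
    checkpoint = {"next": 0, "counts": {}}
    i, counts = 0, {}
    while i < len(transactions):
        if fail_at is not None and i == fail_at:
            fail_at = None                          # fault happens once...
            i = checkpoint["next"]                  # ...then roll back to the
            counts = dict(checkpoint["counts"])     # last checkpoint and redo
            continue
        for item in transactions[i]:
            counts[item] = counts.get(item, 0) + 1
        i += 1
        if i % 2 == 0:                              # checkpoint every 2 rows
            checkpoint = {"next": i, "counts": dict(counts)}
    return {it: c for it, c in counts.items() if c >= min_support}
```

Because work between the checkpoint and the failure is simply redone from a consistent snapshot, the result is identical with or without the fault.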

  19. An expert system for the quantification of fault rates in construction fall accidents.

    PubMed

    Talat Birgonul, M; Dikmen, Irem; Budayan, Cenk; Demirel, Tuncay

    2016-01-01

    Expert witness reports, prepared with the aim of quantifying fault rates among parties, play an important role in a court's final decision. However, conflicting fault rates assigned by different expert witness boards lead to iterative objections raised by the related parties. This unfavorable situation originates mainly in the subjectivity of expert judgments and the unavailability of objective information about the causes of accidents. As a solution to this shortcoming, a rule-based expert system, DsSafe, was developed for the quantification of fault rates in construction fall accidents, with the aim of decreasing the subjectivity inherent in expert witness reports. Eighty-four inspection reports prepared by the official and authorized inspectors were examined and the root causes of construction fall accidents in Turkey were identified. Using this information, an evaluation form was designed and submitted to the experts. Experts were asked to evaluate the importance level of the factors that govern fall accidents and to determine the fault rates under different scenarios. Based on the expert judgments, the rule-based expert system was developed. The accuracy and reliability of DsSafe were tested with real data obtained from finalized court cases. DsSafe gives satisfactory results.
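
A rule-based fault-rate system of the kind this record describes maps accident facts through weighted rules to normalized fault percentages. The sketch below is purely illustrative: these rules and weights are invented for the example, not DsSafe's elicited rule base:

```python
def fault_rates(facts):
    """Apportion fault between employer and worker from accident facts.
    The rules and weights here are invented for illustration; DsSafe's
    actual rule base was elicited from inspection reports and experts."""
    employer = 0
    if not facts.get("guardrails_installed"):
        employer += 40                  # missing collective protection
    if not facts.get("safety_training_given"):
        employer += 30
    if not facts.get("site_supervision_present"):
        employer += 10
    worker = 0
    if not facts.get("harness_worn"):
        worker += 15
    if facts.get("entered_restricted_area"):
        worker += 5
    total = employer + worker
    if total == 0:
        return {"employer": 0.0, "worker": 0.0}
    return {"employer": round(100.0 * employer / total, 1),
            "worker": round(100.0 * worker / total, 1)}
```

Encoding the weights once, from aggregated expert judgments, is what makes the apportionment reproducible across cases, in contrast to board-by-board subjective assessment.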

  20. Alternative model of thrust-fault propagation

    NASA Astrophysics Data System (ADS)

    Eisenstadt, Gloria; de Paor, Declan G.

    1987-07-01

    A widely accepted explanation for the geometry of thrust faults is that initial failures occur on deeply buried planes of weak rock and that thrust faults propagate toward the surface along a staircase trajectory. We propose an alternative model that applies Gretener's beam-failure mechanism to a multilayered sequence. Invoking compatibility conditions, which demand that a thrust propagate both upsection and downsection, we suggest that ramps form first, at shallow levels, and are subsequently connected by flat faults. This hypothesis also explains the formation of many minor structures associated with thrusts, such as backthrusts, wedge structures, pop-ups, and duplexes, and provides a unified conceptual framework in which to evaluate field observations.
